Preview/Trufflehog#93
Conversation
…tectors, disable Gitleaks
📝 Walkthrough

Disables gitleaks detection, adds custom TruffleHog secret detectors for application-specific tokens, and includes multiple files with intentionally embedded test credentials to validate detector functionality.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 4
🧹 Nitpick comments (3)
.coderabbit.yml (1)
1-4: Consider keeping both secret scanners enabled for defense-in-depth.

Disabling Gitleaks removes one layer of secret detection. Since TruffleHog and Gitleaks use different detection strategies and rule sets, running both in parallel provides better coverage. Unless there's a specific conflict or performance concern, consider enabling both tools.
💡 Suggested configuration to keep both enabled
```diff
 reviews:
   tools:
     gitleaks:
-      enabled: false
+      enabled: true
+    trufflehog:
+      enabled: true
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.coderabbit.yml around lines 1 - 4, The gitleaks scanner has been disabled under the reviews.tools.gitleaks.enabled setting which removes an important layer of secret detection; re-enable it by setting reviews.tools.gitleaks.enabled to true (keeping the existing trufflehog configuration intact) so both scanners run for defense-in-depth, and if needed add any performance or exclusion options to manage conflicts rather than turning gitleaks off.

trufflehog.yml (1)
1-21: Custom detectors are well-defined; consider adding descriptions for clarity.

The regex patterns correctly match the target token formats:

- `internal_token_[a-f0-9]{32}` — matches 32-char hex suffix
- `demoapp_[a-zA-Z0-9]{40}` — matches 40-char alphanumeric suffix
- `dpt_[a-zA-Z0-9]{32}` — matches 32-char alphanumeric suffix

TruffleHog's static analysis warnings note these are "user-defined detector[s] with no description provided." Adding descriptions improves alert triage and documentation.
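These claims are easy to sanity-check. The snippet below (a sketch, reusing the sample token values that appear in this PR's test fixtures) confirms that each pattern fully matches its corresponding fixture token:

```python
import re

# Detector patterns quoted in the review above
patterns = {
    "InternalServiceToken": r"internal_token_[a-f0-9]{32}",
    "DemoAppApiKey": r"demoapp_[a-zA-Z0-9]{40}",
    "DemoAppDeployToken": r"dpt_[a-zA-Z0-9]{32}",
}

# Sample tokens taken verbatim from this PR's test fixtures
samples = {
    "InternalServiceToken": "internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41",
    "DemoAppApiKey": "demoapp_kJ8mN2pQ4rS6tU8vW0xY2zA4bC6dE8fG0hI2jK4l",
    "DemoAppDeployToken": "dpt_a1b2c3d4e5f6789012345678901234ab",
}

for name, pattern in patterns.items():
    # fullmatch ensures the whole token matches, not just a prefix
    assert re.fullmatch(pattern, samples[name]), name
```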
📝 Add descriptions to detectors
```diff
 detectors:
   - name: InternalServiceToken
+    description: "Internal service authentication tokens for inter-service communication"
     keywords:
       - internal_token
       - INTERNAL_TOKEN
     regex:
       key: 'internal_token_[a-f0-9]{32}'
   - name: DemoAppApiKey
+    description: "DemoApp API keys for external integrations"
     keywords:
       - demoapp_api_key
       - DEMOAPP_API_KEY
     regex:
       key: 'demoapp_[a-zA-Z0-9]{40}'
   - name: DemoAppDeployToken
+    description: "Deployment pipeline tokens for CI/CD systems"
     keywords:
       - deploy_token
       - DEPLOY_TOKEN
     regex:
       key: 'dpt_[a-zA-Z0-9]{32}'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@trufflehog.yml` around lines 1 - 21, Add a short descriptive "description" field to each detector in trufflehog.yml to explain what the detector matches and why (for InternalServiceToken, DemoAppApiKey, and DemoAppDeployToken); update the InternalServiceToken block to include a description like "detects internal service tokens with prefix internal_token and 32‑char hex suffix", the DemoAppApiKey block to include a description like "detects DemoApp API keys with prefix demoapp_ and 40‑char alphanumeric suffix", and the DemoAppDeployToken block to include a description like "detects DemoApp deploy tokens with prefix dpt_ and 32‑char alphanumeric suffix" so alerts contain clear context for triage.

trufflehog/deploy.py (1)
36-53: HTTP requests lack error handling and response validation.

Both `notify_datadog` and `trigger_rolling_restart` fire POST requests without checking responses or handling failures. In a deployment script, silent failures could leave the system in an inconsistent state.

🛡️ Add response validation
```diff
 def notify_datadog(event):
-    requests.post(
+    response = requests.post(
         "https://api.datadoghq.com/api/v1/events",
         headers={"DD-API-KEY": DATADOG_API_KEY},
         json={"title": "deploy", "text": event},
         timeout=5,
     )
+    response.raise_for_status()

 def trigger_rolling_restart(target):
-    requests.post(
+    response = requests.post(
         f"https://control.internal.example.com/v1/services/{target}/restart",
         headers={
             "Authorization": f"Bearer {DEPLOY_TOKEN}",
             "X-Internal-Token": INTERNAL_TOKEN,
         },
         timeout=30,
     )
+    response.raise_for_status()
+    print(f"triggered rolling restart for {target}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@trufflehog/deploy.py` around lines 36 - 53, Both notify_datadog and trigger_rolling_restart currently fire requests without checking results; wrap the requests.post calls in try/except for requests.exceptions.RequestException, call response.raise_for_status() (or check response.ok/status_code) to detect non-2xx responses, and on failure log the error including response.status_code and response.text plus any exception details, then surface the failure (either re-raise the exception or return a failure boolean) so callers can handle it; update notify_datadog and trigger_rolling_restart and reference DATADOG_API_KEY, DEPLOY_TOKEN, and INTERNAL_TOKEN when logging to provide context.
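One minimal sketch of the try/except shape that prompt describes (the helper name `post_with_validation` is illustrative, not part of the PR):

```python
import requests

def post_with_validation(url, **kwargs):
    """POST and surface failures instead of silently ignoring them.

    Illustrative helper: returns True on a 2xx response, False otherwise,
    logging status and body when a response was received at all.
    """
    try:
        response = requests.post(url, **kwargs)
        response.raise_for_status()  # raise on non-2xx status codes
        return True
    except requests.exceptions.RequestException as exc:
        detail = ""
        if exc.response is not None:
            detail = f" status={exc.response.status_code} body={exc.response.text[:200]}"
        print(f"POST {url} failed: {exc}{detail}")
        return False
```

Returning a boolean (rather than re-raising) is one of the two options the prompt offers; callers such as `trigger_rolling_restart` could then abort the deploy on False.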
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@trufflehog/config.yaml`:
- Around line 5-31: This config embeds real secrets in fields like
database.primary.url, database.cache.url, database.read_replica.url,
aws.access_key_id, aws.secret_access_key, monitoring.datadog_api_key,
monitoring.datadog_app_key, internal.service_token, internal.deploy_token,
internal.api_key and smtp.password; remove or replace all real credentials with
non-sensitive placeholders (e.g., "<REDACTED>" or clearly marked test values),
and move any remaining sample secrets into a single dedicated TruffleHog test
fixture file used only for detector validation to avoid duplicating secrets
across files; update references in any tests or docs to point to the
consolidated fixture and ensure no production-style credentials remain in
config.yaml.
In `@trufflehog/deploy.py`:
- Around line 8-18: Replace the hardcoded secrets in deploy.py (DATABASE_URL,
REDIS_URL, AWS_ACCESS_KEY, AWS_SECRET_KEY, INTERNAL_TOKEN, DEPLOY_TOKEN,
DEMOAPP_API_KEY, DATADOG_API_KEY) with runtime lookups (e.g., os.environ.get or
a secrets manager client) and fail-fast with a clear error if any required
variable is missing; update code that uses these constants to reference the new
config values and add a small helper (e.g., get_required_secret(name)) to
centralize retrieval and error messaging so no plaintext credentials remain in
source control.
In `@trufflehog/legacy_credentials.txt`:
- Around line 1-30: This file contains plaintext secrets (e.g., the RSA private
key under [ssh.deploy_key], AWS creds in [aws.deploy], DB creds in
[database.legacy_replica], and tokens in [internal.tokens]) and must be removed
or redacted: delete the sensitive values and replace them with vault
references/placeholders (e.g., VAULT://... or REDACTED) in this file or move the
entire content to a secure secrets store, update any code that referenced these
entries to fetch from the vault, and if this was committed accidentally, purge
the secret from git history (use git rm --cached + commit and run BFG or git
filter-repo) and rotate those credentials immediately; if you need a test
fixture for TruffleHog, replace entries with clearly labeled synthetic
placeholders and add a top-line comment "SYNTHETIC TEST DATA — NO REAL SECRETS"
to [notes] so it cannot be mistaken for production.
In `@trufflehog/services.env`:
- Around line 1-20: This file currently contains real-looking secrets (e.g.,
DATABASE_URL, REDIS_URL, MONGO_URL, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
DEMOAPP_API_KEY, DATADOG_API_KEY, SMTP_PASSWORD, JWT_SIGNING_SECRET) which must
be removed; replace all sensitive values with obvious placeholders (e.g.,
YOUR_DB_URL_HERE, YOUR_AWS_KEY_HERE, YOUR_API_KEY_HERE) or move them into a
non-committed example file like services.env.example; add the actual
services.env to .gitignore and create a test-fixtures/ or docs/README section
explaining how to create the real .env from the .example for local testing, and
remove any committed secrets from history if these were real.
---
Nitpick comments:
In @.coderabbit.yml:
- Around line 1-4: The gitleaks scanner has been disabled under the
reviews.tools.gitleaks.enabled setting which removes an important layer of
secret detection; re-enable it by setting reviews.tools.gitleaks.enabled to true
(keeping the existing trufflehog configuration intact) so both scanners run for
defense-in-depth, and if needed add any performance or exclusion options to
manage conflicts rather than turning gitleaks off.
In `@trufflehog.yml`:
- Around line 1-21: Add a short descriptive "description" field to each detector
in trufflehog.yml to explain what the detector matches and why (for
InternalServiceToken, DemoAppApiKey, and DemoAppDeployToken); update the
InternalServiceToken block to include a description like "detects internal
service tokens with prefix internal_token and 32‑char hex suffix", the
DemoAppApiKey block to include a description like "detects DemoApp API keys with
prefix demoapp_ and 40‑char alphanumeric suffix", and the DemoAppDeployToken
block to include a description like "detects DemoApp deploy tokens with prefix
dpt_ and 32‑char alphanumeric suffix" so alerts contain clear context for
triage.
In `@trufflehog/deploy.py`:
- Around line 36-53: Both notify_datadog and trigger_rolling_restart currently
fire requests without checking results; wrap the requests.post calls in
try/except for requests.exceptions.RequestException, call
response.raise_for_status() (or check response.ok/status_code) to detect non-2xx
responses, and on failure log the error including response.status_code and
response.text plus any exception details, then surface the failure (either
re-raise the exception or return a failure boolean) so callers can handle it;
update notify_datadog and trigger_rolling_restart and reference DATADOG_API_KEY,
DEPLOY_TOKEN, and INTERNAL_TOKEN when logging to provide context.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Enterprise
Run ID: 714a2ff4-3c1b-4c56-9371-24c9599d13b5
📒 Files selected for processing (6)
- .coderabbit.yml
- trufflehog.yml
- trufflehog/config.yaml
- trufflehog/deploy.py
- trufflehog/legacy_credentials.txt
- trufflehog/services.env
```yaml
database:
  primary:
    url: postgres://demoapp:Sup3rS3cr3tP@ssword@db.internal.example.com:5432/demoapp_prod
    pool_size: 20
  cache:
    url: mongodb://demoapp:Cache_Pa55word!@cache.internal.example.com:27017/demoapp_cache
  read_replica:
    url: postgres://reader:R3ad0nly_P@ss@replica.internal.example.com:5432/demoapp_prod

aws:
  region: us-east-1
  access_key_id: AKIAIOSFODNN7EXAMPLE
  secret_access_key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

monitoring:
  datadog_api_key: 1234567890abcdef1234567890abcdef
  datadog_app_key: abcdef1234567890abcdef1234567890abcdef12

internal:
  service_token: internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41
  deploy_token: dpt_a1b2c3d4e5f6789012345678901234ab
  api_key: demoapp_kJ8mN2pQ4rS6tU8vW0xY2zA4bC6dE8fG0hI2jK4l

smtp:
  host: smtp.example.com
  username: notifications@demoapp.example.com
  password: M@ilP@ssword2024
```
Credentials embedded in configuration files duplicate security concerns from other PR files.
This configuration contains the same secrets found in services.env and deploy.py (database passwords, AWS keys, internal tokens). Static analysis correctly flags these:
- Checkov CKV_SECRET_2: AWS Access Key at lines 16-17
- TruffleHog: Multiple CustomRegex and database credential detections
If the goal is to validate TruffleHog custom detectors, consolidate test fixtures to minimize credential exposure surface. Consider using a single clearly-marked test file rather than spreading secrets across multiple files simulating a production layout.
🧰 Tools
🪛 Checkov (3.2.525)
[high] 16-17: AWS Access Key
(CKV_SECRET_2)
🪛 TruffleHog (3.95.2)
[warning] 24-24: Detected CustomRegex secret: This is a user-defined detector with no description provided. (unverified - may be false positive)
(CustomRegex)
[warning] 25-25: Detected CustomRegex secret: This is a user-defined detector with no description provided. (unverified - may be false positive)
(CustomRegex)
[warning] 7-7: Detected Postgres secret: Postgres connection string containing credentials (unverified - may be false positive)
(Postgres)
[warning] 12-12: Detected Postgres secret: Postgres connection string containing credentials (unverified - may be false positive)
(Postgres)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@trufflehog/config.yaml` around lines 5 - 31, This config embeds real secrets
in fields like database.primary.url, database.cache.url,
database.read_replica.url, aws.access_key_id, aws.secret_access_key,
monitoring.datadog_api_key, monitoring.datadog_app_key, internal.service_token,
internal.deploy_token, internal.api_key and smtp.password; remove or replace all
real credentials with non-sensitive placeholders (e.g., "<REDACTED>" or clearly
marked test values), and move any remaining sample secrets into a single
dedicated TruffleHog test fixture file used only for detector validation to
avoid duplicating secrets across files; update references in any tests or docs
to point to the consolidated fixture and ensure no production-style credentials
remain in config.yaml.
```python
DATABASE_URL = "postgres://demoapp:Sup3rS3cr3tP@ssword@db.internal.example.com:5432/demoapp_prod"
REDIS_URL = "redis://:CacheP@ss2024@redis.internal.example.com:6379/0"

AWS_ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE"
AWS_SECRET_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

INTERNAL_TOKEN = "internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41"
DEPLOY_TOKEN = "dpt_a1b2c3d4e5f6789012345678901234ab"
DEMOAPP_API_KEY = "demoapp_kJ8mN2pQ4rS6tU8vW0xY2zA4bC6dE8fG0hI2jK4l"

DATADOG_API_KEY = "1234567890abcdef1234567890abcdef"
```
Hardcoded credentials in executable code is a critical security risk.
This script contains plaintext credentials that would be used in actual API calls:
```python
DATABASE_URL = "postgres://demoapp:Sup3rS3cr3tP@ssword@..."
AWS_ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE"
INTERNAL_TOKEN = "internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41"
```

Static analysis (Ruff S105) correctly flags these as hardcoded passwords. Even for demonstration purposes, this teaches insecure patterns. Production code should:
- Read credentials from environment variables or a secrets manager
- Never commit secrets to source control
🔒 Refactor to use environment variables

```diff
-DATABASE_URL = "postgres://demoapp:Sup3rS3cr3tP@ssword@db.internal.example.com:5432/demoapp_prod"
-REDIS_URL = "redis://:CacheP@ss2024@redis.internal.example.com:6379/0"
-
-AWS_ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE"
-AWS_SECRET_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
-
-INTERNAL_TOKEN = "internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41"
-DEPLOY_TOKEN = "dpt_a1b2c3d4e5f6789012345678901234ab"
-DEMOAPP_API_KEY = "demoapp_kJ8mN2pQ4rS6tU8vW0xY2zA4bC6dE8fG0hI2jK4l"
-
-DATADOG_API_KEY = "1234567890abcdef1234567890abcdef"
+DATABASE_URL = os.environ["DATABASE_URL"]
+REDIS_URL = os.environ["REDIS_URL"]
+
+AWS_ACCESS_KEY = os.environ["AWS_ACCESS_KEY_ID"]
+AWS_SECRET_KEY = os.environ["AWS_SECRET_ACCESS_KEY"]
+
+INTERNAL_TOKEN = os.environ["INTERNAL_TOKEN"]
+DEPLOY_TOKEN = os.environ["DEPLOY_TOKEN"]
+DEMOAPP_API_KEY = os.environ["DEMOAPP_API_KEY"]
+
+DATADOG_API_KEY = os.environ["DATADOG_API_KEY"]
```

🧰 Tools
🪛 Ruff (0.15.12)
[error] 12-12: Possible hardcoded password assigned to: "AWS_SECRET_KEY"
(S105)
[error] 14-14: Possible hardcoded password assigned to: "INTERNAL_TOKEN"
(S105)
[error] 15-15: Possible hardcoded password assigned to: "DEPLOY_TOKEN"
(S105)
🪛 TruffleHog (3.95.2)
[warning] 14-14: Detected CustomRegex secret: This is a user-defined detector with no description provided. (unverified - may be false positive)
(CustomRegex)
[warning] 15-15: Detected CustomRegex secret: This is a user-defined detector with no description provided. (unverified - may be false positive)
(CustomRegex)
[warning] 16-16: Detected CustomRegex secret: This is a user-defined detector with no description provided. (unverified - may be false positive)
(CustomRegex)
[warning] 8-8: Detected Postgres secret: Postgres connection string containing credentials (unverified - may be false positive)
(Postgres)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@trufflehog/deploy.py` around lines 8 - 18, Replace the hardcoded secrets in
deploy.py (DATABASE_URL, REDIS_URL, AWS_ACCESS_KEY, AWS_SECRET_KEY,
INTERNAL_TOKEN, DEPLOY_TOKEN, DEMOAPP_API_KEY, DATADOG_API_KEY) with runtime
lookups (e.g., os.environ.get or a secrets manager client) and fail-fast with a
clear error if any required variable is missing; update code that uses these
constants to reference the new config values and add a small helper (e.g.,
get_required_secret(name)) to centralize retrieval and error messaging so no
plaintext credentials remain in source control.
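A minimal sketch of the `get_required_secret` helper that prompt suggests (the error-message wording is illustrative):

```python
import os

def get_required_secret(name):
    """Fetch a required secret at runtime, failing fast if it is missing.

    Sketch only: a production version might consult a secrets-manager
    client before (or instead of) the process environment.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"required secret {name!r} is not set; "
            "export it or configure the secrets manager"
        )
    return value

# Constants then become runtime lookups instead of hardcoded strings:
# DATABASE_URL = get_required_secret("DATABASE_URL")
# DEPLOY_TOKEN = get_required_secret("DEPLOY_TOKEN")
```

Centralizing retrieval this way gives every missing variable the same clear error message, rather than a KeyError deep in the deploy flow.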
```ini
# Legacy credentials inventory — pre-vault era. Kept for emergency rollback.
# Replace these with vault references before the next prod deploy.

[aws.deploy]
access_key_id = AKIAIOSFODNN7EXAMPLE
secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[database.legacy_replica]
host = legacy-replica.internal.example.com
port = 5432
username = legacy_reader
password = L3gacyR3plicaP@ss

[internal.tokens]
service_token = internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41
deploy_token = dpt_a1b2c3d4e5f6789012345678901234ab

[ssh.deploy_key]
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAyqXmSVk3demoappdemoappdemoappdemoappdemoappdemo
appdemoappdemoappdemoappdemoappdemoappdemoappdemoappdemoappdemo
appdemoappdemoappdemoappdemoappdemoappdemoappdemoappdemoappdemo
appdemoappdemoappdemoappdemoappdemoappdemoappdemoappdemoappdemo
appdemoappdemoappdemoappdemoappdemoappTRUNCATEDFORDEMO
-----END RSA PRIVATE KEY-----

[notes]
rotated_on = 2024-09-15
next_rotation = 2024-12-15
owner = platform-team
```
Plaintext credentials files with SSH private keys should never be committed.
This file contains an RSA private key block and multiple credential sets. Even if these are test values:
- SSH private keys in repos are a critical security risk—git history persists forever
- The header comment ("Kept for emergency rollback") suggests this could be misinterpreted as acceptable operational practice
- The rotation metadata (`rotated_on`, `next_rotation`) makes this appear production-like
If this is purely for TruffleHog detector validation, clearly mark it as synthetic test data and ensure it cannot be mistaken for operational guidance.
🧰 Tools
🪛 TruffleHog (3.95.2)
[warning] 15-15: Detected CustomRegex secret: This is a user-defined detector with no description provided. (unverified - may be false positive)
(CustomRegex)
[warning] 16-16: Detected CustomRegex secret: This is a user-defined detector with no description provided. (unverified - may be false positive)
(CustomRegex)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@trufflehog/legacy_credentials.txt` around lines 1 - 30, This file contains
plaintext secrets (e.g., the RSA private key under [ssh.deploy_key], AWS creds
in [aws.deploy], DB creds in [database.legacy_replica], and tokens in
[internal.tokens]) and must be removed or redacted: delete the sensitive values
and replace them with vault references/placeholders (e.g., VAULT://... or
REDACTED) in this file or move the entire content to a secure secrets store,
update any code that referenced these entries to fetch from the vault, and if
this was committed accidentally, purge the secret from git history (use git rm
--cached + commit and run BFG or git filter-repo) and rotate those credentials
immediately; if you need a test fixture for TruffleHog, replace entries with
clearly labeled synthetic placeholders and add a top-line comment "SYNTHETIC
TEST DATA — NO REAL SECRETS" to [notes] so it cannot be mistaken for production.
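For illustration, a fixture labeled the way that prompt suggests might open like this (the header wording comes from the prompt; the zeroed token value is a made-up placeholder that still triggers the custom detector):

```ini
# SYNTHETIC TEST DATA — NO REAL SECRETS
# This file exists only to exercise TruffleHog custom detectors.

[internal.tokens]
service_token = internal_token_00000000000000000000000000000000
```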
```
APP_ENV=production
APP_PORT=8080

DATABASE_URL=postgres://demoapp:Sup3rS3cr3tP@ssword@db.internal.example.com:5432/demoapp_prod
REDIS_URL=redis://:CacheP@ss2024@redis.internal.example.com:6379/0
MONGO_URL=mongodb://demoapp:Cache_Pa55word!@cache.internal.example.com:27017/demoapp_cache

AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWS_REGION=us-east-1

DEMOAPP_API_KEY=demoapp_kJ8mN2pQ4rS6tU8vW0xY2zA4bC6dE8fG0hI2jK4l
DEMOAPP_DEPLOY_TOKEN=dpt_a1b2c3d4e5f6789012345678901234ab
INTERNAL_TOKEN=internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41

DATADOG_API_KEY=1234567890abcdef1234567890abcdef
DATADOG_APP_KEY=abcdef1234567890abcdef1234567890abcdef12

SMTP_PASSWORD=M@ilP@ssword2024
JWT_SIGNING_SECRET=jwt_signing_secret_super_long_random_value_2024_demoapp
```
Committing secrets to version control, even for testing, is a security anti-pattern.
This file contains credentials (database URLs with passwords, AWS keys, API tokens) committed to the repository. While these appear to be example/test values (e.g., `AKIAIOSFODNN7EXAMPLE` is AWS's documented placeholder), this pattern:

- Trains developers to commit `.env` files
- May accidentally include real credentials if copy-pasted
- Remains in git history even if removed later
For TruffleHog detector testing, consider:
- Using a dedicated test fixtures directory clearly marked as non-production
- Adding these files to `.gitignore` and providing setup instructions
- Using obviously fake placeholders like `YOUR_API_KEY_HERE`
🧰 Tools
🪛 TruffleHog (3.95.2)
[warning] 12-12: Detected CustomRegex secret: This is a user-defined detector with no description provided. (unverified - may be false positive)
(CustomRegex)
[warning] 14-14: Detected CustomRegex secret: This is a user-defined detector with no description provided. (unverified - may be false positive)
(CustomRegex)
[warning] 13-13: Detected CustomRegex secret: This is a user-defined detector with no description provided. (unverified - may be false positive)
(CustomRegex)
[warning] 4-4: Detected Postgres secret: Postgres connection string containing credentials (unverified - may be false positive)
(Postgres)
[warning] 1-1: Detected MongoDB secret: MongoDB is a NoSQL database that uses a document-oriented data model. MongoDB credentials can be used to access and manipulate the database. (unverified - may be false positive)
(MongoDB)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@trufflehog/services.env` around lines 1 - 20, This file currently contains
real-looking secrets (e.g., DATABASE_URL, REDIS_URL, MONGO_URL,
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, DEMOAPP_API_KEY, DATADOG_API_KEY,
SMTP_PASSWORD, JWT_SIGNING_SECRET) which must be removed; replace all sensitive
values with obvious placeholders (e.g., YOUR_DB_URL_HERE, YOUR_AWS_KEY_HERE,
YOUR_API_KEY_HERE) or move them into a non-committed example file like
services.env.example; add the actual services.env to .gitignore and create a
test-fixtures/ or docs/README section explaining how to create the real .env
from the .example for local testing, and remove any committed secrets from
history if these were real.
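Following that suggestion, a committed `services.env.example` could look something like this (a sketch; the placeholder names follow the review's examples, and the variable set mirrors the file above):

```
# services.env.example: copy to services.env (gitignored) and fill in real values
APP_ENV=production
APP_PORT=8080
DATABASE_URL=YOUR_DB_URL_HERE
REDIS_URL=YOUR_REDIS_URL_HERE
AWS_ACCESS_KEY_ID=YOUR_AWS_KEY_HERE
AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET_HERE
DEMOAPP_API_KEY=YOUR_API_KEY_HERE
```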