
Preview/Trivy#91

Open
andrescastellanos-coderabbit wants to merge 1 commit into main from Preview/Trivy

Conversation


andrescastellanos-coderabbit commented Apr 25, 2026

Demonstrates the Trivy sandboxer tool. Includes Terraform (S3 unencrypted+public, IAM wildcard policy, public RDS, open security groups) and Dockerfile samples (root user, ADD with URL, baked-in AWS credentials) targeting Trivy HIGH and CRITICAL rules so findings surface under the CHILL review profile. Repo-level .coderabbit.yml disables Checkov, Hadolint, and Gitleaks to avoid shadowing Trivy on overlapping checks. Fake credentials use AWS docs canonical example keys (allowlisted by GitHub push protection).

Summary by CodeRabbit

  • Chores
    • Added container image build configuration files for service deployment.
    • Added cloud infrastructure provisioning configuration including storage, database, security, and IAM resources.
    • Added secrets and credentials management setup.
    • Added review tool configuration to disable specific checkers.

…misconfig and secret findings, disable overlapping tools

coderabbitai Bot commented Apr 25, 2026

📝 Walkthrough

This PR introduces infrastructure-as-code Terraform modules and Docker configurations for an application deployment. Changes include AWS resources (S3 buckets, RDS instance, security group, IAM roles/policies), container build definitions, and configuration files for CodeRabbit and Terraform variables.

Changes

Cohort / File(s): Summary

  • Configuration (.coderabbit.yml): Disables the checkov, hadolint, and gitleaks review tools in CodeRabbit.
  • Container Images (trivy/Dockerfile, trivy/Dockerfile.legacy): Defines two container build configurations: a primary image based on Ubuntu 18.04 with Python services and SSH, and a legacy image with environment variables for API credentials and legacy agent execution.
  • Terraform Infrastructure (trivy/main.tf, trivy/network.tf, trivy/iam.tf, trivy/secrets.tf): Provisions AWS infrastructure including S3 buckets, an RDS PostgreSQL instance configured as publicly accessible without encryption, a security group allowing inbound SSH/RDP/TCP from any source, IAM roles/policies with wildcard admin permissions, and SSM parameters for secret values.
  • Terraform Variables (trivy/terraform.tfvars): Provides hardcoded Terraform input values including database credentials, an admin API token, and an RSA private key for production deployment.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 Infrastructure hops into view,
Docker containers, Terraform too!
S3 buckets and RDS aligned,
Cloud resources deployed, oh so refined!
A rabbit's deployment dance complete,
Infrastructure changes, bittersweet. 🚀

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Title check (❓ Inconclusive): The title 'Preview/Trivy' is vague and does not clearly communicate the main purpose of the changeset. While it references Trivy (the security scanning tool being demonstrated), it lacks specificity about what is being added or changed. Resolution: Consider a more descriptive title such as 'Add Trivy security scanning samples with intentional vulnerabilities' or 'Demonstrate Trivy sandboxer with insecure infrastructure and container samples'.

✅ Passed checks (4 passed)

  • Description Check (✅ Passed): Check skipped - CodeRabbit’s high-level summary is enabled.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Linked Issues check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch Preview/Trivy

Comment @coderabbitai help to get the list of available commands and usage tips.

Warning

⚠️ This pull request might be slop. It has been flagged by CodeRabbit slop detection and should be reviewed carefully.


coderabbitai Bot left a comment


Actionable comments posted: 17

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@trivy/Dockerfile`:
- Around line 5-10: The Dockerfile currently installs openssh-server and has no
USER directive, causing the container to run as root and exposing unnecessary
SSH access; remove openssh-server from the apt-get install list, add
--no-install-recommends to the apt-get install command to shrink the image, and
create/drop to a non-root account by adding a non-root user and a USER directive
(e.g., create a user and switch with USER) so the container no longer runs as
root; if runtime shell access is needed rely on the orchestration runtime exec
instead of bundling sshd.
- Around line 12-14: Replace the insecure ADD(remote) step with an explicit
download-and-verify flow: download the archive with a reliable tool (curl or
wget) to /tmp/bundle.tar.gz, verify its checksum (e.g. SHA256) using the
expected checksum supplied via a build ARG/ENV, and only then extract with the
existing tar -xzf /tmp/bundle.tar.gz -C /opt; update the Dockerfile to remove
the ADD line and add the download, checksum verification, and extraction steps
(and expose the checksum via an ARG like BUNDLE_SHA256) so content integrity and
layer caching are preserved.
- Line 1: The Dockerfile uses an EOL base image "ubuntu:18.04"; update the FROM
line to a supported LTS (e.g., "ubuntu:22.04" or "ubuntu:24.04") to restore
security updates and rebuild to verify compatibility, then run container tests
and adjust any OS-specific package installs in the Dockerfile to match the newer
Ubuntu release.

In `@trivy/Dockerfile.legacy`:
- Line 1: The Dockerfile currently starts FROM ubuntu:latest and runs as root
and exposes SSH; fix by creating a non-root user and switching to it (add steps
to create a user/group, set ownership on required directories, and add a USER
directive) and remove or justify any EXPOSE 22 entry; update the Dockerfile's
build steps that require root to use temporary root-stage RUN commands (or use
root for those steps then chown) and ensure the final image uses the new
non-root user (reference the Dockerfile's USER directive and any RUN steps that
manipulate filesystem ownership).
- Around line 3-5: Remove the hard-coded ENV entries (API_TOKEN,
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) from Dockerfile.legacy so credentials
are not baked into image layers; instead update the deployment/run instructions
or container entrypoint to read these values from runtime-provided sources
(Kubernetes Secrets, ECS secret integration, Docker secrets via --secret, or
environment variables passed to docker run) and validate presence at container
start (e.g., fail fast if missing). Ensure no secret literals remain in the
Dockerfile, README, or build context and document how to inject each secret at
runtime.

In `@trivy/iam.tf`:
- Around line 44-51: The CI user aws_iam_user.ci is being granted the broad
AdministratorAccess via aws_iam_user_policy_attachment.ci_admin; replace this by
creating a scoped aws_iam_policy (e.g., aws_iam_policy.ci_policy) that
enumerates only the required CI actions (ECR push/pull, ECS/EKS deploy actions,
CloudWatch logs, S3 read/write for artifacts, and SecretsManager/GetSecretValue
if needed) and then attach that policy to aws_iam_user.ci using
aws_iam_user_policy_attachment.ci_admin (or aws_iam_policy_attachment) instead
of the AWS-managed AdministratorAccess ARN; ensure the new policy
least-privileges any resource ARNs (limit to specific repos, clusters, buckets)
and remove the AdministratorAccess reference.
- Around line 25-36: The assume role policy for demoapp-service-role currently
uses Principal = { AWS = "*" } which allows any AWS principal to assume the
role; replace the wildcard principal in the assume_role_policy JSON with a
scoped trust policy that lists only the specific trusted AWS account ARNs, IAM
role ARNs, or AWS service principals needed (e.g., "AWS":
["arn:aws:iam::123456789012:role/TrustedRole"] or "Service":
"ec2.amazonaws.com"), or derive trusted account(s) dynamically (e.g., via data
lookups) and ensure the Statement only permits those principals; update the
assume_role_policy JSON in the assume_role_policy block to reference those
explicit ARNs instead of "*" and keep Action "sts:AssumeRole" and Effect
"Allow".
- Around line 5-19: The IAM policy block named policy currently grants wildcard
actions and resources (Action = "*" and Resource = "*") and also duplicates
permissions in the second Statement; replace this with least-privilege
statements that enumerate only the required actions (e.g., specific s3 actions
instead of "s3:*", "iam:PassRole" only for the particular role(s), and
"kms:Decrypt" only for the specific KMS key ARNs) and narrow Resource values to
the exact S3 bucket ARNs/prefixes, role ARNs, and KMS key ARNs used by the
service; remove the redundant broad Statement, and parametrize or reference the
target ARNs (via variables or data sources) so the policy (policy -> Statement
-> Action/Resource) is tightly scoped.
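Taken together, the iam.tf fixes above amount to replacing wildcard principals and managed admin policies with enumerated, resource-scoped statements. A minimal Terraform sketch (the trusted service principal, policy names, and the ECR repository ARN are illustrative assumptions, not values from this PR):

```hcl
data "aws_caller_identity" "current" {}

resource "aws_iam_role" "service" {
  name = "demoapp-service-role"

  # Trust only a specific service principal instead of Principal = { AWS = "*" }.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_policy" "ci_policy" {
  name = "demoapp-ci-scoped"

  # Enumerate only the CI actions needed, scoped to specific resources,
  # instead of attaching the AWS-managed AdministratorAccess policy.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["ecr:GetAuthorizationToken"]
        Resource = "*"
      },
      {
        Effect = "Allow"
        Action = [
          "ecr:BatchGetImage", "ecr:PutImage", "ecr:InitiateLayerUpload",
          "ecr:UploadLayerPart", "ecr:CompleteLayerUpload"
        ]
        Resource = "arn:aws:ecr:us-east-1:${data.aws_caller_identity.current.account_id}:repository/demoapp"
      }
    ]
  })
}

resource "aws_iam_user_policy_attachment" "ci_scoped" {
  user       = aws_iam_user.ci.name
  policy_arn = aws_iam_policy.ci_policy.arn
}
```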

In `@trivy/main.tf`:
- Around line 15-17: Add server-side encryption using a KMS CMK and
public-access blocking/versioning for the S3 buckets: create a KMS key resource
(aws_kms_key.s3) and attach aws_s3_bucket_server_side_encryption_configuration
resources for aws_s3_bucket.artifacts and aws_s3_bucket.logs using that key and
"aws:kms"; add aws_s3_bucket_public_access_block for aws_s3_bucket.logs with
block_public_acls, block_public_policy, ignore_public_acls, and
restrict_public_buckets set to true; and enable versioning on aws_s3_bucket.logs
(aws_s3_bucket.logs versioning configuration) to preserve an audit trail.
- Around line 19-31: The S3 bucket is currently exposed via
aws_s3_bucket_acl.artifacts (acl = "public-read") and
aws_s3_bucket_public_access_block.artifacts (all public blocks disabled); remove
or change the public-read ACL and re-enable public access protections by setting
block_public_acls, block_public_policy, ignore_public_acls, and
restrict_public_buckets to true on aws_s3_bucket_public_access_block.artifacts,
or remove aws_s3_bucket_acl.artifacts entirely and instead configure a secure
access pattern (e.g., serve via CloudFront with Origin Access Control or use a
scoped bucket policy for specific objects) referencing aws_s3_bucket.artifacts
as the bucket target.
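The two main.tf items above combine into a standard S3 hardening pattern: KMS-backed encryption, versioning, and all four public-access blocks enabled. A sketch under the assumption that the bucket resources are named as in the comments:

```hcl
resource "aws_kms_key" "s3" {
  description         = "CMK for S3 server-side encryption"
  enable_key_rotation = true
}

resource "aws_s3_bucket_server_side_encryption_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3.arn
    }
  }
}

resource "aws_s3_bucket_versioning" "logs" {
  bucket = aws_s3_bucket.logs.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Re-enable all four public-access protections on the artifacts bucket
# (and drop the public-read ACL resource entirely).
resource "aws_s3_bucket_public_access_block" "artifacts" {
  bucket                  = aws_s3_bucket.artifacts.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```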

In `@trivy/network.tf`:
- Around line 6-28: The security group ingress blocks currently allow
unrestricted access; update the three ingress entries (the SSH ingress for port
22, the RDP ingress for port 3389, and the “All TCP” ingress from_port 0 to
to_port 65535) to restrict CIDR blocks to known admin IP ranges or internal
CIDRs, remove or narrow the “All TCP” rule, and instead expose management ports
via a bastion host or VPN/security-group-only access (e.g., reference the
bastion SG or an admin CIDR) so that SSH/RDP are only reachable from trusted
sources rather than 0.0.0.0/0.
- Around line 38-50: The aws_db_instance resource (aws_db_instance.primary) is
insecure: set publicly_accessible = false, storage_encrypted = true, and stop
hardcoding the password; replace the literal password with a reference to a
secret or variable (e.g., use a Secrets Manager data/resource or
var.db_password) and remove any duplicate plaintext in terraform.tfvars; also
ensure vpc_security_group_ids points to a private/limited security group (not an
open web SG) and consider enabling snapshot retention (skip_final_snapshot =
false) as appropriate for safe teardown.
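A minimal sketch of the network.tf remediation described above; the admin CIDR variable, security group name, and instance sizing are placeholders, not values from the PR:

```hcl
variable "admin_cidr" {
  description = "CIDR block allowed to reach management ports"
  type        = string
}

resource "aws_security_group" "web" {
  name = "demoapp-web"

  # SSH only from the admin range rather than 0.0.0.0/0; the RDP and
  # all-TCP rules are dropped in favor of bastion/VPN access.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.admin_cidr]
  }
}

resource "aws_db_instance" "primary" {
  engine                      = "postgres"
  instance_class              = "db.t3.micro"
  allocated_storage           = 20
  publicly_accessible         = false
  storage_encrypted           = true
  manage_master_user_password = true # password managed in Secrets Manager
  username                    = var.db_username
  vpc_security_group_ids      = [aws_security_group.web.id]
  skip_final_snapshot         = false
  final_snapshot_identifier   = "demoapp-final"
}
```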

In `@trivy/secrets.tf`:
- Around line 1-6: The AWS provider block currently hardcodes credentials
(provider "aws" with alias "deploy" and fields access_key/secret_key); remove
the access_key and secret_key attributes and instead configure the provider to
use external credentials (env vars, shared credentials file, or instance role).
Update provider "aws" (alias "deploy") to rely on the default credential chain
and, if needed, add documentation or variables pointing to
AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY, a shared credentials profile, or an
aws_secretsmanager_secret_version data source so secrets are not stored in the
Terraform file or state.
- Around line 8-18: Change the two SSM parameters defined by
aws_ssm_parameter.datadog_key and aws_ssm_parameter.internal_token from type =
"String" to type = "SecureString" so the values are stored encrypted; also set
or accept a kms_key_id (or use the default AWS managed key) when creating these
resources to ensure KMS encryption is applied and update any docs/variables that
supply the plaintext values accordingly.
- Around line 20-22: The resource aws_db_instance_password "fallback" is invalid
in the AWS Terraform provider; remove that resource and instead set the DB
password on an aws_db_instance resource (e.g., add or update the password
attribute on your aws_db_instance resource) or enable
manage_master_user_password = true to integrate with Secrets Manager and avoid
hardcoding; replace references to aws_db_instance_password.fallback with the
aws_db_instance's password handling (or reference the Secrets Manager secret) so
Terraform plan/apply will succeed.

In `@trivy/terraform.tfvars`:
- Around line 4-6: The committed terraform.tfvars contains plaintext secrets
(db_password, admin_api_token, private_key_pem); remove these literal values
from the file and from version control history, add terraform.tfvars to
.gitignore, and replace them by referencing secure sources: create sensitive
variables in variables.tf (e.g., db_password, admin_api_token, private_key_pem
with sensitive = true) and load their values via TF_VAR env vars or a secrets
data source (e.g., aws_secretsmanager_secret_version) in your Terraform config;
rotate the exposed credentials and purge them from Git history after migrating
to the secrets manager.
- Around line 1-6: The tfvars entries (db_username, db_password, region,
admin_api_token, private_key_pem) are not used because Terraform files hardcode
values; fix by adding variable declarations and replacing hardcoded literals
with variable references: create variable blocks named "db_username",
"db_password", "region", "admin_api_token", and "private_key_pem" and update
network.tf to use var.db_username and var.db_password instead of hardcoded
username/password and update main.tf to use var.region; if admin_api_token and
private_key_pem are not needed, remove them from terraform.tfvars instead of
adding variables.
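A hypothetical variables.tf pairing the committed tfvars keys with sensitive declarations, as the comment above suggests; values would then be supplied via TF_VAR_* environment variables or a secrets data source rather than a committed tfvars file:

```hcl
variable "region" {
  type    = string
  default = "us-east-1"
}

variable "db_username" {
  type = string
}

variable "db_password" {
  type      = string
  sensitive = true # redacted in plan output, but still stored in state
}

variable "admin_api_token" {
  type      = string
  sensitive = true
}

variable "private_key_pem" {
  type      = string
  sensitive = true
}
```

For example: export TF_VAR_db_password before running terraform plan, so the secret never lands in version control.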
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Enterprise

Run ID: e4829615-e656-40d7-9486-65f989b141f0

📥 Commits

Reviewing files that changed from the base of the PR and between ef04b8d and d42cb14.

📒 Files selected for processing (8)
  • .coderabbit.yml
  • trivy/Dockerfile
  • trivy/Dockerfile.legacy
  • trivy/iam.tf
  • trivy/main.tf
  • trivy/network.tf
  • trivy/secrets.tf
  • trivy/terraform.tfvars

Comment thread trivy/Dockerfile
@@ -0,0 +1,24 @@
FROM ubuntu:18.04

⚠️ Potential issue | 🟠 Major

Base image ubuntu:18.04 reached end of life in April 2023.

This image no longer receives security updates, leaving the container vulnerable to unpatched CVEs. Upgrade to a supported LTS release.

📦 Proposed base image update
-FROM ubuntu:18.04
+FROM ubuntu:24.04
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-FROM ubuntu:18.04
+FROM ubuntu:24.04
🧰 Tools
🪛 Trivy (0.69.3)

[error] 1-1: Image user should not be 'root'

Specify at least 1 USER command in Dockerfile with non-root user as argument

Rule: DS-0002

Learn more

(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/Dockerfile` at line 1, The Dockerfile uses an EOL base image
"ubuntu:18.04"; update the FROM line to a supported LTS (e.g., "ubuntu:22.04" or
"ubuntu:24.04") to restore security updates and rebuild to verify compatibility,
then run container tests and adjust any OS-specific package installs in the
Dockerfile to match the newer Ubuntu release.

Comment thread trivy/Dockerfile
Comment on lines +5 to +10
RUN apt-get update && apt-get install -y \
    curl \
    wget \
    python3 \
    python3-pip \
    openssh-server

⚠️ Potential issue | 🟠 Major

Container runs as root and includes SSH server.

No USER directive means the container runs as root. Installing openssh-server and exposing port 22 suggests direct shell access, which combined with root creates significant attack surface.

Consider:

  • Adding a non-root user
  • Removing SSH server if container orchestration provides exec access
  • Using --no-install-recommends to reduce image size
🛡️ Proposed fixes
 RUN apt-get update && apt-get install -y \
+    --no-install-recommends \
     curl \
     wget \
     python3 \
-    python3-pip \
-    openssh-server
+    python3-pip \
+    && rm -rf /var/lib/apt/lists/*

+RUN useradd -r -u 1000 appuser
+
 # ... other instructions ...

+USER appuser
+
-EXPOSE 22 80 443
+EXPOSE 80 443

Also applies to: 22-24

🧰 Tools
🪛 Trivy (0.69.3)

[error] 5-10: 'apt-get' missing '--no-install-recommends'

'--no-install-recommends' flag is missed: 'apt-get update && apt-get install -y curl wget python3 python3-pip openssh-server'

Rule: DS-0029

Learn more

(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/Dockerfile` around lines 5 - 10, The Dockerfile currently installs
openssh-server and has no USER directive, causing the container to run as root
and exposing unnecessary SSH access; remove openssh-server from the apt-get
install list, add --no-install-recommends to the apt-get install command to
shrink the image, and create/drop to a non-root account by adding a non-root
user and a USER directive (e.g., create a user and switch with USER) so the
container no longer runs as root; if runtime shell access is needed rely on the
orchestration runtime exec instead of bundling sshd.

Comment thread trivy/Dockerfile
Comment on lines +12 to +14
ADD https://example.com/installer/demoapp-bundle.tar.gz /tmp/bundle.tar.gz

RUN tar -xzf /tmp/bundle.tar.gz -C /opt

⚠️ Potential issue | 🟠 Major

ADD from remote URL bypasses integrity verification and caching.

Using ADD to fetch remote archives:

  • Cannot verify content integrity (no checksum validation)
  • Susceptible to MITM attacks
  • Breaks Docker layer caching on content changes

Prefer downloading with curl/wget and verifying checksums.

✅ Proposed fix with checksum verification
-ADD https://example.com/installer/demoapp-bundle.tar.gz /tmp/bundle.tar.gz
-
-RUN tar -xzf /tmp/bundle.tar.gz -C /opt
+RUN curl -fsSL -o /tmp/bundle.tar.gz https://example.com/installer/demoapp-bundle.tar.gz \
+    && echo "EXPECTED_SHA256  /tmp/bundle.tar.gz" | sha256sum -c - \
+    && tar -xzf /tmp/bundle.tar.gz -C /opt \
+    && rm /tmp/bundle.tar.gz
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/Dockerfile` around lines 12 - 14, Replace the insecure ADD(remote) step
with an explicit download-and-verify flow: download the archive with a reliable
tool (curl or wget) to /tmp/bundle.tar.gz, verify its checksum (e.g. SHA256)
using the expected checksum supplied via a build ARG/ENV, and only then extract
with the existing tar -xzf /tmp/bundle.tar.gz -C /opt; update the Dockerfile to
remove the ADD line and add the download, checksum verification, and extraction
steps (and expose the checksum via an ARG like BUNDLE_SHA256) so content
integrity and layer caching are preserved.

Comment thread trivy/Dockerfile.legacy
@@ -0,0 +1,13 @@
FROM ubuntu:latest

⚠️ Potential issue | 🟠 Major

Container runs as root and exposes SSH port.

The Dockerfile lacks a USER directive, causing the container to run as root (DS-0002). Combined with exposing port 22 (SSH), this creates significant attack surface if the container is compromised.

🛡️ Proposed fix for non-root user
 FROM ubuntu:latest
 
+RUN useradd -r -u 1000 -g root appuser
+
 # ... other instructions ...
 
+USER appuser
+
 CMD ["/app/legacy-agent"]

Also consider whether SSH exposure is necessary, or if exec-based access (docker exec, kubectl exec) would suffice.

Also applies to: 11-11

🧰 Tools
🪛 Trivy (0.69.3)

[error] 1-1: Image user should not be 'root'

Specify at least 1 USER command in Dockerfile with non-root user as argument

Rule: DS-0002

Learn more

(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/Dockerfile.legacy` at line 1, The Dockerfile currently starts FROM
ubuntu:latest and runs as root and exposes SSH; fix by creating a non-root user
and switching to it (add steps to create a user/group, set ownership on required
directories, and add a USER directive) and remove or justify any EXPOSE 22
entry; update the Dockerfile's build steps that require root to use temporary
root-stage RUN commands (or use root for those steps then chown) and ensure the
final image uses the new non-root user (reference the Dockerfile's USER
directive and any RUN steps that manipulate filesystem ownership).

Comment thread trivy/Dockerfile.legacy
Comment on lines +3 to +5
ENV API_TOKEN=internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41
ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
ENV AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

⚠️ Potential issue | 🔴 Critical

Critical: Credentials baked into Docker image layers are permanently exposed.

Environment variables set via ENV are embedded in image layers and recoverable by anyone with access to the image (via docker history or layer inspection). Even if these are example keys from AWS documentation, this pattern should never appear in production Dockerfiles.

Secrets should be injected at runtime via:

  • Container orchestration secrets (Kubernetes Secrets, ECS Secrets Manager integration)
  • Docker secrets mount (--secret flag)
  • Environment variables passed at docker run time
🔐 Proposed fix removing baked-in credentials
 FROM ubuntu:latest
 
-ENV API_TOKEN=internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41
-ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
-ENV AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+# Secrets should be injected at runtime, not baked into image
+# Example: docker run -e API_TOKEN=$API_TOKEN -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID ...
📝 Committable suggestion


Suggested change
-ENV API_TOKEN=internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41
-ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
-ENV AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+FROM ubuntu:latest
+# Secrets should be injected at runtime, not baked into image
+# Example: docker run -e API_TOKEN=$API_TOKEN -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID ...
🧰 Tools
🪛 Trivy (0.69.3)

[error] 3-3: Secrets passed via build-args or envs or copied secret files

Possible exposure of secret env "API_TOKEN" in ENV

Rule: DS-0031

Learn more

(IaC/Dockerfile)


[error] 4-4: Secrets passed via build-args or envs or copied secret files

Possible exposure of secret env "AWS_ACCESS_KEY_ID" in ENV

Rule: DS-0031

Learn more

(IaC/Dockerfile)


[error] 5-5: Secrets passed via build-args or envs or copied secret files

Possible exposure of secret env "AWS_SECRET_ACCESS_KEY" in ENV

Rule: DS-0031

Learn more

(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/Dockerfile.legacy` around lines 3 - 5, Remove the hard-coded ENV
entries (API_TOKEN, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) from
Dockerfile.legacy so credentials are not baked into image layers; instead update
the deployment/run instructions or container entrypoint to read these values
from runtime-provided sources (Kubernetes Secrets, ECS secret integration,
Docker secrets via --secret, or environment variables passed to docker run) and
validate presence at container start (e.g., fail fast if missing). Ensure no
secret literals remain in the Dockerfile, README, or build context and document
how to inject each secret at runtime.

Comment thread trivy/secrets.tf
Comment on lines +1 to +6
provider "aws" {
  alias      = "deploy"
  region     = "us-west-2"
  access_key = "AKIAIOSFODNN7EXAMPLE"
  secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}

⚠️ Potential issue | 🔴 Critical

Critical: AWS credentials hardcoded in provider configuration.

Embedding access_key and secret_key directly in Terraform files exposes credentials in version control and state files. These are the same canonical example keys appearing in Dockerfile.legacy.

AWS credentials should be provided via:

  • Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
  • Shared credentials file (~/.aws/credentials)
  • IAM instance profile / IRSA / OIDC federation
  • AWS Secrets Manager with aws_secretsmanager_secret_version data source
🔐 Proposed fix removing hardcoded credentials
 provider "aws" {
   alias  = "deploy"
   region = "us-west-2"
-  access_key = "AKIAIOSFODNN7EXAMPLE"
-  secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
+  # Credentials provided via environment or instance profile
 }
📝 Committable suggestion


Suggested change
-provider "aws" {
-  alias      = "deploy"
-  region     = "us-west-2"
-  access_key = "AKIAIOSFODNN7EXAMPLE"
-  secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
-}
+provider "aws" {
+  alias  = "deploy"
+  region = "us-west-2"
+  # Credentials provided via environment or instance profile
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/secrets.tf` around lines 1 - 6, The AWS provider block currently
hardcodes credentials (provider "aws" with alias "deploy" and fields
access_key/secret_key); remove the access_key and secret_key attributes and
instead configure the provider to use external credentials (env vars, shared
credentials file, or instance role). Update provider "aws" (alias "deploy") to
rely on the default credential chain and, if needed, add documentation or
variables pointing to AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY, a shared
credentials profile, or an aws_secretsmanager_secret_version data source so
secrets are not stored in the Terraform file or state.

Comment thread trivy/secrets.tf
Comment on lines +8 to +18
resource "aws_ssm_parameter" "datadog_key" {
  name  = "/demoapp/observability/datadog_api_key"
  type  = "String"
  value = "1234567890abcdef1234567890abcdef"
}

resource "aws_ssm_parameter" "internal_token" {
  name  = "/demoapp/internal/service_token"
  type  = "String"
  value = "internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41"
}

⚠️ Potential issue | 🟠 Major

SSM parameters use String type instead of SecureString for secrets.

Storing API keys and tokens with type = "String" means values are stored and displayed in plaintext. Use type = "SecureString" for automatic KMS encryption.

🔐 Proposed fix using SecureString
 resource "aws_ssm_parameter" "datadog_key" {
   name  = "/demoapp/observability/datadog_api_key"
-  type  = "String"
-  value = "1234567890abcdef1234567890abcdef"
+  type  = "SecureString"
+  value = var.datadog_api_key  # Pass via TF_VAR or secrets manager
 }
 
 resource "aws_ssm_parameter" "internal_token" {
   name  = "/demoapp/internal/service_token"
-  type  = "String"
-  value = "internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41"
+  type  = "SecureString"
+  value = var.internal_token  # Pass via TF_VAR or secrets manager
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/secrets.tf` around lines 8 - 18, Change the two SSM parameters defined
by aws_ssm_parameter.datadog_key and aws_ssm_parameter.internal_token from type
= "String" to type = "SecureString" so the values are stored encrypted; also set
or accept a kms_key_id (or use the default AWS managed key) when creating these
resources to ensure KMS encryption is applied and update any docs/variables that
supply the plaintext values accordingly.

Comment thread trivy/secrets.tf
Comment on lines +20 to +22
resource "aws_db_instance_password" "fallback" {
  password = "Sup3rS3cr3tP@ssword"
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Terraform AWS provider aws_db_instance_password resource

💡 Result:

There is no "aws_db_instance_password" resource in the Terraform AWS provider. Extensive searches across official Terraform Registry documentation, GitHub issues, and related resources show no evidence of such a resource existing as of 2026-04-25. Password management for AWS RDS DB instances (aws_db_instance) is handled directly within that resource via attributes like:

  • password: Plain-text password (stored in state, sensitive).
  • password_wo: Write-only password (Terraform 1.11+, not stored in state).
  • manage_master_user_password: Integrates with AWS Secrets Manager for managed passwords (recommended for security).

Example from official docs:

resource "aws_db_instance" "example" {
  username = "foo"
  password = "foobarbaz"
  # Or use password_wo or manage_master_user_password
  # ...
}

For secure practices, use the random_password resource or Secrets Manager integration. No separate resource for just the password exists; it's part of aws_db_instance or aws_rds_cluster.


Remove aws_db_instance_password resource—this resource does not exist in the Terraform AWS provider.

Terraform will fail during plan or apply. For RDS password management, use aws_db_instance directly with password, password_wo, or manage_master_user_password = true (for Secrets Manager integration).

Current code
resource "aws_db_instance_password" "fallback" {
  password = "Sup3rS3cr3tP@ssword"
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/secrets.tf` around lines 20-22: the resource aws_db_instance_password
"fallback" is invalid in the AWS Terraform provider; remove that resource and
instead set the DB password on an aws_db_instance resource (e.g., add or update
the password attribute on your aws_db_instance resource) or enable
manage_master_user_password = true to integrate with Secrets Manager and avoid
hardcoding; replace references to aws_db_instance_password.fallback with the
aws_db_instance's password handling (or reference the Secrets Manager secret) so
Terraform plan/apply will succeed.
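
A minimal sketch of the Secrets Manager route, assuming a Postgres instance (identifier, engine, and sizing are illustrative, not the repo's actual values):

```hcl
resource "aws_db_instance" "demo" {
  identifier                  = "demoapp-db"   # hypothetical
  engine                      = "postgres"
  instance_class              = "db.t3.micro"
  allocated_storage           = 20
  username                    = "demoapp"
  manage_master_user_password = true           # RDS generates and stores the password
                                               # in Secrets Manager; no password
                                               # attribute in config or state
}
```

Because manage_master_user_password conflicts with the password attribute, the hardcoded "Sup3rS3cr3tP@ssword" literal is dropped entirely rather than moved.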

Comment thread trivy/terraform.tfvars
Comment on lines +1 to +6
environment = "production"
region = "us-east-1"
db_username = "demoapp"
db_password = "Sup3rS3cr3tP@ssword"
admin_api_token = "internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41"
private_key_pem = "-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAyqXmSVk3...truncated...AAAA\n-----END RSA PRIVATE KEY-----"

⚠️ Potential issue | 🟡 Minor

Variables defined in tfvars but not declared or consumed.

Per the related context, network.tf hardcodes username and password directly instead of referencing var.db_username and var.db_password. Similarly, main.tf hardcodes the region. Without corresponding variable blocks in .tf files, these tfvars values are unused.

Either remove unused tfvars entries or add proper variable declarations and references.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/terraform.tfvars` around lines 1-6: the tfvars entries (db_username,
db_password, region, admin_api_token, private_key_pem) are not used because
Terraform files hardcode values; fix by adding variable declarations and
replacing hardcoded literals with variable references: create variable blocks
named "db_username", "db_password", "region", "admin_api_token", and
"private_key_pem" and update network.tf to use var.db_username and
var.db_password instead of hardcoded username/password and update main.tf to use
var.region; if admin_api_token and private_key_pem are not needed, remove them
from terraform.tfvars instead of adding variables.
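
The wiring described above could be sketched as follows (the aws_db_instance name is an assumption, not the repo's actual resource name):

```hcl
# variables.tf
variable "region" {
  type = string
}

variable "db_username" {
  type = string
}

variable "db_password" {
  type      = string
  sensitive = true
}

# main.tf -- replace the hardcoded region
provider "aws" {
  region = var.region
}

# network.tf -- replace the hardcoded credentials
resource "aws_db_instance" "demo" {
  username = var.db_username
  password = var.db_password
  # ...
}
```

With these declarations in place, the values in terraform.tfvars are picked up automatically at plan/apply time.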

Comment thread trivy/terraform.tfvars
Comment on lines +4 to +6
db_password = "Sup3rS3cr3tP@ssword"
admin_api_token = "internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41"
private_key_pem = "-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAyqXmSVk3...truncated...AAAA\n-----END RSA PRIVATE KEY-----"

⚠️ Potential issue | 🔴 Critical

Critical: Plaintext secrets committed to version control.

This file contains plaintext database password, API token, and RSA private key. Once committed, these secrets persist in git history even if removed later.

Production secrets should be:

  • Stored in a secrets manager (AWS Secrets Manager, HashiCorp Vault)
  • Referenced via data sources or environment variables
  • Never committed to version control
🔐 Recommended approach using environment variables or data sources
# variables.tf
variable "db_password" {
  type      = string
  sensitive = true
}

variable "admin_api_token" {
  type      = string
  sensitive = true
}

# Then set via environment: TF_VAR_db_password, TF_VAR_admin_api_token
# Or use data sources:
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "demoapp/db/password"
}

Remove terraform.tfvars from version control and add to .gitignore.
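
A .gitignore fragment covering the common variants (sketch; a committed terraform.tfvars.example with placeholder values can document the expected keys if needed):

```gitignore
# Never commit real variable values
terraform.tfvars
*.auto.tfvars
```

Note this only prevents future commits; the secrets already pushed here remain in git history and must be rotated regardless.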

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/terraform.tfvars` around lines 4-6: the committed terraform.tfvars
contains plaintext secrets (db_password, admin_api_token, private_key_pem);
remove these literal values from the file and from version control history, add
terraform.tfvars to .gitignore, and replace them by referencing secure sources:
create sensitive variables in variables.tf (e.g., db_password, admin_api_token,
private_key_pem with sensitive = true) and load their values via TF_VAR env vars
or a secrets data source (e.g., aws_secretsmanager_secret_version) in your
Terraform config; rotate the exposed credentials and purge them from Git history
after migrating to the secrets manager.
