Create infra.ts #6

Open · wants to merge 1 commit into base: main

Conversation


@ZIJ commented May 29, 2025

No description provided.


Terracotta detected changes in your CDK files. Running an initial plan and review of your changes – please hold on for a moment while we process your request.

🔍 Need help? View the Getting Started Guide


In order to run a Terraform/CDK plan, a backend credential is required for this repository. Please add the appropriate credentials in the Terracotta app.

🔍 Need help? View How to Add Credentials


🔍 What’s Wrong?

The error cdk.tf.json not found in synthesized output indicates that the CDK for Terraform (cdktf) synthesis step did not produce the expected Terraform JSON output file (cdk.tf.json). This file is essential because cdktf internally generates Terraform configuration in JSON format during synthesis, which is then used for the Terraform apply or plan steps.
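
For context, a cdktf entry point typically looks like the following minimal TypeScript sketch (the stack name InfraStack is illustrative, not taken from this PR). Running cdktf synth executes this program, and app.synth() is what writes the Terraform JSON for each stack:

// Minimal cdktf application sketch; providers and resources are omitted.
import { App, TerraformStack } from "cdktf";
import { Construct } from "constructs";

class InfraStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    // Providers and resources would be declared here.
  }
}

const app = new App();
new InfraStack(app, "infra");
// Writes the synthesized Terraform JSON (cdk.tf.json) into the configured output directory.
app.synth();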

Common reasons for this failure include:

  1. Synthesis was not run or failed silently — The cdktf synth command, responsible for generating cdk.tf.json, may not have been executed before the Terraform review step, or it may have exited with an error that was not captured.

  2. Incorrect output directory or changed project structure — The cdk.tf.json file may have been produced in a different directory, or the output directory has been misconfigured or ignored.

  3. Build or compilation errors in the cdktf app (TypeScript/Python/other) — Errors in the cdktf code can prevent successful synthesis.

  4. Missing dependencies or environment setup — Failing to run npm install, pip install, or other required build steps will prevent successful synthesis.

  5. CI/CD pipeline misconfiguration — The pipeline might be running Terraform analyzer on source code instead of synthesized output artifacts.


Recommended Fix

To resolve this issue, ensure the following steps are explicitly performed and verifiable in your build or pipeline before Terraform analysis:

  1. Run the synthesis command explicitly

For both TypeScript/JavaScript and Python cdktf projects, the command is the same:

cdktf synth

This command generates the cdk.tf.json file in the project root (or the configured output directory).

  2. Verify the output file exists

Right after synthesis, run a check to confirm the file:

test -f cdk.tf.json && echo "Synthesis successful" || echo "cdk.tf.json not found"

  3. Run Terraform commands on the synthesized output (cdk.tf.json)

The Terraform plan or security linting tools should use this synthesized file, not the raw cdktf source code.

  4. Example CI step snippet
# Install dependencies
npm install          # or pip install -r requirements.txt for Python

# Compile code (if TypeScript)
npm run build        # optional and depends on your setup

# Synthesize Terraform
cdktf synth

# Confirm synthesis output
if [ ! -f cdk.tf.json ]; then
  echo "cdk.tf.json not found. Synthesis failed."
  exit 1
fi

# Run terraform commands (plan, validate, scan)
terraform init
terraform validate
terraform plan -input=false

🛡 Preventive Measures

  1. Implement automated testing of synthesis

Add unit or integration tests to your CI pipeline to confirm that cdktf synth succeeds and outputs the expected file before running further Terraform commands; a minimal test sketch follows this list.

  2. Enforce clear build dependencies

Install toolchain dependencies explicitly and document them in your pipeline manifests; for example, npm ci or pip install must run before synthesis.

  3. Add linter/static analysis on cdktf code

Use tooling like eslint for TypeScript or flake8 for Python to catch synthesis-blocking errors early.

  4. Validate presence of cdk.tf.json in pre-checks

Incorporate file existence checks in the pipeline so failures are detected early and clearly.

  5. Explicitly separate code and generated artifacts

Avoid running Terraform scans, plans, or applies on the source code directory. Always run them on the generated cdk.tf.json.
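
Returning to point 1 above, the following is a minimal synthesis smoke test using Jest and cdktf's Testing helpers. The stack class MyStack and its import path are assumptions about the project layout, not taken from this repository:

// Minimal synthesis smoke test (Jest); MyStack is a placeholder for the project's stack class.
import { Testing } from "cdktf";
import { MyStack } from "../main";

describe("cdktf synthesis", () => {
  it("produces parseable, non-empty Terraform JSON", () => {
    const app = Testing.app();
    const stack = new MyStack(app, "test");

    // Testing.synth() returns the synthesized Terraform configuration as a JSON string.
    const synthesized = Testing.synth(stack);

    expect(synthesized.length).toBeGreaterThan(0);
    expect(() => JSON.parse(synthesized)).not.toThrow();
  });
});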


Summary

Use the following commands to produce the necessary Terraform config and avoid the cdk.tf.json missing error:

npm install      # or pip install dependencies
cdktf synth      # generates cdk.tf.json
test -f cdk.tf.json && echo "Found cdk.tf.json" || exit 1
terraform init
terraform validate
terraform plan

Integrate these steps systematically in your CI/CD to guarantee the Terraform review step can successfully analyze synthesized Terraform code.

@ZIJ closed this May 29, 2025
@ZIJ reopened this May 29, 2025

Terracotta detected changes in your CDK files. Running an initial plan and review of your changes – please hold on for a moment while we process your request.

🔍 Need help? View the Getting Started Guide


In order to run a Terraform/CDK plan, a backend credential is required for this repository. Please add the appropriate credentials in the Terracotta app.

🔍 Need help? View How to Add Credentials


🔍 What’s Wrong?

The error cdk.tf.json not found in synthesized output means that the CDK for Terraform (cdktf) synthesis process, which generates Terraform configuration in the JSON file cdk.tf.json, did not produce this output file. This typically happens because the synthesis step (cdktf synth) was either not run, failed during execution, or the output file location differs from the expected default.

Without cdk.tf.json, downstream Terraform commands or linting tools that analyze the synthesized Terraform cannot proceed, causing the Terraform review step to fail.


Recommended Fix

To resolve this issue, ensure that your workflow explicitly runs the synthesis step and verifies its success before attempting Terraform operations:

  1. Run synthesis command:
cdktf synth

This command generates cdk.tf.json in the root of your cdktf project by default.

  2. Verify the output file exists and is non-empty:
if [ ! -s cdk.tf.json ]; then
  echo "Error: cdk.tf.json not found or empty after synthesis."
  exit 1
fi

  3. Proceed with Terraform commands only after successful synthesis:
terraform init
terraform validate
terraform plan

🛡 Preventive Measures

  1. Enforce synthesis as a mandatory pre-step in CI pipelines before any Terraform commands.

  2. Add automated checks to confirm that cdk.tf.json is generated and valid after running cdktf synth. If not, fail early with a clear error message.

  3. Use static code analysis tools (e.g., ESLint for TypeScript, Pylint/Flake8 for Python) on your cdktf source code to catch issues that block synthesis.

  4. Integrate terraform fmt and terraform validate on synthesized JSON as part of your CI/CD to catch formatting or syntax problems early.

  5. Document build and synthesis steps clearly in README or contributing guides, requiring that all developers and CI jobs follow the same sequence.

  6. Pin the cdktf version and its dependencies to prevent upgrades that silently break synthesis.


Production-Ready Synthesis and Terraform Workflow Snippet

# Install dependencies
npm ci                  # or pip install -r requirements.txt for Python projects

# Run synthesis step
cdktf synth

# Validate synthesis output exists
if [ ! -f cdk.tf.json ]; then
  echo "cdk.tf.json not found. CDKTF synthesis failed."
  exit 1
fi

# Terraform lifecycle commands
terraform init
terraform validate
terraform plan -out=tfplan.binary

Use the above snippet to ensure the Terraform review step always runs against generated Terraform JSON, resolving errors stemming from missing synthesized outputs.


This sequence enforces a robust IaC pipeline that prevents missing or incomplete synthesized Terraform configurations, ensuring that cdk.tf.json is available for Terraform tooling during reviews and CI.

@ZIJ closed this May 29, 2025
@ZIJ reopened this May 29, 2025

Terracotta detected changes in your CDK files. Running an initial plan and review of your changes – please hold on for a moment while we process your request.

🔍 Need help? View the Getting Started Guide


In order to run a Terraform/CDK plan, a backend credential is required for this repository. Please add the appropriate credentials in the Terracotta app.

🔍 Need help? View How to Add Credentials


🔍 What’s Wrong?

The error cdk.tf.json not found in synthesized output indicates the CDK for Terraform (cdktf) synthesis command did not generate the required cdk.tf.json file. This file is essential because it contains the generated Terraform configuration that Terraform commands and static analysis tools expect.

This failure is typically caused by one or more of the following:

  • The cdktf synth command was not run before the Terraform review step executes.
  • The synthesis process failed silently or with errors (e.g., TypeScript/Python compilation errors, missing dependencies).
  • The synthesis output was redirected to a non-default location or cleaned up unintentionally before the review step.
  • The CI/CD pipeline or local workflow is executing Terraform commands directly on source code instead of synthesized code.

Without this JSON file, Terraform cannot proceed with validation, planning, or scanning, causing the review step to fail.


Recommended Fix

Implement the following CLI workflow to guarantee successful synthesis and Terraform review:

# 1. Install dependencies
npm ci                              # or pip install -r requirements.txt for Python projects

# 2. Run build step (if applicable)
npm run build                       # For TypeScript projects with separate build step; omit if unnecessary

# 3. Run synthesis step to produce cdk.tf.json
cdktf synth

# 4. Validate cdk.tf.json presence and non-emptiness
if [ ! -s cdk.tf.json ]; then
  echo "Error: cdk.tf.json file missing or empty after synthesis."
  exit 1
fi

# 5. Initialize Terraform based on synthesized files
terraform init

# 6. Validate Terraform configuration syntax
terraform validate

# 7. Run Terraform plan (or static security scanning)
terraform plan -input=false

Ensure that your CI pipeline or local workflow always runs the above steps in this order before attempting any Terraform plans or linting. This will make cdk.tf.json available and valid for further steps.


🛡 Preventive Measures

  1. Integrate synthesis checks as a mandatory gate in your CI/CD pipeline before executing Terraform commands.
  2. Use static code analysis tools (e.g., eslint for TypeScript, flake8/pylint for Python) on cdktf source code to catch syntax or logic errors that would block synthesis.
  3. Add post-synthesis verification steps to confirm the expected output files exist and contain valid Terraform JSON.
  4. Document and enforce a consistent build & synthesis process for all developers and automation pipelines.
  5. Pin the cdktf version in your project dependencies to ensure reproducible builds and avoid silent breaking changes.
  6. Run terraform fmt and terraform validate against synthesized JSON regularly to catch formatting and syntactic issues early.

Following this process ensures your Terraform review step has the required synthesized JSON files and avoids failures due to missing or invalid cdktf outputs.

@ZIJ closed this May 29, 2025
@ZIJ reopened this May 29, 2025

Terracotta detected changes in your CDK files. Running an initial plan and review of your changes – please hold on for a moment while we process your request.

🔍 Need help? View the Getting Started Guide


In order to run a Terraform/CDK plan, a backend credential is required for this repository. Please add the appropriate credentials in the Terracotta app.

🔍 Need help? View How to Add Credentials


🔍 What’s Wrong?

The error message cdk.tf.json not found in synthesized output clearly indicates that the CDK for Terraform (cdktf) synthesis step did not produce the expected Terraform JSON configuration file named cdk.tf.json. Since this synthesized JSON file contains the generated Terraform code that downstream tools use (validation, planning, scanning), its absence leads to failure of the Terraform review step.

This situation commonly occurs because:

  • The cdktf synth command was not executed prior to the review step.
  • The synthesis process failed silently or with unnoticed errors (e.g., missing dependencies, compile errors in cdktf code).
  • The output file was placed in a non-default directory or renamed.
  • Pipeline or automation is attempting to run Terraform commands directly on cdktf source code, instead of the synthesized Terraform JSON.
  • The build or environment does not meet prerequisites such as having necessary node or Python packages installed.

Recommended Fix

Use the following CLI sequence in your build and CI pipeline to guarantee that cdk.tf.json is generated properly before Terraform review:

# 1. Install dependencies (adjust for your language)
npm ci                                # For TypeScript/JavaScript projects
# or
pip install -r requirements.txt       # For Python projects

# 2. Build project (if applicable for TypeScript)
npm run build                        # Only if you have a separate build step

# 3. Execute synthesis to generate cdk.tf.json
cdktf synth

# 4. Validate the synthesized output presence and non-empty status
if [[ ! -s cdk.tf.json ]]; then
  echo "Error: cdk.tf.json missing or empty after synthesis"
  exit 1
fi

# 5. Initialize Terraform in the output directory
terraform init

# 6. Validate Terraform configuration
terraform validate

# 7. Run terraform plan or terraform security scans
terraform plan -input=false

Make sure to invoke this command sequence in the working directory where your cdktf app and cdk.tf.json reside, and ensure the CI environment has the correct Node.js or Python runtime and dependencies installed.


🛡 Preventive Measures

  1. Introduce mandatory synthesis validation in your CI/CD pipeline prior to any terraform commands. The build must fail fast if synthesis does not succeed.

  2. Run linters and static analyzers on your cdktf source code to catch syntax or semantic errors that block synthesis:

    • TypeScript: eslint with strict settings
    • Python: flake8, pylint
  3. Add file existence checks for cdk.tf.json after synthesis to prevent silent failures.

  4. Document build and synth commands clearly in README.md or contributing guides, and require their execution before terraform reviews/plans.

  5. Lock your cdktf and Terraform versions in package.json, requirements.txt, or Terraform CLI version files to avoid unexpected breaking changes in tool behavior.

  6. Ensure CI runners have matching runtimes and dependencies with local dev environments to reduce discrepancies.

  7. Use terraform fmt and terraform validate on generated outputs regularly to maintain code quality.


This guidance ensures cdk.tf.json is present by running synthesis with its prerequisites in place, avoiding downstream Terraform failures caused by missing synthesized output.


infrabaseai bot commented May 29, 2025

🛡️ Security Analysis Results

Found 8 security issues:

Severity | Issue | Location | Recommendation
🔴 Critical | Public S3 bucket with disabled public access bl... | cdk-test/infra.ts:15 | Remove publicReadAccess: true or restrict it to specifi...
🔴 Critical | Security Group wide-open ingress rule | cdk-test/infra.ts:31 | Restrict the ingress rule to only the required IP ranges ...
🔴 Critical | IAM role with administrator-level managed policy | cdk-test/infra.ts:36 | Define a custom IAM policy or attach only the specific AW...
🔴 Critical | RDS instance is publicly accessible | cdk-test/infra.ts:45 | Set publiclyAccessible: false and place the database in...
🔴 Critical | RDS instance has storage encryption disabled | cdk-test/infra.ts:50 | Enable encryption at rest by setting `storageEncrypted: t...
🟡 Warning | S3 bucket missing server-side encryption | cdk-test/infra.ts:15 | Enable encryption by specifying `encryption: s3.BucketEnc...
🟡 Warning | VPC created as raw resource with only public su... | cdk-test/infra.ts:26 | Include PRIVATE_WITH_NAT or ISOLATED subnet configura...
🔵 Info | S3 bucket versioning is disabled | cdk-test/infra.ts:15 | Consider enabling versioning by setting versioned: true...

📋 Detailed Descriptions

🔴 Public S3 bucket with disabled public access block settings

File: cdk-test/infra.ts (Line 15-22)

Description: The S3 bucket is created with publicReadAccess: true and blockPublicAccess: s3.BlockPublicAccess.NONE, allowing anyone on the Internet to read bucket contents and allowing public bucket policies to be attached. This violates best practices for S3 security.

💡 Recommendation: Remove publicReadAccess: true or restrict it to specific principals. Enable public access blocks by setting blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL (or at minimum BLOCK_PUBLIC_ACLS and BLOCK_PUBLIC_POLICY) and ensure the bucket ACL is private unless this is explicitly a website bucket.
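
A minimal sketch of that hardened configuration in aws-cdk-lib (construct IDs and the stack wrapper are illustrative, not taken from this PR; encryption and versioning are included because they are flagged separately below):

// Hardened bucket sketch; names are placeholders.
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

export class HardenedBucketStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new s3.Bucket(this, 'PrivateBucket', {
      // No publicReadAccess; objects stay private by default.
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      encryption: s3.BucketEncryption.S3_MANAGED,
      versioned: true,
    });
  }
}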


🔴 Security Group wide-open ingress rule

File: cdk-test/infra.ts (Line 31-33)

Description: The security group OpenSg allows all inbound traffic from 0.0.0.0/0 on all ports (ec2.Port.allTraffic()). This exposes any attached resources to the entire Internet.

💡 Recommendation: Restrict the ingress rule to only the required IP ranges and ports. For example, use ec2.Peer.ipv4('203.0.113.0/24') and specify only the necessary port (e.g., ec2.Port.tcp(22) for SSH).


🔴 IAM role with administrator-level managed policy

File: cdk-test/infra.ts (Line 36-42)

Description: The OverPermissiveRole is granted the AWS-managed AdministratorAccess policy, giving it full access to every AWS service and resource. This violates the principle of least privilege.

💡 Recommendation: Define a custom IAM policy or attach only the specific AWS managed policies that grant the minimal set of permissions required for your application to function.


🔴 RDS instance is publicly accessible

File: cdk-test/infra.ts (Line 45-52)

Description: The RDS instance is created with publiclyAccessible: true and resides in public subnets. This exposes your database endpoints directly to the Internet.

💡 Recommendation: Set publiclyAccessible: false and place the database in private or isolated subnets. Use security groups and VPN/Direct Connect or Bastion hosts for legitimate access.


🔴 RDS instance has storage encryption disabled

File: cdk-test/infra.ts (Line 50-52)

Description: The RDS instance is created with storageEncrypted: false, meaning data at rest in the database is not encrypted.

💡 Recommendation: Enable encryption at rest by setting storageEncrypted: true and, optionally, specifying a KMS key with kmsKey.


🟡 S3 bucket missing server-side encryption

File: cdk-test/infra.ts (Line 15-22)

Description: The S3 bucket is created without any server-side encryption configuration. Data at rest in S3 should always be encrypted to meet compliance and security requirements.

💡 Recommendation: Enable encryption by specifying encryption: s3.BucketEncryption.S3_MANAGED (or use a KMS key) in the bucket definition.


🟡 VPC created as raw resource with only public subnets

File: cdk-test/infra.ts (Line 26-29)

Description: A raw ec2.Vpc is defined with only public subnets, which may lead to placing critical resources in publicly routable subnets. You should use private or isolated subnets for backend services.

💡 Recommendation: Include PRIVATE_WITH_NAT or ISOLATED subnet configurations for backend resources (e.g., databases) and place them there. For consistency and maintainability, consider using your organization's common VPC CDK module instead of defining the VPC inline.


🔵 S3 bucket versioning is disabled

File: cdk-test/infra.ts (Line 15-22)

Description: Versioning is turned off for this bucket. Without versioning, accidental or malicious deletions/overwrites cannot be recovered.

💡 Recommendation: Consider enabling versioning by setting versioned: true to protect against unintended object deletions or overwrites.

📊 Summary

  • 🔴 Critical: 5
  • 🟡 Warning: 2
  • 🔵 Info: 1

🤖 Analysis powered by Infrabase AI


try-terracotta bot commented May 29, 2025

🔍 What’s Wrong?

Your Terraform code synthesized via CDKTF is failing the review step because the file cdk.tf.json—the essential output of cdktf synth—is missing. This prevents Terraform commands and static analyzers from operating on the expected generated Terraform JSON configuration.

Separately, your infrastructure code (cdk-test/infra.ts) has multiple critical security issues that would fail security review if the Terraform review succeeded. Among these are:

  • A publicly accessible S3 bucket with no public access blocks and no encryption
  • A security group open to all ingress on all ports
  • An IAM role with administrator-level access
  • A publicly accessible, unencrypted RDS instance
  • VPC configuration using only public subnets for all resources (not recommended)

Addressing the missing cdk.tf.json file is step one to enable any further validation or security scanning.


Recommended Fix

  1. Fix the missing cdk.tf.json via proper cdktf synthesis

Make sure your project runs these commands in the project root where your cdktf app resides before the Terraform review step:

# Install dependencies (adjust for your language)
npm ci                       # for TypeScript
# or
pip install -r requirements.txt  # for Python

# Optional build step (TypeScript only)
npm run build                # if you have a build script, else omit

# Run synthesis to generate cdk.tf.json
cdktf synth

# Confirm the synthesized output exists
if [[ ! -s cdk.tf.json ]]; then
  echo "Error: cdk.tf.json missing or empty after synthesis"
  exit 1
fi

# Afterwards, run terraform commands on the synthesized JSON
terraform init
terraform validate
terraform plan -input=false

  2. Secure your CDK infrastructure code by addressing the critical security issues

Use the production-ready CDK patterns below to fix the critical issues in your cdk-test/infra.ts file:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as rds from 'aws-cdk-lib/aws-rds';

export class SecureStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Secure S3 bucket with encryption and public access blocked
    const bucket = new s3.Bucket(this, 'SecureBucket', {
      bucketName: 'my-secure-bucket-001',
      encryption: s3.BucketEncryption.S3_MANAGED, // Enable SSE-S3 encryption
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL, // Block all public access
      versioned: true, // Enable versioning
      removalPolicy: cdk.RemovalPolicy.RETAIN,
      autoDeleteObjects: false,
    });

    // VPC with public + private isolated subnets for security
    const vpc = new ec2.Vpc(this, 'SecureVpc', {
      maxAzs: 2,
      subnetConfiguration: [
        {
          cidrMask: 24,
          name: 'public',
          subnetType: ec2.SubnetType.PUBLIC,
        },
        {
          cidrMask: 24,
          name: 'private',
          subnetType: ec2.SubnetType.PRIVATE_WITH_NAT,
        },
      ],
    });

    // Security Group allowing only limited ingress (e.g., SSH from narrow IP)
    const sg = new ec2.SecurityGroup(this, 'RestrictedSg', {
      vpc,
      description: 'Allow SSH from corporate CIDR',
      allowAllOutbound: true,
    });
    sg.addIngressRule(
      ec2.Peer.ipv4('203.0.113.0/24'), // Restrict to known IP range
      ec2.Port.tcp(22),
      'Allow SSH from trusted IP range'
    );

    // IAM role with least privilege policy example (no administrator access)
    const role = new iam.Role(this, 'LeastPrivilegeRole', {
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
      description: 'Role with scoped permissions',
    });
    const policy = new iam.Policy(this, 'ExamplePolicy', {
      statements: [
        new iam.PolicyStatement({
          actions: ['s3:GetObject'],
          resources: [bucket.bucketArn + '/*'],
        }),
      ],
    });
    role.attachInlinePolicy(policy);

    // RDS instance placed in private subnets with encryption enabled
    new rds.DatabaseInstance(this, 'SecureDb', {
      engine: rds.DatabaseInstanceEngine.postgres({
        version: rds.PostgresEngineVersion.VER_15,
      }),
      vpc,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_NAT },
      publiclyAccessible: false,
      storageEncrypted: true,
      allocatedStorage: 20,
      credentials: rds.Credentials.fromGeneratedSecret('postgres'),
      removalPolicy: cdk.RemovalPolicy.RETAIN,
    });
  }
}

🛡 Preventive Measures

  1. Automate synthesis checks: Add a CI job step to validate that cdktf synth produces cdk.tf.json and fail fast if missing.

  2. Run static analyzers on cdktf source: Use eslint (TypeScript) or flake8/pylint (Python) before synthesis to reduce compile-time errors.

  3. Incorporate security scanning: Use tools like tfsec or checkov on the synthesized JSON, or cdk-nag inside the CDK app itself, to detect risky configurations early (see the sketch after this list).

  4. Define strict CI/CD pipeline sequences:

    • Install dependencies
    • Build (if necessary)
    • Synthesize (cdktf synth)
    • Validate synthesized Terraform (terraform validate)
    • Run security scans and plan
  5. Enforce least privilege IAM roles: Avoid attaching overly permissive managed policies such as AdministratorAccess. Instead, define explicit policies scoped to resource needs.

  6. Block public access by default: For S3 buckets and other public resources, ensure public access is blocked unless explicitly intended.

  7. Use private subnets for sensitive resources: RDS and EC2 instances with internal workloads should reside in private or isolated subnets, avoiding direct internet exposure.
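
As referenced in point 3 above, cdk-nag can be attached to an aws-cdk-lib app as an Aspect so that rule violations surface at synth time. This is a sketch only; it assumes the cdk-nag package is installed and that the SecureStack class shown earlier is exported from ./infra:

// Sketch: wiring cdk-nag's AWS Solutions rule pack into the app entry point.
import { App, Aspects } from 'aws-cdk-lib';
import { AwsSolutionsChecks } from 'cdk-nag';
import { SecureStack } from './infra'; // assumed location of the stack class

const app = new App();
new SecureStack(app, 'SecureStack');

// Rule violations are reported as annotations during synthesis; errors fail the synth,
// so insecure configurations are caught before any plan or apply runs.
Aspects.of(app).add(new AwsSolutionsChecks({ verbose: true }));

app.synth();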


Following this approach ensures your CI pipeline produces valid Terraform artifacts for review, and your infrastructure code complies with best security practices to pass detailed security analysis in review steps.

Comment on lines +15 to +22
const bucket = new s3.Bucket(this, 'PublicBucket', {
  bucketName: 'my-public-bucket-001',
  publicReadAccess: true,
  blockPublicAccess: s3.BlockPublicAccess.NONE, // 🔴 no block-public-access settings
  versioned: false,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
});

Check failure

Code scanning / Infrabase AI

S3 Buckets Must Block Public Access Error

In cdk-test/infra.ts (lines ~14-21), a new S3 bucket is created with publicReadAccess set to true, blockPublicAccess disabled, versioning off, and no server-side encryption. This exposes all objects to anonymous users and stores data unencrypted at rest. Recommendation: Remove publicReadAccess or scope it to a narrow set of principals. Enable BlockPublicAccess.BLOCK_ALL (or at minimum block public ACLs and policies), set the ACL to private, and configure server-side encryption (e.g. bucketEncryption: s3.BucketEncryption.S3_MANAGED). Enable versioning to protect against accidental deletes if appropriate.
Comment on lines +26 to +29
subnetConfiguration: [
  { name: 'public', subnetType: ec2.SubnetType.PUBLIC },
],
maxAzs: 2,

Check warning

Code scanning / Infrabase AI

No raw resources when possible Warning

In cdk-test/infra.ts (lines ~36-40), the IAM role "OverPermissiveRole" is assigned the AWS-managed AdministratorAccess policy, granting broad privileges across all services and resources. Recommendation: Follow the principle of least privilege. Define a custom IAM policy or attach only the specific AWS-managed policies that grant the minimal set of actions and resources required by your workload.
Comment on lines +31 to +33

const sg = new ec2.SecurityGroup(this, 'OpenSg', {
  vpc,

Check failure

Code scanning / Infrabase AI

No raw resources when possible Error

In cdk-test/infra.ts (lines ~29-33), the security group "OpenSg" allows all inbound IPv4 traffic on all ports (0.0.0.0/0). This effectively exposes any resources in the VPC to the entire Internet. Recommendation: Restrict ingress rules to only the required ports and trusted CIDR ranges. For example, replace ec2.Peer.anyIpv4() and ec2.Port.allTraffic() with specific Port.tcp(portNumber) and your organization’s IP ranges.
Comment on lines +36 to +42
});
sg.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.allTraffic(), 'Wide-open SG'); // 🔴 0.0.0.0/0 ALL

/* 3️⃣ Wild-card IAM permissions */
const role = new iam.Role(this, 'OverPermissiveRole', {
  assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
  description: 'Wild-card role for demo',

Check failure

Code scanning / Infrabase AI

S3 Buckets Must Block Public Access Error

In cdk-test/infra.ts (lines ~43-50), an Amazon RDS instance is created with publiclyAccessible set to true, storageEncrypted disabled, and removalPolicy DESTROY. The database is exposed to the Internet and data at rest is unencrypted. Recommendation: Set publiclyAccessible to false unless absolutely required. Enable storageEncrypted: true and specify a KMS key if you need customer-managed encryption keys. Consider setting removalPolicy to RETAIN or enabling deletionProtection in production environments.
@ZIJ closed this May 29, 2025
@ZIJ reopened this May 29, 2025

Terracotta detected changes in your CDK files. Running an initial plan and review of your changes – please hold on for a moment while we process your request.

🔍 Need help? View the Getting Started Guide


In order to run a Terraform/CDK plan, a backend credential is required for this repository. Please add the appropriate credentials in the Terracotta app.

🔍 Need help? View How to Add Credentials


infrabaseai bot commented May 29, 2025

🛡️ Security Analysis Results

Found 6 security issues:

Severity | Issue | Location | Recommendation
🔴 Critical | Public S3 bucket with no public access controls... | cdk-test/infra.ts:14 | Remove publicReadAccess or scope it to a narrow set of pr...
🔴 Critical | Security Group open to all traffic | cdk-test/infra.ts:29 | Restrict ingress rules to only the required ports and tru...
🔴 Critical | Publicly accessible, unencrypted RDS instance | cdk-test/infra.ts:43 | Set publiclyAccessible to false unless absolutely require...
🟡 Warning | IAM Role granted AdministratorAccess wildcard p... | cdk-test/infra.ts:36 | Follow the principle of least privilege. Define a custom ...
🔵 Info | Raw VPC resource instead of internal module | cdk-test/infra.ts:23 | Use the internal VPC module to ensure consistency: e.g., ...
⚪ Note | Destructive removal policies enabled | cdk-test/infra.ts:20 | In non-development environments, use RemovalPolicy.RETAIN...

📋 Detailed Descriptions

🔴 Public S3 bucket with no public access controls or encryption

File: cdk-test/infra.ts (Line 14-21)

Description: In cdk-test/infra.ts (lines ~14-21), a new S3 bucket is created with publicReadAccess set to true, blockPublicAccess disabled, versioning off, and no server-side encryption. This exposes all objects to anonymous users and stores data unencrypted at rest.

💡 Recommendation: Remove publicReadAccess or scope it to a narrow set of principals. Enable BlockPublicAccess.BLOCK_ALL (or at minimum block public ACLs and policies), set the ACL to private, and configure server-side encryption (e.g. bucketEncryption: s3.BucketEncryption.S3_MANAGED). Enable versioning to protect against accidental deletes if appropriate.


🔴 Security Group open to all traffic

File: cdk-test/infra.ts (Line 29-33)

Description: In cdk-test/infra.ts (lines ~29-33), the security group "OpenSg" allows all inbound IPv4 traffic on all ports (0.0.0.0/0). This effectively exposes any resources in the VPC to the entire Internet.

💡 Recommendation: Restrict ingress rules to only the required ports and trusted CIDR ranges. For example, replace ec2.Peer.anyIpv4() and ec2.Port.allTraffic() with specific Port.tcp(portNumber) and your organization’s IP ranges.


🔴 Publicly accessible, unencrypted RDS instance

File: cdk-test/infra.ts (Line 43-50)

Description: In cdk-test/infra.ts (lines ~43-50), an Amazon RDS instance is created with publiclyAccessible set to true, storageEncrypted disabled, and removalPolicy DESTROY. The database is exposed to the Internet and data at rest is unencrypted.

💡 Recommendation: Set publiclyAccessible to false unless absolutely required. Enable storageEncrypted: true and specify a KMS key if you need customer-managed encryption keys. Consider setting removalPolicy to RETAIN or enabling deletionProtection in production environments.
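
A minimal sketch of these settings in aws-cdk-lib (the stack, VPC, and key names are illustrative; a customer-managed key is supplied via the storageEncryptionKey property):

// Private, encrypted RDS instance with a customer-managed KMS key (illustrative names).
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as kms from 'aws-cdk-lib/aws-kms';
import * as rds from 'aws-cdk-lib/aws-rds';
import { Construct } from 'constructs';

export class EncryptedDbStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'DbVpc', { maxAzs: 2 });
    const dbKey = new kms.Key(this, 'DbKey', { enableKeyRotation: true });

    new rds.DatabaseInstance(this, 'EncryptedDb', {
      engine: rds.DatabaseInstanceEngine.postgres({
        version: rds.PostgresEngineVersion.VER_15,
      }),
      vpc,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
      publiclyAccessible: false,
      storageEncrypted: true,
      storageEncryptionKey: dbKey,
      deletionProtection: true,
      removalPolicy: cdk.RemovalPolicy.RETAIN,
    });
  }
}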


🟡 IAM Role granted AdministratorAccess wildcard permission

File: cdk-test/infra.ts (Line 36-40)

Description: In cdk-test/infra.ts (lines ~36-40), the IAM role "OverPermissiveRole" is assigned the AWS-managed AdministratorAccess policy, granting broad privileges across all services and resources.

💡 Recommendation: Follow the principle of least privilege. Define a custom IAM policy or attach only the specific AWS-managed policies that grant the minimal set of actions and resources required by your workload.


🔵 Raw VPC resource instead of internal module

File: cdk-test/infra.ts (Line 23-27)

Description: In cdk-test/infra.ts (lines ~23-27), a VPC is created directly using ec2.Vpc. Your organization maintains an internal VPC module (git::https://github.com/diggerhq/common-modules//vpc) that enforces standard tagging, flow logging, and subnet layouts.

💡 Recommendation: Use the internal VPC module to ensure consistency: e.g., new ModuleVPC(this, 'MyVpc', { source: 'git::https://github.com/diggerhq/common-modules//vpc', /* module inputs */ });


⚪ Destructive removal policies enabled

File: cdk-test/infra.ts (Line 20-50)

Description: In cdk-test/infra.ts (lines ~20 and ~50), both the S3 bucket and the RDS instance use removalPolicy: DESTROY. This configuration causes permanent data loss upon stack deletion.

💡 Recommendation: In non-development environments, use RemovalPolicy.RETAIN or enable snapshot/deletionProtection options to prevent accidental data loss.

📊 Summary

  • 🔴 Critical: 3
  • 🟡 Warning: 1
  • 🔵 Info: 1
  • Note: 1

🤖 Analysis powered by Infrabase AI


try-terracotta bot commented May 29, 2025

🔍 What’s Wrong?

The Terraform review step failed because cdk.tf.json — the synthesized Terraform JSON file generated by CDK for Terraform (cdktf) — was not found. This file is essential as it contains the Terraform configuration synthesized from your cdktf code, and without it, Terraform cannot proceed with validation or plan steps.

Additionally, your infrastructure code (cdk-test/infra.ts) has multiple critical security issues, including:

  • A publicly accessible S3 bucket without any public access blocks or encryption.
  • A security group open to all IPs on all ports, exposing resources to the Internet.
  • An RDS instance exposed publicly and unencrypted.
  • An IAM role with AdministratorAccess wildcard permissions—violating least privilege.
  • Direct creation of a raw VPC resource instead of using the internal VPC module, causing inconsistency.
  • Use of destructive removal policies (RemovalPolicy.DESTROY) in potentially non-development environments.

Fixing the missing cdk.tf.json issue is prerequisite before Terraform can analyze or apply this infrastructure.


Recommended Fix

1. Ensure Proper CDKTF Synthesis to Generate cdk.tf.json

Add the following CLI steps to your pipeline or local workflow before running Terraform commands:

# Install dependencies (adjust for your project’s language)
npm ci                            # For TypeScript projects
# or
pip install -r requirements.txt   # For Python projects

# (Optional) Build project if required (TypeScript)
npm run build                   

# Generate Terraform JSON output with CDKTF synth
cdktf synth

# Verify that cdk.tf.json exists and has content
if [[ ! -s cdk.tf.json ]]; then
  echo "Error: cdk.tf.json is missing or empty after synthesis."
  exit 1
fi

# Now run Terraform commands in the output directory
terraform init
terraform validate
terraform plan -input=false

2. Correct Security Issues in cdk-test/infra.ts

Replace your insecure stack code with this secure, best-practice example incorporating the recommendations:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as rds from 'aws-cdk-lib/aws-rds';

export class SecureStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Secure S3 bucket with encryption and public access blocked
    const bucket = new s3.Bucket(this, 'SecureBucket', {
      bucketName: 'my-secure-bucket-001',
      encryption: s3.BucketEncryption.S3_MANAGED,
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      versioned: true,
      removalPolicy: cdk.RemovalPolicy.RETAIN,
      autoDeleteObjects: false,
    });

    // Use internal VPC module if available, else define with private subnets
    const vpc = new ec2.Vpc(this, 'SecureVpc', {
      maxAzs: 2,
      subnetConfiguration: [
        { name: 'public', subnetType: ec2.SubnetType.PUBLIC },
        { name: 'private', subnetType: ec2.SubnetType.PRIVATE_WITH_NAT },
      ],
    });

    // Security group restricting access to trusted IP ranges and ports
    const sg = new ec2.SecurityGroup(this, 'RestrictedSg', {
      vpc,
      description: 'Restricted ingress',
      allowAllOutbound: true,
    });
    sg.addIngressRule(
      ec2.Peer.ipv4('203.0.113.0/24'), // Replace with trusted CIDR ranges
      ec2.Port.tcp(22),
      'Allow SSH from trusted IP range'
    );

    // IAM Role with least privilege policy
    const role = new iam.Role(this, 'LeastPrivilegeRole', {
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
    });
    role.attachInlinePolicy(new iam.Policy(this, 'ScopedPolicy', {
      statements: [
        new iam.PolicyStatement({
          actions: ['s3:GetObject'],
          resources: [bucket.bucketArn + '/*'],
        }),
      ],
    }));

    // Private, encrypted RDS instance with retention policy
    new rds.DatabaseInstance(this, 'SecureDB', {
      engine: rds.DatabaseInstanceEngine.postgres({ version: rds.PostgresEngineVersion.VER_15 }),
      vpc,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_NAT },
      publiclyAccessible: false,
      storageEncrypted: true,
      allocatedStorage: 20,
      credentials: rds.Credentials.fromGeneratedSecret('postgres'),
      removalPolicy: cdk.RemovalPolicy.RETAIN,
      deletionProtection: true,
    });
  }
}

🛡 Preventive Measures

  • Automate synthesis validation: Include cdktf synth execution and verification as mandatory steps in your CI pipeline before any Terraform commands.

  • Run linters and static analysis: Use eslint for TypeScript or flake8/pylint for Python before synthesis to catch syntax errors early.

  • Integrate security scanners: Use tools like tfsec, checkov, or cdk-nag against the synthesized JSON to catch configuration vulnerabilities before deployment.

  • Strictly enforce least privilege: Avoid wildcard administrator policies; tailor IAM permissions closely to requirements.

  • Block public access on S3 buckets by default: Enable BlockPublicAccess.BLOCK_ALL and use encryption.

  • Deploy backend services in private subnets: Avoid public exposure of databases and internal instances.

  • Prefer retention over destruction for critical resources: Use RemovalPolicy.RETAIN and enable snapshotting or deletion protection in production environments.

  • Keep build environment consistent: Pin your cdktf, provider, and runtime versions, and replicate the CI environment locally to reduce build failures.


By enforcing this workflow and refactoring your infrastructure code to follow security best practices, you will both eliminate the critical synthesis error and substantially improve the security posture of your cloud infrastructure.


🔍 What’s Wrong?

The Terraform review step failed because the cdk.tf.json file, which is the synthesized Terraform configuration output generated by CDK for Terraform (cdktf), was not found in the expected output directory. This synthesized file is critical for any Terraform validation, plan, or security analysis, as it translates the cdktf application’s code into the declarative Terraform JSON configuration.

The root causes of this failure typically include:

  • The synthesis command (cdktf synth) was not executed or failed before the Terraform review step.
  • The synthesis process completed but output the file to a different directory or under a different name.
  • The cdktf project failed to compile or build correctly due to missing dependencies or code errors, resulting in a failed or skipped synth step.
  • The pipeline or local run is attempting to apply Terraform commands directly on the source CDK code instead of the synthesized Terraform JSON.
  • Environment setup in CI is incomplete or missing prerequisites for cdktf (e.g., node modules, Python packages).

Recommended Fix

To resolve this, enforce the following production-ready commands and workflow before the Terraform review step:

# 1. Install project dependencies
npm ci                              # For TypeScript/JavaScript projects
# or
pip install -r requirements.txt     # For Python projects

# 2. (If applicable) Build the project to transpile TypeScript to JavaScript
npm run build                      # Only needed if your project requires compilation before synth

# 3. Run synthesis to generate cdk.tf.json
cdktf synth

# 4. Confirm that cdk.tf.json exists and is not empty
if [[ ! -s cdk.tf.json ]]; then
  echo "Error: cdk.tf.json not found or empty after synthesis."
  exit 1
fi

# 5. Initialize Terraform in the synthesis output folder
terraform init

# 6. Validate the Terraform configuration
terraform validate

# 7. Run Terraform plan or security scanning tools
terraform plan -input=false

Make sure that:

  • The working directory where these commands run contains the cdktf project and will host cdk.tf.json.
  • CI runners have Node.js (or Python) installed with compatible versions matching your local development environment.
  • Your pipeline explicitly enforces running cdktf synth and checking its success before any Terraform commands.

🛡 Preventive Measures

  1. Add explicit CI checks to verify successful cdktf synth and presence of output: fail early with clear errors if synthesis fails or output is missing.
  2. Static analysis on cdktf source code: Integrate linters (eslint for TypeScript, flake8 for Python) and compile-time checks in your pipeline to catch issues before synthesis.
  3. Use version pinning for your cdktf package and provider versions to avoid breaking changes causing silent synthesis failures.
  4. Document your build and CI pipeline requirements to ensure consistent environment setup for all developers and automation.
  5. Run terraform fmt and terraform validate routinely on the synthesized JSON output to maintain quality and surface syntactic issues early.
  6. Integrate infrastructure security scanners (tfsec, checkov) targeting the synthesized Terraform JSON to catch vulnerabilities post-synthesis.

This approach ensures your cdk.tf.json output exists and is valid before Terraform commands or reviews execute, preventing "cdk.tf.json not found" errors and enabling reliable infrastructure code analysis.

Comment on lines +20 to +50
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
});

/* 2️⃣ VPC with a security group open to the world */
const vpc = new ec2.Vpc(this, 'MyVpc', {
  subnetConfiguration: [
    { name: 'public', subnetType: ec2.SubnetType.PUBLIC },
  ],
  maxAzs: 2,
});

const sg = new ec2.SecurityGroup(this, 'OpenSg', {
  vpc,
  description: 'Allow all inbound traffic',
  allowAllOutbound: true,
});
sg.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.allTraffic(), 'Wide-open SG'); // 🔴 0.0.0.0/0 ALL

/* 3️⃣ Wild-card IAM permissions */
const role = new iam.Role(this, 'OverPermissiveRole', {
  assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
  description: 'Wild-card role for demo',
});
role.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess'), // 🔴 *
);

/* 4️⃣ Public, unencrypted RDS instance */
new rds.DatabaseInstance(this, 'InsecureDb', {
  engine: rds.DatabaseInstanceEngine.postgres({

Check notice

Code scanning / Infrabase AI

No raw resources when possible Note

In cdk-test/infra.ts (lines ~20 and ~50), both the S3 bucket and the RDS instance use removalPolicy: DESTROY. This configuration causes permanent data loss upon stack deletion. Recommendation: In non-development environments, use RemovalPolicy.RETAIN or enable snapshot/deletionProtection options to prevent accidental data loss.
Comment on lines +23 to +27

/* 2️⃣ VPC with a security group open to the world */
const vpc = new ec2.Vpc(this, 'MyVpc', {
  subnetConfiguration: [
    { name: 'public', subnetType: ec2.SubnetType.PUBLIC },

Check notice

Code scanning / Infrabase AI

No raw resources when possible Note

In cdk-test/infra.ts (lines ~23-27), a VPC is created directly using ec2.Vpc. Your organization maintains an internal VPC module (git::https://github.com/diggerhq/common-modules//vpc) that enforces standard tagging, flow logging, and subnet layouts. Recommendation: Use the internal VPC module to ensure consistency: e.g., new ModuleVPC(this, 'MyVpc', { source: 'git::https://github.com/diggerhq/common-modules//vpc', /* module inputs */ });