icycloudio/terraform-aws-cicd

AWS-Native CI/CD Template Engine

Infrastructure-as-Code meets CI/CD-as-Code
A complete, AWS-native CI/CD solution for any compute service using only Terraform and AWS services.

🎯 What This Template Engine Does

This template creates a complete CI/CD pipeline for AWS workloads that:

  • ✅ Automatically builds your code when you push to GitHub
  • ✅ Deploys to staging environment for testing
  • ✅ Waits for manual approval before production
  • ✅ Deploys to production with your approval
  • ✅ Sends notifications about pipeline status
  • ✅ Manages all permissions automatically

No Jenkins, no GitHub Actions, no external tools - just AWS services orchestrated by Terraform.

๐Ÿ—๏ธ Supported Compute Services

This template engine supports multiple AWS compute services:

✅ Currently Supported

  • Lambda ZIP - Serverless functions with ZIP deployment
  • Terraform Infrastructure 🆕 - Infrastructure as Code deployments with cross-account support
  • Lambda Container - Serverless functions with container deployment
  • Custom Workloads - Any compute service with custom deployment scripts

🚧 Planned Support

  • EC2 Applications - Traditional server-based applications
  • ECS Services - Containerized long-running applications
  • EKS Workloads - Kubernetes-native applications
  • Batch Jobs - Large-scale batch processing workloads

๐Ÿ—๏ธ What Gets Created

When you deploy this template, you get:

Core Pipeline Components

  • CodePipeline - Orchestrates the entire workflow
  • CodeBuild Projects - Handles build and deployment steps
  • S3 Bucket - Stores build artifacts
  • IAM Roles & Policies - Manages all permissions securely

Pipeline Stages

  1. Source - Monitors your GitHub repository
  2. Build - Compiles and packages your application code
  3. Deploy-Staging - Deploys to your staging environment
  4. Manual-Approval - Waits for your approval via AWS Console
  5. Deploy-Production - Deploys to your production environment
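The five stages above map onto a single `aws_codepipeline` resource. A minimal sketch of that layout (resource names, ARNs, and project names here are placeholders, not the module's actual values — the real module wires these from `terraform.tfvars`):

```hcl
# Sketch only: stage layout matching the list above.
resource "aws_codepipeline" "this" {
  name     = "my-application-codepipeline"
  role_arn = aws_iam_role.codepipeline.arn

  artifact_store {
    location = "my-cicd-artifacts-bucket"
    type     = "S3"
  }

  stage {
    name = "Source"
    action {
      name             = "Source"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeStarSourceConnection"
      version          = "1"
      output_artifacts = ["source_output"]
      configuration = {
        ConnectionArn    = var.codestar_connection_arn
        FullRepositoryId = var.github_repo_name
        BranchName       = "main"
      }
    }
  }

  stage {
    name = "Build"
    action {
      name             = "Build"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      version          = "1"
      input_artifacts  = ["source_output"]
      output_artifacts = ["build_output"]
      configuration    = { ProjectName = "my-application-build" }
    }
  }

  stage {
    name = "Manual-Approval"
    action {
      name     = "Approve"
      category = "Approval"
      owner    = "AWS"
      provider = "Manual"
      version  = "1"
    }
  }

  # Deploy-Staging and Deploy-Production stages follow the same
  # CodeBuild action pattern as Build.
}
```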

Notifications

  • CodeStar Notifications - Sends updates to Slack/SNS about pipeline status

📋 Prerequisites

Before using this template, you need:

  • AWS CLI configured with appropriate permissions
  • Terraform installed (version 1.0+)
  • GitHub repository with your application code
  • CodeStar Connection to your GitHub repository
  • Target compute resources (staging and production environments)
  • S3 bucket for storing build artifacts

🚀 Quick Start

1. Configure Your Application

Edit terraform.tfvars with your specific details:

# Your application details
app_name         = "my-application"          # Used for naming all resources
github_repo_name = "my-company/my-application"  # Your GitHub repo

# AWS resources this pipeline will manage
codestar_connection_arn = "arn:aws:codestar-connections:us-east-1:123456789012:connection/abc123"

# Where to store build artifacts
codepipeline = {
  artifact_store = {
    location = "my-cicd-artifacts-bucket"
    type     = "S3"
  }
  # ... more configuration
}

2. Deploy the Pipeline

# Initialize Terraform
terraform init

# Review what will be created
terraform plan

# Create the CI/CD infrastructure
terraform apply

3. Test Your Pipeline

# Make a change to your code
echo "// Pipeline test" >> src/index.js

# Commit and push
git add .
git commit -m "Test pipeline deployment"
git push origin main

Watch your pipeline run in the AWS CodePipeline Console!

๐Ÿ“ Project Structure

Recommended Architecture: Separate Infrastructure from Application Code

# Your Application Repository
my-application/
├── src/                  # Your application source code
├── tests/                # Application tests
├── package.json          # Application dependencies (if applicable)
├── requirements.txt      # Python dependencies (if applicable)
├── Dockerfile            # Container definition (if applicable)
└── README.md             # Application documentation

# Your Infrastructure Repository (this template)
my-application-cicd/
├── README.md             # This documentation
├── iam_codebuild.tf      # CodeBuild IAM roles and policies
├── iam_codepipeline.tf   # CodePipeline IAM roles and policies
├── main.tf               # Pipeline infrastructure definition
├── outputs.tf            # Pipeline outputs (ARNs, names, etc.)
├── providers.tf          # Terraform provider configuration
├── scripts/              # Build and deployment scripts
├── terraform.tfvars      # Your CI/CD configuration (customize this!)
└── variables.tf          # Available configuration options

How It Works:

  • Application repo: Pure application code, no infrastructure concerns
  • Infrastructure repo: Complete CI/CD system including scripts and buildspecs
  • Clean separation: Developers focus on code, platform team manages CI/CD
  • Reusable template: Copy and customize for any application

โš™๏ธ Configuration Guide

Essential Configuration (terraform.tfvars)

Application Settings

app_name         = "my-api"                    # Used for naming resources
github_repo_name = "my-company/my-api"         # GitHub repository path

Compute Service Configuration

Configure your deployment through scripts and buildspecs:

Basic Configuration (works for any compute service):

codebuild_projects = {
  build = {
    name = "build"
    description = "Build stage for application"
    # Your build configuration
  }
  deploy-staging = {
    name = "deploy-staging"
    description = "Deploy to staging environment"
    # Your staging deployment configuration
  }
  deploy-prod = {
    name = "deploy-prod"
    description = "Deploy to production environment"
    # Your production deployment configuration
  }
}

For Lambda Functions (current template example):

codebuild_projects = {
  deploy-staging = {
    lambda = {
      function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-api-staging"
    }
    # Lambda-specific permissions automatically applied
  }
  deploy-prod = {
    lambda = {
      function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-api-prod"
    }
  }
}

For Other Compute Services: Use the custom workload approach and configure through your deployment scripts:

codebuild_projects = {
  deploy-staging = {
    # Configure through your deployment scripts in scripts/deploy-staging.sh
    # and buildspec configuration
    permission_profiles = ["s3", "cloudwatch"]  # Add needed permissions
    environment = {
      variables = [
        { name = "ENVIRONMENT", value = "staging", type = "PLAINTEXT" },
        { name = "CLUSTER_NAME", value = "my-staging-cluster", type = "PLAINTEXT" }
      ]
    }
  }
  deploy-prod = {
    # Configure through your deployment scripts in scripts/deploy-prod.sh
    permission_profiles = ["s3", "cloudwatch"]
    environment = {
      variables = [
        { name = "ENVIRONMENT", value = "production", type = "PLAINTEXT" },
        { name = "CLUSTER_NAME", value = "my-prod-cluster", type = "PLAINTEXT" }
      ]
    }
  }
}

The magic happens in your deployment scripts:

  • scripts/deploy-staging.sh - Contains your ECS, EC2, or other service deployment logic
  • scripts/deploy-prod.sh - Contains your production deployment logic
  • buildspec/*.yml - CodeBuild configuration for each stage
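To illustrate how those pieces connect, a deploy-stage buildspec typically does little more than hand off to the matching script. This is a hypothetical sketch, not the template's actual file:

```yaml
# buildspec/deploy-staging.yml -- illustrative sketch only
version: 0.2
phases:
  install:
    commands:
      - chmod +x scripts/deploy-staging.sh
  build:
    commands:
      # All service-specific logic (ECS, EC2, etc.) lives in the script
      - ./scripts/deploy-staging.sh
```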

Notifications

codestar_notification_rule = {
  detail_type = "FULL"
  event_type_ids = [
    "codepipeline-pipeline-pipeline-execution-failed",
    "codepipeline-pipeline-pipeline-execution-succeeded"
  ]
  target = {
    address = "arn:aws:sns:us-east-1:123456789012:my-notifications"
    type    = "SNS"
  }
}

Advanced Configuration

Custom Permissions

If your workload needs special permissions during deployment:

codebuild_projects = {
  deploy-staging = {
    permission_profiles = ["lambda", "s3", "cloudwatch"]  # Pre-built permission sets
    custom_permissions = {
      "dynamodb-access" = {
        effect = "Allow"
        actions = ["dynamodb:GetItem", "dynamodb:PutItem"]
        resources = ["arn:aws:dynamodb:us-east-1:123456789012:table/my-table"]
        condition = []
      }
    }
  }
}

Environment Variables

Pass configuration to your build and deploy scripts:

codebuild_projects = {
  build = {
    environment = {
      variables = [
        { name = "NODE_ENV", value = "production", type = "PLAINTEXT" },
        { name = "API_KEY", value = "/my-app/api-key", type = "PARAMETER_STORE" }
      ]
    }
  }
}

🔧 Operations Guide

Monitoring Your Pipeline

Check Pipeline Status

  1. Go to AWS CodePipeline Console
  2. Find your pipeline: {app_name}-codepipeline
  3. View current execution status and history

Approve Production Deployments

  1. Wait for the pipeline to reach the "Manual-Approval" stage
  2. Click "Review" in the AWS Console
  3. Add comments and click "Approve" or "Reject"

View Build Logs

  1. Click the failed CodeBuild stage in the pipeline
  2. Click "View details" to see build logs
  3. Check CloudWatch Logs for detailed output

Common Operations

Updating Target Resources

# Edit terraform.tfvars with new resource ARNs/names
vim terraform.tfvars

# Apply changes
terraform plan
terraform apply

Adding New Notification Targets

# Add new SNS topic or Slack webhook to terraform.tfvars
vim terraform.tfvars

# Update infrastructure
terraform apply

Modifying Build Scripts

  1. Edit scripts in scripts/ directory
  2. Commit and push changes
  3. Pipeline will automatically use updated scripts

Troubleshooting

Pipeline Fails at Build Stage

  1. Check build logs in CodeBuild console
  2. Verify build script in scripts/build.sh
  3. Check environment variables in terraform.tfvars
  4. Ensure S3 permissions are correct

Pipeline Fails at Deploy Stage

  1. Verify target resource configuration in terraform.tfvars
  2. Check IAM permissions for CodeBuild role
  3. Verify deployment script in scripts/deploy-*.sh
  4. Check target resources exist and are accessible

GitHub Source Issues

  1. Verify CodeStar connection is active
  2. Check repository name in terraform.tfvars
  3. Ensure webhook is configured correctly

Permission Denied Errors

  1. Check IAM roles created by template
  2. Verify permission profiles in terraform.tfvars
  3. Add custom permissions if needed
  4. Review AWS CloudTrail for denied actions

๐Ÿ” Security Best Practices

IAM Permissions

  • ✅ Least privilege - Only necessary permissions granted
  • ✅ Role-based - Separate roles for build and deploy
  • ✅ Resource-specific - Permissions scoped to your resources
  • ✅ Conditional - Context-aware permission policies

Secrets Management

# Use Parameter Store or Secrets Manager for sensitive data
environment = {
  variables = [
    { name = "DB_PASSWORD", value = "/myapp/db/password", type = "PARAMETER_STORE" },
    { name = "API_SECRET", value = "myapp/api/secret", type = "SECRETS_MANAGER" }
  ]
}

Network Security

  • All CodeBuild projects run in AWS managed infrastructure
  • VPC configuration available - Connect to private resources in your VPC (AWS Documentation)
  • S3 bucket encryption enabled by default
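If a deploy step needs to reach private resources (an RDS instance, an internal load balancer), CodeBuild projects can be attached to your VPC. A hedged sketch — the VPC, subnet, and security group IDs below are placeholders, and the other required project arguments are elided:

```hcl
resource "aws_codebuild_project" "deploy_staging" {
  # ... name, service_role, artifacts, environment, source ...

  vpc_config {
    vpc_id             = "vpc-0123456789abcdef0"      # placeholder
    subnets            = ["subnet-0123456789abcdef0"] # private subnets
    security_group_ids = ["sg-0123456789abcdef0"]
  }
}
```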

📊 Cost Optimization

Understanding Costs

  • CodePipeline V1: $1.00 per active pipeline per month (free for first 30 days)
  • CodePipeline V2: $0.002 per action execution minute (100 free minutes/month)
  • CodeBuild: Pay per build minute (~$0.005/minute for general1.small)
  • CodeBuild Free Tier: 100 build minutes/month (general1.small or arm1.small)
  • S3 Storage: Minimal cost for artifacts
  • CloudWatch Logs: Pay for log retention

Source: AWS CodePipeline Pricing, AWS CodeBuild Pricing

Cost-Saving Tips

  1. Optimize build times - faster builds = lower costs
  2. Choose right pipeline type - V2 better for infrequent deployments
  3. Use ARM instances - Often 20% cheaper than x86 instances
  4. Clean up artifacts - set S3 lifecycle policies
  5. Adjust log retention - shorter retention = lower costs
  6. Use efficient build images - smaller images = faster builds
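Tips 4 and 5 can be enforced in Terraform itself. A sketch, assuming a bucket name and log group name that match your configuration (both are placeholders here):

```hcl
# Expire old pipeline artifacts automatically (tip 4).
resource "aws_s3_bucket_lifecycle_configuration" "artifacts" {
  bucket = "my-cicd-artifacts-bucket" # placeholder

  rule {
    id     = "expire-old-artifacts"
    status = "Enabled"
    filter {}

    expiration {
      days = 30
    }
  }
}

# Cap CloudWatch log retention for build logs (tip 5).
resource "aws_cloudwatch_log_group" "codebuild" {
  name              = "/aws/codebuild/my-application-build" # placeholder
  retention_in_days = 14
}
```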

🚀 Template Customization

Adding New Compute Services

This template engine is designed to be extensible:

  1. Add new permission profiles in iam_codebuild.tf
  2. Create deployment scripts in scripts/ directory
  3. Update variable definitions in variables.tf
  4. Test with your compute service

Creating Service-Specific Templates

# Copy this template for a new compute service
cp -r . ../my-new-service-template

# Customize for your service
vim terraform.tfvars.example
vim scripts/deploy-*.sh
vim README.md

🔄 Updating This Template

Template Versioning

This template follows semantic versioning. Check for updates:

# Check current version
git tag --list

# Update to latest
git pull origin main
terraform plan  # Review changes
terraform apply # Apply updates

Migrating Between Versions

  1. Read changelog for breaking changes
  2. Backup current state with terraform state pull
  3. Test in staging environment first
  4. Update production after validation

📚 Additional Resources

AWS Documentation

Terraform Resources

Template Library

✅ Available Now

  • Lambda ZIP Template (Python) - Production-ready serverless function deployments
  • Custom Workload Template - Flexible template for any compute service

🚧 Coming Soon

  • Lambda ZIP Template (Node.js) - JavaScript/TypeScript serverless functions
  • Lambda Container Template - Containerized serverless functions
  • ECS Service Template - Containerized applications
  • EC2 Application Template - Traditional server-based applications
  • EKS Workload Template - Kubernetes-native applications

Support

  • Issues: Create GitHub issues for bugs or feature requests
  • Discussions: Join community discussions for questions
  • Documentation: Check docs/ directory for detailed guides

🎉 Success!

If you've made it this far, you should have:

  • ✅ A fully automated CI/CD pipeline
  • ✅ Automated staging deployments
  • ✅ Manual production approval gates
  • ✅ Comprehensive monitoring and notifications
  • ✅ Infrastructure and pipeline as code
  • ✅ A reusable template for any AWS compute service

Your applications are now enterprise-ready with professional CI/CD!


🔮 Template Engine Philosophy

This template embodies the principle that CI/CD should be as declarative as infrastructure. By making pipelines configurable through terraform.tfvars, we achieve:

  • Consistency across all compute services
  • Reusability through copy-paste scaling
  • Transparency in pipeline configuration
  • AWS-native integration without vendor lock-in

"The best CI/CD pipeline is the one your junior engineers can understand and operate confidently."

📋 Workload-Specific Configuration

Lambda ZIP Configuration

codebuild_projects = {
  deploy-staging = {
    permission_profiles = ["cloudwatch", "s3", "lambda", "cross-account-codedeploy"]
    lambda = {
      function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-function"
    }
    codedeploy = {
      application_arn = "arn:aws:codedeploy:us-east-1:123456789012:application:my-app"
      deployment_group_arn = "arn:aws:codedeploy:us-east-1:123456789012:deploymentgroup:my-app/my-group"
      cross_account_role_arn = "arn:aws:iam::123456789012:role/CodeDeployRole"
    }
  }
}

Terraform Infrastructure Configuration 🆕

Option 1: Using Backend Assume Role Configuration

codebuild_projects = {
  plan-staging = {
    permission_profiles = ["cloudwatch", "s3"]  # No terraform-cross-account needed
    terraform = {
      # cross_account_role_arn omitted - uses backend assume_role configuration
      # state_backend omitted - uses existing backend configuration in Terraform code
      working_directory = "."
      terraform_version = "1.6.0"
    }
  }
}

Your Terraform code includes both backend and role configuration:

terraform {
  backend "s3" {
    bucket         = "tf-central-state-backend-example-prod"
    key            = "cicd/infrastructure/my-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-central-state-lock-example-prod"
    encrypt        = true
    assume_role = {
      role_arn     = "arn:aws:iam::905418069656:role/terraform-admin"
      session_name = "codebuild-backend-session"
    }
  }
}

Option 2: Explicit Configuration (Other Organizations)

codebuild_projects = {
  plan-staging = {
    permission_profiles = ["cloudwatch", "s3", "terraform-cross-account"]
    terraform = {
      cross_account_role_arn = "arn:aws:iam::123456789012:role/TerraformDeploymentRole"
      state_backend = {
        bucket = "terraform-state-staging-123456789012"
        key = "infrastructure/my-app/terraform.tfstate"
        region = "us-east-1"
        dynamodb_table = "terraform-state-lock-staging"
      }
      working_directory = "."
      terraform_version = "1.6.0"
    }
  }
}

Available Permission Profiles

For Lambda Workloads

  • cloudwatch - CloudWatch Logs access
  • s3 - S3 bucket access for artifacts and scripts
  • lambda - Lambda function update permissions
  • same-account-codedeploy - CodeDeploy permissions (same account)
  • cross-account-codedeploy - Cross-account CodeDeploy permissions

For Terraform Workloads 🆕

  • cloudwatch - CloudWatch Logs access
  • s3 - S3 bucket access for artifacts and scripts
  • terraform-cross-account - Cross-account role assumption for Terraform deployments
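The shape of the terraform-cross-account profile is essentially a single `sts:AssumeRole` grant from the CodeBuild role to the deployment role in the target account. An illustrative sketch (the role ARN is a placeholder, and the real profile in iam_codebuild.tf may scope this differently):

```hcl
data "aws_iam_policy_document" "terraform_cross_account" {
  statement {
    effect    = "Allow"
    actions   = ["sts:AssumeRole"]
    resources = ["arn:aws:iam::123456789012:role/TerraformDeploymentRole"]
  }
}
```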

Terraform Pipeline Stages 🆕

For Terraform infrastructure deployments, the pipeline includes:

  1. Source - Monitors your GitHub repository
  2. Validate - Validates Terraform configuration syntax and formatting
  3. Plan-Staging - Generates Terraform plan for staging environment
  4. Deploy-Staging - Applies Terraform plan to staging environment
  5. Manual-Approval - Waits for your approval via AWS Console
  6. Plan-Production - Generates Terraform plan for production environment
  7. Deploy-Production - Applies Terraform plan to production environment

Terraform-Specific Features

  • Official Terraform Installation - Uses HashiCorp's official installation method
  • Version Management - Supports specific Terraform versions or latest
  • S3 State Backend - Dedicated state management per environment
  • DynamoDB Locking - Optional state locking with DynamoDB
  • Cross-Account Deployment - Deploy infrastructure to different AWS accounts
  • Plan/Apply Separation - Separate stages for planning and applying changes
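The S3 state backend and DynamoDB locking above assume the state resources already exist. If you need to bootstrap them, a minimal sketch looks like this (bucket and table names are placeholders; the `LockID` hash key is the attribute name the S3 backend requires for locking):

```hcl
resource "aws_s3_bucket" "tf_state" {
  bucket = "terraform-state-staging-123456789012" # placeholder
}

# Versioning lets you recover earlier state files.
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-state-lock-staging" # placeholder
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```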


About

Terraform Module for dynamic configuration of CI/CD w/ AWS CodePipeline and CodeBuild
