Infrastructure-as-Code meets CI/CD-as-Code
A complete, AWS-native CI/CD solution for any compute service using only Terraform and AWS services.
This template creates a complete CI/CD pipeline for AWS workloads that:
- ✅ Automatically builds your code when you push to GitHub
- ✅ Deploys to a staging environment for testing
- ✅ Waits for manual approval before production
- ✅ Deploys to production with your approval
- ✅ Sends notifications about pipeline status
- ✅ Manages all permissions automatically
No Jenkins, no GitHub Actions, no external tools - just AWS services orchestrated by Terraform.
This template engine supports multiple AWS compute services:
- Lambda ZIP - Serverless functions with ZIP deployment
- Terraform Infrastructure - Infrastructure as Code deployments with cross-account support
- Lambda Container - Serverless functions with container deployment
- Custom Workloads - Any compute service with custom deployment scripts
- EC2 Applications - Traditional server-based applications
- ECS Services - Containerized long-running applications
- EKS Workloads - Kubernetes-native applications
- Batch Jobs - Large-scale batch processing workloads
When you deploy this template, you get:
- CodePipeline - Orchestrates the entire workflow
- CodeBuild Projects - Handles build and deployment steps
- S3 Bucket - Stores build artifacts
- IAM Roles & Policies - Manages all permissions securely
- Source - Monitors your GitHub repository
- Build - Compiles and packages your application code
- Deploy-Staging - Deploys to your staging environment
- Manual-Approval - Waits for your approval via AWS Console
- Deploy-Production - Deploys to your production environment
- CodeStar Notifications - Sends updates to Slack/SNS about pipeline status
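For notifications to arrive, the SNS topic's resource policy must allow the CodeStar Notifications service to publish to it. A minimal sketch, assuming the topic is managed in the same Terraform configuration (the topic name is illustrative):

```hcl
# Hypothetical SNS topic for pipeline notifications. Without the policy
# granting codestar-notifications.amazonaws.com publish rights, deliveries
# fail silently.
resource "aws_sns_topic" "pipeline_notifications" {
  name = "my-notifications"
}

resource "aws_sns_topic_policy" "allow_codestar" {
  arn = aws_sns_topic.pipeline_notifications.arn
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "codestar-notifications.amazonaws.com" }
      Action    = "sns:Publish"
      Resource  = aws_sns_topic.pipeline_notifications.arn
    }]
  })
}
```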
Before using this template, you need:
- AWS CLI configured with appropriate permissions
- Terraform installed (version 1.0+)
- GitHub repository with your application code
- CodeStar Connection to your GitHub repository
- Target compute resources (staging and production environments)
- S3 bucket for storing build artifacts
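The CodeStar Connection prerequisite can itself be managed in Terraform. A sketch (the connection name is illustrative): after `terraform apply` the connection is created in a pending state and must be authorized once in the AWS Console under Developer Tools > Connections.

```hcl
# Sketch: Terraform-managed GitHub connection. The one-time OAuth handshake
# with GitHub still has to be completed manually in the AWS Console.
resource "aws_codestarconnections_connection" "github" {
  name          = "my-github-connection"
  provider_type = "GitHub"
}

# Feed this ARN into codestar_connection_arn in terraform.tfvars.
output "codestar_connection_arn" {
  value = aws_codestarconnections_connection.github.arn
}
```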
Edit `terraform.tfvars` with your specific details:

```hcl
# Your application details
app_name         = "my-application"            # Used for naming all resources
github_repo_name = "my-company/my-application" # Your GitHub repo

# AWS resources this pipeline will manage
codestar_connection_arn = "arn:aws:codestar-connections:us-east-1:123456789012:connection/abc123"

# Where to store build artifacts
codepipeline = {
  artifact_store = {
    location = "my-cicd-artifacts-bucket"
    type     = "S3"
  }
  # ... more configuration
}
```

Then deploy the pipeline:

```shell
# Initialize Terraform
terraform init

# Review what will be created
terraform plan

# Create the CI/CD infrastructure
terraform apply
```

Test it with a commit:

```shell
# Make a change to your code
echo "// Pipeline test" >> src/index.js

# Commit and push
git add .
git commit -m "Test pipeline deployment"
git push origin main
```

Watch your pipeline run in the AWS CodePipeline Console!
Recommended Architecture: Separate Infrastructure from Application Code
```text
# Your Application Repository
my-application/
├── src/              # Your application source code
├── tests/            # Application tests
├── package.json      # Application dependencies (if applicable)
├── requirements.txt  # Python dependencies (if applicable)
├── Dockerfile        # Container definition (if applicable)
└── README.md         # Application documentation

# Your Infrastructure Repository (this template)
my-application-cicd/
├── README.md            # This documentation
├── iam_codebuild.tf     # CodeBuild IAM roles and policies
├── iam_codepipeline.tf  # CodePipeline IAM roles and policies
├── main.tf              # Pipeline infrastructure definition
├── outputs.tf           # Pipeline outputs (ARNs, names, etc.)
├── providers.tf         # Terraform provider configuration
├── scripts/             # Build and deployment scripts
├── terraform.tfvars     # Your CI/CD configuration (customize this!)
└── variables.tf         # Available configuration options
```
How It Works:
- Application repo: Pure application code, no infrastructure concerns
- Infrastructure repo: Complete CI/CD system including scripts and buildspecs
- Clean separation: Developers focus on code, platform team manages CI/CD
- Reusable template: Copy and customize for any application
```hcl
app_name         = "my-api"            # Used for naming resources
github_repo_name = "my-company/my-api" # GitHub repository path
```

Configure your deployment through scripts and buildspecs:
Basic Configuration (works for any compute service):

```hcl
codebuild_projects = {
  build = {
    name        = "build"
    description = "Build stage for application"
    # Your build configuration
  }
  deploy-staging = {
    name        = "deploy-staging"
    description = "Deploy to staging environment"
    # Your staging deployment configuration
  }
  deploy-prod = {
    name        = "deploy-prod"
    description = "Deploy to production environment"
    # Your production deployment configuration
  }
}
```

For Lambda Functions (current template example):

```hcl
codebuild_projects = {
  deploy-staging = {
    lambda = {
      function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-api-staging"
    }
    # Lambda-specific permissions automatically applied
  }
  deploy-prod = {
    lambda = {
      function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-api-prod"
    }
  }
}
```

For Other Compute Services: Use the custom workload approach and configure through your deployment scripts:
```hcl
codebuild_projects = {
  deploy-staging = {
    # Configure through your deployment scripts in scripts/deploy-staging.sh
    # and buildspec configuration
    permission_profiles = ["s3", "cloudwatch"] # Add needed permissions
    environment = {
      variables = [
        { name = "ENVIRONMENT", value = "staging", type = "PLAINTEXT" },
        { name = "CLUSTER_NAME", value = "my-staging-cluster", type = "PLAINTEXT" }
      ]
    }
  }
  deploy-prod = {
    # Configure through your deployment scripts in scripts/deploy-prod.sh
    permission_profiles = ["s3", "cloudwatch"]
    environment = {
      variables = [
        { name = "ENVIRONMENT", value = "production", type = "PLAINTEXT" },
        { name = "CLUSTER_NAME", value = "my-prod-cluster", type = "PLAINTEXT" }
      ]
    }
  }
}
```

The magic happens in your deployment scripts:

- `scripts/deploy-staging.sh` - Contains your ECS, EC2, or other service deployment logic
- `scripts/deploy-prod.sh` - Contains your production deployment logic
- `buildspec/*.yml` - CodeBuild configuration for each stage
```hcl
codestar_notification_rule = {
  detail_type = "FULL"
  event_type_ids = [
    "codepipeline-pipeline-pipeline-execution-failed",
    "codepipeline-pipeline-pipeline-execution-succeeded"
  ]
  target = {
    address = "arn:aws:sns:us-east-1:123456789012:my-notifications"
    type    = "SNS"
  }
}
```

If your workload needs special permissions during deployment:
```hcl
codebuild_projects = {
  deploy-staging = {
    permission_profiles = ["lambda", "s3", "cloudwatch"] # Pre-built permission sets
    custom_permissions = {
      "dynamodb-access" = {
        effect    = "Allow"
        actions   = ["dynamodb:GetItem", "dynamodb:PutItem"]
        resources = ["arn:aws:dynamodb:us-east-1:123456789012:table/my-table"]
        condition = []
      }
    }
  }
}
```

Pass configuration to your build and deploy scripts:
```hcl
codebuild_projects = {
  build = {
    environment = {
      variables = [
        { name = "NODE_ENV", value = "production", type = "PLAINTEXT" },
        { name = "API_KEY", value = "/my-app/api-key", type = "PARAMETER_STORE" }
      ]
    }
  }
}
```

To monitor your pipeline:

- Go to the AWS CodePipeline Console
- Find your pipeline: `{app_name}-codepipeline`
- View current execution status and history

To approve a production deployment:

- Wait until the pipeline reaches the "Manual-Approval" stage
- Click "Review" in the AWS Console
- Add comments and click "Approve" or "Reject"

To debug a failed stage:

- Click on the failed CodeBuild stage in the pipeline
- Click "View details" to see build logs
- Check CloudWatch Logs for detailed output
```shell
# Edit terraform.tfvars with new resource ARNs/names
vim terraform.tfvars

# Apply changes
terraform plan
terraform apply
```

```shell
# Add new SNS topic or Slack webhook to terraform.tfvars
vim terraform.tfvars

# Update infrastructure
terraform apply
```

To update deployment scripts:

- Edit scripts in the `scripts/` directory
- Commit and push the changes
- The pipeline will automatically use the updated scripts
If builds fail:

- Check build logs in the CodeBuild console
- Verify the build script in `scripts/build.sh`
- Check environment variables in `terraform.tfvars`
- Ensure S3 permissions are correct

If deployments fail:

- Verify target resource configuration in `terraform.tfvars`
- Check IAM permissions for the CodeBuild role
- Verify the deployment script in `scripts/deploy-*.sh`
- Check that target resources exist and are accessible

If the pipeline doesn't trigger:

- Verify the CodeStar connection is active
- Check the repository name in `terraform.tfvars`
- Ensure the webhook is configured correctly

If you hit permission errors:

- Check the IAM roles created by the template
- Verify permission profiles in `terraform.tfvars`
- Add custom permissions if needed
- Review AWS CloudTrail for denied actions
- ✅ Least privilege - Only necessary permissions granted
- ✅ Role-based - Separate roles for build and deploy
- ✅ Resource-specific - Permissions scoped to your resources
- ✅ Conditional - Context-aware permission policies
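As an illustration of the conditional principle above, a custom permission can carry a condition instead of the empty list shown earlier. The exact shape of the condition object is an assumption here; check `variables.tf` for the authoritative schema:

```hcl
# Sketch: a conditional custom permission. The test/variable/values field
# names mirror Terraform IAM condition blocks and are an assumption about
# this template's schema.
custom_permissions = {
  "dynamodb-access" = {
    effect    = "Allow"
    actions   = ["dynamodb:GetItem", "dynamodb:PutItem"]
    resources = ["arn:aws:dynamodb:us-east-1:123456789012:table/my-table"]
    condition = [{
      test     = "StringEquals"
      variable = "aws:ResourceAccount"
      values   = ["123456789012"]
    }]
  }
}
```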
```hcl
# Use Parameter Store or Secrets Manager for sensitive data
environment = {
  variables = [
    { name = "DB_PASSWORD", value = "/myapp/db/password", type = "PARAMETER_STORE" },
    { name = "API_SECRET", value = "myapp/api/secret", type = "SECRETS_MANAGER" }
  ]
}
```

- All CodeBuild projects run in AWS managed infrastructure
- VPC configuration available - Connect to private resources in your VPC (AWS Documentation)
- S3 bucket encryption enabled by default
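The Parameter Store entry referenced by `DB_PASSWORD` above can itself be managed in Terraform. A sketch, with illustrative names:

```hcl
# Sketch: SecureString parameter consumed by CodeBuild via PARAMETER_STORE.
# Supply the value out-of-band (e.g. TF_VAR_db_password); never hard-code it.
resource "aws_ssm_parameter" "db_password" {
  name  = "/myapp/db/password"
  type  = "SecureString"
  value = var.db_password
}
```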
- CodePipeline V1: $1.00 per active pipeline per month (free for first 30 days)
- CodePipeline V2: $0.002 per action execution minute (100 free minutes/month)
- CodeBuild: Pay per build minute (~$0.005/minute for general1.small)
- CodeBuild Free Tier: 100 build minutes/month (general1.small or arm1.small)
- S3 Storage: Minimal cost for artifacts
- CloudWatch Logs: Pay for log retention
Source: AWS CodePipeline Pricing, AWS CodeBuild Pricing
- Optimize build times - faster builds = lower costs
- Choose right pipeline type - V2 better for infrequent deployments
- Use ARM instances - Often 20% cheaper than x86 instances
- Clean up artifacts - set S3 lifecycle policies
- Adjust log retention - shorter retention = lower costs
- Use efficient build images - smaller images = faster builds
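The "clean up artifacts" tip above can be automated with an S3 lifecycle rule. A sketch, assuming the illustrative bucket name from earlier and a 30-day retention window:

```hcl
# Sketch: expire pipeline artifacts after 30 days to cap storage costs.
resource "aws_s3_bucket_lifecycle_configuration" "artifacts" {
  bucket = "my-cicd-artifacts-bucket" # illustrative name

  rule {
    id     = "expire-old-artifacts"
    status = "Enabled"
    filter {}

    expiration {
      days = 30
    }
  }
}
```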
This template engine is designed to be extensible:
- Add new permission profiles in `iam_codebuild.tf`
- Create deployment scripts in the `scripts/` directory
- Update variable definitions in `variables.tf`
- Test with your compute service

```shell
# Copy this template for a new compute service
cp -r . ../my-new-service-template

# Customize for your service
vim terraform.tfvars.example
vim scripts/deploy-*.sh
vim README.md
```

This template follows semantic versioning. Check for updates:

```shell
# Check current version
git tag --list

# Update to latest
git pull origin main
terraform plan  # Review changes
terraform apply # Apply updates
```

Before upgrading:

- Read the changelog for breaking changes
- Back up current state with `terraform state pull`
- Test in the staging environment first
- Update production after validation
- Lambda ZIP Template (Python) - Production-ready serverless function deployments
- Custom Workload Template - Flexible template for any compute service
- Lambda ZIP Template (Node.js) - JavaScript/TypeScript serverless functions
- Lambda Container Template - Containerized serverless functions
- ECS Service Template - Containerized applications
- EC2 Application Template - Traditional server-based applications
- EKS Workload Template - Kubernetes-native applications
- Issues: Create GitHub issues for bugs or feature requests
- Discussions: Join community discussions for questions
- Documentation: Check docs/ directory for detailed guides
If you've made it this far, you should have:
- ✅ A fully automated CI/CD pipeline
- ✅ Automated staging deployments
- ✅ Manual production approval gates
- ✅ Comprehensive monitoring and notifications
- ✅ Infrastructure and pipeline as code
- ✅ A reusable template for any AWS compute service
Your applications are now enterprise-ready with professional CI/CD!
This template embodies the principle that CI/CD should be as declarative as infrastructure. By making pipelines configurable through terraform.tfvars, we achieve:
- Consistency across all compute services
- Reusability through copy-paste scaling
- Transparency in pipeline configuration
- AWS-native integration without vendor lock-in
"The best CI/CD pipeline is the one your junior engineers can understand and operate confidently."
A cross-account CodeDeploy deployment for Lambda:

```hcl
codebuild_projects = {
  deploy-staging = {
    permission_profiles = ["cloudwatch", "s3", "lambda", "cross-account-codedeploy"]
    lambda = {
      function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-function"
    }
    codedeploy = {
      application_arn        = "arn:aws:codedeploy:us-east-1:123456789012:application:my-app"
      deployment_group_arn   = "arn:aws:codedeploy:us-east-1:123456789012:deploymentgroup:my-group"
      cross_account_role_arn = "arn:aws:iam::123456789012:role/CodeDeployRole"
    }
  }
}
```

A Terraform deployment that relies on the backend's assume_role configuration:

```hcl
codebuild_projects = {
  plan-staging = {
    permission_profiles = ["cloudwatch", "s3"] # No terraform-cross-account needed
    terraform = {
      # cross_account_role_arn omitted - uses backend assume_role configuration
      # state_backend omitted - uses existing backend configuration in Terraform code
      working_directory = "."
      terraform_version = "1.6.0"
    }
  }
}
```

Your Terraform code includes both backend and role configuration:

```hcl
terraform {
  backend "s3" {
    bucket         = "tf-central-state-backend-example-prod"
    key            = "cicd/infrastructure/my-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-central-state-lock-example-prod"
    encrypt        = true
    assume_role = {
      role_arn     = "arn:aws:iam::905418069656:role/terraform-admin"
      session_name = "codebuild-backend-session"
    }
  }
}
```

Alternatively, pass the cross-account role and state backend explicitly:

```hcl
codebuild_projects = {
  plan-staging = {
    permission_profiles = ["cloudwatch", "s3", "terraform-cross-account"]
    terraform = {
      cross_account_role_arn = "arn:aws:iam::123456789012:role/TerraformDeploymentRole"
      state_backend = {
        bucket         = "terraform-state-staging-123456789012"
        key            = "infrastructure/my-app/terraform.tfstate"
        region         = "us-east-1"
        dynamodb_table = "terraform-state-lock-staging"
      }
      working_directory = "."
      terraform_version = "1.6.0"
    }
  }
}
```

Permission profiles for compute deployments:

- `cloudwatch` - CloudWatch Logs access
- `s3` - S3 bucket access for artifacts and scripts
- `lambda` - Lambda function update permissions
- `same-account-codedeploy` - CodeDeploy permissions (same account)
- `cross-account-codedeploy` - Cross-account CodeDeploy permissions

Permission profiles for Terraform deployments:

- `cloudwatch` - CloudWatch Logs access
- `s3` - S3 bucket access for artifacts and scripts
- `terraform-cross-account` - Cross-account role assumption for Terraform deployments
For Terraform infrastructure deployments, the pipeline includes:
- Source - Monitors your GitHub repository
- Validate - Validates Terraform configuration syntax and formatting
- Plan-Staging - Generates Terraform plan for staging environment
- Deploy-Staging - Applies Terraform plan to staging environment
- Manual-Approval - Waits for your approval via AWS Console
- Plan-Production - Generates Terraform plan for production environment
- Deploy-Production - Applies Terraform plan to production environment
- Official Terraform Installation - Uses HashiCorp's official installation method
- Version Management - Supports specific Terraform versions or latest
- S3 State Backend - Dedicated state management per environment
- DynamoDB Locking - Optional state locking with DynamoDB
- Cross-Account Deployment - Deploy infrastructure to different AWS accounts
- Plan/Apply Separation - Separate stages for planning and applying changes
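The optional DynamoDB lock table mentioned above can be provisioned alongside the state bucket. A sketch with illustrative names; the S3 backend requires the table's hash key to be named exactly `LockID`:

```hcl
# Sketch: DynamoDB table for Terraform state locking.
resource "aws_dynamodb_table" "terraform_lock" {
  name         = "terraform-state-lock-staging"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # required name for the S3 backend's locking

  attribute {
    name = "LockID"
    type = "S"
  }
}
```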