This project demonstrates the provisioning of a robust and scalable AWS infrastructure using Terraform. It focuses on creating a custom Virtual Private Cloud (VPC) with public and private subnets, an Internet Gateway (IGW), a NAT Gateway, and deploying EC2 instances (both public-facing and private) within this network. The infrastructure is designed using Terraform Modules for reusability and maintainability, with remote state management on AWS S3 for collaborative development and state locking.
This project was built as a hands-on exercise to master core Terraform concepts and AWS networking essentials.
## Features

- Custom VPC Network: A fully configured VPC with a CIDR block of `10.0.0.0/16`.
- Public and Private Subnets: Distinct subnets across multiple Availability Zones for secure and flexible deployments.
- Internet Gateway (IGW): Enables communication between instances in public subnets and the internet.
- NAT Gateway: Allows instances in private subnets to initiate outbound connections to the internet while remaining isolated from inbound connections.
- EC2 Instance Deployment:
  - Public EC2 Instances: Deployed in public subnets, accessible via SSH and hosting a basic Nginx web server.
  - Private EC2 Instance: Deployed in a private subnet, accessible only from within the VPC (e.g., via a public bastion host).
- Terraform Custom Modules:
  - `vpc` module: Encapsulates all VPC-related resources (VPC, subnets, IGW, NAT Gateway, route tables).
  - `ec2` module: Manages EC2 instance creation, including security groups and user data.
  - Promotes code reusability and separation of concerns.
- Remote State Management: Terraform state is stored securely in an AWS S3 bucket with DynamoDB for state locking, which is crucial for team environments.
- Security Best Practices:
  - No hardcoded AWS credentials (relies on AWS CLI configuration).
  - `.gitignore` ensures sensitive files (`.tfstate`, `.terraform/`, credentials) are not committed to Git.
- Scalability: Uses `count` and `for_each` (implicitly via `dynamic` blocks for security groups) to provision multiple resources efficiently.
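The networking pieces above could be wired together inside the `vpc` module roughly as follows. This is a minimal sketch under assumed resource names and literal values (CIDRs, AZ), not the project's exact code:

```hcl
# modules/vpc/main.tf (illustrative sketch)
resource "aws_vpc" "this" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  tags                 = { Name = "devops-project-vpc" }
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.this.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "ap-south-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.this.id
  cidr_block        = "10.0.101.0/24"
  availability_zone = "ap-south-1a"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.this.id
}

# A NAT Gateway needs an Elastic IP and must live in a public subnet
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

# Public route table: default route straight to the IGW
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.this.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

# Private route table: default route through the NAT Gateway,
# so private instances get outbound-only internet access
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.this.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}
```

The key design point is the two route tables: the public one sends `0.0.0.0/0` to the IGW, while the private one sends it to the NAT Gateway, which is what keeps private instances unreachable from the internet.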
## Tech Stack

- Terraform: Infrastructure as Code tool
- AWS (Amazon Web Services): Cloud provider
  - VPC (Virtual Private Cloud)
  - EC2 (Elastic Compute Cloud)
  - S3 (Simple Storage Service) for remote state
  - DynamoDB (for state locking)
  - IAM (Identity and Access Management)
- Bash/Shell Scripting: For `user_data` scripts that configure EC2 instances (e.g., Nginx installation).
- Git & GitHub: Version control and code hosting.
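The `user_data` bootstrap might look like the following inside the `ec2` module. This is an illustrative sketch: the variable names, the `aws_security_group.web` reference, and the tag values are assumptions, not the project's actual code.

```hcl
# modules/ec2/main.tf (illustrative sketch) - a public instance
# bootstrapped with Nginx on Amazon Linux 2 via user_data
resource "aws_instance" "public" {
  ami                    = var.ami_id
  instance_type          = "t2.micro"
  subnet_id              = var.subnet_id
  key_name               = "my-ssh-key"
  vpc_security_group_ids = [aws_security_group.web.id] # assumed SG resource

  # Runs once on first boot (cloud-init); installs and starts Nginx
  user_data = <<-EOF
    #!/bin/bash
    yum update -y
    amazon-linux-extras install -y nginx1
    echo "Hello from Terraform EC2 Instance 1!" > /usr/share/nginx/html/index.html
    systemctl enable --now nginx
  EOF

  tags = { Name = "devops-project-public-ec2-1" }
}
```

Note that `user_data` only runs on first boot; changing it later forces instance replacement unless `user_data_replace_on_change` is managed deliberately.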
## Project Structure

```
terraform-aws-project/
├── modules/
│   ├── vpc/
│   │   ├── main.tf       # Defines VPC resources
│   │   ├── variables.tf  # Input variables for VPC module
│   │   └── outputs.tf    # Outputs from VPC module
│   └── ec2/
│       ├── main.tf       # Defines EC2 and Security Group resources
│       ├── variables.tf  # Input variables for EC2 module
│       └── outputs.tf    # Outputs from EC2 module
├── main.tf               # Root module: calls VPC and EC2 modules, defines backend
├── variables.tf          # Root module: top-level variables
├── outputs.tf            # Root module: top-level outputs from module calls
└── .gitignore            # Specifies files to ignore in Git
```
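The root `main.tf` ties the two modules together. A minimal sketch follows; the module input/output names (`vpc_cidr`, `public_subnet_ids`, etc.) are assumptions about the module interface, not the exact code:

```hcl
# Root main.tf (illustrative sketch): backend, provider, and module wiring
terraform {
  backend "s3" {
    bucket         = "your-terraform-state-bucket-unique-name"
    key            = "terraform-aws-project/terraform.tfstate"
    region         = "ap-south-1"
    encrypt        = true
    dynamodb_table = "your-terraform-lock-table-unique-name"
  }
}

provider "aws" {
  region = "ap-south-1"
}

module "vpc" {
  source   = "./modules/vpc"
  vpc_cidr = "10.0.0.0/16" # assumed input name
}

module "ec2_public" {
  source    = "./modules/ec2"
  ami_id    = "ami-xxxxxxxxxxxx"                # placeholder: valid AL2 AMI for ap-south-1
  subnet_id = module.vpc.public_subnet_ids[0]   # assumed module output
}
```

Passing `module.vpc` outputs into the `ec2` module is what creates the implicit dependency ordering: Terraform provisions the network before the instances.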
## Getting Started

Follow these steps to set up and provision the infrastructure in your AWS account.
### Prerequisites

- AWS Account: An active AWS account.
- AWS CLI Configured: Ensure your AWS CLI is configured with appropriate credentials and a default region (`ap-south-1`):

  ```bash
  aws configure
  ```

  (Enter your Access Key ID, Secret Access Key, `ap-south-1` for the region, and `json` for the output format.)
- Terraform Installed: Terraform (v1.0+) installed on your local machine (e.g., under WSL for Windows users).
- SSH Key Pair: An SSH key pair named `my-ssh-key` (or your preferred name) must exist in the `ap-south-1` AWS region. You can create one via the AWS Console (EC2 -> Key Pairs) or the AWS CLI:

  ```bash
  aws ec2 create-key-pair --key-name my-ssh-key --query 'KeyMaterial' --output text > my-ssh-key.pem
  chmod 400 my-ssh-key.pem  # Ensure proper permissions for SSH
  ```

  (Keep `my-ssh-key.pem` secure and accessible for SSH.)
### Setup

- Clone the Repository:

  ```bash
  git clone https://github.com/your-username/terraform-aws-devops-project.git  # Replace with your repo URL
  cd terraform-aws-devops-project
  ```
- Create an S3 Bucket for Terraform State: Go to the AWS S3 console in the `ap-south-1` region and create a globally unique bucket.
  - Bucket Name: `your-terraform-state-bucket-unique-name` (e.g., `my-devops-project-tfstate-12345`)
  - Region: `ap-south-1`
  - Leave other settings at their defaults.
- Create a DynamoDB Table for State Locking (Recommended): Go to the AWS DynamoDB console in the `ap-south-1` region and create a new table.
  - Table Name: `your-terraform-lock-table-unique-name` (e.g., `my-devops-project-tf-locks`)
  - Partition Key: `LockID` (Type: String)
  - Leave other settings at their defaults.
- Update `main.tf`: Open `main.tf` in the root directory and update the `backend "s3"` block with your actual S3 bucket and DynamoDB table names:

  ```hcl
  # main.tf
  backend "s3" {
    bucket         = "your-terraform-state-bucket-unique-name" # <--- UPDATE THIS
    key            = "terraform-aws-project/terraform.tfstate"
    region         = "ap-south-1"
    encrypt        = true
    dynamodb_table = "your-terraform-lock-table-unique-name" # <--- UPDATE THIS
  }
  ```

  Also ensure the `ami_id` in `main.tf` (for both the `ec2_public` and `ec2_private` modules) is set to a valid Amazon Linux 2 AMI for `ap-south-1`. You can find one with:

  ```bash
  aws ec2 describe-images --owners amazon \
    --filters "Name=name,Values=amzn2-ami-hvm-*-x86_64-gp2" "Name=state,Values=available" \
    --query "sort_by(Images, &CreationDate)[-1].ImageId" \
    --region ap-south-1
  ```
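Alternatively, instead of hardcoding the AMI ID, the root module could look it up at plan time with Terraform's `aws_ami` data source. This is a sketch of that approach, not part of the current project code:

```hcl
# Looks up the latest available Amazon Linux 2 AMI in the configured region,
# mirroring the filters used by the describe-images CLI command above
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }

  filter {
    name   = "state"
    values = ["available"]
  }
}

# Then pass it to the modules instead of a literal ID:
# ami_id = data.aws_ami.amazon_linux_2.id
```

The tradeoff is that `most_recent = true` can cause instance replacement when Amazon publishes a new AMI, so pinning a literal ID (as this project does) is a deliberate choice for stability.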
## Provisioning the Infrastructure

Navigate to the root of the project directory (`terraform-aws-project/`) in your terminal.
- Initialize Terraform: Downloads the necessary providers and initializes the S3 backend.

  ```bash
  terraform init
  ```
- Format Terraform Code: Ensures consistent formatting across all `.tf` files.

  ```bash
  terraform fmt
  ```
- Validate Terraform Configuration: Checks the configuration for syntax errors and internal consistency.

  ```bash
  terraform validate
  ```
- Review the Plan: Shows exactly what Terraform will do to your AWS infrastructure (which resources will be created, modified, or destroyed) without making any changes. Always review this carefully!

  ```bash
  terraform plan
  ```
- Apply the Configuration: Executes the planned changes to provision your AWS resources. Type `yes` when prompted to confirm.

  ```bash
  terraform apply
  ```

  This step may take several minutes, since resources like the NAT Gateway can take time to become available.
## Verifying the Deployment

After `terraform apply` completes:
- AWS Console: Log in to the AWS console and navigate to the VPC and EC2 dashboards in the `ap-south-1` region.
  - Verify the creation of `devops-project-vpc` with its subnets, Internet Gateway, NAT Gateway, and route tables.
  - Check for the `devops-project-public-ec2-1`, `devops-project-public-ec2-2`, and `devops-project-private-ec2-1` instances.
- Access a Public EC2 Instance: Copy the public IP address of a public EC2 instance from the AWS console or via `terraform output public_instance_ips`, then paste it into your web browser. You should see a simple Nginx welcome page: "Hello from Terraform EC2 Instance X!".
- SSH to a Public EC2 Instance:

  ```bash
  ssh -i /path/to/my-ssh-key.pem ec2-user@<Public_IP_of_Public_EC2>
  ```
- SSH to the Private EC2 Instance (via a Public EC2/Bastion): First SSH into a public EC2 instance, then SSH from there into the private instance. You'll need your `.pem` key on the public EC2 instance, or you can use SSH agent forwarding (`-A`):

  ```bash
  # On your local machine:
  ssh -A -i /path/to/my-ssh-key.pem ec2-user@<Public_IP_of_Public_EC2>

  # Once logged into the public EC2 (and if the key is on the public EC2 or the agent is forwarded):
  ssh -i /path/to/my-ssh-key.pem ec2-user@<Private_IP_of_Private_EC2>
  ```
## Cleanup

To avoid incurring ongoing AWS costs, it is crucial to destroy all provisioned resources when you are done.
- Destroy Terraform Resources: Navigate to the root of the project directory (`terraform-aws-project/`) and run:

  ```bash
  terraform destroy
  ```

  Type `yes` when prompted to confirm the destruction.
- Manually Delete the S3 State Bucket and DynamoDB Lock Table: `terraform destroy` does not delete the S3 bucket or DynamoDB table used for remote state. You must delete these manually from the AWS Console to ensure no lingering costs.
  - S3: Go to S3, select `your-terraform-state-bucket-unique-name`, and delete it.
  - DynamoDB: Go to DynamoDB, select `your-terraform-lock-table-unique-name`, and delete the table.
Feel free to fork this repository, experiment, and contribute to make it even better!
This project is open-sourced under the MIT License. (You may want to add a LICENSE file if you plan to share it broadly.)
- Architected a secure AWS VPC using Terraform with 2 public and 1 private EC2 instances running Nginx, leveraging S3 for remote state and DynamoDB for locking, cutting setup time by 50%.
- Enabled 100% isolation for the private EC2 instance by implementing bastion host access, allowing secure SSH into a private subnet within the Terraform-managed VPC.
- Automated AWS infrastructure provisioning (VPC, EC2, S3, DynamoDB) via Terraform, streamlining deployments and eliminating manual configuration errors.