Automating the AWS cloud Infrastructure using Terraform


Abhishek Sharma
6 min read · Jun 15, 2020


What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter.

What is an EC2 instance in AWS?

AWS is a comprehensive, evolving cloud computing platform; EC2 is a service that lets subscribers run application programs in the AWS computing environment. EC2 can provide a practically unlimited set of virtual machines.

Amazon provides a variety of instance types with different configurations of CPU, memory, storage, and networking resources to suit user needs. Each type is also available in multiple sizes to address workload requirements.

Instance types are grouped into families based on target application profiles. These groups include: general purpose, compute-optimized, GPU, memory-optimized, storage-optimized, and micro instances.

Instances are created from Amazon Machine Images (AMIs). These machine images are like templates, configured with an operating system and other software, which determine the user’s operating environment. Users can select an AMI provided by AWS, by the user community, or through the AWS Marketplace. Users can also create their own AMIs and share them.

Problem statement

  1. Create a key pair and a security group that allows port 80.
  2. Launch an EC2 instance.
  3. In this EC2 instance, use the key and security group created in step 1.
  4. Launch one EBS volume and mount it at /var/www/html.
  5. The developer has uploaded the code to a GitHub repo, which also contains some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the bucket, and make them publicly readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Solution:

First, create a folder (on the Desktop, for example) and in it create a file with the .tf extension in any text editor.

Now we write the following inside that file.

  1. First, write the code that configures the AWS profile Terraform will use:

provider "aws" {
  region  = "ap-south-1"
  profile = "testing"
}
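Terraform reads this profile from the local AWS CLI configuration, so the `testing` profile must already exist. It can be created with `aws configure --profile testing`, which writes an entry like the following (placeholder keys shown, not real credentials) to `~/.aws/credentials`:

```ini
[testing]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```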

2. Create a security group for the instance so clients can connect from other devices. By default, AWS blocks inbound connections from outside the host with a firewall, so we add ingress rules that allow TCP connections on the SSH (22) and HTTP (80) ports.

resource "aws_security_group" "http" {
  name = "all_http"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "all_http"
  }
}

3. Launch an EC2 instance using the key and the security group created above.

resource "aws_instance" "cloudtask1" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "mykeys"
  security_groups = ["all_http"]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/dell/Downloads/mykeys.pem")
    host        = aws_instance.cloudtask1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "abhios1"
  }
}
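One caveat worth mentioning (a suggested variant, not part of the original task): passing the literal name "all_http" does not tell Terraform that the instance depends on the security group, so on a fresh apply the instance could be created before the group exists. Referencing the resource attribute instead lets Terraform infer the ordering:

```hcl
resource "aws_instance" "cloudtask1" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name      = "mykeys"

  # Referencing the attribute creates an implicit dependency, so the
  # security group is always created before the instance.
  security_groups = [aws_security_group.http.name]

  # ... connection, provisioner, and tags blocks as above ...
}
```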

4. Launch one EBS volume.

resource "aws_ebs_volume" "abhiebs1" {
  availability_zone = aws_instance.cloudtask1.availability_zone
  size              = 1

  tags = {
    Name = "volume1"
  }
}

5. Now attach the volume, mount it at /var/www/html, and copy the code from GitHub:

resource "aws_volume_attachment" "abhiebs" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.abhiebs1.id
  instance_id  = aws_instance.cloudtask1.id
  force_detach = true
}

output "ip" {
  value = aws_instance.cloudtask1.public_ip
}

resource "null_resource" "null" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.cloudtask1.public_ip} > publicip.txt"
  }
}

resource "null_resource" "nullremote1" {
  depends_on = [
    aws_volume_attachment.abhiebs,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/dell/Downloads/mykeys.pem")
    host        = aws_instance.cloudtask1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/abhi-85/terraform1.git /var/www/html",
    ]
  }
}
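Two assumptions are baked into the commands above: on this instance type the volume attached as /dev/sdh appears inside the OS as /dev/xvdh, which is why the remote commands use that name; and mkfs.ext4 will wipe the volume if the provisioner ever re-runs. A more defensive sketch (my variant, not the original) formats the disk only when it has no filesystem yet:

```hcl
provisioner "remote-exec" {
  inline = [
    # Format only if blkid finds no existing filesystem on the device.
    "sudo blkid /dev/xvdh || sudo mkfs.ext4 /dev/xvdh",
    "sudo mount /dev/xvdh /var/www/html",
    "sudo rm -rf /var/www/html/*",
    "sudo git clone https://github.com/abhi-85/terraform1.git /var/www/html",
  ]
}
```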

6. Now create the S3 bucket and copy the image into it.

resource "aws_s3_bucket" "abhis3" {
  bucket = "abhi85"
  acl    = "public-read"

  tags = {
    Name = "bucket1"
  }

  versioning {
    enabled = true
  }
}
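This syntax matches version 3 and earlier of the AWS provider. In provider version 4+, `acl` and `versioning` were split out of `aws_s3_bucket` into separate resources; on a newer provider the equivalent would look roughly like this (the `_acl` and `_ver` resource names are my own):

```hcl
resource "aws_s3_bucket" "abhis3" {
  bucket = "abhi85"

  tags = {
    Name = "bucket1"
  }
}

resource "aws_s3_bucket_acl" "abhis3_acl" {
  bucket = aws_s3_bucket.abhis3.id
  acl    = "public-read"
}

resource "aws_s3_bucket_versioning" "abhis3_ver" {
  bucket = aws_s3_bucket.abhis3.id

  versioning_configuration {
    status = "Enabled"
  }
}
```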
locals {
  s3_origin_id = "abhis3Origin"
}

resource "aws_s3_bucket_object" "abhis3obj" {
  depends_on = [
    aws_s3_bucket.abhis3,
  ]

  bucket       = "abhi85"
  key          = "vimal sir.jpg"
  source       = "C:/Users/dell/Downloads/vimal sir.jpg"
  acl          = "public-read"
  content_type = "image/jpeg"
}
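The problem statement mentions that the repo has several images. Rather than writing one `aws_s3_bucket_object` per file, Terraform's `fileset` function with `for_each` can upload a whole local directory (the `images/` path here is a hypothetical example, not from the original):

```hcl
resource "aws_s3_bucket_object" "images" {
  # Upload every .jpg found in a local images/ directory (hypothetical path).
  for_each = fileset("images", "*.jpg")

  bucket       = aws_s3_bucket.abhis3.bucket
  key          = each.value
  source       = "images/${each.value}"
  acl          = "public-read"
  content_type = "image/jpeg"
}
```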

7. Finally, create a CloudFront distribution backed by the S3 bucket (which contains the images); its URL can then be used in the code under /var/www/html. The public IP is also saved to a file by the local-exec provisioner above.

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.abhis3.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "index.html"

  logging_config {
    include_cookies = false
    bucket          = "abhi85.s3.amazonaws.com"
    prefix          = "myprefix"
  }

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
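The last step of the problem statement, updating the code in /var/www/html with the CloudFront URL, is not automated above. One possible sketch (the resource name, target file, and HTML snippet are my assumptions, not the original code) appends an image tag that loads the uploaded picture through the distribution:

```hcl
resource "null_resource" "update_site" {
  depends_on = [aws_cloudfront_distribution.s3_distribution]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/dell/Downloads/mykeys.pem")
    host        = aws_instance.cloudtask1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # Append an <img> tag pointing at the CloudFront domain to the page.
      "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/vimal sir.jpg'>\" | sudo tee -a /var/www/html/index.html",
    ]
  }
}
```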

Now, to run this code, first initialize the working directory:

terraform init

Then apply the configuration:

terraform apply -auto-approve

Now the entire setup is automated with Terraform; nothing was created manually:

- EC2 instance
- Security groups
- Volumes (EBS)
- S3 bucket
- CloudFront

Now I use the public IP to open my website.

So, finally, I completed this task. I would like to thank Vimal Daga sir, who taught us these things during this pandemic.

For any related queries, you can connect with me: Abhishek Sharma.

Thank you…
