Terraform with AWS (using EFS instead of EBS): Full Automation

Abhishek Sharma
5 min read · Sep 1, 2020


AWS

Amazon Web Services (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis. In aggregate, these cloud computing web services provide a set of primitive abstract technical infrastructure and distributed computing building blocks and tools. One of these services is Amazon Elastic Compute Cloud (EC2), which allows users to have at their disposal a virtual cluster of computers, available all the time, through the Internet.

Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

The main components involved in launching a web server are:

1. Elastic Compute Cloud (EC2) is a part of Amazon's cloud computing platform that allows users to rent virtual computers on which to run their own applications. It provides Compute as a Service (CaaS).

2. Elastic File System (EFS) is a cloud storage service provided by AWS, designed to provide scalable, elastic, concurrent (with some restrictions) and encrypted file storage for use with both AWS cloud services and on-premises resources. In simple words, it provides File Storage as a Service (FSaaS).

3. CloudFront is a content delivery network (CDN) offered by Amazon Web Services. Content delivery networks provide a globally distributed network of proxy servers that cache content, such as web videos or other bulky media, closer to consumers, thus improving download speeds.

Problem statement:

Create/launch Application using Terraform

1. Create a key pair and a security group that allows port 80.

2. Launch an EC2 instance using the key and security group created in step 1.

3. Launch one storage volume (EFS), attach it to the EC2 instance, and mount it on a directory.

4. Get the code uploaded by the developer to GitHub and copy it into the /var/www/html folder for deployment.

5. Create an S3 bucket, copy/deploy the static images into it, and change their permission to public-readable.

6. Create a CloudFront distribution using the S3 bucket (which contains the images) and update the code with the CloudFront URL.

7. Launch the application for testing from the code itself.

STEP 1: Specifying the Provider

provider "aws" {
  region  = "ap-south-1"
  profile = "testing"
}
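The problem statement also asks us to create the key pair itself, while the instance code later in this article assumes an existing key named "mykeys". A minimal sketch that generates the key with the `tls` provider (resource names here are illustrative, not from the original setup):

```hcl
# Generate an RSA key pair locally and register its public half with AWS
resource "tls_private_key" "mykey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "mykeys" {
  key_name   = "mykeys"
  public_key = tls_private_key.mykey.public_key_openssh
}
```

The private key can then be saved to disk with a `local_file` resource so it can be used in the SSH connection block.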

STEP 2: Creating a Security Group

Here we define a security group, which acts as a firewall. It allows SSH, HTTP, and one more port (NFS, 2049) through which EFS communicates. Inbound traffic is called ingress and outbound traffic is called egress; the CIDR blocks define the allowed IP ranges.

resource "aws_security_group" "sc1" {
  name        = "sc1"
  description = "Allows SSH, HTTP and NFS"
  vpc_id      = "vpc-720b141a"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "sc1"
  }
}

STEP 3: Launching EFS

This creates the EFS file system with encryption of data at rest enabled, along with a mount target in our subnet.

resource "aws_efs_file_system" "myefs" {
  creation_token = "my-efs"
  encrypted      = true

  tags = {
    Name = "myefs"
  }
}

resource "aws_efs_mount_target" "first" {
  file_system_id  = aws_efs_file_system.myefs.id
  subnet_id       = "subnet-7761651f"
  security_groups = [aws_security_group.sc1.id]
}
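EFS exposes one mount target per Availability Zone; instances running in subnets of other zones need their own mount target. A sketch for a second subnet (the subnet ID below is hypothetical):

```hcl
# One mount target is needed per Availability Zone the instances run in
resource "aws_efs_mount_target" "second" {
  file_system_id  = aws_efs_file_system.myefs.id
  subnet_id       = "subnet-xxxxxxxx" # hypothetical subnet in another AZ
  security_groups = [aws_security_group.sc1.id]
}
```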

STEP 4: Launching the EC2 Instance

This launches the instance, installs the required software, mounts the EFS volume we created on /var/www/html, and clones the developer's code from GitHub into it.

resource "aws_instance" "myos1" {
  ami                         = "ami-0732b62d310b80e97"
  instance_type               = "t2.micro"
  key_name                    = "mykeys"
  vpc_security_group_ids      = [aws_security_group.sc1.id]
  subnet_id                   = "subnet-7761651f"
  associate_public_ip_address = true

  # the mount target must exist before we can mount the file system
  depends_on = [aws_efs_mount_target.first]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/dell/Downloads/mykeys.pem")
    host        = aws_instance.myos1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git nfs-utils -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
      "sudo mount -t nfs4 ${aws_efs_file_system.myefs.dns_name}:/ /var/www/html",
      # replace the URL below with the developer's actual repository
      "sudo git clone https://github.com/user/repo /var/www/html",
    ]
  }

  tags = {
    Name = "myos1"
  }
}

STEP 5: Creating an S3 Bucket

This creates an S3 bucket, which works as unified storage for our static assets; CloudFront will later distribute its contents globally as a CDN (Content Delivery Network).

resource "aws_s3_bucket" "abhi85fortask" {
  bucket = "abhi85fortask"
  acl    = "public-read"

  versioning {
    enabled = true
  }

  tags = {
    Name        = "abhi85fortask"
    Environment = "Dev"
  }
}

STEP 6: Uploading to the S3 Bucket

Uploading the static data to the S3 bucket we just created. key is the name the object will have inside the bucket, and source is the local path of the file to be uploaded.

resource "aws_s3_bucket_object" "s3obj" {
  depends_on = [
    aws_s3_bucket.abhi85fortask,
  ]

  bucket       = "abhi85fortask"
  key          = "original.jpg"
  source       = "C:/Users/Dell/Desktop/original.jpg"
  acl          = "public-read"
  content_type = "image/jpeg"
}

STEP 7: Creating a CloudFront Distribution

CloudFront is an AWS service that caches our data in edge locations (small data centres) around the world to achieve low latency. Here we create a CloudFront distribution backed by the S3 bucket in which we stored all of our site's assets, such as images and icons.

resource "aws_cloudfront_distribution" "abhiCF" {
  origin {
    domain_name = "abhi85fortask.s3.amazonaws.com"
    origin_id   = "S3-abhi85fortask"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-abhi85fortask"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
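The problem statement asks us to use the CloudFront URL in the code. An output block makes the generated domain name easy to retrieve after applying; a small sketch (the output name is illustrative):

```hcl
# Print the distribution's domain name so it can be pasted into the site's code
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.abhiCF.domain_name
}
```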

OUTPUTS

To apply the full automation, run these commands:

terraform init

terraform apply -auto-approve

With this, the task is complete.

Thank you…..
