Integrating AWS, Terraform and GitHub to Launch an Application (WebServer)

Prakash Singh Rajpurohit
9 min read · Jun 15, 2020
Terraform + AWS + GitHub

In today's automation-driven world, integrating different tools plays an important role in launching any application efficiently. The cloud gives us virtual infrastructure on a pay-as-you-go basis, which reduces the cost of infrastructure. Terraform gives cloud computing a further boost by letting us create that infrastructure efficiently.

Q. Why use Terraform with AWS?

Challenges we faced before Terraform:

1. Every cloud provider has a different web UI, commands, syntax, etc. Some clouds offer better services than others, so to launch our application we would have to learn the commands and syntax of every cloud we want to use and integrate them, which is very hard.

2. Launching any application (project) on the cloud needs tons of services, which then have to be connected together before we finally deploy. Doing this manually consumes a lot of time and energy.

Terraform brings the solution: a single language that works for every cloud. We no longer have to learn the commands of each provider (AWS, GCP, Azure, Alibaba, etc.), just Terraform. In Terraform we write code rather than commands, so the code becomes documentation that other team members can use to recreate the same infrastructure you built. Running the Terraform code builds the entire infrastructure, and one command destroys everything again. Terraform makes the complete infrastructure automatic. It has its own language, the HashiCorp Configuration Language (HCL), which is declarative in nature.

Terraform is a standardized tool to manage the cloud. Plugins (providers) make Terraform intelligent. Everything Terraform creates for you is recorded in one file called the state file (terraform.tfstate), and from that file it tracks which services have been created, which services depend on others, the order in which the code in your different .tf files must be applied, and so on.
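As a minimal sketch (the version constraints here are my own assumptions, adjust them to whatever you actually run), the Terraform and AWS plugin versions can be pinned so that teammates rebuild the infrastructure with the same tooling:

terraform {
  required_version = ">= 0.12"   # assumed minimum; this article uses 0.12-style syntax
  required_providers {
    aws = ">= 2.54"              # example constraint for the AWS provider plugin
  }
}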

Agenda: Create/Launch the Application using Terraform:

  1. Create the key pair and a security group that allows port 80 for the webserver.
  2. Launch an EC2 instance.
  3. In this EC2 instance use the key and security group created in step 1.
  4. Launch one EBS volume and mount it on /var/www/html.
  5. The developer has uploaded the code to a GitHub repo, which also contains some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket and make them publicly readable.
  8. CloudFront: edge locations pull static data (images, video, PDF, etc.) from S3 via a content delivery network.
  9. Create a snapshot of the EBS volume.

Install the following tools on your system (quick version checks follow the list):

  1. Git
  2. AWS-CLI
  3. Terraform
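Before moving on, you can confirm each tool is on your PATH (your version numbers will differ):

git --version
aws --version
terraform -version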

Step 1:

Terraform uses an AWS CLI configuration file (a named profile) to access my AWS account. Run the command shown below in a command prompt or terminal.

Storing the Access Key ID, Secret Access Key and Region in the configuration file.
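The screenshot above shows the AWS CLI prompts. Assuming the profile name prakash that the provider block below uses, the command is:

aws configure --profile prakash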

Here my profile name is prakash, which I pass to the Terraform AWS provider so it can log in to my account.

provider "aws" {
profile = "prakash"
region = "ap-south-1"
}

I have already created a key pair named Redhat and saved it on my local machine as Redhat.pem. This key pair lets us connect to our instance from anywhere in the world; an all-Terraform alternative is sketched below.
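Here the key pair was created manually in the AWS console. If you would rather keep everything in code, a minimal sketch like the following (the resource names are my own, not part of the original project) generates and registers a key pair using the tls and aws providers:

resource "tls_private_key" "webserver_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "generated_key" {
  key_name   = "Redhat"
  public_key = tls_private_key.webserver_key.public_key_openssh
}

The generated private key could then be written to disk with a local_file resource, or you can simply keep using the manually downloaded Redhat.pem as the rest of this article does.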

Creating security group:

//-----> Giving Security Protocol
resource "aws_security_group" "firewall" {
  name = "Prakash-terraform-firewall"

  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "Firewall"
  }
}

Now we create a security group that opens port 80 for our webpage and SSH (port 22) for remote connectivity between the local system and the EC2 instance. Ingress (inbound) rules allow requests coming from outside; here I have allowed the HTTP and SSH protocols from any IP. Egress (outbound) rules control the outgoing traffic from your EC2 instances; here I have allowed all protocols to any IP.

Step 2: Launch EC2 Instance

//--> Instance image variable:
variable "ami" {
  type    = string
  default = "ami-052c08d70def0ac62"
}

//--> Instance type variable:
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

Terraform variables let us define server details without having to remember infrastructure-specific values; these variable values are then passed into the code that creates the instance, and they can be overridden without touching the code, as shown below.
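For example, a terraform.tfvars file (the values below are just an illustration) overrides the defaults at apply time without editing the .tf files:

ami           = "ami-052c08d70def0ac62"
instance_type = "t2.small"

The same can be done on the command line, e.g. terraform apply -var="instance_type=t2.small".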

resource "aws_instance" "OS1" {
ami = var.ami
instance_type = var.instance_type
key_name = "Redhat"
vpc_security_group_ids = [aws_security_group.firewall.id]

//----> login into instance OS and runing following for application.
connection {
type = "ssh"
user = "ec2-user"
port = 22
private_key = file("C:/Users/addiction computers/Desktop/terra/Redhat.pem")
host = aws_instance.OS1.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd -y",
"sudo systemctl start httpd",
"sudo systemctl enable httpd",
"sudo yum install git -y"
]
}
tags = {
Name = "myos"
}
}

We use the key pair and security group that we created earlier. To run commands on the instance, Terraform first connects the local machine to the instance over SSH. On the instance I install git and httpd (the Apache web server) to serve the webpage.
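You can make the same connection by hand to verify the setup (the key path and IP below are placeholders for your own values):

ssh -i Redhat.pem ec2-user@<instance-public-ip>
sudo systemctl status httpd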

Steps 3, 4, 5 and 6:

  • In this EC2 instance (myos) use the key and security group created in step 1.
  • Launch one EBS volume and mount it on /var/www/html.
  • The developer has uploaded the code to a GitHub repo, which also contains some images.
  • Copy the GitHub repo code into /var/www/html.

resource "aws_ebs_volume" "document-storage" {
availability_zone = aws_instance.OS1.availability_zone
type = "gp2"
size = 1
tags = {
Name = "myebs1"
}
}
resource "aws_volume_attachment" "ebs_att" {
device_name = "/dev/sdd"
volume_id = "${aws_ebs_volume.document-storage.id}"
instance_id = "${aws_instance.OS1.id}"
force_detach = true
}
//----> Format the attached volume and mount it on the webserver folder.
resource "null_resource" "mount-ebs" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    port        = 22
    private_key = file("C:/Users/addiction computers/Desktop/terra/Redhat.pem")
    host        = aws_instance.OS1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdd",            # the volume attached as /dev/sdd appears as /dev/xvdd inside the instance
      "sudo mount /dev/xvdd /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/pk2101/automation.git /var/www/html/"
    ]
  }
}

Now we create an EBS volume of 1 GB and attach it to the instance (myos). After connecting from the local machine to the instance over SSH, we format the volume and mount it on /var/www/html (the device attached as /dev/sdd shows up as /dev/xvdd inside the instance). Finally, git clone downloads the code from GitHub and stores it in /var/www/html.
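From inside the instance you can verify the result (a quick manual check, not part of the Terraform code):

lsblk                  # the 1 GiB volume should show up as xvdd
df -h /var/www/html    # confirms the EBS volume is mounted on the web root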

Step 7:

  • Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket and make them publicly readable.

In the business world a company runs on its data, and much of that data is static in nature. Code can be written again, but static data (images, video, PDFs, etc.) cannot be recovered if it is lost. With EBS we can face webserver downtime if the storage holding the data fails. If we store static data in S3 we avoid that downtime, because S3 is the object storage service of AWS and is designed for eleven nines (99.999999999%) of durability, so your data will not be lost. S3 can also store a practically unlimited amount of data.

resource "aws_s3_bucket" "image-bucket" {
bucket = "webserver-images321"
acl = "public-read"
provisioner "local-exec" {
command = "mkdir Static-Content"
}provisioner "local-exec" {
command = "git clone https://github.com/pk2101/automation.git"
}provisioner "local-exec" {
when = destroy
command = "echo Yes | rmdir /s Static-Content"
}
}
resource "aws_s3_bucket_object" "image-upload" {
content_type = "image/jpeg"
bucket = "${aws_s3_bucket.image-bucket.id}"
key = "webphoto.jpeg"
source = "Static-Content/img2.jpeg"
acl = "public-read"
}

Here we create an S3 bucket. We first download the static content from GitHub to the local machine and then upload it to the S3 bucket. The local copy of the static content is removed when the entire infrastructure is destroyed.
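You can confirm the object landed in the bucket with the AWS CLI, using the bucket name from the code above and the profile configured earlier:

aws s3 ls s3://webserver-images321 --profile prakash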

Step 8:

  • CloudFront: edge locations pull static data (images, video, PDF, etc.) from S3 via a content delivery network.

Suppose the company launches its website in the Mumbai AWS data center but has clients across the globe. Clients outside India may face latency when loading the page, because the static data (images, video, PDFs, etc.) and the code are coming from India. To solve this, AWS runs 200+ small data centers for static content, known as edge locations. The first time a client requests an object, the edge location fetches it from the S3 bucket where our static data is stored and keeps it in its cache; the default cache lifetime (TTL) is 86,400 seconds, i.e. one day. The AWS service that provides this is CloudFront, and the complete setup is called a Content Delivery Network.

// CDN
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "Create origin access identity"
}

locals {
  s3_origin_id = "myS3Origin"
}
resource "aws_cloudfront_distribution" "s3_distribution" {
origin {
domain_name = "${aws_s3_bucket.image-bucket.bucket_regional_domain_name}"
origin_id = "${local.s3_origin_id}"
s3_origin_config {
origin_access_identity = "${aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path}"
}
}
enabled = true
is_ipv6_enabled = true
comment = "CDN distribution"
#default_root_object = "index.html"
/* logging_config {
include_cookies = false
bucket = "mylogs.s3.amazonaws.com"
prefix = "myprefix"
}*/
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "${local.s3_origin_id}"
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
# Cache behavior with precedence 0
ordered_cache_behavior {
path_pattern = "/content/immutable/*"
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD", "OPTIONS"]
target_origin_id = "${local.s3_origin_id}"
forwarded_values {
query_string = false
headers = ["Origin"]
cookies {
forward = "none"
}
}
min_ttl = 0
default_ttl = 86400
max_ttl = 31536000
compress = true
viewer_protocol_policy = "redirect-to-https"
}
# Cache behavior with precedence 1
ordered_cache_behavior {
path_pattern = "/content/*"
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "${local.s3_origin_id}"
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
compress = true
viewer_protocol_policy = "redirect-to-https"
}
price_class = "PriceClass_200"restrictions {
geo_restriction {
restriction_type = "whitelist"
locations = ["US", "CA", "GB", "DE", "IN"]
}
}
tags = {
Environment = "production"
}
viewer_certificate {
cloudfront_default_certificate = true
}
connection {
type = "ssh"
user = "ec2-user"
port = 22
private_key = file("C:/Users/addiction computers/Desktop/terra/Redhat.pem")
host = aws_instance.OS1.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo su <<END",
"echo \"<img src='http://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.image-upload.key}' class='center' height='550' width='550'>\" >> /var/www/html/index.html",
"END",
]
}
}

Here we create the CloudFront distribution that serves our content through edge locations across the globe. Whenever a client requests the webpage for the first time, the request goes to the nearest edge location. On behalf of the client, the edge location contacts S3, pulls the static data and hands it to the client, then keeps that data in its cache for one day by default.

Step 9:

  • Create a snapshot of the EBS volume.

resource "aws_ebs_snapshot" "snapshot" {
  volume_id = aws_ebs_volume.document-storage.id

  tags = {
    Name = "prakash-ebs-snap"
  }
}

We have now created a snapshot of the EBS volume that contains the developer's code. Other team members can use this snapshot to bring up the same webserver, and the snapshot can also be copied to another region, as sketched below.
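As a minimal sketch of that cross-region idea (the provider alias, resource name and destination region are my own examples, not part of the original code), copying the snapshot could look like this:

provider "aws" {
  alias   = "singapore"
  profile = "prakash"
  region  = "ap-southeast-1"
}

resource "aws_ebs_snapshot_copy" "snapshot_copy" {
  provider           = aws.singapore
  source_snapshot_id = aws_ebs_snapshot.snapshot.id
  source_region      = "ap-south-1"

  tags = {
    Name = "prakash-ebs-snap-copy"
  }
}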

Output:

output "myos_ip" {
value = aws_instance.OS1.public_ip
}
resource "null_resource" "ip_store" {
provisioner "local-exec" {
command = "echo ${aws_instance.OS1.public_ip} > publicip.txt"
}
}
//------------------------------------------------------------------resource "null_resource" "domain-name" {
provisioner "local-exec" {
command = "echo ${aws_cloudfront_distribution.s3_distribution.domain_name}/webphoto.html > cloudfront-domain-name.txt"
}
}
//------------------------------------------------------------------
resource "null_resource" "Output" {
depends_on = [
null_resource.mount-ebs, aws_cloudfront_distribution.s3_distribution
]
provisioner "local-exec" {
command = "start chrome ${aws_instance.OS1.public_ip}"
}
}

Now we create an output for the public IP of the instance and store the IP and the CloudFront domain name in local files. Finally, a command runs on the local system to open the result in Chrome.
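After the apply completes, the declared output can also be read back at any time:

terraform output myos_ip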

Web-Server Page

On this webserver page the image comes from S3 through an edge location, while the page code is served by the EC2 instance from /var/www/html. So even if our instance/OS terminates for any reason, the static data will not be lost; it is safe in S3.

Practical:

Here you can see the complete practical of launching a webserver using AWS, Terraform and GitHub. It is a great example of automation: the whole infrastructure comes up from a single run of the standard workflow shown below.
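These are the standard Terraform CLI commands; run them from the folder that contains the .tf files:

terraform init        # downloads the AWS provider plugin
terraform apply       # builds the complete infrastructure (add -auto-approve to skip the confirmation prompt)
terraform destroy     # tears everything down again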

Launching a webserver using Terraform, AWS and GitHub

GitHub: Click here to get the complete code.

Thank you for reading.

If this article was helpful, I would appreciate it if you could give it a clap.
