Using EFS Instead of EBS on AWS: Create/Launch an Application Using Terraform and Jenkins

Prince Prashant Saini
15 min read · Sep 8, 2020

Problem Statement =>

Use EFS instead of the EBS service on AWS, as follows.

Create/launch an application using Terraform:

1. Create a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the existing or provided key and the security group created in step 1.

4. Launch one volume using the EFS service, attach it to your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code to a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Optional

  1. Those who are familiar with Jenkins, or are in DevOps, can integrate Jenkins into this task wherever they feel it fits.

Solution=>

First, let's see what we will use here.

THE .tf FILE CREATED BY THE DEVELOPER AND PUSHED TO GITHUB =>👇

Amazon Web Services => (use any cloud, but I used AWS, a highly recommended cloud)

Amazon Web Services (AWS) is a subsidiary of Amazon providing on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis. These cloud computing web services provide a variety of basic abstract technical infrastructure and distributed computing building blocks and tools. One of these services is Amazon Elastic Compute Cloud (EC2), which allows users to have at their disposal a virtual cluster of computers, available all the time, through the Internet. AWS’s version of virtual computers emulates most of the attributes of a real computer, including hardware central processing units (CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-disk/SSD storage; a choice of operating systems; networking; and pre-loaded application software such as web servers, databases, and customer relationship management (CRM).
The AWS technology is implemented at server farms throughout the world, and maintained by the Amazon subsidiary. Fees are based on a combination of usage (known as a “Pay-as-you-go” model), hardware, operating system, software, or networking features chosen by the subscriber, as well as required availability, redundancy, security, and service options. Subscribers can pay for a single virtual AWS computer, a dedicated physical computer, or clusters of either. As part of the subscription agreement, Amazon provides security for subscribers’ systems. AWS operates from many global geographical regions, including six in North America.
Amazon markets AWS to subscribers as a way of obtaining large-scale computing capacity more quickly and cheaply than building an actual physical server farm. All services are billed based on usage, but each service measures usage in varying ways. As of 2017, AWS owns a dominant 34% of all cloud (IaaS, PaaS), while the next three competitors Microsoft, Google, and IBM have 11%, 8%, and 6% respectively, according to Synergy Group.

We give the provider as AWS because we are using the AWS Cloud.

#PROVIDER

provider "aws" {
region = "ap-south-1"
profile = "prince"
}

Amazon Virtual Private Cloud=>

Amazon Virtual Private Cloud (VPC) is a commercial cloud computing service that provides users a virtual private cloud, by “provision[ing] a logically isolated section of Amazon Web Services (AWS) Cloud”. Enterprise customers are able to access the Amazon Elastic Compute Cloud (EC2) over an IPsec-based virtual private network. Unlike traditional EC2 instances, which are allocated internal and external IP numbers by Amazon, the customer can assign IP numbers of their choosing from one or more subnets. By giving the user the option of selecting which AWS resources are public-facing and which are not, VPC provides much more granular control over security. For Amazon it is “an endorsement of the hybrid approach, but it’s also meant to combat the growing interest in private clouds”.

#VPC

resource "aws_vpc" "vpcmain" {
cidr_block = "192.168.0.0/16"
instance_tenancy = "default"

tags = {
Name = "vpcmain"
}
}

Subnet=>

A subnet mask is used to divide an IP address into two parts. One part identifies the host (computer); the other part identifies the network to which it belongs. To better understand how IP addresses and subnet masks work, look at an IP (Internet Protocol) address and see how it is organized.
We reference the VPC ID because the subnet is part of the VPC.

#SUBNET

resource "aws_subnet" "awsmain" {
vpc_id = "${aws_vpc.vpcmain.id}"
cidr_block = "192.168.0.0/24"
map_public_ip_on_launch = true
availability_zone = "ap-south-1a"


tags = {
Name = "subnet"
}
}

Internet Gateway =>

An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. An internet gateway supports IPv4 and IPv6 traffic.

#INTERNET_GATEWAY

resource "aws_internet_gateway" "gw" {
vpc_id = "${aws_vpc.vpcmain.id}"

tags = {
Name = "gateway"
}
}

Route Table=>

A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed.
A gateway route table supports routes where the target is local or an elastic network interface in the VPC. Each route in a table specifies a destination and a target. IPv4 and IPv6 traffic are treated separately; i.e., routes to IPv4 and IPv6 addresses or CIDR blocks are independent of each other.

resource "aws_route_table" "route" {
vpc_id = "${aws_vpc.vpcmain.id}"

route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.gw.id}"
}

tags = {
Name = "route-table"
}
}
resource "aws_route_table_association" "first" {
subnet_id = aws_subnet.awsmain.id
route_table_id = aws_route_table.route.id
}

S3 Bucket =>

An Amazon S3 bucket is a public cloud storage resource available in Amazon Web Services’ (AWS) Simple Storage Service (S3), an object storage offering. Amazon S3 buckets, which are similar to file folders, store objects, which consist of data and its descriptive metadata.

resource "aws_s3_bucket_object" "object" {
bucket = aws_s3_bucket.second.id
key = "my.jpg"
}

locals{
s3_origin_id = "aws_s3_bucket.second.id"
depends_on = [aws_s3_bucket.second]
}

NFS, EFS & Security Group=>

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.

A security group acts as a virtual firewall for your EC2 instances to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance. If you don’t specify a security group, Amazon EC2 uses the default security group.

#NFS & SECURITY GROUP

resource "aws_security_group" "sg1" {
name = "securitygr1"
description = "Allow NFS"
vpc_id = "${aws_vpc.vpcmain.id}"

ingress {
description = "ssh"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "NFS"
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags = {
Name = "nfs-groups"
}
}

#EFS

resource "aws_efs_file_system" "myefs" {
creation_token = "myefs"
performance_mode = "generalPurpose"

tags = {
Name = "myefs"
}
}

resource "aws_efs_mount_target" "myefs-mount" {
file_system_id = aws_efs_file_system.myefs.id
subnet_id = aws_subnet.awsmain.id
security_groups = [ aws_security_group.sg1.id ]
}

EC2 => create the instance and automatically deploy the page

Amazon Elastic Compute Cloud (EC2) is a part of Amazon.com’s cloud-computing platform, Amazon Web Services (AWS), that allows users to rent virtual computers on which to run their own computer applications. EC2 encourages scalable deployment of applications by providing a web service through which a user can boot an Amazon Machine Image (AMI) to configure a virtual machine, which Amazon calls an “instance”, containing any software desired. A user can create, launch, and terminate server-instances as needed, paying by the second for active servers — hence the term “elastic”. EC2 provides users with control over the geographical location of instances that allows for latency optimization and high levels of redundancy. In November 2010, Amazon switched its own retail website platform to EC2 and AWS.

#WEBSERVER_INSTANCE

resource "aws_instance" "webserver" {
depends_on = [ aws_efs_mount_target.myefs-mount ]
ami = "ami-0732b62d310b80e97"
instance_type = "t2.micro"
key_name = "awskey"
subnet_id = aws_subnet.awsmain.id
vpc_security_group_ids = [ aws_security_group.sg1.id ]

tags = {
Name = "webserver-os"
}
}
resource "null_resource" "nullremote1" {
depends_on = [
aws_instance.webserver
]
connection {
type = "ssh"
user= "ec2-user"
private_key = file("C:/Users/Prashant Saini/Downloads/awskey.pem")
host = aws_instance.webserver.public_ip
}

provisioner "remote-exec" {
inline = [
"sudo yum install httpd php git amazon-efs-utils nfs-utils -y",
"sudo setenforce 0",
"sudo systemctl start httpd",
"sudo systemctl enable httpd",
"sudo mount -t efs ${aws_efs_file_system.myefs.id}:/ /var/www/html",
"sudo echo '${aws_efs_file_system.myefs.id}:/ /var/www/html efs defaults,_netdev 0 0' >> /etc/fstab",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/Prashantsaini25/Testing-webpage.git"
]
}
}

Amazon CloudFront=>

Amazon CloudFront is a content delivery network (CDN) offered by Amazon Web Services. Content delivery networks provide a globally-distributed network of proxy servers which cache content, such as web videos or other bulky media, more locally to consumers, thus improving access speed for downloading the content.
CloudFront has servers located in Europe (United Kingdom, Ireland, The Netherlands, Germany, Spain), Asia (Hong Kong, Singapore, Japan, Taiwan and India), Australia, South America, Africa, as well as in several major cities in the United States. The service operates from (as of July 2020) 205 edge locations on six continents.
CloudFront operates on a pay-as-you-go basis.
CloudFront competes with larger content delivery networks such as Akamai and Limelight Networks. Upon launch, Larry Dignan of ZDNet News stated that CloudFront could cause price and margin reductions for competing CDNs.

resource "aws_cloudfront_origin_access_identity" "identity" {
comment = "Some comment"
}
output "origin_access_identity" {
value = aws_cloudfront_origin_access_identity.identity
}
data "aws_iam_policy_document" "policy" {
statement {
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.second.arn}/*"]
principals {
type = "AWS"
identifiers = ["${aws_cloudfront_origin_access_identity.identity.iam_arn}"]
}
}
statement {
actions = ["s3:ListBucket"]
resources = ["${aws_s3_bucket.second.arn}"]
principals {
type = "AWS"
identifiers = ["${aws_cloudfront_origin_access_identity.identity.iam_arn}"]
}
}
}

Add Policy in S3 bucket =>

#POLICY

resource "aws_s3_bucket_policy" "first-policy" {
bucket = aws_s3_bucket.second.id
policy = data.aws_iam_policy_document.policy.json
}

resource "aws_cloudfront_distribution" "cloudfront" {
enabled = true
is_ipv6_enabled = true
wait_for_deployment = false
origin {
domain_name = "${aws_s3_bucket.second.bucket_regional_domain_name}"
origin_id = local.s3_origin_id
s3_origin_config {
origin_access_identity = "${aws_cloudfront_origin_access_identity.identity.cloudfront_access_identity_path}"

}
}
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false

cookies {
forward = "none"
}
}

viewer_protocol_policy = "redirect-to-https"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
cloudfront_default_certificate = true
}
}
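
Step 8 says to use the CloudFront URL to update the code in /var/www/html. The .tf file above doesn't show how that URL is retrieved; a small output block like this sketch (my addition, not part of the original file) would print the distribution's domain name after terraform apply:

#CLOUDFRONT_URL_OUTPUT (hypothetical addition)

output "cloudfront_domain" {
  # The domain to substitute into the image URLs, e.g. https://<domain>/my.jpg
  value = aws_cloudfront_distribution.cloudfront.domain_name
}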

Now the developer pushes the code.

Github=>

For this we start with Git Bash. The developer pushes the code with Git Bash (or any GitHub GUI). With Git Bash we use a few commands: first we create our workspace, then open Git Bash and go to that workspace, initialize the Git repository, and add a remote origin pointing to where we put the code. Use the following commands 👇👇👇

mkdir workspace
cd workspace
git init
git remote add origin <link>

Then we configure the post-commit hook:

touch .git/hooks/post-commit
vim .git/hooks/post-commit
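
The hook itself is just a shell script that Git runs after every commit. A minimal sketch of its contents (an assumption on my part; the branch name and exact behavior of the author's hook may differ), made executable with chmod +x .git/hooks/post-commit:

#!/bin/sh
# Hypothetical post-commit hook: push to origin after every commit
git push origin master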

Now we push the code with:

git push

We push it to GitHub.

The push is done and the code is uploaded successfully 👇

Now take the repo link and paste it into Jenkins.

Terraform

Terraform is an open-source infrastructure as code software tool created by HashiCorp. Users define and provision data center infrastructure using a declarative configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON.
Terraform manages external resources (such as public cloud infrastructure, private cloud infrastructure, network appliances, software as a service, and platform as a service) with “providers”. HashiCorp maintains an extensive list of official providers, and can also integrate with community-developed providers. Users can interact with Terraform providers by declaring resources or by calling data sources. Rather than using imperative commands to provision resources, Terraform uses declarative configuration to state the desired final state. Once a user invokes Terraform on a given resource, Terraform will perform CRUD actions on the user’s behalf to accomplish the desired state. The infrastructure as code can be written as modules, promoting reusability and maintainability.
Terraform supports a number of cloud infrastructure providers such as Amazon Web Services, Microsoft Azure, IBM Cloud, Google Cloud Platform, DigitalOcean, Oracle Cloud Infrastructure, VMware vSphere, and OpenStack.
HashiCorp also supports a Terraform Module Registry, launched in 2017. In 2019, Terraform introduced a paid version called Terraform Enterprise for larger organizations. Terraform has four major commands: terraform init, terraform plan, terraform apply, terraform destroy.
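
In practice, the basic workflow is those four commands, run from the folder that contains the .tf file:

terraform init      # download the provider plugins (here, AWS)
terraform plan      # preview the resources that will be created
terraform apply     # build the infrastructure
terraform destroy   # tear everything down when finished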

First, install Terraform; then check it in CMD with this command:

terraform version

If it prints a version, Terraform is installed. After that, create the notepad file:

notepad task2.tf

After it is created successfully, use:

terraform init

Now after that, run the apply command:

terraform apply

Here you see output like this after apply.
Now Terraform is done.

Terraform is done, but we had to run those three commands manually.

FOR AUTOMATION, WE USE:

JENKINS=>

Jenkins is a free and open source automation server. It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery. It is a server-based system that runs in servlet containers such as Apache Tomcat. It supports version control tools, including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase and RTC, and can execute Apache Ant, Apache Maven and sbt based projects as well as arbitrary shell scripts and Windows batch commands. The creator of Jenkins is Kohsuke Kawaguchi. Released under the MIT License, Jenkins is free software.
Builds can be triggered by various means, for example by commit in a version control system, by scheduling via a cron-like mechanism and by requesting a specific build URL. It can also be triggered after the other builds in the queue have completed. Jenkins functionality can be extended with plugins.
The Jenkins project was originally named Hudson, and was renamed after a dispute with Oracle, which had forked the project and claimed rights to the project name. The Oracle fork, Hudson, continued to be developed for a time before being donated to the Eclipse Foundation. Oracle’s Hudson is no longer maintained and was announced as obsolete in February 2017.

Jenkins: an automation tool

For this we use RedHat: we install Jenkins on top of RedHat and open it in Chrome on Windows. You can download everything yourself; to run RedHat we used VirtualBox.

Redhat 8 => (use any Linux, but I used RedHat 8 because much of the industry uses RedHat)

Red Hat, Inc. is an American multinational software company that provides open source software products to enterprises. Founded in 1993, Red Hat has its corporate headquarters in Raleigh, North Carolina, with other offices worldwide. It became a subsidiary of IBM on July 9, 2019.
Red Hat has become associated to a large extent with its enterprise operating system Red Hat Enterprise Linux. With the acquisition of open-source enterprise middleware vendor JBoss, Red Hat also offers Red Hat Virtualization (RHV), an enterprise virtualization product. Red Hat provides storage, operating system platforms, middleware, applications, management products, and support, training, and consulting services.
Red Hat creates, maintains, and contributes to many free software projects. It has acquired several proprietary software product codebases through corporate mergers and acquisitions and has released such software under open source licenses. As of March 2016, Red Hat is the second largest corporate contributor to the Linux kernel version 4.14 after Intel.
On October 28, 2018, IBM announced its intent to acquire Red Hat for $34 billion. The acquisition closed on July 9, 2019. Red Hat’s lead financial adviser in the transaction was Guggenheim Securities.

Oracle VM VirtualBox =>(used any VM i used this for installing redhat8)

Oracle VM VirtualBox (formerly Sun VirtualBox, Sun xVM VirtualBox and Innotek VirtualBox) is a free and open-source hosted hypervisor for x86 virtualization, developed by Oracle Corporation. Created by Innotek, it was acquired by Sun Microsystems in 2008, which was in turn acquired by Oracle in 2010.
VirtualBox may be installed on Windows, macOS, Linux, Solaris and OpenSolaris. There are also ports to FreeBSD and Genode. It supports the creation and management of guest virtual machines running Windows, Linux, BSD, OS/2, Solaris, Haiku, and OSx86, as well as limited virtualization of macOS guests on Apple hardware. For some guest operating systems, a “Guest Additions” package of device drivers and system applications is available, which typically improves performance, especially that of graphics.

For this, set up RedHat in VirtualBox.

Now start RedHat and manage Jenkins with systemctl:

systemctl start jenkins
systemctl enable jenkins
systemctl status jenkins
It is active now.

Now take the IP and open IP:PORT in Chrome; the port should be 8080 only.

Now Jenkins opens like this:

JOB1=>
Now we create a job: click on New Item, and first give the GitHub URL so Jenkins can go fetch/download the code 👇

Job1 GitHub URL

Now create an Execute Shell build step that copies the code into a folder, so give the location 👇
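
A minimal sketch of that shell step (the folder name /root/terraform-task is my assumption; the actual path in the screenshots may differ):

# Job1: copy the code Jenkins fetched from GitHub into a fixed folder
sudo mkdir -p /root/terraform-task
sudo cp -rvf * /root/terraform-task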

Now Job1 is done; Apply and Save.

Job1 run and output 😀👇✔

JOB2=>
Now create Job2 for the terraform init command.

Give a name and choose Freestyle project.

Now we use a trigger: Job2 should run only after Job1 completes. For this we use Build Triggers and give the upstream job name, job1, because Job2 must run only after Job1 has copied the code into the folder. Choose “Trigger only if build is stable”, so Job2 won’t run if Job1 fails.
In the Execute Shell step we use the terraform init command with sudo (without sudo it would not work) and give the folder path, because we can’t change into the folder manually; Jenkins has to go there and run the command automatically, so give the proper folder path, /root/foldername. A sketch of the step follows below.
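
A minimal sketch of Job2's shell step (again assuming the folder /root/terraform-task):

# Job2: initialize Terraform inside the copied folder
cd /root/terraform-task
sudo terraform init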

Job2 run and output

JOB3=>

Now create Job3, much like Job1 and Job2. This is our last job, so here we apply Terraform. We do the same thing with the Build Trigger, but this time give job2, so Job3 runs only if Job2 is successful. In the command we use -auto-approve, because terraform apply asks yes or no, and this flag answers yes for us. A sketch of the step follows below.
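
A minimal sketch of Job3's shell step (same assumed folder):

# Job3: apply the configuration without the interactive yes/no prompt
cd /root/terraform-task
sudo terraform apply -auto-approve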

AT LAST, USE THE

😎😎😎😎😎BUILD PIPELINE😎😎😎😎😎

For the build pipeline view, click on the + add button, give a name, select Build Pipeline View, and click OK.

Now see the build pipeline:

Job3 first showed an error because of the key; we transferred the key, gave its local path, and added it. That solved it, and everything works successfully.

Now, after everything is successful,

we see the output on AWS 👇😎😎🤘🤘

All working fine; whatever we did, all of it succeeded.
