Configure Docker in AWS with Help of Ansible

Prince Prashant Saini
9 min read · Aug 7, 2020
Ansible + AWS

PROBLEM STATEMENT ❓

Write an Ansible PlayBook that does the following operations in the managed nodes:

🔹 Configure Docker

🔹 Start and enable Docker services

🔹 Pull the httpd server image from the Docker Hub

🔹 Run the httpd container and expose it to the public

🔹 Copy the html code in /var/www/html directory and start the web server

SOLUTION 🤘

Here we set up Docker and create a container with the help of Ansible on top of the AWS cloud. You can use any cloud; the process will be the same. We also use some basic Terraform.

Before starting this setup, let's review some information about each of these technologies.

Ansible 😎=>

Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems, and can configure both Unix-like systems as well as Microsoft Windows. It includes its own declarative language to describe system configuration. Ansible was written by Michael DeHaan and acquired by Red Hat in 2015. Ansible is agentless, temporarily connecting remotely via SSH or Windows Remote Management (allowing remote PowerShell execution) to do its tasks.

Docker😎=>

Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating system kernel and therefore use fewer resources than virtual machines.
The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. It was first started in 2013 and is developed by Docker, Inc.

AWS😎=>

Amazon Web Services (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis. In aggregate, these cloud computing web services provide a set of primitive abstract technical infrastructure and distributed computing building blocks and tools. One of these services is Amazon Elastic Compute Cloud (EC2), which allows users to have at their disposal a virtual cluster of computers, available all the time, through the Internet. AWS’s version of virtual computers emulates most of the attributes of a real computer, including hardware central processing units (CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-disk/SSD storage; a choice of operating systems; networking; and pre-loaded application software such as web servers, databases, and customer relationship management (CRM).
The AWS technology is implemented at server farms throughout the world, and maintained by the Amazon subsidiary. Fees are based on a combination of usage (known as a "pay-as-you-go" model), the hardware, operating system, software, or networking features chosen by the subscriber, and the required availability, redundancy, security, and service options. Subscribers can pay for a single virtual AWS computer, a dedicated physical computer, or clusters of either. As part of the subscription agreement, Amazon provides security for subscribers' systems. AWS operates from many global geographical regions, including 6 in North America.
Amazon markets AWS to subscribers as a way of obtaining large-scale computing capacity more quickly and cheaply than building an actual physical server farm. All services are billed based on usage, but each service measures usage in varying ways. As of 2017, AWS holds a dominant 34% of the cloud (IaaS, PaaS) market, while the next three competitors (Microsoft, Google, and IBM) have 11%, 8%, and 6% respectively, according to Synergy Group.

Terraform😎 =>

Terraform is an open-source infrastructure as code, software tool created by HashiCorp. It enables users to define and provision data center infrastructure using a declarative configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON.
Terraform manages external resources, such as public cloud infrastructure, private cloud infrastructure, network appliances, software as a service, and platform as a service, with “providers”. HashiCorp maintains an extensive list of official providers, and can also integrate with community-developed providers. Users can interact with Terraform providers by declaring resources or by calling data sources. Rather than using imperative commands to provision resources, Terraform uses declarative configuration to state the desired final state. Once a user invokes Terraform on a given resource, Terraform will perform CRUD actions on the user’s behalf to accomplish the desired state. The infrastructure as code can be written as modules, promoting reusability and maintainability.
Terraform supports a number of cloud infrastructure providers such as Amazon Web Services, IBM Cloud, Google Cloud Platform, DigitalOcean, Oracle Cloud Infrastructure, VMware vSphere, and OpenStack.
HashiCorp also supports a Terraform Module Registry, launched in 2017 during the HashiConf 2017 conference. In 2019, Terraform introduced a paid version called Terraform Enterprise for larger organizations. Terraform has four major commands: terraform init, terraform plan, terraform apply, and terraform destroy.

Now let's integrate all these tools for automation.

AWS + ANSIBLE + DOCKER

First we start with AWS, because before we can do anything we need an OS. So we will launch two instances with the help of Terraform.

To create the instances with Terraform, we need Terraform installed on our system. You could use the AWS GUI directly, but for more automation we use Terraform.

For installing terraform use this link👇

After installing, verify it with this command:

terraform version

The Terraform version is shown here 👆

Then we create a workspace folder using the command line and create one file with the .tf extension. The .tf extension is fixed for Terraform; if you don't use it, you can't run your Terraform code.

mkdir workspace     # create a folder named workspace
cd workspace        # move into the new directory
notepad ansible.tf  # open the file in Notepad

In that file, we write the following Terraform code:

provider "aws" {
  region  = "ap-south-1"
  profile = "PrincePrashantSaini"
}

# creating security group
resource "aws_security_group" "allow_tls" {
  name        = "allow_tls"
  description = "allow ssh and httpd"

  ingress {
    description = "All TCP ports (covers SSH and HTTP)"
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_tls"
  }
}

# creating key variable
variable "enter_ur_key_name" {
  type    = string
  default = "awskey"
}

# create EC2 instance
resource "aws_instance" "myinstance" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = var.enter_ur_key_name
  security_groups = [aws_security_group.allow_tls.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Prashant Saini/Downloads/awskey.pem")
    host        = aws_instance.myinstance.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install python3 -y",
      "sudo pip3 install ansible",
    ]
  }

  tags = {
    Name = "PrincePrashant"
  }
}

After saving the file, we run:

terraform init

terraform init command

terraform apply

The instance is now created 😎😎🤘🤘

Now use the same file for the other VM: just change the name and apply it again. After this, check in the AWS console 👇

Now we can see it is configured.

Take the public IP and open it in PuTTY along with the key. You can also use cmd, but I used PuTTY.

Open it and then we go to root user with command

sudo su - root

👉 Then we create one more user.
We won't install Ansible as the root user here, because in RHEL 8 that operation is not allowed. So we are going to create a new user and set up a password for it.

useradd pps

passwd pps

👉 Do the same thing in the second VM.
(Note: in the screenshot we create a user named pps2, because pps was already created on this VM; creating it again shows that it already exists.)

👉 Once that is done, we give sudo power to our user with this command:

echo “pps ALL=(ALL) NOPASSWD: ALL” >> /etc/sudoers

👉 The control node, also referred to as the Ansible master, connects to the managed host using SSH. Key-based authentication is recommended, but while you are at the learning stage, password-based authentication is fine. So we open this file, find PasswordAuthentication (disabled with a # tag), and enable it, as shown in the image 👇👇

vi /etc/ssh/sshd_config
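Inside sshd_config, the relevant edit looks roughly like this (whether the line is commented out or set to "no" depends on the AMI; this is a sketch, not the full file):

```
# Find this line (it may be commented out, or set to "no", depending on the AMI):
#PasswordAuthentication no
# and change it to:
PasswordAuthentication yes
```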

After any change to this file, we have to reload the service for it to take effect:

systemctl reload sshd

👉 In RHEL 8, Ansible cannot be installed as the root user. So we install Ansible after switching to the user we created (pps), using this command 👇

su pps

👉 When this command asks for a password, give the one you set 👇

Now install Ansible here with this command:

pip3 install ansible --user

In my case it is already installed, so the installation output does not show here; you can install it on your system.

👉 Then check whether it is installed with the version command 👇

ansible --version

👉 Now create an inventory file so we can communicate with the other VM. For this we use the vi command, and in the file we give the user and password for the target VM 👇

vi /etc/myhosts.txt

👆 In the above screenshot, I have mentioned the private IP of the managed host under a group header. This is called creating a group. I created a group named 'webservers' so that I can address all the servers at once, and thus avoid using individual IPs in my Ansible commands 👆
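The inventory contents appear only in the screenshot; a sketch of what /etc/myhosts.txt could contain is below (the IP is the one used later in the article; the password is a placeholder you must replace):

```ini
[webservers]
172.31.5.176 ansible_user=pps ansible_ssh_pass=YourPasswordHere
```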

👉 Now let me show you how we can configure password-less authentication between the Ansible control host and the managed hosts. Run the commands below in the Ansible directory 👇

ssh-keygen
ssh-copy-id pps@172.31.5.176

Try SSH once to confirm it works:

ssh pps@172.31.5.176

👉 Now try to ping one OS from the other with Ansible, to check whether the setup works, for example: ansible all -m ping (point at the inventory with -i /etc/myhosts.txt if it is not set in your ansible.cfg).

If you see this green message, it works fine; if it's red, it failed, so check where you made a mistake 😎😎😎😎😎😎

👉 Now create one YML file. I created pb.yml; you can use any name.
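The playbook contents appear only in a screenshot, so here is a minimal sketch of what pb.yml could look like to cover the tasks in the problem statement. The module choices, repo URL, container name, port mapping, and paths are assumptions matched to this setup, not the author's exact file:

```yaml
# pb.yml - a minimal sketch; group name, file names, ports, and paths are assumptions
- hosts: webservers
  become: yes
  tasks:
    - name: Configure the Docker CE yum repository
      yum_repository:
        name: docker-ce
        description: Docker CE Stable
        baseurl: https://download.docker.com/linux/centos/7/x86_64/stable/
        gpgcheck: no

    - name: Install Docker
      package:
        name: docker-ce
        state: present

    - name: Start and enable the Docker service
      service:
        name: docker
        state: started
        enabled: yes

    - name: Install the Docker SDK for Python (required by the docker_* modules)
      pip:
        name: docker
        executable: pip3

    - name: Pull the httpd server image from Docker Hub
      docker_image:
        name: httpd
        source: pull

    - name: Create the /var/www/html directory
      file:
        path: /var/www/html
        state: directory

    - name: Copy the html code into /var/www/html
      copy:
        src: home.html
        dest: /var/www/html/home.html

    - name: Run the httpd container and expose it to the public
      docker_container:
        name: myweb
        image: httpd
        state: started
        ports:
          - "8080:80"
        volumes:
          - /var/www/html:/usr/local/apache2/htdocs
```

The container mounts /var/www/html onto httpd's document root, so whatever we copy there is served on port 8080 of the managed node.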

Now run the final command to execute the Ansible playbook:

ansible-playbook pb.yml

All green, everything working fine 😎😎😎

Check whether the website is working by opening the home.html we passed in pb.yml. It works fine, which means our Docker container is configured and serving correctly.
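The actual contents of home.html are not shown in the article; for testing, it could be as simple as this illustrative page:

```html
<!-- home.html: illustrative content only -->
<h1>Hello from the httpd container, configured by Ansible!</h1>
```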

To check on the second OS whether it is configured, use:

docker ps
docker ps -a
docker images

Working well, no problems.

THANKS FOR READING
