Docker and Docker-Compose Setup on AWS EC2 Instance


Install Docker on AWS

sudo yum update -y

sudo yum install -y docker

sudo service docker start

sudo usermod -a -G docker ec2-user

Log out and log back in so the new docker group membership takes effect.

Verify the install with docker --version, which prints something like:

Docker version 17.09.1-ce, build

Docker installed successfully.

Install Docker Compose. Get the latest release from the Docker Compose GitHub releases page.
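The install typically looks like the following sketch (the version number 1.29.2 is illustrative; substitute the latest from the releases page):

```shell
# Download the docker-compose binary for this OS/architecture and make it executable
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Confirm it is on the PATH
docker-compose --version
```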

Test Docker installation

Run the hello-world image

docker run hello-world

Build an Image

Create a Dockerfile and a requirements.txt, then build the image with docker build.

Push the image to Docker Hub

Now you can run your image from anywhere

So far, we have created an image using a Dockerfile and pushed it to Docker Hub so that anyone can use it.
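The build-and-push flow can be sketched as follows (the names my_app and myuser are placeholders for your image and Docker Hub account):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my_app .

# Log in to Docker Hub, tag the image under your account, and push it
docker login
docker tag my_app myuser/my_app:1.0
docker push myuser/my_app:1.0

# From any machine with Docker, the image can now be pulled and run
docker run myuser/my_app:1.0
```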

Some useful commands

Create DockerImage with commit option

  1. Run a container from the ubuntu image and connect to its command line:

docker run -i -t ubuntu /bin/bash

2. Install the Git toolkit:

  3. Check if the Git toolkit is installed:
  4. Exit the container:
  5. Check what has changed in the container compared to the ubuntu image:

The command should print a list of all files changed in the container.

  6. Commit the container to the image:

Using the exact same method, we can build ubuntu_with_git_and_jdk on top of the ubuntu_with_git image:
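Put together, the commit-based workflow from the steps above might look like this sketch (the container ID dd0c9ee4a657 is illustrative; use the one printed by docker ps -a):

```shell
# 1. Run an ubuntu container and connect to its command line
docker run -i -t ubuntu /bin/bash

# 2.-4. Inside the container: install Git, verify it, then exit
apt-get update && apt-get install -y git
which git
exit

# 5. Back on the host: see what changed compared to the ubuntu image
docker diff dd0c9ee4a657

# 6. Commit the container to a new image
docker commit dd0c9ee4a657 ubuntu_with_git
```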

Create Image directly using Dockerfile

Create a Dockerfile with the contents below:

FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y python
COPY hello.py .
ENTRYPOINT ["python", "hello.py"]

Create a hello.py in the same directory:

print "Hello World from Python!"
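Assuming the Dockerfile and hello.py sit in the same directory, building and running the image looks like this (the tag hello_python is a placeholder):

```shell
# Build the image from the current directory, then run it
docker build -t hello_python .
docker run hello_python
```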

Docker Volumes

Let’s start with an example and specify the volume with the -v <host_path>:<container_path> option and connect to the container:

Now, we can create an empty file in host_directory in the container:

Let’s check if the file was created in the Docker host’s filesystem:

We can see that the filesystem was shared and the data was therefore persisted permanently. We can now stop the container and run a new one to see that our file will still be there:

Instead of specifying the volume with the -v flag, it’s possible to specify the volume as an instruction in the Dockerfile, for example:
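The volume walkthrough above can be sketched like this (the host path ~/docker_volume is a placeholder):

```shell
# Mount a host directory into the container and connect to it
docker run -i -t -v ~/docker_volume:/host_directory ubuntu /bin/bash

# Inside the container: create an empty file in the shared directory, then exit
touch /host_directory/file.txt
exit

# Back on the Docker host: the file persists on the host filesystem
ls ~/docker_volume
```

The Dockerfile equivalent is the instruction VOLUME /host_directory, which makes Docker mount an anonymous volume at that container path when no -v mapping is given.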

Some useful commands

docker ps ( to show running containers)

docker ps -a ( to show all containers, stopped and running)

docker images ( to list local images)

docker exec -it 4a53d243816e bash ( to go inside a running container)

Docker setup is complete, along with some basic working knowledge.

Create a docker-compose.yml / Scale up the application

docker-compose.yml is a YAML file that defines how Docker containers should behave in production.

This docker-compose.yml file tells Docker to do the following:

  • Pull the image we pushed to Docker Hub earlier from the registry.
  • Run 5 instances of that image as a service called web, limiting each one to use, at most, 10% of the CPU (across all cores), and 50MB of RAM.
  • Immediately restart containers if one fails.
  • Map port 80 on the host to web’s port 80.
  • Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves publish to web’s port 80 at an ephemeral port.)
  • Define the webnet network with the default settings (which is a load-balanced overlay network).
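A docker-compose.yml matching the bullets above might look like the following sketch (username/repo:tag is a placeholder for the image pushed earlier):

```yaml
version: "3"
services:
  web:
    # Placeholder: the image pushed to the registry earlier
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
```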

Our single service stack is running 5 container instances of our deployed image on one host.

You can run curl -4 http://localhost several times in a row, and you will get a different hostname each time, because requests are load-balanced across the five replicas.

You can update the docker-compose.yml file and re-run the stack deploy command. Docker performs an in-place update; there is no need to tear the stack down first or kill any containers.
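The stack command in question is docker stack deploy; the initial deploy and every update use the same invocation (the stack name getstartedlab matches the teardown commands below):

```shell
# The first deployment requires swarm mode on this node
docker swarm init

# Deploy the stack; re-run after editing docker-compose.yml to update in place
docker stack deploy -c docker-compose.yml getstartedlab

# Inspect the service and its 5 replicas
docker service ls
docker service ps getstartedlab_web
```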

docker stack rm getstartedlab

docker swarm leave --force

We have learned how the app runs in production by turning it into a service, scaling it up 5x in the process.

Cluster in Docker

Now we will deploy this application onto a cluster, running it on multiple machines. Multi-container, multi-machine applications are made possible by joining multiple machines into a “Dockerized” cluster called a swarm.

Swarm — group of machines that are running Docker and joined into a cluster.

Swarm managers can use several strategies to run containers, such as “emptiest node” — which fills the least utilized machines with containers. Or “global”, which ensures that each machine gets exactly one instance of the specified container. You instruct the swarm manager to use these strategies in the Compose file, just like the one you have already been using.

Swarm managers are the only machines in a swarm that can execute your commands, or authorize other machines to join the swarm as workers. Workers are just there to provide capacity and do not have the authority to tell any other machine what it can and cannot do.

Run docker swarm init to enable swarm mode and make your current machine a swarm manager, then run docker swarm join on the other machines to have them join the swarm as workers.
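Concretely, the sequence might look like this (the IP address is illustrative, and the worker token placeholder must be replaced with the token that docker swarm init prints):

```shell
# On the manager node: enable swarm mode, advertising this node's IP
docker swarm init --advertise-addr 192.168.99.100

# docker swarm init prints a join command with a token; run it on each worker
docker swarm join --token <worker-token> 192.168.99.100:2377

# Back on the manager: list the nodes in the swarm
docker node ls
```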

Install Docker-machine on AWS EC2
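One common setup, sketched under the assumption that AWS credentials are already configured (the version number, region, and machine name are placeholders); docker-machine's amazonec2 driver provisions an EC2 instance with Docker pre-installed:

```shell
# Install docker-machine (v0.16.2 shown as an example; see its releases page)
curl -L "https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-$(uname -s)-$(uname -m)" \
  -o /tmp/docker-machine
sudo install /tmp/docker-machine /usr/local/bin/docker-machine

# Provision a Docker-ready EC2 instance using the amazonec2 driver
docker-machine create --driver amazonec2 --amazonec2-region us-east-1 myvm1

# Point the local docker CLI at the new machine
eval $(docker-machine env myvm1)
```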

DevOps Automation Engineer
