Docker offers you an efficient, speedy way to port applications across systems and machines. It is both light and lean, allowing you to quickly contain applications and run them within their own secure environments via Linux Containers: LXC.
Whether you are moving from your development machine to a remote server for production, or packaging everything for use elsewhere, porting your application stack together with its dependencies and getting it to run without hiccups is always a challenge. In fact, the challenge is immense and solutions so far have not really proven successful for the masses.
In a nutshell, Docker offers you the complete set of higher-level tools to carry everything that forms an application across systems and machines, virtual or physical, as well as bringing along loads more of great benefits with it.
Docker achieves its robust application containment via Linux Containers (e.g. namespaces and other kernel features). Its further capabilities come from the project’s own parts and components, which abstract away all of the complexity of working with the lower-level Linux tools/APIs used for system and application management with regard to securely containing processes.
The Docker Project and its Main Parts
The Docker project (open-sourced by dotCloud in March ’13) consists of several main parts, and the elements those parts use; almost all of which are built on top of existing functionality, libraries, and frameworks offered by the Linux kernel and third parties such as LXC, device-mapper, aufs, etc.
Main Docker Parts
Docker daemon: used to manage docker (LXC) containers on the host it runs on
Docker CLI: used to command and communicate with the docker daemon
Docker image index: a repository (public or private) for docker images
Main Docker Elements
Docker containers: directories containing everything-your-application
Docker images: snapshots of containers or base OS (e.g. Ubuntu) images
Dockerfiles: scripts automating the building process of images
The entire procedure of porting applications using docker relies solely on the shipment of containers.
Docker containers are basically directories which can be packed (e.g. tar-archived) like any other directory, then shared and run across various machines and platforms (hosts). The only dependency is having the hosts tuned to run the containers. Containment here is obtained via Linux Containers (LXC).
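The directory-packing idea can be sketched with plain tar; the paths below are hypothetical and purely for illustration:

```shell
# Illustration only: a container's contents are just a directory tree
mkdir -p /tmp/demo_container/rootfs/etc
echo "hello from the container" > /tmp/demo_container/rootfs/etc/motd

# Pack the directory like any other (tar-archive it)...
tar -C /tmp/demo_container -czf /tmp/demo_container.tar.gz rootfs

# ...then unpack it elsewhere (e.g. on another host) to get the same tree
mkdir -p /tmp/other_host
tar -C /tmp/other_host -xzf /tmp/demo_container.tar.gz
cat /tmp/other_host/rootfs/etc/motd
```

Docker's actual `export`/`import` commands (listed further below) work on this same principle, with the containment layered on top.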
LXC (Linux Containers)
Linux Containers can be defined as a combination of various kernel-level features which allow management of applications, and the resources they use, contained within their own environment.
By making use of certain features (e.g. namespaces, chroots, cgroups, and SELinux profiles), LXC contains application processes and helps with their management by limiting resources, preventing reach beyond their own file system (i.e. access to the parent’s namespace), etc.
Docker, with its containers, makes use of LXC. However, it also brings along much more.
Docker containers have several main features.
They allow the following:
- Application portability
- Isolating processes
- Prevention from tampering with the outside
- Managing resource consumption and more, while requiring far fewer resources than the traditional virtual machines used for isolated application deployments
They do not allow the following:
- Messing with other processes
- Causing ‘dependency hell’
- Not working on a different system
- Being vulnerable to attacks or abusing all of the system’s resources
and (also) more.
Being based on and depending on LXC, from a technical aspect these containers are like a directory, but one shaped and formatted in a particular way. This allows portability and gradual builds of containers.
Each container is layered like an onion: each action taken within a container puts another block on top of the previous one, which actually translates to a simple change within the file system. Various tools and configurations (e.g. a union file system) make this set-up work in a harmonious way.
This way of building containers brings the extreme benefit of easily creating and launching new containers and images, which are kept lightweight thanks to the gradual, layered way in which they are built. Since everything is based on the file system, taking snapshots and performing roll-backs in time is cheap, much like in version control systems (VCS).
Each docker container starts from a docker image which forms the base for other applications and layers to come.
Docker images constitute the base of docker containers from which everything starts to form; they are very similar to default operating-system disk images which are used to run applications on servers or desktop computers.
Having these images allows seamless portability across systems, making a solid, consistent, and dependable base with everything needed to run the applications. When everything is self-contained and the risk of system-level updates or modifications is eliminated, the container becomes immune to external changes which could put it out of order; this prevents dependency hell.
As more layers are added on top of the base, new images can be formed by committing these changes. When a new container gets created from a saved, committed image, things continue from where they left off. The union file system brings all of the layers together as a single entity when you work with a container.
These base images can be explicitly stated when working with the docker CLI to directly create a new container or they might be specified inside a Dockerfile for automated image building.
Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed in order to form a new docker image. Each command executed translates to a new layer of the onion, eventually forming the end product.
They basically replace the process of doing everything manually and repeatedly. When a Dockerfile is finished executing, you end up having formed an image which you can then use to start a new container.
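As a sketch, a minimal Dockerfile might look like the following; the base image tag, package, paths, and command here are hypothetical, not part of any particular setup:

```dockerfile
# Start from a base image; every following instruction adds a new layer
FROM ubuntu

# Install a package inside the image
RUN apt-get update && apt-get install -y nginx

# Copy application files from the build context into the image
ADD ./my_app /srv/my_app

# Declare the command a container built from this image will run
CMD ["nginx", "-g", "daemon off;"]
```

Running something like ‘sudo docker build -t my_img .’ in the directory containing this Dockerfile would then produce a new image (here named my_img) with each instruction saved as a layer.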
How To Install Docker
At first, Docker was only available on Ubuntu. Now it is quite easy to deploy Docker on RHEL-based systems and others as well.
Installation Instructions for Ubuntu
The easy way to get Docker, other than simply using the pre-built application image, is to go with the 64-bit Ubuntu 14.04 VPS.
Update the VPS:

sudo apt-get update
sudo apt-get -y upgrade
Ensure aufs support is available.
sudo apt-get install linux-image-extra-`uname -r`
You can add the Docker repository key to apt-key for package verification.
sudo apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
Then you may add the Docker repository to Apt sources with the command shown below.
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list
Next, update the repository with the new addition by using the command below.
sudo apt-get update
Lastly, download and install Docker.
sudo apt-get install docker-engine
The default firewall for Ubuntu, UFW (Uncomplicated Firewall), denies all forwarding traffic by default, but Docker needs forwarding enabled in order to work.
Now enable forwarding with UFW.
Modify the UFW configuration using the nano text editor.
sudo nano /etc/default/ufw
Scroll down and find the line that begins with ‘DEFAULT_FORWARD_POLICY’, then change its value from ‘DROP’ to ‘ACCEPT’:

DEFAULT_FORWARD_POLICY="ACCEPT"

Press CTRL+X then approve with Y to save and close.
Afterwards, reload the UFW with the following command.
sudo ufw reload
How To Use Docker
After you have Docker installed, its intuitive usage should make it very easy to work with. By now, you should have the Docker daemon running in the background. If you do not, use the command below to run it.
sudo docker -d &
Using Docker via the CLI consists of passing a chain of options and commands followed by arguments.
Remember that docker needs sudo privileges in order to work.
sudo docker [option] [command] [arguments]
To see the list of all available commands docker has to offer, use the command below.

sudo docker help

All of the currently available commands (for 0.7.1):

attach     Attach to a running container
build      Build a container from a Dockerfile
commit     Create a new image from a container's changes
cp         Copy files/folders from the containers filesystem to the host path
diff       Inspect changes on a container's filesystem
events     Get real time events from the server
export     Stream the contents of a container as a tar archive
history    Show the history of an image
images     List images
import     Create a new filesystem image from the contents of a tarball
info       Display system-wide information
insert     Insert a file in an image
inspect    Return low-level information on a container
kill       Kill a running container
load       Load an image from a tar archive
login      Register or Login to the docker registry server
logs       Fetch the logs of a container
port       Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
ps         List containers
pull       Pull an image or a repository from the docker registry server
push       Push an image or a repository to the docker registry server
restart    Restart a running container
rm         Remove one or more containers
rmi        Remove one or more images
run        Run a command in a new container
save       Save an image to a tar archive
search     Search for an image in the docker index
start      Start a stopped container
stop       Stop a running container
tag        Tag an image into a repository
top        Lookup the running processes of a container
version    Show the docker version information
wait       Block until a container stops, then print its exit code
To look at system-wide information and docker’s version, use the commands below.

# For system-wide information on docker:
sudo docker info

# For docker version:
sudo docker version
Working with Images
The way to start working with any docker container is by using images. There are tons of freely available images shared on the Docker image index, and the CLI allows easy access to query the image repository and to download new ones.
To search for a docker image, use the command below.
# Usage: sudo docker search [image name]
sudo docker search ubuntu
This command will provide you with a very long list of all available images matching the query: ‘ubuntu’.
Downloading (PULLing) an image.
If you are building or creating a container, or before you do, you will need to have an image present on the host machine where the containers will exist. In order to download an image, perhaps following a ‘search’, execute ‘pull’ to get it.
# Usage: sudo docker pull [image name]
sudo docker pull ubuntu
All of the images on your system, including the ones you have created by committing, can be listed using ‘images’. This will show a complete list of those that are available.
# Example: sudo docker images
sudo docker images
REPOSITORY   TAG       IMAGE ID       CREATED          VIRTUAL SIZE
test_img     latest    72461793563e   36 seconds ago   128 MB
ubuntu       12.04     8dbd9e392a96   8 months ago     128 MB
ubuntu       latest    8dbd9e392a96   8 months ago     128 MB
ubuntu       precise   8dbd9e392a96   8 months ago     128 MB
ubuntu       12.10     b750fe79269d   8 months ago     175.3 MB
ubuntu       quantal   b750fe79269d   8 months ago     175.3 MB
Committing changes to an image.
Once you work with a container and continue to perform actions on it, you may want to keep its state. To do this, you will have to ‘commit’. Committing makes sure everything continues from where it last left off the next time you use the container.
# Usage: sudo docker commit [container ID] [image name]
sudo docker commit 8dbd9e392a96 my_img
Sharing (PUSHing) images:
Once you have made your own container which you would like to share with the rest of the world, you may use ‘push’ to have the image listed in the index where everyone can download and use it.
Do not forget to ‘commit’ all of the changes.
# Usage: sudo docker push [username/image name]
sudo docker push my_username/my_first_image
Reminder: You must sign up at ‘index.docker.io‘ to push images to the docker index.
Working with Containers.
Once you ‘run’ any type of process using an image, you will get a container in return. When the process is not actively running, the container becomes a non-running container. Either way, all of these containers will reside on your system until you remove them with the ‘rm’ command.
Listing all current containers:
By default, you should be able to use the following command to list all of the containers currently running.
sudo docker ps
If you want to have a list of both running and non-running containers, use the command below.
sudo docker ps -a
Creating a New Container
Right now it is not possible to create a container without running a command in it. To make a new container, you must use a base image and specify a command to run.
# Usage: sudo docker run [image name] [command to run]
sudo docker run my_img echo "hello"
# To name a container instead of having long IDs
# Usage: sudo docker run -name [name] [image name] [comm.]
sudo docker run -name my_cont_1 my_img echo "hello"
This should print ‘hello’ to the terminal. Afterwards, you should be right back where you were, at the host’s shell.
Since you cannot change the command once you have created a container, it is common practice to use process managers, or even launch custom scripts, in order to execute different commands.
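One way to sketch that pattern is a single wrapper script baked into the image: the container’s one fixed command always runs the script, and an argument decides what actually gets executed. The script name and jobs below are hypothetical:

```shell
# start.sh -- a hypothetical entry script; the container would always run
# this one command, and the argument dispatches to different jobs
cat > /tmp/start.sh <<'EOF'
#!/bin/sh
case "$1" in
  web)    echo "starting web server" ;;
  worker) echo "starting background worker" ;;
  *)      echo "usage: start.sh {web|worker}" ;;
esac
EOF
chmod +x /tmp/start.sh

# Inside a container this would be something like:
#   sudo docker run my_img /tmp/start.sh web
/tmp/start.sh web
```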
Running (starting) a container:
After you have created a container and it stops, either because its process ended or because you stopped it directly, you can use ‘start’ to get the container working again with the same command you used to create it.
# Usage: sudo docker start [container ID]
sudo docker start c629b7d70666
Stopping a container:
To stop a container’s process from running, use the ‘stop’ command with the container ID.
# Usage: sudo docker stop [container ID]
sudo docker stop c629b7d70666
If you wish to save the progress and changes you have made with a container, you can use ‘commit’ (as explained above) to save it as an image.
Do not forget that, with docker, commits are cheap. Use them to create images that save your progress with a container, or to roll back when necessary, like snapshots in time.
Removing / Deleting a container:
With the ID of a container, you should be able to delete one with the ‘rm’ command.
# Usage: sudo docker rm [container ID]
sudo docker rm c629b7d70666