Docker In Brief
Docker, a new container technology, is hotter than hot because it makes it possible to get far more apps running on the same old servers and because it makes it very easy to package and ship programs.
VM hypervisors, such as Hyper-V, KVM, and Xen, are all “based on emulating virtual hardware. That means they’re fat in terms of system requirements.”
Containers, however, use shared operating systems. That means they are much more efficient than hypervisors in system-resource terms. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This in turn means you can “leave behind the useless 99.9% VM junk, leaving you with a small, neat capsule containing your application.”
Therefore, with a perfectly tuned container system, you can have as many as four to six times the number of server application instances as you can using Xen or KVM VMs on the same hardware.
Docker is built on top of LXC. As with any container technology, as far as the program is concerned, it has its own file system, storage, CPU, RAM, and so on. The key difference between containers and VMs is that while the hypervisor abstracts an entire device, containers abstract only the operating system kernel.
This, in turn, means that one thing hypervisors can do that containers can’t is use different operating systems or kernels. So, for example, you can use VMware to run instances of both Windows Server 2012 and SUSE Linux Enterprise Server at the same time. With Docker, all containers must share the same operating system kernel.
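You can see the shared kernel for yourself on a Linux host with Docker installed; this is a quick illustrative check, not part of any setup (the `ubuntu` image is just an example):

```shell
# Print the kernel version as seen from inside a container...
docker run --rm ubuntu uname -r

# ...and as seen from the host. The two match, because every
# container runs on the host's one and only kernel.
uname -r
```

If the container could report a different kernel version than the host, it would be a VM, not a container.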
If multiple copies of the same application are what you want, then you’ll love containers.
Moving from VMs to containers can save a data center or cloud provider tens of millions of dollars annually in power and hardware costs. It’s no wonder that providers are rushing to adopt Docker as fast as possible.
Containers give you instant application portability.
Docker uses containers, in lieu of virtual machines, to enable multiple applications to be run at once on the same server.
This container technology enables applications to be assembled quickly from components and deployed on anything from laptops to servers — and to the cloud.
Docker is all about making it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
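That packaging step is typically expressed in a Dockerfile. The sketch below is a minimal, hypothetical example (the base image, file names, and application are illustrative assumptions, not from the article):

```dockerfile
# Start from a known base image so the environment is reproducible
FROM python:3

# Copy the application and its dependency list into the image
WORKDIR /app
COPY requirements.txt app.py /app/

# Install dependencies inside the image, not on the host,
# so customized host settings can't affect the application
RUN pip install -r requirements.txt

# The command the container runs when started
CMD ["python", "app.py"]
```

Because the libraries are installed into the image itself, the resulting container behaves the same on any Linux machine that runs Docker.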
The Docker project offers higher-level tools, built on top of Linux kernel features, with the goal of helping developers and system administrators port applications, with all of their dependencies included, and get them running across systems and machines headache-free.
Docker achieves this by creating safe, LXC-based (Linux Containers) environments for applications, called “containers,” which are created from images. These images can be built either by logging into a container and executing commands manually, much as you would in a virtual machine, or by automating the process through Dockerfiles.
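Both routes can be sketched with the docker CLI; the image and container names here are illustrative assumptions:

```shell
# Manual route: start an interactive container from a base image,
# install what the application needs inside it, then exit...
docker run -it --name builder ubuntu /bin/bash

# ...and save the modified container as a new image.
docker commit builder myapp:manual

# Automated route: describe the same steps in a Dockerfile in the
# current directory and let Docker build the image reproducibly.
docker build -t myapp:auto .

# Either image can then be launched as a container.
docker run -d myapp:auto
```

The Dockerfile route is preferred in practice because the build is repeatable: anyone with the file can reproduce the same image.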
Docker brings security to applications running in a shared environment, but containers by themselves are not an alternative to taking proper security measures.