
Docker - Basic Details

Docker is an OS-level virtualization (containerization) platform that makes it easier to create, deploy, and run applications inside containers.

Docker is a container management service. Its motto is build, ship, and run anywhere: developers can easily build applications, ship them as container images, and deploy those containers anywhere.

Docker allows developers to choose a project-specific deployment environment for each project, each with its own set of tools and application stacks.



Docker provides flexibility and portability to run an application in various locations, whether on-premises or in a public cloud or a private cloud.

Features of Docker

  • Docker reduces the size of deployments by providing a smaller footprint of the operating system via containers.

  • With containers, it becomes easier for teams across different units, such as development, QA, and operations, to work seamlessly across applications.

  • You can deploy Docker containers anywhere: on physical machines, on virtual machines, and even in the cloud.

  • Since Docker containers are quite lightweight, they are very easy to scale.

    The key benefit of Docker is that it allows users to package an application with all of its dependencies into a standardized unit for software development. Unlike virtual machines, containers do not have high overhead and hence enable more efficient usage of the underlying system and resources.
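One common way to realize this packaging is a Dockerfile. The sketch below assumes a small Python application; the file names (app.py, requirements.txt) are hypothetical and chosen only for illustration:

```dockerfile
# Illustrative only: package a hypothetical Python app with its dependencies.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application itself (app.py is a placeholder name).
COPY app.py .

CMD ["python", "app.py"]
```

Building this with docker build -t myapp . produces an image that carries the application and all of its dependencies, so it runs the same on any machine with a Docker engine.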

    Docker Image:

    In Docker, everything is based on images. An image is a combination of a file system and parameters. Let’s take the following command as an example.

    docker run hello-world 
    
  • docker is the client command; it tells the Docker daemon on the operating system that something needs to be done.

  • run indicates that we want to create a running instance of an image; that instance is called a container.

  • Finally, "hello-world" is the image from which the container is created.
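Assuming Docker is installed and the daemon is running, the full round trip can be tried directly; the inspection commands afterwards are optional extras:

```shell
# The client sends the request to the daemon, which pulls the image
# from Docker Hub if it is not available locally, then starts a container:
docker run hello-world

# Inspect the downloaded image and the (now exited) container:
docker images hello-world
docker ps -a --filter ancestor=hello-world
```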

    Docker – Container Lifecycle

    The following illustration explains the entire lifecycle of a Docker container.

    Container Lifecycle
  • Initially, a Docker container is in the created state (for example, after docker create).

  • The container moves into the running state when it is started with docker run or docker start.

  • The docker pause command suspends the processes of a running container; docker unpause resumes them.

  • The docker stop command gracefully stops a running container (SIGTERM, then SIGKILL after a grace period).

  • The docker kill command forcibly terminates a running container.

  • The docker start command is used to put a stopped container back into the running state.
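The lifecycle above can be walked through end to end; this assumes Docker is installed with the daemon running, and the container names ("demo", "web") are arbitrary examples:

```shell
# Created state: the container exists but has not started.
docker create --name demo hello-world
docker start demo                            # created -> running (exits quickly)

# A long-running container can be paused and resumed as well:
docker run -d --name web busybox sleep 300   # running, detached
docker pause web                             # running -> paused
docker unpause web                           # paused -> running
docker stop web                              # graceful stop (may take ~10s here)
docker start web                             # stopped -> running again
docker kill web                              # forcible stop
docker rm demo web                           # remove both containers
```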

     

    Docker – Architecture

    The following image shows the standard and traditional architecture of virtualization.

    • The server is the physical server that is used to host multiple virtual machines.

    • The Host OS is the base machine such as Linux or Windows.

    • The Hypervisor, such as VMware ESXi or Microsoft Hyper-V, is used to host virtual machines.

    • You would then install multiple operating systems as virtual machines on top of the existing hypervisor as Guest OS.

    • You would then host your applications on top of each Guest OS.

    The following image shows the new generation of virtualization that is enabled via Dockers. Let’s have a look at the various layers.

    Various Layers
  • The server is the physical server that is used to host multiple virtual machines. So this layer remains the same.

  • The Host OS is the base machine such as Linux or Windows. So this layer remains the same.

  • Now comes the new layer: the Docker Engine. Workloads that previously needed their own virtual machines now run as Docker containers on top of this engine.

  • All of the Apps now run as Docker containers.

The clear advantage of this architecture is that you don’t need a separate guest operating system for each application, which frees the resources those guest OSes would otherwise consume. Everything runs as Docker containers.

Terminology

  • Images - The blueprints of our application which form the basis of containers. In the example above, docker run pulled the hello-world image before starting it.
  • Containers - Created from Docker images and run the actual application. We created a container using docker run with the hello-world image. A list of running containers can be seen using the docker ps command.
  • Docker Daemon - The background service running on the host that manages building, running and distributing Docker containers. The daemon is the process that runs in the operating system which clients talk to.
  • Docker Client - The command line tool that allows the user to interact with the daemon. More generally, there can be other kinds of clients too, such as Kitematic, which provides a GUI to users.
  • Docker Hub - A registry of Docker images. You can think of the registry as a directory of all available Docker images. If required, one can host their own Docker registries and can use them for pulling images.
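These terms map directly onto a short session; this assumes Docker is installed with network access to Docker Hub:

```shell
# Client -> daemon -> registry: pull the busybox image from Docker Hub.
docker pull busybox

# Create a container from the image and run a command inside it
# (--rm removes the container once the command exits).
docker run --rm busybox echo "hello from busybox"

# List containers; -a includes stopped ones as well.
docker ps -a
```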
