Published: 9 May 2024
Contributors: Stephanie Susnjara, Ian Smalley
Containers are executable units of software that package application code along with its libraries and dependencies. They allow code to run in any computing environment, whether it be desktop, traditional IT or cloud infrastructure.
Containers take advantage of a form of operating system (OS) virtualization in which features of the OS kernel (for example, Linux namespaces and cgroups, Windows silos and job objects) can be used to isolate processes and control the amount of CPU, memory and disk that those processes can access.
More portable and resource-efficient than virtual machines (VMs), containers have become the de facto compute units of modern cloud-native applications. Additionally, containers are critical to the underlying IT infrastructure that powers hybrid multicloud settings—the combination of on-premises, private cloud, public cloud and more than one cloud service from more than one cloud vendor.
According to a report from Business Research Insights1, the global container technology market was valued at USD 496.4 million in 2021 and is expected to reach USD 3,123.42 million by 2031, with a compound annual growth rate (CAGR) of 19.8%.
Strategic application modernization is one key to transformational success that can boost annual revenue and lower maintenance and running costs.
One way to better understand a container is to examine how it differs from a traditional virtual machine (VM), which is a virtual representation or emulation of a physical computer. A VM is often referred to as a guest, while the physical machine it runs on is called the host.
Virtualization technology makes VMs possible. A hypervisor—a small software layer—allocates physical computing resources (for example, processors, memory, storage) to each VM. It keeps each VM separate from others so they don’t interfere with each other. Each VM then contains a guest OS and a virtual copy of the hardware that the OS requires to run, along with an application and its associated libraries and dependencies. VMware was one of the first to develop and commercialize virtualization technology based on hypervisors.
Instead of virtualizing the underlying hardware, container technology virtualizes the operating system (typically Linux) so each container contains only the application and its libraries, configuration files and dependencies. The absence of the guest OS is why containers are so lightweight and, thus, faster and more portable than VMs.
Containers and virtual machines are not mutually exclusive. For instance, an organization might leverage both technologies by running containers in VMs to increase isolation and security and leverage already installed tools for automation, backup and monitoring.
For a deeper look at this comparison, check out “Containers versus VMs: What’s the difference?”
The primary advantage of containers, especially as compared to a VM, is that they provide a level of abstraction that makes them lightweight and portable. Their primary benefits include:
Containers share the machine OS kernel, eliminating the need for a full OS instance per application and making container files small and easy on resources. A container’s smaller size, especially compared to a VM, means it can spin up quickly and better support cloud-native applications that scale horizontally.
Containers carry all their dependencies with them, meaning that software can be written once and then run without needing to be reconfigured across computing environments (for example, laptops, cloud and on-premises).
Due to a combination of their deployment portability and consistency across platforms and their small size, containers are an ideal fit for modern development and application patterns—such as DevOps, serverless and microservices—that are built by using regular code deployments in small increments.
Like VMs, containers enable developers and operators to improve CPU and memory utilization of physical machines. Containers go even further because they enable microservices architectures, whose components can be deployed and scaled more granularly. This is an attractive alternative to scaling up an entire monolithic application when a single component struggles under load.
Containers require fewer system resources, making them faster to manage and deploy than VMs. This helps save money and time on application deployment and shortens time to market.
In an IBM survey, developers and IT executives reported many other container benefits. Check out the full report: Containers in the enterprise.
Containers depend on containerization, the packaging of software code together with just the operating system (OS) libraries, environment variables, configuration files and software dependencies required to run that code.
The result is a container image that runs on a container platform. A container image represents binary data that encapsulates an application and all its software dependencies.
Containerization allows applications to be “written once and run anywhere,” providing portability, speeding the development process, preventing cloud vendor lock-in and more.
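To make this concrete, here is a minimal Docker Compose file, offered as a hedged sketch (the service name, image tag, port mapping and environment variable are all illustrative), that declares a packaged container image together with its configuration so the same definition runs unchanged on a laptop, an on-premises server or a cloud VM:

```yaml
# docker-compose.yml: a minimal sketch of running a packaged
# container image; all names and values here are examples.
services:
  web:
    image: nginx:1.25          # the container image: app plus its dependencies
    ports:
      - "8080:80"              # map host port 8080 to the container's port 80
    environment:
      - APP_ENV=production     # configuration travels with the definition
```

Running `docker compose up` on any machine with a container engine installed starts the same service in the same way.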
Containerization and process isolation have existed for decades2. A historical moment in container development occurred in 1979 with the development of chroot, part of the Unix version 7 operating system. Chroot introduced the concept of process isolation by restricting an application’s file access to a specific directory (the root) and its children (subdirectories).
Another significant milestone occurred in 2008, when Linux containers (LXC) were implemented in the Linux® kernel, enabling multiple isolated Linux environments to run on a single host. Over the years, technologies such as FreeBSD jails and AIX Workload Partitions have offered similar operating system-level virtualization.
While LXC remains a well-known runtime and part of the vendor-neutral Linux Containers project3, newer container technologies are available. Ubuntu, a modern, open-source Linux distribution, also provides this capability.
Most developers look to 2013 as the start of the modern container era with the introduction of Docker. An open-source containerization software platform that functions as a platform as a service (PaaS), Docker enables developers to build, deploy, run, update and manage containers.
Docker uses the Linux kernel (the operating system’s base component) and kernel features (such as cgroups and namespaces) to separate processes so they can run independently. Docker essentially takes an application and its dependencies and turns them into a virtual container that can run on any system running Windows, macOS or Linux.
Docker is based on a client-server architecture, with Docker Engine serving as the underlying technology. Docker provides an image-based deployment model, making it simple to share applications across computing environments.
The name Docker also refers to Docker, Inc.4, the company that develops productivity tools built around its open-source containerization platform, and to the Docker open-source ecosystem and community5.
In 2015, Docker and other leaders in the container industry established the Open Container Initiative6, part of the Linux Foundation, an open governance structure created for the express purpose of developing open industry standards around container formats and runtime environments.
Docker is the most widely utilized containerization tool, with an 82.84% market share7.
Operating hundreds of thousands of containers across a system can become unmanageable and calls for an orchestration management solution.
That’s where container orchestration comes in, allowing companies to manage large volumes of containers throughout their lifecycle, providing capabilities such as provisioning, deployment, scaling, load balancing and availability management.
While other container orchestration platforms (for example, Apache Mesos, Nomad, Docker Swarm) exist, Kubernetes has become the industry standard.
Kubernetes architecture consists of running clusters that allow containers to run across multiple machines and environments. Each cluster typically consists of worker nodes, which run the containerized applications, and control plane nodes, which manage the cluster. The control plane acts as the orchestrator of the Kubernetes cluster. It includes several components: the API server (which manages all interactions with Kubernetes), the controller manager (which handles all control processes), the cloud controller manager (the interface with the cloud provider’s API) and so forth. Worker nodes run containers by using container runtimes such as Docker. Pods, the smallest deployable units in a cluster, hold one or more app containers and share resources, such as storage and networking information.
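A minimal Pod manifest illustrates these building blocks; this is a hedged sketch, and the names and image are illustrative rather than drawn from any particular deployment:

```yaml
# A minimal Pod: the smallest deployable unit in a Kubernetes cluster.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: app
      image: nginx:1.25        # image run by the worker node's container runtime
      ports:
        - containerPort: 80    # port the app container listens on
```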
Kubernetes enables developers and operators to declare the desired state of their overall container environment through YAML files. Then, Kubernetes does all the processing work of establishing and maintaining that state, with activities that include deploying a specified number of instances of a given application or workload, restarting that application if it fails, load balancing, autoscaling, zero-downtime deployments and more. Container orchestration with Kubernetes is also crucial to continuous integration and continuous delivery (CI/CD) and the DevOps pipeline, which would be impossible without automation.
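For example, a Deployment manifest such as the hedged sketch below (names are illustrative) declares a desired state of three replicas; after it is applied with `kubectl apply -f deployment.yaml`, Kubernetes creates the Pods and replaces any that fail:

```yaml
# A Deployment declaring desired state: Kubernetes keeps three
# replicas of this Pod template running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3                  # desired number of Pod instances
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: app
          image: nginx:1.25    # illustrative application image
```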
In 2015, Google donated Kubernetes to the Cloud Native Computing Foundation (CNCF)8, the open-source, vendor-neutral hub of cloud-native computing operated under the auspices of the Linux Foundation. Since then, Kubernetes has become the most widely used container orchestration tool for running container-based workloads worldwide. According to a CNCF report9, Kubernetes is the second-largest open-source project in the world (after Linux) and the primary container orchestration tool for 71% of Fortune 100 companies.
Containers as a service (CaaS) is a cloud computing service that allows developers to manage and deploy containerized applications. It gives businesses of all sizes access to portable, scalable cloud solutions.
CaaS provides a cloud-based platform where users can streamline container-based virtualization and container management processes. CaaS providers offer myriad features, including (but not limited to) container runtimes, orchestration layers and persistent storage management.
Similar to infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS), CaaS is available from cloud service providers (for example, AWS, Google Cloud, IBM Cloud®, Microsoft Azure) through a pay-as-you-go pricing model, which allows users to pay only for the services they use.
Organizations use containers to support the following:
Containers are small and lightweight, making them a good match for microservice architectures, where applications are constructed of many loosely coupled and independently deployable smaller services.
The combination of microservices as an architecture and containers as a platform is a common foundation for many development and operations teams that embrace DevOps methodologies. For instance, containers support DevOps pipelines, including continuous integration and continuous deployment (CI/CD) implementation (see the pipeline sketch following these use cases).
Because containers can run consistently anywhere—across laptops, on-premises and cloud environments—they are an ideal underlying architecture for hybrid cloud and multicloud scenarios in which organizations operate across a mix of multiple public clouds in combination with their own data center.
One of the most common approaches to application modernization is containerizing applications in preparation for cloud migration.
Containerization (for example, Docker images orchestrated with Kubernetes) quickly enables DevOps pipelines to deploy artificial intelligence (AI) and machine learning (ML) apps into cloud computing environments.
Containers also offer an efficient way to deploy and manage the large language models (LLMs) associated with generative AI, providing portability and scalability when used with orchestration tools. Moreover, changes made to the LLM can be quickly packaged into a new container image, expediting development and testing.
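To make the DevOps use case above concrete, here is a hedged sketch of a CI step in GitHub Actions syntax that builds and publishes a container image on every push; the registry URL and tag are hypothetical, and registry login is omitted for brevity:

```yaml
# A minimal CI sketch: build a container image on each push and
# publish it to a registry; registry URL and tag are examples only.
name: build-and-push
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fetch the repository source
      - run: docker build -t registry.example.com/app:${{ github.sha }} .
      - run: docker push registry.example.com/app:${{ github.sha }}   # assumes prior registry login
```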
Beyond Kubernetes, two of the most popular projects in the container ecosystem are Istio and Knative.
As developers use containers to build and run microservices, management concerns go beyond the lifecycle considerations of individual containers and into the ways that large numbers of small services—often referred to as a “service mesh”—connect with and relate to one another. Istio makes it easier for developers to manage the associated challenges of discovery, traffic management, monitoring, security and more.
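As a hedged illustration of the kind of traffic management Istio provides (the service name and subsets are hypothetical, and the subsets would be defined in a separate DestinationRule), a VirtualService can split traffic between two versions of a microservice, a common canary-release pattern:

```yaml
# An Istio VirtualService sketch: send 90% of requests to v1 of a
# service and 10% to v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews                  # the service these rules apply to
  http:
    - route:
        - destination:
            host: reviews
            subset: v1         # subsets are defined in a DestinationRule
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```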
Knative (pronounced ‘kay-native’) is an open-source platform that provides an easy onramp to serverless computing, the cloud computing application development and execution model that enables developers to build and run application code without provisioning or managing servers or backend infrastructure.
Instead of deploying an ongoing instance of code that sits idle while waiting for requests, serverless brings up the code as needed, scaling it up or down as demand fluctuates, and then takes down the code when not in use. Serverless prevents wasted computing capacity and power and reduces costs because you only pay to run the code when it’s running.
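A hedged sketch of a Knative Service shows this model; the sample image comes from Knative’s public examples, and the annotation explicitly permits scaling to zero:

```yaml
# A Knative Service: Knative starts containers on demand and scales
# them back to zero when traffic stops.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # permit scale to zero
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # sample app image
          env:
            - name: TARGET
              value: "World"
```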
Since containers play a major role in software development and deployment across hybrid cloud landscapes, organizations need to ensure that their containerized workloads remain safe from external and internal security threats.
Containers can be deployed anywhere, which creates new attack surfaces surrounding the container-based environment. Vulnerable security areas include container images, image registries, container runtimes, container orchestration platforms and host operating systems.
To start, enterprises need to integrate container security into their security policies and overall strategy. Such strategies must include security best practices along with cloud-based security software tools. This holistic approach should be designed to protect containerized applications and their underlying infrastructure throughout the entire container lifecycle.
Best security practices include a zero-trust strategy that assumes a complex network’s security is always at risk of external and internal threats. Moreover, containers call for a DevSecOps approach. DevSecOps is an application development practice that automates the integration of security practices at every phase of the software development lifecycle—from initial design through integration, testing, delivery and deployment.
Organizations also need to leverage the right container security tools to mitigate risks. Automated security solutions include configuration management, access control, scanning for malware or cyberattacks, network segmentation, monitoring and more.
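For instance, one widely used hardening measure is a restrictive security context in the Pod specification, sketched below; the image name is hypothetical, and the settings assume the application can run as a non-root user:

```yaml
# A hardening sketch: run the container as non-root, block privilege
# escalation, drop all Linux capabilities and keep the root
# filesystem read-only.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical application image
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```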
Additionally, software tools are available to help ensure that containerized workloads adhere to compliance and regulatory standards such as GDPR and HIPAA.
Red Hat OpenShift on IBM Cloud uses Red Hat OpenShift in public and hybrid environments for velocity, market responsiveness, scalability and reliability.
From building and deploying new cloud-native applications to refactoring or replatforming existing ones, Cloud Pak for Applications (CP4Apps) has it covered.
With IBM Cloud Satellite, you can launch consistent cloud services anywhere—on premises, at the edge and in public cloud environments.
Run container images, batch jobs or source code as serverless workloads—with no sizing, deploying, networking or scaling required.
IBM Cloud Container Registry gives you a private registry that lets you manage your images and monitor them for safety issues.
Automatically determine the right resource allocation actions—and when to make them—to help ensure that your Kubernetes environments and mission-critical apps get exactly what they need to meet your SLOs.
Fusion software runs anywhere Red Hat OpenShift runs—on public cloud, on-premises, bare metal and virtual machines. Fusion provides an easy way to deploy Red Hat OpenShift applications and IBM watsonx™.
New IBM research documents the surging momentum of container and Kubernetes adoption.
Container orchestration is a key component of an open hybrid cloud strategy that lets you build and manage workloads from anywhere.
Docker is an open-source platform for building, deploying and managing containerized applications.
Containerization plays a crucial role in modern application development. In this video, Chris Rosen walks through four use cases for software development and IT operations to help you maximize performance and uptime, minimize cost, stay agile and remain in compliance.
Kubernetes, also known as k8s or kube, is an open source container orchestration platform for scheduling and automating the deployment, management and scaling of containerized applications.
Load balancing is the process of distributing network traffic efficiently among multiple servers to optimize application availability and ensure a positive end-user experience.
All links reside outside ibm.com
1 Container Technology Market Size, Share, Growth, And Industry Analysis, By Type (Docker, Rkt, CRI-O, & Others), By Application (Monitoring, Data Management, Security, & Others), Regional Insights, and Forecast From 2024 To 2031, Business Research Insights, February 2024.
2 A Brief History of Containers: From the 1970s Till Now, Aqua, January 10, 2020.
3 Linux Containers, https://linuxcontainers.org/
4 About Docker, Docker.
5 Open Source Projects, Docker.
6 Open Container Initiative, The Linux Foundation.
7 Top 5 containerization technologies in 2024, 6sense.
8 Cloud Native Computing Foundation.
9 Kubernetes Project Journey Report, CNCF, June 8, 2023.