What is DevOps?

DevOps is a software development methodology that accelerates the delivery of higher-quality applications and services by combining and automating the work of software development and IT operations teams.

With shared tools and practices, including small but frequent updates, software development becomes more efficient, faster and more reliable.

By definition, DevOps (short for development and operations) describes both a software development process and an organizational culture shift that fosters coordination and collaboration between the development and IT operations teams, two groups that traditionally worked separately from each other, in silos.

In practice, the best DevOps processes and cultures extend beyond development practices and operations to incorporate inputs from all application stakeholders into the software development lifecycle. This includes platform and infrastructure engineers, security, compliance, governance, risk management and line-of-business teams, users and customers.

DevOps principles represent the current state in the evolution of the software delivery process during the past 20-plus years. The delivery process has progressed from giant application-wide code releases every several months or even years, to iterative smaller feature or functional updates, released as frequently as every day or several times per day.

Ultimately, DevOps is about meeting software users’ ever-increasing demand for frequent, innovative new features and uninterrupted performance and availability.

How we got to DevOps

Before 2000, most software was developed and updated by using the waterfall methodology, a linear approach to large-scale development projects. Software development teams spent months developing large bodies of new code that impacted most or all of the application. Because the changes were so extensive, they spent several more months integrating that new code into the code base.

Next, quality assurance (QA), security and operations teams spent still more months testing the code. The result was months or even years between software releases, and often several significant patches or bug fixes between releases as well. This big bang approach to feature delivery was often characterized by complex and risky deployment plans, hard-to-schedule interlocks with upstream and downstream systems, and IT's great hope that the business requirements had not changed drastically in the months leading up to the production "go live" or the general availability (GA) release.

Agile development

To speed development and improve quality, development teams began adopting agile software development methodologies in the early 2000s. These methodologies are iterative rather than linear and focus on making smaller, more frequent updates to the application code base. Foremost among these practices are continuous integration and continuous delivery (CI/CD).

In CI/CD, smaller chunks of new code are merged into the code base at frequent intervals, and then automatically integrated, tested and prepared for deployment to the production environment. Agile modified the big bang approach into a series of smaller snaps, which also compartmentalized risks.

The more effectively these agile development practices accelerated software development and delivery, the more they exposed still-siloed IT operations (for example, system provisioning, configuration, acceptance testing, management and monitoring) as the next bottleneck in the software delivery lifecycle.

So, DevOps grew out of agile. It added new processes and tools that extend the continuous iteration and automation of CI/CD to the remainder of the software delivery lifecycle. And it implemented close collaboration between development and operations at every step in the process.

How DevOps works: The DevOps lifecycle

The DevOps lifecycle (sometimes called the continuous delivery pipeline, when portrayed in a linear fashion) is a series of iterative, automated development processes, or workflows, run within a larger, automated and iterative development lifecycle, designed to optimize the rapid delivery of high-quality software. Workflow names and the number of workflows differ depending on whom you ask, but they often include these eight steps.

Planning

In this workflow, teams scope out new features and functions for the next release, drawing from prioritized user feedback and case studies, as well as inputs from all internal stakeholders. The goal of the planning stage is to maximize the business value of the product by producing a backlog of features that enhance product value.

Coding

This is the programming step, where developers code and build new and enhanced features based on user stories and work items in the backlog. A combination of practices such as test-driven development (TDD), pair programming and peer code reviews are common. Developers often use their local workstations to perform the inner loop of writing and testing code before sending it down the continuous delivery pipeline.

Building (continuous integration and continuous delivery, or CI/CD)

In this workflow, the new code is integrated into the existing code base, then tested and packaged for release and deployment. Common automation activities include merging code changes into a master copy, checking out that code from a source code repository, and automating the compile, unit test and packaging steps into an executable file. The best practice is to store the output of the CI phase in a binary repository for the next phase.
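
For illustration, here is a minimal sketch of what such a CI job can look like when scripted in Python. It assumes a hypothetical Git repository URL, a pytest test suite and the Python build module; a real pipeline would run the same checkout, test, package and archive steps inside a CI tool such as Jenkins.

```python
"""Minimal CI sketch: check out code, run unit tests, package, and archive the artifact.
The repository URL, branch name and artifact paths are illustrative placeholders."""
import shutil
import subprocess
from pathlib import Path

REPO_URL = "https://example.com/acme/shop.git"   # hypothetical repository
WORKSPACE = Path("workspace")
ARTIFACT_STORE = Path("binary-repo")             # stands in for a binary repository


def run(cmd, cwd=None):
    """Run a command and fail the build on a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)


def ci_build():
    # 1. Check out the merged code from the source code repository.
    if WORKSPACE.exists():
        shutil.rmtree(WORKSPACE)
    run(["git", "clone", "--branch", "main", REPO_URL, str(WORKSPACE)])

    # 2. Run the automated unit tests; check=True aborts the pipeline on failure.
    run(["python", "-m", "pytest", "tests/"], cwd=WORKSPACE)

    # 3. Package the application (here: a Python wheel, via the 'build' package).
    run(["python", "-m", "build", "--wheel"], cwd=WORKSPACE)

    # 4. Store the output in a binary repository for the next phase.
    ARTIFACT_STORE.mkdir(exist_ok=True)
    for wheel in (WORKSPACE / "dist").glob("*.whl"):
        shutil.copy(wheel, ARTIFACT_STORE / wheel.name)
        print(f"archived {wheel.name}")


if __name__ == "__main__":
    ci_build()
```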

Testing

Teams use testing, often automated testing, to make sure that the application meets standards and requirements. The classical DevOps approach includes a discrete test phase that occurs between building and release.

However, DevOps has advanced such that certain elements of testing can occur in planning (behavior-driven development), development (unit testing, contract testing), integration (static code scans, CVE scans, linting), deployment (smoke testing, penetration testing, configuration testing), operations (chaos testing, compliance testing) and learning (A/B testing).

Continuous testing is a powerful form of risk and vulnerability identification and provides an opportunity for IT to accept, mitigate or remediate risks. In addition, shift-left testing is an approach in software development that emphasizes moving testing activities earlier in the development process. This approach drives better product quality, better test coverage, continuous feedback loops and a faster time to market.
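
As a concrete illustration of shift-left testing, the sketch below shows the kind of unit tests a developer might commit alongside the code itself so that they run automatically on every build. The pricing rules and the calculate_discount function are hypothetical, and pytest is assumed as the test runner.

```python
# test_pricing.py -- shift-left unit tests that run automatically on every commit.
# The pricing rules and the calculate_discount function are hypothetical.
import pytest


def calculate_discount(order_total: float, is_member: bool) -> float:
    """Return the discounted total: members get 10% off orders over 100."""
    if order_total < 0:
        raise ValueError("order_total must be non-negative")
    if is_member and order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total


def test_member_discount_applied():
    assert calculate_discount(200.0, is_member=True) == 180.0


def test_no_discount_for_small_orders():
    assert calculate_discount(50.0, is_member=True) == 50.0


def test_no_discount_for_non_members():
    assert calculate_discount(200.0, is_member=False) == 200.0


def test_rejects_negative_totals():
    # Specifying expected behavior in a test first is the essence of TDD.
    with pytest.raises(ValueError):
        calculate_discount(-1.0, is_member=False)
```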

Release

The first of the operations stages, the release stage is the last before the users access the application. In this workflow, the runtime build output (from integration) is deployed to a runtime environment, usually a development environment, where runtime tests are run for quality, compliance and security.

If errors or defects are found, developers have a chance to intercept and remediate any problems before any users see them. There are typically environments for development, testing and production, with each environment requiring progressively stricter quality gates. When developers have fixed all identified issues and the application meets all requirements, the operations team confirms it is ready for deployment and promotes the build to the production environment.

Deploy

Deployment is when the project moves to a production environment where users can access the changes to the application. Infrastructure is set up and configured (often by using infrastructure as code) and application code is deployed. A good practice for deployment to a production environment is to deploy first to a subset of end users, and then eventually to all users once stability is established.
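
The sketch below illustrates one common way to implement that practice: a canary-style rollout that deterministically routes a small percentage of users to the new release. The percentages and user IDs are illustrative; in practice a load balancer, service mesh or feature-flag system usually makes this routing decision.

```python
"""Canary-style rollout sketch: expose a new release to a small, stable subset of
users first, then widen the rollout once the release proves stable. The rollout
percentage and user identifiers are illustrative."""
import hashlib

CANARY_PERCENT = 5  # start by routing 5% of users to the new release


def routes_to_canary(user_id: str, canary_percent: int = CANARY_PERCENT) -> bool:
    """Deterministically assign a user to the canary based on a hash of their ID,
    so the same user always sees the same version during the rollout."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < canary_percent


if __name__ == "__main__":
    users = [f"user-{i}" for i in range(1000)]
    canary_users = [u for u in users if routes_to_canary(u)]
    print(f"{len(canary_users)} of {len(users)} users routed to the new release")
    # Once monitoring shows the canary is healthy, raise CANARY_PERCENT step by
    # step (for example 5 -> 25 -> 50 -> 100) until all users are on the release.
```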

Operate

If getting features delivered to a production environment is characterized as “Day 1”, then once features are running in production, “Day 2” operations begin. Monitoring feature performance, behavior and availability helps ensure that the features provide value to users.

In this stage, teams check that features are running smoothly and that there are no interruptions in service, making sure the network, storage, platform, compute and security postures are all healthy. If issues occur, operations teams identify the incident, alert the proper personnel, troubleshoot problems and apply fixes.
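
The following sketch shows the basic shape of such a "Day 2" check: probe a service endpoint on an interval and alert the on-call team after repeated failures. The URL, thresholds and alerting hook are placeholders for whatever monitoring and incident-management tooling a team actually uses.

```python
"""Day 2 operations sketch: periodically probe a service endpoint and alert the
on-call team when it stops responding. The URL and alerting hook are placeholders."""
import time
import urllib.error
import urllib.request

HEALTH_URL = "https://shop.example.com/healthz"  # hypothetical health endpoint
CHECK_INTERVAL_SECONDS = 30
FAILURE_THRESHOLD = 3  # alert only after consecutive failures to avoid noise


def is_healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False


def alert_on_call(message: str) -> None:
    # In practice this would page an incident-management tool; here we just log.
    print(f"ALERT: {message}")


def monitor() -> None:
    failures = 0
    while True:
        if is_healthy(HEALTH_URL):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                alert_on_call(f"{HEALTH_URL} failed {failures} consecutive checks")
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    monitor()
```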

Monitor

This is the gathering of feedback from users and customers on features, functions, performance and business value, which feeds back into planning for enhancements and features in the next release. It also includes any learning and backlog items from the operations activities that can help developers proactively prevent known incidents from recurring. This is the point where the "wraparound" to the planning phase occurs, driving continuous improvement.

There are two other important continuous workflows in the lifecycle:

Security

While waterfall methodologies and agile implementations “tack on” security workflows after delivery or deployment, DevOps strives to incorporate security from the start (planning), when security issues are easiest and least expensive to address, and run continuously throughout the rest of the development cycle. This approach to security is referred to as shifting left. Some organizations have had less success shifting left than others, which led to the rise of DevSecOps (development, security and operations).

Compliance

It is also best to address regulatory governance, risk and compliance (GRC) early and throughout the development lifecycle. Regulated industries are often mandated to provide a certain level of observability, traceability and access to how features are delivered and managed in their runtime operational environment.

This requires planning, development, testing and enforcement of policies in the continuous delivery pipeline and the runtime environment. Auditability of compliance measures is important for proving compliance with third-party auditors.

DevOps culture

Business leaders generally agree that DevOps methods don’t work without a commitment to DevOps culture, that is, a different organizational and technical approach to software development.

At the organizational level, DevOps requires continuous communication, collaboration and shared responsibility among all software delivery stakeholders. This includes software development and IT operations teams, of course, but also security, compliance, governance, risk and line-of-business teams, all working together to innovate quickly and continually and to focus on quality from the start.

Usually, the best way to accomplish this is to break down silos and reorganize personnel into cross-functional, autonomous DevOps teams that can work on projects from start to finish (planning to feedback) without making handoffs to, or waiting for the approval of, other teams. In the context of agile development, shared accountability and collaboration are the bedrock of a shared product focus with valuable outcomes.

At the technical level, DevOps requires a commitment to automation that keeps projects moving within and between workflows. It also requires feedback and measurement that enables teams to continually accelerate cycles and improve software quality and performance.

Benefits of DevOps

Better collaboration

Fostering a culture of collaboration and removing silos brings the work of developers and operations teams closer together, which boosts efficiency and reduces workload by combining workflows. Because developers and operations teams share many responsibilities, there are fewer surprises as projects progress. DevOps teams know exactly what environment the code runs in as they develop it.

Accelerated delivery

DevOps teams deliver new code faster through increased collaboration and the creation of more focused (and more frequent) releases by using a microservices architecture. This process drives improvements, innovations and bug fixes to market sooner.

It also allows organizations to adapt to market changes more quickly and better meet customer needs, resulting in increased customer satisfaction and competitive advantage. The software release process can be automated with continuous delivery and continuous integration.

Greater reliability

Continuous delivery and continuous integration include automated testing to help ensure the quality and reliability of software and infrastructure updates. Monitoring and logging verify performance in real time.

Quicker scaling

Automation, including infrastructure as code, can help manage development, testing and production environments, and enables faster scaling with greater efficiency.

Enhanced security

DevSecOps integrates security into the continuous integration, delivery and deployment process so that security is built in from the start, rather than retrofitted. Teams build security testing and audits into workflows by using infrastructure as code to help maintain control and track compliance.

Increased job satisfaction

A DevOps approach can help improve job satisfaction by automating mundane, repetitive tasks and enabling employees to focus on more gratifying work that drives business value.

DevOps tools: Building a DevOps toolchain

The demands of DevOps and DevOps culture put a premium on tools that support asynchronous collaboration, seamlessly integrate DevOps workflows, and automate the entire DevOps lifecycle as much as possible.

Categories of DevOps tools include:

Project management tools

Project management tools enable teams to build a backlog of user stories (requirements) that form coding projects, break them down into smaller tasks and track the tasks through to completion. Many tools support agile project management practices, such as Scrum, Lean and Kanban, that developers bring to DevOps. Popular options include GitHub Issues and Jira.

Collaborative source code repositories

These are version-controlled coding environments that enable multiple developers to work on the same code base. Code repositories should integrate with CI/CD, testing and security tools, so that when code is committed to the repository it can automatically move to the next step. Popular hosted code repositories include GitHub and GitLab.

CI/CD pipelines

These are tools that automate code checkout, building, testing and deployment. Jenkins is the most popular open source tool in this category; many previously open source alternatives, such as CircleCI, are now available in commercial versions only.

For continuous deployment (CD), Spinnaker straddles the application and infrastructure-as-code layers. Argo CD is another popular open source choice for Kubernetes-native CI/CD.

Test automation frameworks

These include software tools, libraries and best practices for automating unit, contract, functional, performance, usability, penetration and security tests. The best of these tools support multiple languages. Some use artificial intelligence (AI) to automatically reconfigure tests in response to code changes. The expanse of test tools and frameworks is far and wide. Popular open source test automation frameworks include Selenium, Appium, Katalon, Robot Framework and Serenity (formerly known as Thucydides).
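
As a small example of what an automated UI test looks like in one of these frameworks, the sketch below drives a browser with Selenium WebDriver from Python; the application URL and page elements are illustrative, and a local Chrome browser driver is assumed.

```python
"""A minimal Selenium UI test sketch. Assumes the selenium package and a local
Chrome/chromedriver installation; the URL and page elements are illustrative."""
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_login_page_has_submit_button():
    driver = webdriver.Chrome()
    try:
        driver.get("https://shop.example.com/login")  # hypothetical application URL
        # Locate the form fields and the submit button the test expects to exist.
        username = driver.find_element(By.NAME, "username")
        password = driver.find_element(By.NAME, "password")
        submit = driver.find_element(By.CSS_SELECTOR, "button[type='submit']")
        assert username.is_displayed()
        assert password.is_displayed()
        assert submit.is_enabled()
    finally:
        driver.quit()
```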

Configuration management tools

Configuration management tools (also known as infrastructure as code tools) enable DevOps engineers to configure and provision fully versioned and fully documented infrastructure by running a script. Open source options include Ansible (Red Hat®), Chef, Puppet and Terraform. Kubernetes performs the same function for containerized applications.
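
The toy sketch below illustrates the desired-state idea these tools share: declare what the infrastructure should look like, compare it with what actually exists, and plan only the changes needed to converge the two. The hosts and settings are invented; real tools apply such plans against live systems through their own modules and providers.

```python
"""A toy illustration of the desired-state model behind configuration management
tools: declare target infrastructure, diff it against reality, apply the gap.
The hosts, packages and ports below are invented for illustration."""

desired_state = {
    "web-server-1": {"packages": {"nginx"}, "port": 443},
    "web-server-2": {"packages": {"nginx"}, "port": 443},
}

actual_state = {
    "web-server-1": {"packages": {"nginx"}, "port": 443},  # already compliant
    "web-server-2": {"packages": set(), "port": 80},        # has drifted
}


def plan_changes(desired: dict, actual: dict) -> list[str]:
    """Compute the actions needed to converge actual state onto desired state."""
    actions = []
    for host, spec in desired.items():
        current = actual.get(host, {"packages": set(), "port": None})
        for package in spec["packages"] - current["packages"]:
            actions.append(f"{host}: install {package}")
        if current["port"] != spec["port"]:
            actions.append(f"{host}: set listen port to {spec['port']}")
    return actions


if __name__ == "__main__":
    for action in plan_changes(desired_state, actual_state):
        print(action)  # a real tool would now apply these changes idempotently
```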

Monitoring tools

Monitoring tools help DevOps teams identify and resolve system issues. They also gather and analyze data in real time to reveal how code changes affect application performance. Popular monitoring tools include Datadog, Nagios, Prometheus and Splunk.
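
As an illustration of how applications feed these tools, the sketch below exposes basic metrics for scraping by using the prometheus_client Python library (pip install prometheus-client); the metric names and the simulated workload are illustrative.

```python
"""Sketch of an application exposing metrics for a monitoring tool to scrape,
using the prometheus_client library. Metric names and workload are illustrative."""
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


@LATENCY.time()
def handle_request() -> None:
    # Simulate variable request processing time, then count the request.
    time.sleep(random.uniform(0.01, 0.2))
    REQUESTS.inc()


if __name__ == "__main__":
    # Prometheus (or a compatible agent) scrapes http://localhost:8000/metrics.
    start_http_server(8000)
    while True:
        handle_request()
```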

Continuous feedback tools

These tools gather feedback from users, whether through heat mapping (recording users' actions on the screen), surveys or self-service issue ticketing.

DevOps and cloud native development

Cloud native is an approach to building applications that use foundational cloud computing technologies. Cloud platforms help to enable consistent and optimal application development, deployment, management and performance across public, private and multicloud environments.

Today, cloud-native applications are typically:

  • Built by using microservices: Loosely coupled, independently deployable components that have their own self-contained stack, and communicate with each other via REST APIs, event streaming or message brokers (see the minimal example after this list).

  • Deployed in containers: Executable units of code that contain all the code, runtimes and operating system dependencies required to run the application. For many organizations, containers are synonymous with Docker containers, but other containers are available.

  • Operated (at scale) by using Kubernetes: An open source container orchestration platform for scheduling and automating the deployment, management and scaling of containerized applications.
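
To make the microservices idea concrete, here is a minimal single-purpose service sketched with Flask (pip install flask). The service name, endpoints and inventory data are illustrative; in a cloud-native setup this code would be packaged into a container image and run at scale by Kubernetes.

```python
"""A minimal single-purpose microservice sketch using Flask. The service name,
endpoint paths and inventory data are illustrative."""
from flask import Flask, jsonify

app = Flask("inventory-service")

# In a real microservice this state would live in the service's own datastore.
INVENTORY = {"sku-1001": 42, "sku-1002": 0}


@app.route("/healthz")
def healthz():
    # Liveness endpoint for the orchestrator's health checks.
    return jsonify(status="ok")


@app.route("/inventory/<sku>")
def get_inventory(sku: str):
    if sku not in INVENTORY:
        return jsonify(error="unknown sku"), 404
    return jsonify(sku=sku, quantity=INVENTORY[sku])


if __name__ == "__main__":
    # Other services call this one over its REST API, for example:
    #   GET http://inventory-service:8080/inventory/sku-1001
    app.run(host="0.0.0.0", port=8080)
```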

In many ways, cloud-native development and DevOps were made for each other. For example, developing and updating microservices, that is, the iterative delivery of small units of code to a small code base, is a perfect fit for the rapid release and management cycles of DevOps. It would be difficult to deal with the complexity of a microservices architecture without DevOps deployment and operation.

A recent IBM survey of developers and IT executives found that 78% of current microservices users expect to increase the time, money and effort they’ve invested in the architecture, and 56% of non-users are likely to adopt microservices within the next two years.

By packaging and permanently fixing all OS dependencies, containers enable rapid CI/CD and deployment cycles, because all integration, testing and deployment occur in the same environment. Kubernetes orchestration performs the same continuous configuration tasks for containerized applications as Ansible, Puppet and Chef perform for non-containerized applications.

Most leading cloud computing providers, including AWS, Google, Microsoft Azure and IBM Cloud®, offer some form of managed DevOps pipeline solution.

What is DevSecOps?

DevSecOps is DevOps that continuously integrates and automates security throughout the DevOps lifecycle, from planning through feedback and back to planning again.

Another way to put this is that DevSecOps is what DevOps was supposed to be from the start. But two of the early, significant (and for a time insurmountable) challenges of DevOps adoption were integrating security expertise into cross-functional teams (a cultural problem), and implementing security automation into the DevOps lifecycle (a technical issue). Security came to be perceived as the team of no, and as an expensive bottleneck in many DevOps practices.

DevSecOps emerged as a specific effort to integrate and automate security as originally intended. In DevSecOps, security is a first-class citizen and stakeholder along with development and operations and brings security into the development process with a product focus.

DevOps and site reliability engineering (SRE)

Site reliability engineering (SRE) uses software engineering techniques to automate IT operations tasks, such as production system management, change management, incident response and even emergency response, that systems administrators might otherwise perform manually. SRE seeks to transform the classical system administrator into an engineer.

The goal of SRE is similar to the goal of DevOps, but is more specific: SRE aims to balance an organization's desire for rapid application development with its need to meet performance and availability levels specified in service level agreements (SLAs) with customers.

Site reliability engineers achieve this balance by determining an acceptable level of operational risk caused by applications, called an error budget, and by automating operations to meet that level.
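
A short worked example shows how an error budget falls out of an availability objective; the 99.9% target and 30-day window below are illustrative.

```python
"""Worked example: turning an availability objective into an error budget.
The 99.9% objective and 30-day window are illustrative."""

slo = 0.999                       # availability objective agreed with customers
minutes_in_window = 30 * 24 * 60  # a 30-day rolling window

error_budget_minutes = (1 - slo) * minutes_in_window
print(f"Allowed downtime per 30 days: {error_budget_minutes:.1f} minutes")
# -> roughly 43.2 minutes; while budget remains, teams keep shipping changes,
#    and when it is exhausted, they slow releases and focus on reliability work.
```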

On a cross-functional DevOps team, SRE can serve as a bridge between development and operations. SRE provides the metrics and automation tools teams need to push code changes and new features through the DevOps pipeline as quickly as possible, without violating the terms of the organization’s SLAs.

The future of DevOps

As the breadth of tasks that can be automated increases, more functions are added to DevOps, which generates multiple variations of DevOps. And as DevOps proves its many benefits, business investment increases.

According to Verified Market Research, the DevOps market was valued at USD 10.96 billion in 2023 and is projected to reach USD 21.13 billion by 2031, growing at a CAGR of 21.23% from 2024 to 2031.

To help ensure DevOps success, businesses are increasingly adopting:

AIOps

Artificial intelligence for IT operations brings in AI and machine learning to automate and streamline IT operations, enabling quick analysis of huge amounts of data.

BizDevOps

BizDevOps brings business units in to collaborate on the software development process along with development and operations. Also known as DevOps 2.0, this cultural shift speeds the process and leads to stronger solutions that align with business unit goals.

Containerization

Another way to create new efficiencies is with containerization, where an app and its dependencies are encapsulated into a streamlined, portable package that runs on almost any platform.

DevSecOps

Adding more security functions at the very beginning of development propelled DevSecOps. Security is no longer an afterthought.

GitOps

GitOps focuses on storing application code in a Git repository so that it is version-controlled, available to multiple team members and fully traceable and auditable. These measures help increase efficiency, reliability and scalability.

Observability

While traditional monitoring tools provide visibility, observability platforms provide a deeper understanding of how a system is performing and, more importantly, context—the why behind the performance. In addition to providing this comprehensive understanding, observability allows all stakeholders to access the data they need to build solutions and create better applications.

Serverless architecture

Serverless computing is an application development and execution model that enables developers to build and run application code without provisioning or managing servers or backend infrastructure. In serverless architectures, developers write application code and deploy it to containers managed by a cloud service provider.
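
The sketch below shows how small that developer-owned surface can be, using the common shape of an AWS Lambda-style Python handler; the event fields and response format are illustrative.

```python
"""Sketch of a serverless function in the style of an AWS Lambda Python handler:
the developer writes only this function, and the cloud provider provisions and
scales the runtime. The event fields and response shape are illustrative."""
import json


def handler(event, context):
    # The platform invokes this function per request/event; no server to manage.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


if __name__ == "__main__":
    # Local smoke test with a fake API Gateway-style event.
    fake_event = {"queryStringParameters": {"name": "DevOps"}}
    print(handler(fake_event, context=None))
```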
