Let’s rethink IT DevOps

What if you were to design your IT DevOps process for a new company? What would you automate to make better predictions and accelerate application delivery?

“Many companies exist because of applications, which means application performance is the most critical measurement outside of revenue,” says Chris Farrell, vice president of automation value services software at IBM. “When your application is your business, speed is both a weapon and a proxy for the quality of your application.”

In this world of hyper-deployment, Farrell says it’s critical that organizations “flip the script” in how they think about achieving continuous integration and continuous delivery (CI/CD). “Instead of talking about the number of days between deployments, you should be talking about the number of updates per time period,” he says. “The shorter the time period, the more you’re moving up the line.”

IBM’s series “Rethink & Automate” invites leaders to reimagine common business and IT processes by approaching them from a greenfield perspective and embracing automation. The typical DevOps process is a cyclical set of eight steps: planning, coding, building, testing, releasing, deploying, operating and monitoring. When one of the eight steps slows down, the whole pipeline slows down.

“Outside of the ‘born digital’ world, improving speed may be even more critical to large incumbent enterprises,” writes IBM’s Hans A.T. Dekkers in "The speed of smarter architecture", a paper published by the IBM Institute for Business Value. “When we see the average lifespan of companies on the S&P 500 drop from 60 years (in the 1960s) to under 20 years (today), with an accelerating trend toward even higher turnover, we’re seeing the effects of having — or lacking — speed.”

Take action

Discover new ways to improve your IT DevOps process in a complimentary Automation Innovation Workshop.

Request a workshop

To achieve CI/CD, developers need to build once, deploy anywhere and manage the pipeline constantly. Here’s how Farrell would redesign the typical cycle using automation, noting that any improvements require “a complete commitment to DevOps and a desire to reach and achieve continuous delivery.”

Move from monitoring to observability

“This may surprise people, but if I were to redesign the DevOps process from scratch, my first focus would be the last step: monitoring,” Farrell says. “You should get rid of tools from the traditional monitoring space and move to observability as quickly as possible. Remember, the more workloads you apply observability to, the more quickly and accurately any Ops member can navigate from a problem to its root cause, without involving developers and other subject matter experts.”

In IT, observability refers to software tools and practices for aggregating, correlating and analyzing a steady stream of performance data from a distributed application, along with the hardware and network it runs on, so you can more effectively troubleshoot and debug the application and network. Observability is a natural evolution of application performance monitoring (APM) to better address the increasingly rapid, distributed and dynamic nature of cloud-native application deployments.
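The aggregation-and-correlation idea described above can be sketched in a few lines. The sketch below is illustrative only: the event fields, trace IDs and root-cause heuristic are invented for this example and do not represent any particular platform's API.

```python
from collections import defaultdict

# Hypothetical telemetry events from different layers (app spans, infra
# metrics, log entries), all tagged with a shared trace_id for correlation.
events = [
    {"trace_id": "t1", "source": "app",   "name": "checkout",     "error": True,  "ms": 2300},
    {"trace_id": "t1", "source": "infra", "name": "db-pool",      "error": True,  "ms": 2100},
    {"trace_id": "t1", "source": "log",   "name": "conn timeout", "error": True,  "ms": 0},
    {"trace_id": "t2", "source": "app",   "name": "search",       "error": False, "ms": 45},
]

def correlate(events):
    """Group events from every layer by trace_id so one failing request
    can be inspected end to end, app code and infrastructure together."""
    traces = defaultdict(list)
    for e in events:
        traces[e["trace_id"]].append(e)
    return traces

def probable_root_cause(trace):
    """Naive heuristic: the last erroring event below the application layer
    is the first place to look. Real platforms use far richer analysis."""
    errors = [e for e in trace if e["error"]]
    infra = [e for e in errors if e["source"] != "app"]
    return (infra or errors or [None])[-1]

traces = correlate(events)
cause = probable_root_cause(traces["t1"])
print(cause["source"], cause["name"])  # walks from the symptom to the log-level event
```

Because every event carries the correlating ID, an operator (or an analytics engine) can navigate from a slow checkout to the underlying connection timeout without paging a developer, which is the workflow Farrell describes.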

Outside of monitoring, every other step of the DevOps process already has many tools that accelerate, integrate and automate it. “Traditional monitoring tools struggle with accelerated pipelines and modern tech stacks, specifically because the manual setup, reconfiguration and/or redeployment slow things down,” Farrell says. An observability platform, by contrast, delivers understanding — visibility with context — and adjusts to any changes in real time, meaning it’s always up to date.

“Observability is more democratic,” Farrell says. “It’s built to help everyone who has a stake in applications see the data they need to see.”

Observability also ties together applications and infrastructure, which is necessary as the lines between application code, code-based infrastructure and hardware stacks blur. “If you think about the need for speed across the pipeline, the platforms have to be able to be just as flexible and fast as the application code itself,” Farrell says.

Automate observability for more speed and results

“The need to go to observability is absolute, but it has to be automated,” Farrell says. An automated observability platform with an analytics engine allows the platform itself to deliver understanding, recommendations and remediation for problems. You no longer have to spend time diagnosing problems; it’s done automatically.

Automation across the IT DevOps process provides a number of other benefits beyond speed. Continuous feedback means developers can rapidly and decisively take action for ongoing improvement. Improved error detection enables developers to remediate before errors cause what Farrell describes as “catastrophic” impacts. And finally, system integration improves team collaboration, enabling all IT and DevOps professionals within a team to change code, respond to feedback and rectify issues without slowing down their colleagues.

How to measure success

Three ways businesses can evaluate speed and frequency in IT DevOps

Developer velocity

Also called “software delivery speed,” a term for the speed of development and updates (and what organizations should focus on improving in their DevOps process)

Concept-to-cash lead time

The time it takes for an idea to become released software (or any single update) that begins generating revenue

Sense and respond

How effectively a business (and its associated applications) can respond to changes in the business environment

According to the DORA 2018 Accelerate: State of DevOps report, “elite performing organizations” have 46x more frequent code deployments, a 2,555x faster lead time from commit to deploy, a 7x lower change failure rate, and are 2,604x faster to recover from incidents. You can see the exponential benefit of more frequent deployments leading to accelerated releases of new software — and thousands of times faster incident resolutions. “One of my favorite correlations is the reduction in change failure rate, even as you deploy faster,” Farrell says.
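Measures like these can be computed directly from a team's own deployment records. Below is a minimal sketch in Python; the record fields and sample data are invented for illustration and are not how DORA collected its survey data.

```python
from datetime import datetime

# Invented sample deployment records: commit time, deploy time, whether the
# change caused a failure, and (if so) when service was restored.
deploys = [
    {"commit": datetime(2023, 5, 1, 9),  "deploy": datetime(2023, 5, 1, 10),
     "failed": False, "restored": None},
    {"commit": datetime(2023, 5, 1, 14), "deploy": datetime(2023, 5, 1, 15),
     "failed": True,  "restored": datetime(2023, 5, 1, 15, 30)},
    {"commit": datetime(2023, 5, 2, 11), "deploy": datetime(2023, 5, 2, 12),
     "failed": False, "restored": None},
    {"commit": datetime(2023, 5, 2, 16), "deploy": datetime(2023, 5, 2, 17),
     "failed": False, "restored": None},
]

def dora_metrics(deploys, period_days):
    """Compute the four DORA-style measures from deployment records."""
    n = len(deploys)
    frequency = n / period_days                                    # deployments per day
    lead_time = sum((d["deploy"] - d["commit"]).total_seconds()
                    for d in deploys) / n / 3600                   # avg commit-to-deploy, hours
    failures = [d for d in deploys if d["failed"]]
    change_failure_rate = len(failures) / n                        # share of deploys causing failures
    mttr = (sum((d["restored"] - d["deploy"]).total_seconds()
                for d in failures) / len(failures) / 3600
            if failures else 0.0)                                  # mean time to restore, hours
    return frequency, lead_time, change_failure_rate, mttr

freq, lead, cfr, mttr = dora_metrics(deploys, period_days=2)
print(f"{freq} deploys/day, {lead}h lead time, {cfr:.0%} failure rate, {mttr}h MTTR")
```

Tracking these four numbers over time, rather than the number of days between deployments, is exactly the “updates per time period” framing Farrell recommends.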

When organizations automate all eight steps of the process, they can expect higher quality and better customer satisfaction. But Farrell says his favorite benefit is speed. “One example I saw was at a bank. It would take them approximately 10 to 12 months to take a product from idea to live. Once they put their new DevOps processes in place, that timeframe dropped to two weeks,” he says. “You see absolute, direct results of success in the marketplace.”

Next steps

Enhance your application performance monitoring.

Discover the ease of use of IBM Instana Observability.

Play in the sandbox
Next chapter

 

Let’s rethink cloud operations.

Read chapter 4