March 17, 2021 | By Sai Vennam | 5 min read

By providing consistent services across environments (on-premises, cloud platform and at the edge), a distributed cloud architecture makes many points of IT friction disappear.


The world has shifted in the direction of using more cloud native (or cloud-agnostic) capabilities. There are many open source technologies like Kubernetes that run the same regardless of the cloud environment. Starting from scratch is easier today because cloud native apps are inherently easier to integrate. But most companies aren’t starting from scratch, and it is often a challenge to integrate cloud native capabilities with existing applications in on-premises data centers or different cloud environments.

Before we go deeper into that, I think it’s important to understand the basics of integration and why this growing necessity brings some significant pain to DevOps teams.

The case for application integration

In the 1990s and early 2000s, many industry-specific tools and capabilities locked customers into a particular vendor. It was hard to escape because of all the proprietary tools used to build their apps.

But guess what? People did it anyway. So, today, these companies have different teams running apps on different tools — Apache Tomcat® and JBoss®, for example — and they have to find a way to get everything to work together. That’s where integration comes in.

There are three primary ways integration happens:

  1. Application programming interface (API): Without APIs, most software today wouldn’t exist. But APIs don’t just give us access to data; they also manage the mechanics of how applications interact with one another. So standardized rules, established contracts (API docs) and API management are important.
  2. Event-driven architectures: You can use message-queuing services (like IBM MQ) or event-streaming capabilities (like open source Apache Kafka) to set up event-driven architectures. These architectures use a queue as a middle integration layer that keeps incoming application transactions from being lost due to database constraints, which helps provide a better user experience. Learn more about the difference between event-driven architecture and event streaming.
  3. Data transfer: Synchronizing data from on-premises to the cloud can be expensive and time consuming, so having high-speed data transfer is important. Once transferred, you have to be able to access your data from your cloud-native apps.
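To make the second category concrete, here is a minimal sketch of the queue-as-buffer pattern behind event-driven integration. It is not tied to IBM MQ or Kafka; it simply shows why putting a queue between incoming transactions and a constrained database prevents data loss.

```python
import queue
import threading

# Incoming transactions land in a queue immediately; a separate worker
# drains them at whatever rate the backing store can sustain.
transactions = queue.Queue()
database = []  # stand-in for a rate-limited database

def handle_request(payload):
    # The producer returns as soon as the event is enqueued; nothing is
    # lost if the database is momentarily slow or unavailable.
    transactions.put(payload)

def db_writer():
    while True:
        item = transactions.get()
        if item is None:  # sentinel value tells the worker to stop
            break
        database.append(item)  # the slow, constrained write happens here

worker = threading.Thread(target=db_writer)
worker.start()

for i in range(5):
    handle_request({"order_id": i})

transactions.put(None)  # shut down the worker
worker.join()
print(len(database))  # all five transactions persisted
```

Real message brokers add durability, acknowledgement and replay on top of this idea, but the core decoupling of producer from consumer is the same.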

Pain points of integration

These days, development and operations are tightly intertwined, which is why “DevOps” is such a common term now. Within a team, developers do more than dev-oriented tasks; they also handle ops-oriented tasks related to their specific application development domain. And when you have people working in multiple ways across multiple environments, operational expenses go up. Let’s look at an example.

Say you work at a warehouse distribution service company with 2,000 distribution centers spread out nationwide and a couple of cloud data center hubs: one on the U.S. West Coast and another on the U.S. East Coast.

In each distribution center, you might have a Kubernetes cluster running locally to keep track of what inventory is in the warehouse and what’s available. So now, with those 2,000 distribution centers, you have at least 2,000 Kubernetes clusters. Don’t forget, you also have the two hubs with main Kubernetes clusters communicating with all those edge environments.

Now, say you have a new version of an app that needs to be rolled out across all 2,000 distribution centers. This scenario is painful for your operations team. This is where distributed cloud comes into play.

Distributed cloud provides a single view

Distributed cloud enables teams to focus more on the actual application and development of the code, and less on the deployment and operational aspect of it. Essentially, distributed cloud means that regardless of where your Kubernetes clusters are running, you can manage all of them from a central public cloud location.

See my video for a deeper dive on distributed cloud.

Going back to our warehouse scenario, if an operations engineer wants to roll out an app update, they’ll go to one of those two hubs managing the rest of the distributed cloud and let the public cloud handle the rollout to all the edge locations. This works because your public cloud knows exactly how those edge locations and all those clusters are running.
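The hub-driven rollout described above can be sketched as a simple reconciliation loop. The cluster names and state fields below are illustrative, not a real control-plane API: the point is that one command at the hub fans out the desired version to every registered edge cluster.

```python
# Hypothetical registry of edge clusters, as the hub's control plane
# might track them. "warehouse-NNNN" names are made up for illustration.
edge_clusters = {f"warehouse-{i:04d}": {"app_version": "1.0.0"}
                 for i in range(2000)}

def rollout(desired_version):
    # A single operation at the hub: reconcile every edge cluster
    # against the desired application version.
    for state in edge_clusters.values():
        state["app_version"] = desired_version

rollout("1.1.0")

# Verify convergence across all 2,000 edge locations.
outdated = [name for name, state in edge_clusters.items()
            if state["app_version"] != "1.1.0"]
print(len(outdated))  # 0: every cluster converged
```

A real distributed cloud platform layers scheduling, health checks and staged rollouts on top of this loop, but the operator still issues one change in one place.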

Considering all the apps and hybrid environments of a single enterprise, the overall integration portfolio can sprawl into many separate solutions, which is a problem in itself: it’s time-consuming, expensive and inefficient. What the enterprise needs is a single platform, a single pane of glass, if you will. That single view is part of a distributed cloud architecture.

Phases of integration with distributed cloud

Now, when a company has numerous technologies performing different integration functions, it is carrying a large amount of technical debt, so to speak, and that complexity is something the team has to manage and maintain. Distributed cloud addresses this in two general phases.

Phase 1: Freeing DevOps from the burden of platform management

Suppose you need a way to manage your public APIs, establish rate limiting and set up public gateways. Instead of investing resources to set up and operate open source projects manually, you can go with an enterprise solution such as IBM API Connect®, which IBM maintains and manages as-a-service in the IBM public cloud. As a user, you aren’t managing it; you simply go in and use the software.

Taking advantage of the as-a-service capabilities allows your developers to focus on what matters: writing and publishing APIs. The company saves effort, time and money.
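As a sense of what an API management layer handles for you, here is an illustrative token-bucket rate limiter, one common policy a managed gateway enforces so your team doesn’t have to build and operate it. This is a generic sketch, not IBM API Connect’s actual implementation or API.

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter (illustrative only)."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # a gateway would reject this call (HTTP 429)

# A client bursts 8 requests against a bucket allowing 5.
bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(8)]
print(results.count(True))  # 5: the burst allowance is exhausted
```

With a managed gateway, this policy becomes configuration rather than code your developers maintain, which is exactly the effort the as-a-service model saves.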

Phase 2: Consolidating is crucial for integration

Integration involves more than API management. As I mentioned, there are three main categories: API management, event-driven architectures and data transfer. And, of course, there are smaller sub-categories under all three of those.

Having different vendors for these categories means multiple environments, which creates complexity. True integration is about reducing complexity by reducing the number of pieces and consolidating as much as possible. IBM Cloud Pak® for Integration, for example, consolidates multiple tools in a versioned package. This means you know your API management tool is going to work seamlessly with the event-driven architectures, message queuing and data transfer services.

Regardless of the platform, the need for consolidated integration is crucial. You don’t want the complexity of using multiple tools from multiple vendors and then lose time and money trying to patch everything together. The goal is that single pane of glass.

Consolidated integration with distributed cloud

How does distributed cloud tie into integration? Since the operational expense of multiple environments can be so high, with many clusters running in different places, companies look at the centralized control of a distributed cloud to solve the puzzle.

When you’re running the same version of a container across multiple edge environments, you need a single, consolidated way to get data about all of them. With a distributed cloud, it is easier to see what clusters are running, where your applications are healthy and, most importantly, the condition of the service endpoints of those clusters.

IBM Cloud Satellite is the distributed cloud offering from IBM public cloud. When you have a distributed cloud like IBM Cloud Satellite, you can simply query it to give you all of the application endpoints for all of the clusters running in all your edge locations. Just like that, it’ll output a list for you.
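That “single query” idea can be sketched as follows. The registry, cluster names and URLs below are hypothetical stand-ins, not IBM Cloud Satellite’s actual data model: the point is that one call at the hub flattens per-cluster endpoint lists into a single consolidated view.

```python
# Hypothetical view of what a distributed cloud control plane tracks
# for each managed cluster. All names and URLs are illustrative.
cluster_registry = {
    "warehouse-0001": {"region": "us-west",
                       "endpoints": ["https://wh1.example.com/api"]},
    "warehouse-0002": {"region": "us-east",
                       "endpoints": ["https://wh2.example.com/api"]},
    "hub-west":       {"region": "us-west",
                       "endpoints": ["https://hub-w.example.com/api"]},
}

def list_all_endpoints(registry):
    # One query at the hub returns every application endpoint across
    # every edge location, already deduplicated into one sorted list.
    return sorted(endpoint
                  for state in registry.values()
                  for endpoint in state["endpoints"])

for endpoint in list_all_endpoints(cluster_registry):
    print(endpoint)
```

The value is less in the flattening itself than in where it runs: the hub already knows about every cluster, so no one has to query 2,000 edge locations by hand.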

From there, you have a better way of integrating the apps that need to work together without wasting time on unnecessary integrations. Not all of those edge locations are talking to each other, but they do need to talk to the main hub. With IBM Cloud Satellite, you can make sure that communication is seamless without wasting time elsewhere.

The key thing to remember is you do not want to use multiple integration tools from multiple vendors. It’s expensive, time-consuming and — thanks to distributed cloud — it’s unnecessary.

Get started with IBM Cloud Satellite

Distributed cloud allows teams to have faster and easier health checks, seamless integration, better visibility and reliable management. On top of it all, distributed cloud solutions like IBM Cloud Satellite help reduce the overall pain points and operational expenses of managing multiple cloud native environments across multiple locations. That is something every DevOps team can celebrate.

Learn more about IBM Cloud Satellite.
