July 18, 2024 By Phill Powell 6 min read

Distributed computing uses numerous computing resources in different operating locations to mimic the processes of a single computer. It assembles different computers, servers and computer networks to accomplish computing tasks of widely varying sizes and purposes.

Distributed computing even works in the cloud. While distributed cloud computing and cloud computing are essentially the same in theory, in practice they differ in global reach: distributed cloud computing can extend cloud computing across different geographies.

In small distributed computing systems with components near each other, components can be linked through a local area network (LAN). In larger distributed systems whose components are separated by geography, components are connected through a wide area network (WAN). The components in a distributed system share information through an elaborate system of message-passing over whichever type of network is being used.
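
To make message-passing concrete, here is a minimal sketch of one node sending a request to another over TCP, using only Python’s standard socket library; the host, port and payload values are placeholders for illustration.

    # Minimal sketch of node-to-node message passing over TCP (Python standard library).
    # Host and port values are placeholders for illustration only.
    import socket

    def serve(port: int) -> None:
        """Listen for a message from a peer node and send back an acknowledgement."""
        with socket.create_server(("", port)) as server:
            conn, _addr = server.accept()
            with conn:
                data = conn.recv(4096)
                conn.sendall(b"ACK: " + data)

    def send_message(host: str, port: int, payload: bytes) -> bytes:
        """Send a message to another node and return its reply."""
        with socket.create_connection((host, port)) as conn:
            conn.sendall(payload)
            return conn.recv(4096)

Whether the nodes sit on the same LAN or communicate across a WAN, the pattern is the same: only the network address changes, not the message-passing logic.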

Distributed computing often tackles the most intense and complicated computational challenges, which is why it typically requires shared memory and multiple coordinated components. Further, distributed computing depends upon highly coordinated synchronization and hefty amounts of computing power so that the entire system can effectively process data, engage in file-sharing as needed and work toward a common goal. 

10 distributed computing use cases

The following examples show how distributed computing is being used across a range of industries and platforms: 

Communications

The communications industry routinely makes use of distributed computing. Telecommunication networks are examples of peer-to-peer networks, whether they take the form of telephone networks or cellular networks. Two major communication-based examples of distributed computing have been the internet and e-mail, both of which transformed modern life. 

Computing

Computing is being dominated by major revolutions in artificial intelligence (AI) and machine learning (ML). Both technologies are advancing rapidly, and each makes extensive use of distributed computing. The algorithms that empower AI and ML require large volumes of training data, in addition to a strong and steady supply of processing power. Distributed computing supplies both.

Data management

Distributed computing turns complex data management and data storage jobs into subtasks distributed across nodes, which are entities that function as either client or server—identifying needs and issuing requests or working to fulfill those needs. Database management is an area empowered by distributed computing, as are distributed databases, which perform faster by having tasks broken down into smaller actions. Distributed computing even includes the use of data centers as part of a distributed computing chain.
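
As a rough illustration of how a data-management job can be broken into subtasks across nodes, the sketch below partitions a set of records among three hypothetical nodes and then recombines their partial results; in a real system each node would be a separate machine rather than a name in a dictionary.

    # Illustrative sketch: splitting a data job into subtasks for different nodes.
    # Node names are placeholders; in practice each node is a separate machine.
    records = list(range(12))
    nodes = ["node-a", "node-b", "node-c"]

    # Assign each record to a node using simple round-robin partitioning.
    assignments = {node: [] for node in nodes}
    for i, record in enumerate(records):
        assignments[nodes[i % len(nodes)]].append(record)

    # Each node processes only its own slice of the data.
    partial_results = {node: sum(chunk) for node, chunk in assignments.items()}

    # The partial results are recombined into the final answer.
    total = sum(partial_results.values())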

Energy

The energy and environmental sectors are both impacted by distributed computing, which is assisting smart-grid technology in regulating the usage and optimization of energy consumption. Smart grids are also used to assemble environmental data from various input devices.

Finance

Distributed computing ensures that vast computational loads get shared evenly across multiple systems. In addition, workers in specific financial areas are already using distributed computing for tasks like risk assessment. Distributed computing helps financial institutions churn through huge calculations to better inform decision-making and craft financial strategies.

Manufacturing

Distributed computing uses its multiple resources to keep automation running efficiently at large-scale manufacturing facilities, and often serves in a load-balancing capacity. There’s even distributed manufacturing, which uses the distributed cloud model and applies it to the tools of production, which are spread out geographically. Manufacturing also deals with designing and creating Internet of Things (IoT) gadgets and tools that collect and transmit data.

Medical

Distributed computing helps enable many of modern medicine’s breakthrough technologies, including robotic surgeries that depend on vast amounts of data. By leveraging its capacity for highly detailed 3D graphics and video animations, distributed computing can demonstrate patient procedures and the pharmaceutical design of planned medications.

Retail

Inventory discrepancies can sometimes occur for retailers that operate brick-and-mortar locations in addition to providing online shopping alternatives. Distributed Order Management Systems (DOMS) enabled by distributed computing help keep ecommerce applications running smoothly, so modern retailers can keep pace with changing customer expectations.

Science

Distributed computing is being used in an expanding number of scientific pursuits, like training neural networks. Scientific computing also uses distributed computing’s enormous capability to solve massive scientific calculations, like those governing space flight. Video simulations powered by distributed computing can also make scientific projections easier to understand.

Videogames

Providers of massively multiplayer online games (MMOGs) make extensive use of distributed computing to craft and run their complicated, real-time game environments. A complex meshing of operating systems, networks and processors makes it possible for thousands of end-user players to share and participate in an enthralling gaming experience. 

What makes a distributed computing system?

Although there are no rules set in stone regarding what constitutes a distributed computing system, even the simplest form of distributed computing usually possesses at least three basic components:

  • Primary system controller: The primary system controller controls everything within a distributed system, monitoring and tracking everything that transpires within it. Its biggest job is managing and administering every server request that enters the system.
  • System datastore: The system datastore, usually located on the disk vault, is the system’s repository for all shared data. In “non-clustered” systems, the shared data might live on one machine or many, but all computers being used in the system need access to the datastore.
  • Database: Distributed computing systems warehouse all their data in relational databases. Once this is accomplished, the data is shared by groups of users. Relational databases put all workers on the same page instantly. (A minimal sketch of how these components fit together follows this list.)
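
As a rough illustration of how these components fit together, the sketch below pairs a hypothetical primary system controller with a shared datastore (the relational database layer is omitted for brevity); the class names are invented for this example and do not come from any particular product.

    # Hypothetical sketch of a primary system controller backed by a shared datastore.
    from dataclasses import dataclass, field

    @dataclass
    class SystemDatastore:
        """Repository for all shared data; every machine in the system can reach it."""
        records: dict = field(default_factory=dict)

        def put(self, key, value):
            self.records[key] = value

        def get(self, key):
            return self.records.get(key)

    class PrimaryController:
        """Manages and administers every request that enters the system."""
        def __init__(self, datastore: SystemDatastore):
            self.datastore = datastore

        def handle_request(self, action: str, key: str, value=None):
            if action == "write":
                self.datastore.put(key, value)
                return "stored"
            return self.datastore.get(key)

    controller = PrimaryController(SystemDatastore())
    controller.handle_request("write", "order-1", {"quantity": 2})
    print(controller.handle_request("read", "order-1"))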

Beyond those core components, each distributed computing system can be customized according to an organization’s needs. One of the great advantages of using a distributed computing system is that the system can be expanded by adding more machines, thereby increasing its scalability. The other significant advantage is increased redundancy, so if one machine in the network fails for whatever reason, the work of the system continues despite that point of failure.

The goal of distributed computing systems is to make that distributed computing network function as if it were a single system. This coordination is accomplished through an elaborate system of message-passing between various components.

Communication protocols govern that back-and-forth exchange of messages and create a relationship called “coupling” that exists between these components. This relationship is expressed in one of two forms:

  • Loose coupling: The connection between two loosely coupled components is weak enough that changes to one component do not impact the other.
  • Tight coupling: Tightly coupled components are highly synchronized and run in parallel; a process called “clustering” uses redundant components to ensure ongoing system viability. (A brief sketch contrasting the two forms of coupling follows this list.)
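
As a rough illustration, the few lines of Python below contrast the shape of the two dependencies; the names are hypothetical, and the example does not model a real clustering mechanism.

    # Loose coupling: the producer only knows about the queue, not about the consumer,
    # so either side can change without affecting the other.
    import queue

    message_bus = queue.Queue()

    def producer():
        message_bus.put({"event": "order_created", "id": 42})

    def consumer():
        event = message_bus.get()
        print("handling", event)

    # Tight coupling: the caller invokes its collaborator directly, so a change to
    # process_order forces a matching change in place_order.
    def process_order(order_id: int) -> str:
        return f"processed {order_id}"

    def place_order() -> str:
        return process_order(42)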

“Fault tolerance” is another key concept: a corrective process that allows an operating system to respond to and correct a failure in software or hardware while the system continues to operate.
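
As a simple illustration of the fault-tolerance idea, the sketch below retries a request against a list of redundant replicas so that the system keeps operating when one of them fails; the replica list and error handling are placeholders, not a production scheme.

    # Hypothetical failover sketch: try redundant replicas until one succeeds.
    def call_with_failover(replicas, request):
        """Return the first successful response; keep operating as long as any replica works."""
        last_error = None
        for replica in replicas:
            try:
                return replica(request)
            except ConnectionError as err:
                last_error = err  # record the failure and fall back to the next replica
        raise RuntimeError("all replicas failed") from last_error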

Distributed computing also deals with the positive and negative effects of “concurrency”: the simultaneous execution of multiple instruction sequences. Chief among its positive qualities is that concurrency enables shared resources and the parallel computing of multiple process threads. (Parallel computing should not be confused with parallel processing, in which runtime tasks are broken down into multiple smaller tasks.)
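
As a minimal sketch of that positive side, the example below runs several independent tasks at the same time using Python’s standard thread pool; fetch_record is a stand-in for real I/O-bound work.

    # Minimal concurrency sketch using the standard library thread pool.
    from concurrent.futures import ThreadPoolExecutor

    def fetch_record(record_id: int) -> str:
        # Stand-in for an I/O-bound operation such as a network or disk read.
        return f"record-{record_id}"

    with ThreadPoolExecutor(max_workers=4) as pool:
        # The eight calls execute concurrently across four worker threads.
        results = list(pool.map(fetch_record, range(8)))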

The negatives associated with concurrency include increased latency and even traffic bottlenecks, where the amount of data being transferred exceeds the normal recommended bandwidth.

Distributed computing system architectures

Distributed computing types are typically classified according to the distributed computing architecture each uses:

  • Client-server system: Uses a client-server architecture that can span more than one system. A client directs input to the server as a request (usually either a command for a specific task or a request for more computing resources). The server then works to fulfill the request and reports back on the action taken. (A minimal client-server sketch follows this list.)
  • Peer system: This relies upon peer architecture and is also known as a “peer-to-peer” system. Peer systems use nodes, which function as either client or server—identifying needs and issuing requests or working to fulfill those needs. As the name implies, there’s no hierarchy in peer systems, so programs operating in peer systems can communicate freely with each other and transfer data via peer networks.
  • Middleware: The “middleman” that operates between two distinct applications. Middleware is itself an application that resides between two apps and supplies services to both. Middleware also has an interpretive aspect: it functions as a translator between apps running on different systems, allowing them to freely exchange data.
  • Three-tier system: So named because of the number of layers used to represent a program’s functionality. As opposed to typical client-server architecture in which data is placed within the client system, the three-tier system instead keeps data stored in its middle layer, which is called the Data Layer. Three-tier systems are often used in web applications.
  • N-tier system: Sometimes referred to as multitiered distributed systems, N-tier systems are unlimited in their capacity for network functions, which they route to other apps for processing. The architecture of N-tier systems is like that found in three-tier systems. N-tier systems are often used as the architectural basis for numerous web services and data systems. 
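
To ground the client-server pattern in code, here is a minimal sketch using only Python’s standard library: the server fulfills a request and reports back, while the client directs its request to the server and reads the reply. The port and URL path are placeholders for illustration.

    # Minimal client-server sketch (Python standard library; port and path are placeholders).
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import threading
    import urllib.request

    class TaskHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The server fulfills the request and reports back on the action taken.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"task completed")

    server = HTTPServer(("localhost", 8080), TaskHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The client directs its input to the server as a request.
    with urllib.request.urlopen("http://localhost:8080/run-task") as reply:
        print(reply.read().decode())

    server.shutdown()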

While these are the main types of distributed computing architecture, there are other distributed computing paradigms that deserve mention:

  • Blockchain: Blockchain is a distributed database or ledger that’s both replicated and synchronized on a network’s various computers. Blockchain helps ensure redundancy by issuing the source ledger to all computers in the chain. (A rough sketch of this replicated-ledger idea follows this list.)
  • Grid computing: Grid computing is a type of distributed computing that deals with non-interactive workloads, usually involving a combination of grid frameworks and middleware software. The scalable grid accessed through the user interface functions like a huge file system.
  • Heterogeneous computing: Heterogeneous computing is a form of distributed computing in which a single system contains multiple kinds of computing subsystems. The processors at work in heterogeneous computing might be working on different tasks, but all of them work in parallel to accelerate performance and minimize task-processing times.
  • Microservices: Microservices are a form of distributed computing in which applications are broken down into much smaller components, often called “services.” These services communicate through application programming interfaces (APIs), which enable interaction between components. 
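
As a rough sketch of the replicated-ledger idea behind blockchain, the example below chains blocks together by hash so that any computer holding a replica can verify the ledger independently; it illustrates the concept only and is not a real blockchain implementation.

    # Illustrative hash-chained ledger (not a real blockchain implementation).
    import hashlib
    import json

    def block_hash(data, previous_hash: str) -> str:
        content = json.dumps({"data": data, "previous_hash": previous_hash}, sort_keys=True)
        return hashlib.sha256(content.encode()).hexdigest()

    def make_block(data, previous_hash: str) -> dict:
        # Each block records its data, the hash of the previous block, and its own hash.
        return {"data": data, "previous_hash": previous_hash,
                "hash": block_hash(data, previous_hash)}

    ledger = [make_block({"genesis": True}, previous_hash="0")]
    ledger.append(make_block({"transfer": 10}, previous_hash=ledger[-1]["hash"]))

    # Any computer holding a replica can verify the chain by recomputing the hashes.
    for previous, current in zip(ledger, ledger[1:]):
        assert current["previous_hash"] == previous["hash"]
        assert current["hash"] == block_hash(current["data"], current["previous_hash"])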

Get started

In our quick tour of distributed computing, we’ve identified what distributed computing is, what goes into making distributed computing systems, and which types of architectures are associated with distributed computing systems. In addition, we’ve learned about 10 industries that are smartly crafting their future now by making special use of distributed computing systems.

As with distributed computing, IBM Satellite products give you the tools to deploy and run apps wherever you want, whether that means on premises, at the edge or in public cloud environments.

Consume a common set of cloud services that includes toolchains, databases and AI. The IBM Cloud Satellite-managed distributed cloud solution delivers cloud services, APIs, access policies, security controls and compliance.

Explore IBM Satellite products