When organizations move to microservices, they need to support dozens or hundreds of individual applications. Managing those endpoints separately means supporting a large number of virtual machines (VMs) and scaling them to meet demand. Cluster software such as Kubernetes can create pods and scale them up, but Kubernetes does not provide routing, traffic rules, or strong monitoring or debugging tools.
Enter the service mesh.
As the number of services increases, the number of potential communication paths grows much faster: n services have n(n-1) directed paths, so two services have only two, three services have six, and 10 services have 90. A service mesh provides a single place to configure those paths by defining a policy for service-to-service communication.
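To make that arithmetic concrete, here is a small Python sketch that counts the directed paths among a set of services; the service names are purely illustrative.

```python
from itertools import permutations

def count_paths(services):
    """Count directed communication paths: every ordered pair of distinct services."""
    return len(list(permutations(services, 2)))  # n * (n - 1)

# Hypothetical service names, used only for illustration.
for names in (["cart", "billing"],
              ["cart", "billing", "inventory"],
              [f"svc-{i}" for i in range(10)]):
    print(f"{len(names)} services -> {count_paths(names)} paths")
# 2 services -> 2 paths
# 3 services -> 6 paths
# 10 services -> 90 paths
```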
A service mesh instruments the services and directs communication traffic according to a predefined configuration. Instead of configuring each running container, or writing code to do so, an administrator provides a configuration to the service mesh and lets it do that work. Previously, this logic had to live in the web servers themselves or in the service-to-service communication code.
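As one illustration of what "providing configuration" can look like, the sketch below builds an Istio-style traffic-splitting rule as plain Python data. The mesh (Istio), the host name, the subset labels, and the weights are all assumptions chosen for the example; a real deployment would apply the equivalent YAML manifest rather than Python.

```python
import yaml  # PyYAML, assumed to be installed

# Minimal sketch: an Istio-style VirtualService expressed as a Python dict.
# Host names, subsets, and weights are hypothetical.
virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "billing"},
    "spec": {
        "hosts": ["billing"],
        "http": [
            {
                "route": [
                    # Send 90% of traffic to the stable version of the service...
                    {"destination": {"host": "billing", "subset": "v1"}, "weight": 90},
                    # ...and 10% to a canary, without touching application code.
                    {"destination": {"host": "billing", "subset": "v2"}, "weight": 10},
                ]
            }
        ],
    },
}

print(yaml.safe_dump(virtual_service, sort_keys=False))
```

The point of the example is the division of labor: the rule describes *what* should happen to traffic, and the mesh's proxies carry it out at runtime.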
The most common way to do this in a cluster is to use the sidecar pattern. A sidecar is an additional container, running inside the same pod, that routes and observes communication traffic between services and containers.
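To show how a sidecar sits alongside the application, here is a minimal sketch of a pod manifest expressed as a Python dict. The image names, tags, and ports are illustrative assumptions; in practice, meshes such as Istio or Linkerd usually inject the proxy container automatically rather than having an administrator add it by hand.

```python
# Minimal sketch of the sidecar pattern: one pod, two containers.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "billing", "labels": {"app": "billing"}},
    "spec": {
        "containers": [
            {
                # The application container only knows about its own port.
                "name": "app",
                "image": "example.com/billing:1.0",   # hypothetical image
                "ports": [{"containerPort": 8080}],
            },
            {
                # The sidecar proxy intercepts traffic in and out of the pod,
                # applying routing rules and emitting metrics for observability.
                "name": "proxy",
                "image": "envoyproxy/envoy:v1.29.0",  # illustrative proxy image
                "ports": [{"containerPort": 15001}],
            },
        ]
    },
}

for container in pod["spec"]["containers"]:
    print(container["name"], "->", container["image"])
```

Because the proxy shares the pod's network namespace with the application, it can route and observe every request without the application being aware of it.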