December 18, 2020 By Shikha Srivastava and Kirti Apte
11 min read

Software application projects can sometimes end with monolithic user interfaces, even when using a microservices architecture.

This monolithic UI can add unnecessary complexity and often results in scaling difficulties, performance issues, and other problems as frontend developers try to keep pace with changes to backend microservices. To help prevent a project from ending up with a monolithic UI, this blog post describes how to apply the twelve-factor app methodology to the creation of UI microservices.

A microservices architecture often starts with a focus on only the creation of the backend microservices. This approach can lead to a monolithic UI that combines and surfaces different functions and data from modular backend microservices. These large and complex UIs go against the fundamental concept of a microservices-based architecture, which is to enable multiple microservices to handle their own functions and tasks across both the backend and frontend. With our own application development, we work towards bringing modularity to both the backend and frontend of our applications. As we develop our UI microservices, we pay close attention to the twelve-factor app methodology as it applies to the Kubernetes model.

The twelve-factor app methodology provides well-defined guidelines for developing microservices and is a commonly used pattern for running and scaling them. This blog focuses on applying the 12 factors to UI microservices development, along with additional key factors that are specific to UI microservices and are supported by the Kubernetes model for container orchestration. The details for applying these factors are broken down into three overall categories for Kubernetes-based UI microservices:

  • Code factors
  • Deploy factors
  • Operate factors

Code factors

Factor I: Codebase

“One codebase tracked in revision control, many deploys.”

Typically, an application is composed of multiple components, with each component supporting backend and UI functions. In a microservices architecture, each component — including any composite UI microservice — should be developed independently of other microservices by dedicated development teams. A composite UI microservice aggregates the UI from other microservices by following a pattern where the UI microservices are decoupled from the composite UI, while still providing a single-pane-of-glass experience. When you develop UI microservices, the codebase should be tracked in revision control and follow the single-service, many-deploys pattern.

Consider the following core principles when you are designing your UI microservices codebase:

  1. Single responsibility: Each UI microservice has only a single purpose. For example, your application can have separate inventory UI, governance and risk UI, and cost management UI microservices.
  2. High cohesion: Each microservice must include all functions that are needed to serve its single purpose.
  3. Loose coupling: Each UI microservice must have no direct coupling with the composite UI or any other UI microservice. 

The following diagram shows multiple separate UI microservices that plug into a main composite UI microservice at runtime to give a consistent experience:

Although each UI microservice is independent and can adopt its own choice of technology, designing microservices to use the same technology allows each microservice to share common components and drive consistency. For example, the composite UI and UI microservices in the preceding diagram can adopt different technologies, such as Node.js, React, and JavaScript. These UI microservices can be created in a source control repository, such as a Git repository. Specific versions of the containerized UI images can then be stored in Docker Hub.

With Docker Hub versions available, you can reference a specific image version in the container spec for pods and deployments. With this approach, you can have different versions of a microservice running in your development, staging, and production environments. Applications in these environments can then behave differently based on the configurations for each microservice version:
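As an illustrative sketch, a deployment in a staging environment might pin a specific image version from Docker Hub like this (the microservice name, namespace, image, and tag are placeholders, not values from our application):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-ui
  namespace: staging
spec:
  replicas: 2
  selector:
    matchLabels:
      app: inventory-ui
  template:
    metadata:
      labels:
        app: inventory-ui
    spec:
      containers:
        - name: inventory-ui
          # Pin an explicit image tag so each environment runs a known, reproducible version
          image: docker.io/example/inventory-ui:1.2.0
          ports:
            - containerPort: 3000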

Factor V: Build, release, run

“Strictly separate build and run stages.”

Decoupled UI microservices provide a strict separation of the build, release, and run phases. Each microservice team is responsible for committing code and building Docker images through the build pipeline. The Node package manager (npm) can be used to install dependent packages for any Node-based UI microservice, and the resulting Docker image can be published to an artifact repository. You can then use Helm, the Kubernetes package manager, or Red Hat OpenShift Operators to package your application. These releases can be tagged and used in different development, staging, and production environments:
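For instance, a Helm chart can carry the release tag for a UI microservice so that the same build artifact is promoted unchanged through development, staging, and production. The following Chart.yaml is only a sketch; the chart name and versions are placeholders:

# Chart.yaml for a hypothetical UI microservice chart
apiVersion: v2
name: inventory-ui
description: Helm chart for the inventory UI microservice
# Chart (release) version, tagged by the build pipeline
version: 1.2.0
# Image tag produced by the build stage
appVersion: "1.2.0"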

Factor X: Dev/prod parity

“Keep development, staging, and production as similar as possible.”

UI microservices can have dependencies on data from different backend microservices. These UI microservices should be designed to be deployed with the same architecture in any environment for consistency. Essentially, UI microservices should be able to handle various error conditions, such as backend API errors and application domain specific errors. Fault tolerance — such as when data or dependent services are unavailable — should be built into each UI microservice. This fault tolerance should include the composite UI, which should be tolerant towards any contributing UI microservice being unavailable. 

Typically, UI microservices are developed and tested locally, which is not a production-ready approach. CI/CD processes need to run integration builds with key automated tests to catch integration issues as early as possible. For example, UI microservices can run Selenium-based functional tests with pull request builds and long-running Nightwatch-based tests that run once a day to simulate a production-like data workload. The following screenshot shows a Selenium test output that is integrated with a Travis CI build:
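As a minimal sketch, a Travis CI configuration for a Node-based UI microservice could install dependencies and run both unit and functional tests on every build; the npm script names here are assumptions that would map to your own test scripts:

# .travis.yml (sketch)
language: node_js
node_js:
  - "14"
install:
  - npm ci
script:
  # Unit tests run on every pull request build
  - npm test
  # Selenium-based functional tests (hypothetical script name)
  - npm run test:functional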

Deploy factors

Factor II: Dependencies

“Explicitly declare and isolate dependencies.”

UI microservices should be stateless and clearly declare all dependencies. Isolate the header and any authentication or authorization functions that are required for UI microservices into separate services. 

With Kubernetes, you can use liveness and readiness probes to clearly declare and check for dependent services. The following diagram shows UI microservices that use readiness probes to check for required services, such as a header service, authorization service, and backing API services. Liveness probes check whether the UI service is healthy. API services use readiness probes to check whether other data services or provider services are up and available. The composite UI checks whether UI services are discovered, and if any services are not discovered, the UI menu for those missing services does not display:

The following screenshot shows a liveness and readiness probe YAML definition:
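A minimal sketch of such a definition, assuming the UI container serves HTTP on port 3000 and exposes health endpoints at /livez and /readyz (all names, paths, and timings are illustrative):

containers:
  - name: ui-microservice1
    image: docker.io/example/ui-microservice1:1.0.0
    ports:
      - containerPort: 3000
    # Liveness probe: restart the container if the UI process stops responding
    livenessProbe:
      httpGet:
        path: /livez
        port: 3000
      initialDelaySeconds: 30
      periodSeconds: 10
    # Readiness probe: route traffic only after dependent services are reachable
    readinessProbe:
      httpGet:
        path: /readyz
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 10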

Factor III: Config

“Store config in the environment.”

UI microservices typically connect to backing API services. The configuration for connecting to these backing services should be stored in a ConfigMap or in Secrets so that UI microservices remain independent of their configuration. These configurations can then be moved to different environments without requiring modifications to the source code, which is a simple but very effective approach.
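For example, the endpoint of a backing API service can be kept in a ConfigMap and injected into the UI container as an environment variable. This is only a sketch; the ConfigMap name, key, and URL are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: inventory-ui-config
data:
  INVENTORY_API_URL: "https://inventory-api:8443"
---
# In the deployment's container spec, reference the ConfigMap instead of hard-coding the URL
env:
  - name: INVENTORY_API_URL
    valueFrom:
      configMapKeyRef:
        name: inventory-ui-config
        key: INVENTORY_API_URL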

Factor VI: Process

“Execute the app as one or more stateless processes.”

UI microservices should be stateless by design. This statelessness enables scaling and failure recovery features to be easily implemented with containerized UI microservices that leverage Kubernetes container orchestration. 

For example, if you have a UI microservice named uiMicroservice1, you can update the microservice deployment within the uiMicroservice1 namespace to use three replicas through the following kubectl command:

kubectl patch deployment uiMicroservice1 -n uiMicroservice1 --type json -p='[{"op": "replace", "path": "/spec/replicas", "value": 3}]'

Then, if you run a kubectl get pods command, the output shows three pods similar to the following:

NAME                             READY   STATUS    RESTARTS   AGE
uiMicroservice1-28633765-670qr   1/1     Running   0          23s
uiMicroservice1-28633765-j5qs3   1/1     Running   0          23s
uiMicroservice1-28633765-huio3   1/1     Running   0          23s

Factor IV: Backing services

“Treat backing services as attached resources.”

For example, a composite UI microservice should treat modular UI microservices as backing services. The supporting modular UI microservices should be accessed as services and specified in the configuration so that a supporting modular microservice can be changed without affecting the composite UI and other modular UI microservices. The modular UI microservices can also have an API as a backing service. Usually, an API backing service collects data from different providers — such as data sources — and then normalizes and transforms the data into the format that the UI needs.
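For instance, each modular UI microservice can be exposed as a Kubernetes Service, and the composite UI only needs the service name from its configuration to attach to it. The names and port below are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: inventory-ui
spec:
  selector:
    app: inventory-ui
  ports:
    # The composite UI reaches this backing service at inventory-ui:3000
    - port: 3000
      targetPort: 3000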

Factor VII: Port binding

“Export services via port binding.”

Each UI microservice and all dependent backend services need to be exposed through a well-defined port. You can use Ingress to control external access and expose services externally.

For example, the following diagram shows UI microservices that use Ingress to control access and expose services externally. These UI microservices can access dependent API services using well-defined service ports. The composite UI microservice constructs the main navigation menu from the different endpoints to provide a consistent single-pane-of-glass experience to users.

When you are designing your port bindings, ensure that your routes do not conflict. As a tip, you can run different instances of your service in different Kubernetes namespaces:
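As a sketch, an Ingress can route distinct, non-conflicting paths to the composite UI and a modular UI microservice; the host, paths, and service names are assumptions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ui-ingress
spec:
  rules:
    - host: console.example.com
      http:
        paths:
          # Composite UI serves the main navigation at the root path
          - path: /
            pathType: Prefix
            backend:
              service:
                name: composite-ui
                port:
                  number: 3000
          # Modular UI microservice is mounted under its own, non-conflicting path
          - path: /inventory
            pathType: Prefix
            backend:
              service:
                name: inventory-ui
                port:
                  number: 3000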

Operate factors

Factor VIII: Concurrency

“Scale out via the process model.”

As much as possible, UI microservices should remain stateless. This statelessness allows the UI to scale out horizontally by simply running more identical instances.
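Because the processes are stateless, scaling out can also be automated. A minimal sketch, assuming a Deployment named ui-microservice1 and an illustrative CPU threshold:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ui-microservice1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ui-microservice1
  minReplicas: 2
  maxReplicas: 6
  # Add or remove identical UI pods based on average CPU utilization
  targetCPUUtilizationPercentage: 70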

Factor IX: Disposability

“Maximize robustness with fast startup and graceful shutdown.”

For UI microservices, the idea that processes should be disposable means that when an application stops abruptly, the user should not be affected. You can achieve this result by using Kubernetes-provided ReplicaSets. With ReplicaSets, you can control multiple sets of stateless UI microservices, and Kubernetes will maintain a level of availability for the microservices.
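Fast startup and graceful shutdown can also be reinforced in the pod spec itself. The following fragment is a sketch that assumes the UI server handles SIGTERM and can drain in-flight requests within 30 seconds:

spec:
  # Give the UI process time to drain in-flight requests before it is killed
  terminationGracePeriodSeconds: 30
  containers:
    - name: ui-microservice1
      image: docker.io/example/ui-microservice1:1.0.0
      lifecycle:
        preStop:
          exec:
            # Brief pause so the endpoint is removed from the Service before shutdown begins
            command: ["sleep", "5"]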

Factor XI: Logs

“Treat logs as event streams.”

UI microservices must report health and diagnostic information that provides insights into various events so that problems can be detected and diagnosed. This information helps to correlate events between independent microservices. Establish standard practices across your UI and other microservices to achieve a single logging format and to define how each service logs health and diagnostic information.

Factor XII: Admin Tasks

“Run admin/management tasks as one-off processes.”

Essentially, admin tasks should be isolated. This goal for UI microservices is no different than it is for any other microservice. 

Beyond the 12 factors

In addition to the preceding 12 factors, we pay close attention to the following additional factors when developing production-grade enterprise applications. Adhering to these factors can benefit your own application development. For more information on these factors, see “7 Missing Factors from 12-Factor Applications.”

Factor XIII: Observable

“Apps should provide visibility about current health and metrics.”

Web interfaces need to be resilient and available 24/7 to meet business demand. When moving from a monolithic UI to a modular, microservices-based UI architecture, the number of microservices grows and the communication between them becomes more complex. Observability for microservices is critical for gaining visibility into communication failures and reacting to failures quickly.

As you design your UI microservices, use the following methods to help you make your microservices observable:

  • Kubernetes liveness and readiness probes: These probes can be used to detect whether a service is live and ready to receive traffic. Refer to Factor II: Dependencies to learn more about liveness and readiness probes.
  • Custom metrics: Custom metrics like API response times, CPU and memory utilization, and API performance metrics are important for UI microservices. UI microservices should define the essential metrics to observe, such as dependent API response time or dependent UI microservice response time. A monitoring system like Prometheus can be set up to scrape the metrics endpoint (a sketch follows this list). Production environments should always be set up with observability tools. Leverage the techniques that are available within your production environment for the collection and visualization of key metrics for dependencies. Ensure that thresholds and alerts on the key metrics are based on the overall service level objective for your application.
  • Synthetic monitoring: Set up synthetic monitoring for all key APIs and URLs. Synthetic monitoring allows you to continuously test your application’s health and performance. You can set up synthetic tests from a different location to monitor the response time of key APIs and transactions. For more information about the synthetic monitoring that we use in the IBM Cloud Pak® for Multicloud Management, see Synthetics PoP.
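As referenced in the custom metrics item, one common way to let Prometheus discover a metrics endpoint is through pod annotations. This sketch assumes the UI container exposes metrics on /metrics over port 3000 and that your Prometheus scrape configuration honors these conventional (not built-in) annotations:

template:
  metadata:
    labels:
      app: ui-microservice1
    annotations:
      # Conventional annotations read by many Prometheus scrape configurations
      prometheus.io/scrape: "true"
      prometheus.io/path: "/metrics"
      prometheus.io/port: "3000"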

Factor XIV: Schedulable

“Applications should provide guidance on expected resource constraints.”

Like any other microservice, UI microservices should provide guidance on expected resource constraints for CPU and memory usage to ensure that Kubernetes reserves the required resources for the microservices. You can define requests and limits for CPU and memory in the deployment config. For example, the following screenshot shows how to define CPU and memory requests and limits for a UI microservice uiMicroservice1:
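A minimal sketch of such a container spec, with illustrative values (Kubernetes object and container names must be lowercase, so the sketch uses ui-microservice1):

containers:
  - name: ui-microservice1
    image: docker.io/example/ui-microservice1:1.0.0
    resources:
      # Requests are reserved at scheduling time; limits cap what the container can consume
      requests:
        cpu: "200m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"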

Factor XV: Upgradable 

“Apps must upgrade data formats from previous generations.”

Incremental upgrades of UI microservices are frequently required to release features on shorter delivery cycles, and these upgrades should not disrupt service. Any dependent API service must remain backward compatible so that upgrades do not introduce breaking changes. We use Operators to deploy our microservices in Kubernetes and leverage the Operator pattern to manage upgrades.

The following diagram shows how you can leverage the Operator pattern to independently manage UI microservice upgrades. In this diagram, the UI and API Operator and product images are pushed to a Red Hat Quay.io repository.  

Application Operators are deployed in namespace1 and packaged as the Catalog Source. The Operator Source provides the endpoint for receiving updates from the Quay.io registry. When the Catalog Source receives an update about version v2 of the microservice, the subscription applies the update based on its approval preference, which can be automatic or manual:
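For illustration, the subscription that controls how a UI microservice Operator picks up new versions might look like the following sketch; the names, channel, and catalog source are placeholders:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ui-operator
  namespace: namespace1
spec:
  channel: stable
  name: ui-operator
  source: example-catalog-source
  sourceNamespace: namespace1
  # Automatic applies new versions (such as v2) as soon as the catalog publishes them;
  # Manual requires an administrator to approve the install plan
  installPlanApproval: Automatic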

Factor XVI: Least privilege

“Containers should be running with the least privilege.”

Incorrect or excessive permissions that are assigned to pods and containers pose a security threat and can lead to compromised pods. UI microservices need to access API services, Ingress services, and other essential services. When you design your microservices, consider the following areas when you are assigning privileges to pods and containers:

  • Role-based access control (RBAC) policies: RBAC rules need to maintain the least-privilege principle. As you develop your services, continuously review and refine the RBAC rules for your services. The following diagram shows a UI Microservice 1 pod that accesses an API pod to use its Get and List APIs. The microservice obtains this access through the creation of a role and a role binding that are required by the API pod:
  • Non-root user: Run UI and API containers as a non-root user. The following screenshot shows a YAML definition for running a container as a non-root user:
  • Network policies: Use network policies to control service-to-service communication. The following diagram shows how to enforce a network policy so that a user can connect to the composite UI and other UI microservices, but cannot connect directly to the database pod (a combined sketch of these three controls follows this list):
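A combined sketch of these three controls, assuming a UI microservice pod in namespace1 that only needs read access to the API pod's service resources (all names, labels, and resources are illustrative):

# RBAC: allow the UI microservice's service account to get and list the API service resources
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ui-microservice1-reader
  namespace: namespace1
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ui-microservice1-reader-binding
  namespace: namespace1
subjects:
  - kind: ServiceAccount
    name: ui-microservice1
    namespace: namespace1
roleRef:
  kind: Role
  name: ui-microservice1-reader
  apiGroup: rbac.authorization.k8s.io
---
# Non-root user: run the UI container without root privileges
apiVersion: v1
kind: Pod
metadata:
  name: ui-microservice1
  namespace: namespace1
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
  containers:
    - name: ui-microservice1
      image: docker.io/example/ui-microservice1:1.0.0
      securityContext:
        allowPrivilegeEscalation: false
---
# Network policy: only API pods may reach the database pod; users and UI pods cannot connect to it directly
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only
  namespace: namespace1
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-service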

Factor XVII: Auditable

“Know what, when, who, and where for all critical operations.”

Well-designed UI microservices are stateless and typically call API backing services to get data. These microservices should have clear audit trails of who did what, which should be tracked through API services. 

Factor XVIII: Securable (identity, network, scope, certificates)

“Protect the app and resources from outsiders.”

As a best practice, you should consider incorporating the following key security factors that UI microservices might need to provide:

  • Authentication: Typically, authentication is a dedicated service that UI microservices connect to for checking the identity of users.
  • Authorization: Typically, authorization is a dedicated microservice that UI microservices connect to for enforcing role-based access control on different capabilities that are exposed in the UI.
  • Certificate management: UI microservices can use a certificate manager that is deployed in the cluster to create, store, and renew digital certificates.
  • Data protection: Establish security measures for protecting data in transit and at rest.
  • Vulnerability scans: You can include vulnerability scan automation in your build pipeline to detect any vulnerabilities in the images.
  • Mutation scans: You can include mutation scan automation in your build pipeline to detect any mutations in the image.
  • Source code scans: Static and dynamic source code scans are important for UI microservices to detect security flaws in the source code and when interacting with other services.
  • Accessibility scans: UI microservices need to follow accessibility standards, which can be tracked through checklists. For example, all microservices that are published by IBM adhere to the standards included in the IBM Accessibility checklist.

Conclusion

We hope you found this topic interesting. If you are in the middle of containerizing an application UI to deploy in Kubernetes, note the factors that you have already applied and adopt any factors that you are missing. Share your perspective with others.

Thanks for reading. 


Thanks to Robert Wellon for reviewing this article.

