April 27, 2020 By Sai Vennam 4 min read

Red Hat OpenShift is the enterprise Kubernetes platform, and with the latest version, the platform has undergone significant improvements to developer experience, automation, and platform management.

In this video, I’m going to walk you through the changes in this new version, explain how each of these new features works, and demonstrate how these updates improve the OpenShift 4 console. I hope you enjoy!

Video Transcript

What’s new in OpenShift 4.3?

Red Hat OpenShift is the enterprise Kubernetes platform, and with the latest version—OpenShift 4—the platform has undergone significant improvements to developer experience, automation, and the management of the platform itself.

But what exactly changed and what do you need to know about OpenShift 4? Let’s get started.

So, OpenShift is Kubernetes at the core, and with OpenShift 4, the platform is now driven by Operators. Yes, this includes the services that support OpenShift itself, as well as the app services deployed by users like you. This is significant, so we’ll start by covering Operators.

Next, we’ll jump into one of the first things users will notice in OpenShift 4—that is an improved developer experience. And this comes with significant updates to the console. 

And then, finally, we’ll dive into some of the community-driven projects that OpenShift has adopted into supported solutions, things like OpenShift Service Mesh and OpenShift Pipelines. There are actually more, but we’ll touch on these two today.

Operators

Starting with Operators—essentially, they allow you to automate the lifecycle of containers. Let’s say I’m deploying a simple frontend and backend application. Once I deploy them into my cluster, I have to manage the automation and configuration of each application individually.

But, with an Operator, I can take a different approach. By installing an Operator into a cluster with OLM (or Operator Lifecycle Manager), I can enable new CRDs (or Custom Resource Definitions).

These CRDs allow me to manage my application using custom config files tailored to my application. In addition, any automation I need can be built into the Operator itself. Essentially, the Kubernetes API has been extended with new custom resources tailored to the application resources I regularly work with.
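To make that concrete, here’s a minimal sketch of what a custom resource enabled by an Operator might look like. The API group, kind, and fields below are hypothetical placeholders, not any specific Operator’s schema:

    # Hypothetical custom resource enabled by an installed Operator.
    # The API group, kind, and fields are illustrative placeholders.
    apiVersion: example.com/v1alpha1
    kind: AppBackend
    metadata:
      name: my-backend
    spec:
      replicas: 3          # the Operator reconciles the workload to this size
      version: "1.2"       # the Operator can automate upgrades between versions
      backup:
        schedule: "0 2 * * *"   # day-2 automation built into the Operator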

In OpenShift 4, the services that make up OpenShift itself are actually managed by Operators. This means we get to take advantage of that same framework for installation and upgrades of OpenShift itself.

Check out the Operator SDK to create your own Operators, or use the embedded OperatorHub in OpenShift to get started quickly with existing solutions.
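Under the hood, OperatorHub installs are driven by OLM resources. As a sketch, a Subscription like the one below (the Operator name, channel, and catalog here are just examples) tells OLM which Operator to install and keep up to date:

    # Sketch of an OLM Subscription; OLM installs the named Operator from a
    # catalog source and tracks updates on the chosen channel.
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: etcd
      namespace: openshift-operators
    spec:
      name: etcd                          # Operator package to install
      channel: alpha                      # update channel to follow
      source: community-operators         # catalog providing the Operator
      sourceNamespace: openshift-marketplace
      installPlanApproval: Automatic      # apply upgrades without manual approval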

Improved console experience

Next, let’s talk about one of the first things you’ll notice in the platform, and that is an improved developer experience starting from the console.

The main thing you’ll notice is a different view for administrators and developers. There are new dashboard capabilities for streamlined deployment of applications, whether you start with a Git repo, a container image, or deployment YAML.

In addition, you’ll get better observability into the platform. For example, there’s an events view which tracks everything happening in your cluster.

There’s improved administration of the cluster itself, with a new user-management section as well.

Cloud native development

OpenShift Pipelines with Tekton

Lastly, let’s close with some of the community-driven projects that OpenShift is supporting. One of them is OpenShift Pipelines with Tekton.

Tekton is a cloud-native way to declare CI/CD pipelines, and it’s based entirely on Kubernetes. It starts with defining the tasks that make up a CI/CD flow, and those tasks actually run as pods in your cluster. Together, the tasks make up a pipeline that can deploy applications into your cluster.
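As a rough sketch of what those definitions look like (the task name, image, and script are placeholders, and the API version depends on your Tekton release; older releases use v1alpha1), a minimal Task and Pipeline might be:

    # A minimal Tekton Task: each step runs as a container in a pod.
    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: echo-build
    spec:
      steps:
        - name: build
          image: registry.access.redhat.com/ubi8/ubi-minimal
          script: |
            echo "building the app..."
    ---
    # A Pipeline strings Tasks together into a CI/CD flow.
    apiVersion: tekton.dev/v1beta1
    kind: Pipeline
    metadata:
      name: build-and-deploy
    spec:
      tasks:
        - name: build
          taskRef:
            name: echo-build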

In OpenShift, there’s UI integration with Tekton, so OpenShift Pipelines lets you manage your CI/CD right in the dashboard.

OpenShift Service Mesh

The other community-driven project I want to talk about is OpenShift Service Mesh. This is based on Istio. 

Imagine you have a number of services that are dependent on one another. A number of concerns arise in the interaction between these services. How do you actually manage these interdependent complexities?

Well, instead of managing them in the app itself, you can take advantage of the Istio control plane, which uses “sidecars” to control how these microservices connect with each other, enforce policies, and even observe how they behave.

That way, the capabilities rest on the control plane rather than the apps themselves.
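As a sketch of what that control looks like, here’s a hypothetical Istio traffic rule that shifts 10 percent of traffic to a new version of a “backend” service. The service and subset names are placeholders, and OpenShift Service Mesh layers its own Operator-managed control plane on top of these Istio APIs:

    # Define two versions (subsets) of the backend service.
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: backend
    spec:
      host: backend
      subsets:
        - name: v1
          labels:
            version: v1
        - name: v2
          labels:
            version: v2
    ---
    # Route 90% of traffic to v1 and 10% to v2, with no app code changes.
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: backend
    spec:
      hosts:
        - backend
      http:
        - route:
            - destination:
                host: backend
                subset: v1
              weight: 90
            - destination:
                host: backend
                subset: v2
              weight: 10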
