October 17, 2019 By Sandip A Amin 7 min read

Introducing the Portworx container-native storage and data management solution for IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud

Today, we are announcing support for the Portworx software-defined storage solution for IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud. Portworx can now be provisioned in Kubernetes or OpenShift clusters via the IBM Cloud Catalog.

This integration allows you to use the Portworx solution with an IBM Cloud Pay-As-You-Go or Subscription account; charges accrue on an hourly basis, and integrated billing for Portworx is supported.

Features

The Portworx container-native storage solution provides the following capabilities for stateful workloads:

  • Container-granular volumes that give you the ability to provision volumes as small as 1 GB and dynamically expand them to large multi-terabyte volumes as your workload needs grow, all without application disruption.
  • Declaratively specify the I/O profile of your application by leveraging one of the application-aware storage classes that are predefined by Portworx (see the sample storage class after this list).
  • Block and shared volume support.
  • Globally namespaced volumes that are available across a multizone Kubernetes or OpenShift cluster.
  • Replicated and synchronous volume support.
  • Volume encryption via both IBM Key Protect and other key management systems.
  • Local volume snapshots and volume snapshots in IBM Cloud Object Storage.
  • Role-based access control.
  • Application crash-consistent (multi-container) snapshots.
  • Support for both hyper-converged and storage-rich deployment topologies.
  • Ability to perform multi-cluster and multicloud application migrations for Kubernetes resources and data.
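
For example, the sample storage class below declares a database I/O profile, a replication factor of three, and volume encryption through the Portworx in-tree provisioner. This is a minimal sketch; the class name is illustrative, and you can adjust the parameters for your workload:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-db-replicated-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"            # keep three synchronous replicas of each volume
  io_profile: "db"     # tune the volume for database I/O patterns
  priority_io: "high"  # prefer the fastest available storage media
  secure: "true"       # encrypt volumes created from this class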

For more information, see the Portworx feature list.

What are the available storage deployment topologies in IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud?

Portworx aggregates and tiers physical storage into a virtual storage pool. It does this by automatically discovering available raw block storage on your worker nodes. To create the storage cluster, Portworx requires a minimum of three physical worker nodes with additional block storage.

The easiest way to add worker nodes with additional block storage is to use SDS worker nodes that already come with extra local SSD disks. Alternatively, you can attach raw, unformatted block storage to non-SDS worker nodes.
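
For example, you could add a pool of SDS worker nodes with the IBM Cloud CLI. This is only a sketch: the pool name and zone are illustrative, VLAN options are omitted, and the exact flags and flavor names should be verified with ibmcloud ks worker-pool create classic --help for your CLI version:

# Add a worker pool that uses an SDS flavor with extra local SSDs (illustrative values)
ibmcloud ks worker-pool create classic --cluster myocpcluster-pxstorage \
  --name px-storage --flavor ms3c-16x64_ssd_encrypted --size-per-zone 1

# Attach the pool to each zone of the multizone cluster (repeat per zone)
ibmcloud ks zone add classic --cluster myocpcluster-pxstorage \
  --worker-pool px-storage --zone dal10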

To get the best performance and to allow Portworx to schedule your workloads where the actual container volume data resides, we recommend a hyper-converged deployment topology, as shown in the following diagram:

In this diagram, the default worker pool is created with SDS worker nodes that come with physical local storage. The physical storage is evenly spread across the worker nodes in a hyper-converged topology.

It is also possible to deploy Portworx in a storage-heavy/storage-rich topology where the actual physical storage is centralized on a subset of worker nodes in the cluster. In this topology, it is recommended that you add a new worker pool for the physical storage pool as shown in the following diagram:

In this diagram, the default worker pool is created with virtual server worker nodes where the workloads run, while the storage worker pool is created with SDS worker nodes. Because Portworx runs as a DaemonSet with a pod on every worker node in the cluster, Portworx storage can be accessed by all applications that run in the cluster, regardless of the worker pool that the app belongs to.

How do I deploy Portworx on a Red Hat OpenShift cluster from the IBM Cloud Catalog?

Prerequisites

Before you begin installing Portworx on your Red Hat OpenShift cluster, follow the steps to prepare your cluster for the Portworx installation:

  1. Ensure that you created a Red Hat OpenShift cluster with at least three worker nodes with raw unformatted block storage. 
  2. Configure an IBM Cloud Databases for etcd service instance to store Portworx metadata and configuration information. Make sure that you store the credentials to access your service instance in a Kubernetes secret in your cluster. Note the name of your secret and the API endpoint for your Databases for etcd service instance as this information is used during the Portworx installation.
  3. Determine if you want to encrypt volumes by using IBM Key Protect. To encrypt your volumes, you must set up an IBM Key Protect service instance and store your service information in a Kubernetes secret.
  4. Follow the instructions to install the Helm client version 2.14.3 or higher on your local machine, and to install the Helm server (tiller) with a service account in your cluster.
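
The Tiller setup from step 4 typically looks like the following sketch, assuming the Helm 2 client is already installed on your machine and your account has cluster-admin access:

# Create a service account for Tiller and grant it cluster-admin permissions
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

# Install the Helm server (Tiller) into the cluster with that service account
helm init --service-account tiller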

After you finish setting up your cluster, you will have created two Kubernetes secrets, assuming that you decided to encrypt your volumes by using IBM Key Protect:

  • A secret that is named px-etcd-certs in the kube-system namespace that holds the credentials to connect to your Databases for etcd service instance.
  • A secret that is named px-ibm in the portworx namespace that holds the credentials to connect to your IBM Key Protect service instance.
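
As a rough sketch, the two secrets might be created as shown below. The file names, literal key names, and placeholder values are illustrative; use the exact key names that the Portworx installation instructions require for your Databases for etcd and Key Protect instances:

# Secret with the Databases for etcd credentials (key names are illustrative)
kubectl create secret generic px-etcd-certs -n kube-system \
  --from-file=ca.pem=./etcd-ca.pem \
  --from-literal=username=<etcd_username> \
  --from-literal=password=<etcd_password>

# Create the portworx project if it does not exist yet, then store the
# IBM Key Protect service information (key names are illustrative)
oc new-project portworx
kubectl create secret generic px-ibm -n portworx \
  --from-literal=IBM_SERVICE_API_KEY=<service_api_key> \
  --from-literal=IBM_INSTANCE_ID=<key_protect_instance_guid> \
  --from-literal=IBM_CUSTOMER_ROOT_KEY=<root_key_id>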

The following image shows a sample OpenShift cluster that is named myocpcluster-pxstorage and that is created with three SDS worker nodes with extra local storage by using the ms3c-16x64_ssd_encrypted machine flavor. The worker nodes are evenly spread across three zones to achieve a hyper-converged storage topology.

Installing Portworx from the IBM Cloud Catalog

After you complete all of the prerequisites and create the appropriate Kubernetes secrets in your cluster, you are now ready to install Portworx from the IBM Cloud Catalog as shown in the following screenshot: 

After you select the region and resource group where your OpenShift cluster is located, complete the fields as follows: 

  1. Enter a memorable name for your Portworx service, such as Portworx-Enterprise-openshift-hyperconverged.
  2. In the Tag field, enter the name of the OpenShift cluster where you want to install Portworx. By using the tag, you associate the Portworx service instance with the OpenShift cluster, which helps with Day-2 lifecycle operations, such as using the management console, PX-Central.
  3. Enter an IBM Cloud API Key to retrieve the list of clusters that you have access to. If you don’t have an API key, see Managing user API keys. The list of clusters that are located in the selected region and resource group is dynamically populated as shown in input field 5. 
  4. Enter the API endpoint for the IBM Cloud Databases for etcd service instance that you created and retrieved as part of the prerequisites.
  5. Select the cluster from the drop-down list where you want to install Portworx. In the example above, myocpcluster-pxstorage is selected.
  6. Enter a unique name for your Portworx cluster. 
  7. Enter the name of the Kubernetes secret that you created in your cluster to store the Databases for etcd service credentials. In this example, px-etcd-certs is entered.
  8. Optionally, select whether you want to encrypt your volumes by using a custom key that is defined in a Kubernetes Secret or by using IBM Key Protect. In the example above, we selected IBM Key Protect.

After you enter all the information, you can proceed to create the Portworx service instance. You can navigate to the Resource list to see the Portworx provisioning status:

Verifying your Portworx installation

When your Portworx service instance shows a Provisioned status, check the status of the Portworx storage layer before you start deploying stateful applications that use Portworx volumes. To do this, run the following verification steps:  

  1. Verify that all the required Portworx pods run in your cluster. You must see one portworx, stork, and stork-scheduler pod for each worker node in your cluster. Because the OpenShift multizone cluster in this example has three worker nodes, you see a total of nine pods in the kube-system namespace.
    kubectl get pods -n kube-system | grep 'portworx\|stork'
  2. Log in to one of the Portworx pods and run the /opt/pwx/bin/pxctl status command. This can be done easily via the OpenShift Cluster Console by opening a terminal session to one of the Portworx pods. The command output shows that the status of the Portworx cluster is Online and that Portworx automatically discovered an extra 1.7 TB of storage per worker node, for a total storage capacity of 5.2 TB. This total physical storage capacity is available to create Portworx-backed persistent volumes that you can mount to your application deployments.
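
If you prefer the command line to the OpenShift Cluster Console, you can run the same check against one of the Portworx pods listed in step 1; the pod name below is illustrative:

kubectl exec -it portworx-abc12 -n kube-system -- /opt/pwx/bin/pxctl status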

Deploying MySQL by using a Portworx encrypted volume

Now that you verified the status of the Portworx cluster, you can deploy a stateful application by using a Portworx volume. In this example, we’ll create a persistent Portworx volume that is encrypted with IBM Key Protect by using the following persistent volume claim. The persistent volume claim references the storage class portworx-db-sc, which is configured by default and optimized to run database workloads in the cluster. To enable volume encryption, the px/secure annotation must be set to true.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: secure-pvc
  annotations:
    px/secure: "true"
spec:
  storageClassName: portworx-db-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi

After you create the persistent volume claim, verify that the claim named secure-pvc is in a Bound status.
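
For example:

# The STATUS column shows Bound after Portworx provisions the encrypted volume
kubectl get pvc secure-pvc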

Next, you use the following Kubernetes deployment to deploy MySQL in your cluster. The deployment mounts the secure-pvc persistent volume claim that you created earlier.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      schedulerName: stork
      containers:
      - name: mysql
        image: mysql:5.6
        imagePullPolicy: "Always"
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-data
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: secure-pvc

After you create the MySQL deployment in your OpenShift cluster, you can check the status of the mysql pod from the OpenShift Console as shown below:
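
Alternatively, you can check the pod from the command line by using the label from the deployment:

kubectl get pods -l app=mysql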

More information

The Portworx container-native software-defined storage solution for IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud provides a variety of features to support stateful applications. To learn more, visit the following links:

If you have questions, engage our team via the IBM Cloud Kubernetes Service Slack. Log in to Slack by using your IBM ID and post your question in the #portworx-on-iks channel. If you do not use an IBM ID for your IBM Cloud account, request an invitation to this Slack.
