IBM Cloud Private for Linux® on IBM® Z and LinuxONE technology preview
With IBM Cloud Private for Linux® on IBM® Z and LinuxONE, you can install an IBM Cloud Private cluster on IBM® Z and LinuxONE.
- Architecture
- Hardware requirements
- Supported operating systems and platforms
- Supported node types
- Supported features
- Installation
- Known issues and limitations
Architecture
The IBM® Cloud Private cluster on IBM® Z and LinuxONE has four main classes of nodes: boot, master, worker, and proxy. Vulnerability Advisor (VA) nodes and dedicated etcd nodes are not supported in this technology preview release.
For the details of each type of cluster node, see IBM® Cloud Private architecture.
The IBM® Cloud Private cluster on IBM® Z and LinuxONE resembles the following architecture diagram:
If you use a management node in your cluster, the architecture resembles the following diagram:
Note: For a complete list of the supported features in the IBM Cloud Private for Linux® on IBM® Z and LinuxONE, see the Supported features section.
Hardware requirements
Review the following table and verify that your nodes meet these requirements, including the increased memory requirements.
Note:
- You can have only one master node because High Availability (HA) is not supported in this technology preview release.
- etcd is installed on the master node.
- You must use a separate s390x architecture Linux LPAR or zKVM guest to build Docker images for your applications.
Requirement | Boot node | Master node | Proxy node | Worker node | Management node |
---|---|---|---|---|---|
Number of hosts | 1 | 1 | 1 or more | 1 or more | 1 or more |
Cores (IFLs) | 1 | 2 | 1 or more | 1 or more | 1 or more |
CPU | >=2.4 GHz | >=2.4 GHz | >=2.4 GHz | >=2.4 GHz | >=2.4 GHz |
RAM | >=4 GB | >=16 GB | >=4 GB | >=4 GB | >=16 GB |
Free disk space to install | >=100 GB | >=200 GB | >=150 GB | >=150 GB | >=200 GB |
For the disk size requirements, see Disk space requirements.
Supported operating systems and platforms
Platform | Operating system |
---|---|
Linux® on IBM® Z and LinuxONE | Red Hat Enterprise Linux (RHEL) 7.3, 7.4, and 7.5 |
 | Ubuntu 18.04 LTS and 16.04 LTS |
 | SUSE Linux Enterprise Server (SLES) 12 SP3 |
Note:
- Check the documentation for your operating system to ensure that you are using a supported kernel level.
- If you use SLES 12 SP3 as the operating system, upgrade the kernel to the latest version by running the `sudo zypper update` command (a minimal update sequence follows this note). For more information, see Security update for the Linux Kernel.
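For example, a minimal update sequence on SLES 12 SP3 might look like the following sketch; the reboot is needed only if the update installs a new kernel:

```
# Check the running kernel level, then update packages (including the kernel)
uname -r
sudo zypper update
# Reboot to load the new kernel if one was installed
sudo reboot
```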
IBM Cloud Private components are distributed as a set of Docker images that incorporate their own operating system dependencies. It is recommended that you use one of the certified operating systems that are listed in the preceding table. However, IBM Cloud Private can run on any Linux operating system that supports Docker 1.12 or later.
Supported node types
Node type | IBM® Z and LinuxONE (s390x) |
---|---|
Boot | Y |
Master | Y |
Management | Y |
Proxy | Y |
Worker | Y |
VA | N |
Supported features
Feature | Linux® on IBM® Z and LinuxONE | Notes |
---|---|---|
Cloud Foundry | N | |
Cloud Automation Manager | N* | *IBM Cloud Automation Manager can manage IBM z/VM 6.4 virtual machines using the z/VM Cloud Manager Appliance. |
Installation | Y | Installation supported on master nodes or dedicated boot nodes only. |
Management console | Y | Management console runs on master nodes only. |
ELK | Y* | *While ELK runs on only master or management nodes, data from worker nodes is collected by using Filebeat. |
Monitoring | Y* | *While Prometheus and Grafana run on only master or management nodes, data from worker nodes is collected by using the node exporter. |
Security and RBAC | Y | |
FIPS mode | N | |
Vulnerability advisor | N | |
IPsec | N | |
IPVS | N | |
Networking: Calico | Y | |
Networking: NSX-T | N | |
Storage: GlusterFS | N | |
Storage: VMware | N | |
Storage: Minio | N | |
Volume encryption | N | |
Metering | Y | |
Helm repo or API | Y | |
Nvidia GPU support | N | |
Containerd | N | |
External load balancer | N | |
HPA | N | |
Multicloud Manager | N | |
IBM Cloud Private-CE (Community Edition) | N |
Note: IBM® Cloud Private will support new versions of supported operating systems, Kubernetes, Docker, and other dependent infrastructure after they are released and fully tested by the IBM® Cloud Private team.
Installation
Installation of IBM Cloud Private for Linux® on IBM® Z and LinuxONE can be completed in six main steps:
- Install Docker for your boot node only
- Set up the installation environment
- (Optional) Customize your cluster
- Set up Docker for your cluster nodes
- Deploy the environment
- Verify the installation
When the installation completes, access your cluster and complete post installation tasks.
Note: If you encounter errors during installation, see Troubleshooting install.
Step 1: Install Docker for your boot node only
The boot node is the node that is used for installation of your cluster. The boot node is usually your master node. For more information about the boot node, see Boot node.
A version of Docker that is supported by IBM Cloud Private must be installed on your boot node. See Supported Docker versions.
To install Docker, see Manually installing Docker.
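After you install Docker, you can confirm that the daemon is running on the boot node before you continue. A minimal check, assuming a systemd-based distribution:

```
# Confirm the installed Docker version and that the Docker daemon is active
sudo docker --version
sudo systemctl is-active docker
```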
Step 2: Set up the installation environment
- Log in to the boot node as a user with root permissions.
- Download the installation files from the IBM Early Program website. For an IBM Cloud Private for Linux® on IBM® Z and LinuxONE cluster, download the `ibm-cloud-private-s390x-3.1.1.tar.gz` file.
- Extract the images and load them into Docker. Extracting the images might take a few minutes.

  ```
  tar xf ibm-cloud-private-s390x-3.1.1.tar.gz -O | sudo docker load
  ```
- Create an installation directory to store the IBM Cloud Private configuration files and change to that directory. For example, to store the configuration files in `/opt/ibm-cloud-private-3.1.1`, run the following commands:

  ```
  sudo mkdir /opt/ibm-cloud-private-3.1.1; cd /opt/ibm-cloud-private-3.1.1
  ```
- Extract the configuration files from the installer image.

  ```
  sudo docker run -v $(pwd):/data -e LICENSE=accept \
  ibmcom/icp-inception-s390x:3.1.1-ee \
  cp -r cluster /data
  ```

  A `cluster` directory is created inside your installation directory. For example, if your installation directory is `/opt/ibm-cloud-private-3.1.1`, the `/opt/ibm-cloud-private-3.1.1/cluster` folder is created. For an overview of the cluster directory structure, see Cluster directory structure.
- (Optional) You can view the license file for IBM Cloud Private, where `$LANG` is a supported language format.

  ```
  sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-s390x:3.1.1-ee
  ```

  For example, to view the license in Simplified Chinese, run the following command:

  ```
  sudo docker run -e LICENSE=view -e LANG=zh_CN ibmcom/icp-inception-s390x:3.1.1-ee
  ```

  For a list of supported language formats, see Supported languages.
- Create a secure connection from the boot node to all other nodes in your cluster. Complete one of the following processes:
  - Set up SSH in your cluster. See Sharing SSH keys among cluster nodes.
  - Set up password authentication in your cluster. See Configuring password authentication for cluster nodes.
- Add the IP address of each node in the cluster to the `/<installation_directory>/cluster/hosts` file. See Setting the node roles in the hosts file. You can also define customized host groups; see Defining custom host groups. An illustrative hosts file layout appears after this list.
- If you use SSH keys to secure your cluster, in the `/<installation_directory>/cluster` folder, replace the `ssh_key` file with the private key file that is used to communicate with the other cluster nodes. See Sharing SSH keys among cluster nodes. Run this command:

  ```
  sudo cp ~/.ssh/id_rsa ./cluster/ssh_key
  ```

  In this example, `~/.ssh/id_rsa` is the location and name of the private key file.
- Move the image files for your cluster to the `/<installation_directory>/cluster/images` folder.
  - Create an images directory:

    ```
    mkdir -p cluster/images
    ```
  - If your cluster contains s390x nodes, place the s390x package in the images directory:

    ```
    sudo mv /<path_to_installation_file>/ibm-cloud-private-s390x-3.1.1.tar.gz cluster/images/
    ```

    In the command, `path_to_installation_file` is the path to the images file.
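For reference, the hosts file that you edit in this step is an INI-style inventory with one group per node role. The following layout is a minimal sketch: the IP addresses are placeholders, and your groups and node counts depend on your cluster design (see Setting the node roles in the hosts file):

```
[master]
10.0.0.1

[worker]
10.0.0.2
10.0.0.3

[proxy]
10.0.0.4

[management]
10.0.0.5
```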
Step 3: Customize your cluster
- You can set a variety of optional cluster customizations that are available in the `/<installation_directory>/cluster/config.yaml` file. See Customizing the cluster with the config.yaml file. For additional customizations, you can also review Customizing your installation. An example config.yaml fragment follows this list.
- In an environment that has multiple network interfaces (NICs), such as OpenStack and AWS, you must add the following code to the `config.yaml` file:

  ```
  cluster_lb_address: <external address>
  proxy_lb_address: <external address>
  ```

  The `<external address>` value is the IP address, fully qualified domain name, or OpenStack floating IP address that manages communication to external services. Setting the `proxy_lb_address` parameter is required for proxy HA environments only.
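As an illustration of the customizations that this step covers, a config.yaml fragment might override a few commonly changed settings. Treat the keys below as examples and confirm them against Customizing the cluster with the config.yaml file before you use them:

```
# Example config.yaml overrides; verify key names against the documentation
cluster_name: mycluster                  # name that identifies this cluster
default_admin_password: <new_password>   # replaces the default admin password
```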
Step 4: Set up Docker for your cluster nodes
Cluster nodes are the master, worker, proxy, and management nodes. See Architecture.
A version of Docker that is supported by IBM Cloud Private must be installed on your cluster nodes. See Supported Docker versions.
If you do not have a supported version of Docker installed on your cluster nodes, IBM Cloud Private can automatically install Docker on them during the installation.
To prepare your cluster nodes for automatic installation of Docker, see Configuring cluster nodes for automatic Docker installation.
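Before you start the deployment, it can be useful to confirm what is already installed on each cluster node. The following check over SSH is illustrative only; the nodes.txt file that lists one node IP address per line is a hypothetical helper, not part of the product:

```
# Illustrative pre-deployment check; nodes.txt is a hypothetical file
# that lists one cluster node IP address per line.
for ip in $(cat nodes.txt); do
  echo "--- $ip ---"
  ssh root@"$ip" 'docker --version 2>/dev/null || echo "Docker not installed"'
done
```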
Step 5: Deploy the environment
- Change to the `cluster` folder in your installation directory.

  ```
  cd ./cluster
  ```
- Deploy your environment. Depending on your options, you might need to add more parameters to the deployment command.
  - If you specified the `offline_pkg_copy_path` parameter in the `config.yaml` file, add the `-e ANSIBLE_REMOTE_TEMP=<offline_pkg_copy_path>` option to the deployment command, where `<offline_pkg_copy_path>` is the value of the `offline_pkg_copy_path` parameter that you set in the `config.yaml` file.
  - By default, the deployment command deploys 15 nodes at a time. If your cluster has more than 15 nodes, the deployment might take longer to finish. To speed up the deployment, you can specify a higher number of nodes to deploy at a time by using the `-f <number of nodes to deploy>` argument with the command, as shown in the example at the end of this step.

  To deploy your environment for IBM Cloud Private for Linux® on IBM® Z and LinuxONE, run this command:

  ```
  sudo docker run --net=host -t -e LICENSE=accept \
  -v "$(pwd)":/installer/cluster ibmcom/icp-inception-s390x:3.1.1-ee install
  ```

  Note: If you encounter errors during deployment, run the deployment command again with the `-v` option to collect more error messages. If you continue to receive errors during the rerun, run the following command to collect the log files:

  ```
  sudo docker run --net=host -t -e LICENSE=accept \
  -v "$(pwd)":/installer/cluster ibmcom/icp-inception-s390x:3.1.1-ee healthcheck
  ```

  The log files are located in the `cluster/logs` directory.
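For example, to deploy 20 nodes at a time, append the `-f` argument to the install command; the value 20 is illustrative:

```
sudo docker run --net=host -t -e LICENSE=accept \
-v "$(pwd)":/installer/cluster ibmcom/icp-inception-s390x:3.1.1-ee install -f 20
```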
Step 6: Verify the status of your installation
If the installation succeeded, the access information for your cluster is displayed, where `master_ip` is the IP address of the master node for your cluster:

```
UI URL is https://master_ip:8443 , default username/password is admin/admin
```
For IBM Cloud Private: If you specified a `cluster_lb_address` value in your `config.yaml` file, the `<ip_address>` is the `cluster_lb_address` address. If you did not specify that value, in HA clusters the `<ip_address>` in this message is the `cluster_vip` address that you specified, and in standard clusters it is the IP address of the master node.

Note: If you created your cluster within a private network, use the public IP address of the master node to access the cluster. Specify the public IP address in the `config.yaml` file by updating the `cluster_lb_address` parameter before you install IBM Cloud Private.
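If you prefer to verify from the command line before you open a browser, a simple check confirms that the management console endpoint is responding; the `-k` flag skips validation of the default self-signed certificate:

```
# Expect an HTTP status code such as 200 or 302 when the console is up
curl -k -s -o /dev/null -w "%{http_code}\n" https://<master_ip>:8443
```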
Access your cluster
Access your cluster. From a web browser, browse to the URL for your cluster. For a list of supported browsers, see Supported browsers.
- For more information about accessing your cluster by using the IBM Cloud Private management console from a web browser, see Accessing your IBM Cloud Private cluster by using the management console.
- For more information about accessing your cluster by using the Kubernetes command line (kubectl), see Accessing your IBM Cloud Private cluster by using the kubectl CLI. A quick kubectl sanity check appears after the notes that follow.
Note: If you’re unable to log in immediately after the installation completes, it might be because the management services are not ready. Wait for a few minutes and try again.
Note: You might see a `502 Bad Gateway` message when you open a page in the management console shortly after installation. If you do, the NGINX service has not yet started all components. The pages load after all components start.
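After you log in and configure kubectl as described in the linked topic, a quick sanity check confirms that the nodes are ready and the system pods are running. These are standard kubectl commands, shown for illustration:

```
# List cluster nodes and confirm that they report a Ready status
kubectl get nodes -o wide
# Confirm that the system pods in kube-system are Running
kubectl -n kube-system get pods
```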
Post installation tasks
- Restart your firewall.
- Ensure that all the IBM Cloud Private default ports are open. For more information about the default IBM Cloud Private ports, see Default ports.
- Back up the boot node. Copy your `/<installation_directory>/cluster` directory to a secure location. If you use SSH keys to secure your cluster, ensure that the SSH keys in the backup directory remain in sync. An example copy command follows this list.
- Install other software from your bundle. See Installing IBM software onto IBM Cloud Private.
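For example, if your installation directory is /opt/ibm-cloud-private-3.1.1, a simple backup of the boot node configuration might look like the following; the destination path is illustrative:

```
# Copy the cluster configuration directory to a secure backup location
sudo cp -r /opt/ibm-cloud-private-3.1.1/cluster /backup/ibm-cloud-private-3.1.1-cluster
```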
Known issues and limitations
IBM Cloud Private for Linux® on IBM® Z and LinuxONE is a technology preview release and has the following limitations:
- VA node is not supported in this release.
- High Availability (HA) is not supported in this release.
- You cannot upgrade IBM Cloud Private 3.1.1 to a newer version on Linux® on IBM® Z and LinuxONE.
- Mixed architectures of worker or proxy nodes are not supported.
- For a list of unsupported features, see the table in the Supported features section.
- For known issues of IBM Cloud Private, see Known issues and limitations.