Creating the cluster nodes
You can use the Secure Service Container for IBM Cloud Private command line interface (CLI) tool to create all the necessary cluster nodes and provision them with the appropriate network, storage, CPU, and memory resources.
Configuring and installing the CLI tool is a prerequisite for installing IBM Cloud Private on the Secure Service Container for IBM Cloud Private environment.
This procedure is intended for users with the cloud administrator role.
Before you begin
- Check that you copied the isolated VM archive ICPIsolatedvm.tar.gz into the /config directory.
- If you have a firewall installed on the x86 or Linux on Z server, check that the iptables rules on the server are configured to allow network traffic to and from docker containers. When the command line tool runs inside a docker container, it must communicate with the remote hosting appliance on the Secure Service Container partitions. The firewall rules must contain configurations for the DOCKER-USER chain. You can use the following commands to configure the firewall rules for docker containers on the server, and verify them with the sketch after this list.
  iptables -I FORWARD -j DOCKER-USER
  iptables -A DOCKER-USER -j ACCEPT
- Refer to the checklist that you prepared in the topic Planning for Secure Service Container for IBM Cloud Private.
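If you want to confirm that the rules took effect, the following check is a minimal sketch that assumes the standard iptables tooling; the DOCKER-USER chain is created by the docker daemon itself when it starts.
  # List the DOCKER-USER chain and confirm the ACCEPT rule is present
  iptables -L DOCKER-USER -n --line-numbers
If the chain does not exist, make sure the docker daemon is running before you add the rules.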
Procedure
On the x86 or Linux on Z server, complete the following steps as a root user.
- Run the command line tool to create the cluster nodes. Note that the command must be run in the parent directory of the config directory, for example, /opt/<installation-directory>.
  docker run --network=host --rm -it -v $(pwd)/config:/ssc4icp-cli-installer/config ibmzcontainers/ssc4icp-cli-installer:1.1.0.3 install
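  As a quick pre-check for this step, you can confirm the working directory layout before running the command. This is a minimal sketch that assumes your config directory already holds the isolated VM archive and your ssc4icp-config.yaml cluster definition:
    cd /opt/<installation-directory>
    ls config
    # expected to list at least: ICPIsolatedvm.tar.gz  ssc4icp-config.yaml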
- After the installation completes, the following directories and files are created under the config directory for future reference and use:
  - Logs - A directory that contains logs for all operations being performed.
  - cluster-status.yaml - A file that is used to capture the current status of the installation.
  - DemoCluster - A directory, named after your cluster, that contains the following files for the cluster:
    - cluster-configuration.yaml - A file that indicates that the cluster configurations in the ssc4icp-config.yaml file are applied successfully.
    - quotagroup-symlink.yaml - A file that contains details of storage containers, quotagroups, and device names if you specified GlusterFS configurations in the ssc4icp-config.yaml file. For more information, see Deploying GlusterFS.
    - ipsec.conf - A file that contains the network topology of the cluster.
    - ipsec.secrets - A file that contains a randomly generated Pre-Shared Key (PSK) that is used as an authorization token to the IPSec network.
    - An SSH key pair, the ssh_key and ssh_key.pub files, that provides SSH access for the IBM Cloud Private installer to all the cluster nodes. For the IBM Cloud Private installer to access your master or boot node over SSH by using the generated SSH key, follow the instructions in the Before you begin section of Deploying IBM Cloud Private.
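After the nodes are up and the network is configured, the generated key can be used like any other SSH identity. The following is a minimal sketch; the root user name and the master internal IP 192.168.0.251 are taken from the example below and assume that the internal network is already reachable from where you run the command:
  ssh -i config/DemoCluster/ssh_key root@192.168.0.251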
The following cluster-configuration.yaml example file is generated based on the cluster configuration specified in the ssc4icp-config.yaml file.
LPARS:
- containers:
- cpu: '4'
icp_storage: 140000M
internal_network:
gateway: 192.168.0.1
ip: 192.168.0.252
parent: encf900
subnet: 192.168.0.0/24
memory: '4098'
name: worker-15001
port: '15001'
root_storage: 60000M
- cpu: '4'
icp_storage: 140000M
internal_network:
gateway: 192.168.0.1
ip: 192.168.0.253
parent: encf900
subnet: 192.168.0.0/24
memory: '4098'
name: worker-15002
port: '15002'
root_storage: 60000M
ipaddress: 10.152.151.105
- containers:
- cpu: '3'
icp_storage: 140000M
internal_network:
gateway: 192.168.0.1
ip: 192.168.0.254
parent: encf900
subnet: 192.168.0.0/24
memory: '1024'
name: proxy-16001
port: '16001'
proxy_external_network:
gateway: 172.16.0.1
ip: 172.16.0.4
parent: encf700
subnet: 172.16.0.0/24
root_storage: 60000M
ipaddress: 10.152.151.105
cluster:
ipsecsecrets: ApcQn3Gmuo4sbwLRZfLxo3hD9MEP21u4HAvOQyoQ2dEYVifWEdE98dIF9Panc2/gJP6nSXQu3NHgASbb/VlT3w==
masterconfig:
internal_ips:
- 192.168.0.251
subnet: 192.168.0.0/24
name: DemoCluster
repoid: ICPIsolatedvm
sshpubkey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDSBAz0Pf+vPWiZCfLZ/NKIrbvFy9+4iav0ihQJ89zIRrwIasQHKPepRPzXYH0h/3g8iAIKymZLuBl2bSn6/tGNN1stl5nsIdZ5Vr8yKd9a7YAHpBYgzkgq9qcuZHIP5PRJwlrcfwIiCLVdfp73Z2ZCfdzjbfMmd1pb2egv78XlTJLyizxAN1jV7PyVkiJdjFkxKXIqbWfuYMoMYrwSsBB0gjn66KiQlptrbem9hvYkeMX7d+zOLjd46C3F7+0gbSzabE0IScfkZdXbjmN9ldIa+70H1/ruGdIuoMN3+1m7Wjj0Xh4sPyXVkmaqRHvJ/OC/cHkxRweNsztM1GbMkXQXXRhxykYlWNqo/E+U2hPScCDMsf+WU9di4pjYc/JDfVHXFmN/vtk+WFkQqJMwy/hVR5lOjnSrRA1RTU97uRzOsCgzqXpV9vnF9Xr+20RR4Ml/tP6pEHOScEsjA5ztn/PYROEKM/3zaV3O2Px6bWP24SWWvOARqjRipHda6/3GzMylp1bL4DMyvKSJ+m5WlwlIDcrYZ0z2xM2zbzmnjgbDiYWYvu9AgaHSQO8Vdep8wcM0ZO6zXqDT6awPEFBNKkcuxYrv1K5Vf48w0O8saceQSX0VhNuBk4Kf4PWAy30TxJC4KaCq1yf7zE1545ImCjxIKjY2PPieDgNC0rNCyejfVw==
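To get a quick overview of the nodes and addresses that were provisioned, you can scan the generated file. This is a minimal sketch, assuming the default directory layout shown earlier:
  grep -E 'name:|ip:|ipaddress:' config/DemoCluster/cluster-configuration.yaml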
Note:
- The generated directories and files must not be deleted because the uninstallation procedure needs to validate those files when resetting the environment.
- The private key ssh_key must be protected because it provides SSH access to all of the cluster nodes. Only the cloud administrator who installs IBM Cloud Private can have access to the ssh_key file.
- If you run the command line tool again with the same cluster name but different configurations, you must delete the config/<ClusterName> directory first.
- The ipsec.secrets file contains the IPSec key that is generated by the command line tool, and is used when creating the network configuration for each node. You can use LUKS (Linux Unified Key Setup) disk encryption to protect the key from access by anyone other than the root user on the x86 or Linux on Z server, as shown in the sketch after this list.
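The following is a minimal sketch of such a LUKS setup, assuming a spare block device (shown here as the hypothetical /dev/sdX) and the cryptsetup package; adapt the device name and mount point to your system:
  # Create and open a LUKS-encrypted volume (this destroys existing data on /dev/sdX)
  cryptsetup luksFormat /dev/sdX
  cryptsetup open /dev/sdX securekeys
  # Put a file system on the volume and mount it
  mkfs.ext4 /dev/mapper/securekeys
  mount /dev/mapper/securekeys /mnt/securekeys
  # Keep a protected copy of the sensitive files; the originals under config/ must remain in place for uninstallation
  cp -p config/DemoCluster/ipsec.secrets config/DemoCluster/ssh_key /mnt/securekeys/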
The following quotagroup-symlink.yaml example file is generated based on the configuration of template5 and template6 in the ssc4icp-config.yaml file. In the example, two quotagroups storage_17001_glusterfs1_qg and storage_17001_glusterfs2_qg are attached to the GlusterFS node storage-17001, storage_18001_glusterfs1_qg is attached to node storage-18001, and storage_18002_glusterfs1_qg is attached to node storage-18002.
container = storage-17001, quotagroup = storage_17001_glusterfs1_qg, symbolic_link = /dev/disk/by-runq-id/storage_17001_glusterfs1_qg
container = storage-17001, quotagroup = storage_17001_glusterfs2_qg, symbolic_link = /dev/disk/by-runq-id/storage_17001_glusterfs2_qg
container = storage-18001, quotagroup = storage_18001_glusterfs1_qg, symbolic_link = /dev/disk/by-runq-id/storage_18001_glusterfs1_qg
container = storage-18002, quotagroup = storage_18002_glusterfs1_qg, symbolic_link = /dev/disk/by-runq-id/storage_18002_glusterfs1_qg
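If you need to confirm that the device symbolic links exist, you can list them from a shell on the storage node; this is a minimal sketch, assuming you can reach the node (for example over SSH with the generated key, as shown earlier):
  ls -l /dev/disk/by-runq-id/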
Result
Verify the cluster status by using the docker ps command. The nodes for both the IBM Cloud Private cluster and GlusterFS are listed as follows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7750d5343991 77b4f35dbc74 "entry.sh init" 2 days ago Up 2 days storage-18002
8fd222737e87 77b4f35dbc74 "entry.sh init" 2 days ago Up 2 days storage-18001
bbdf0695c317 77b4f35dbc74 "entry.sh init" 2 days ago Up 2 days storage-17001
121fe6ba774d 77b4f35dbc74 "entry.sh init" 2 days ago Up 2 days proxy-16001
5b157b60cf0c 77b4f35dbc74 "entry.sh init" 2 days ago Up 2 days worker-15001
6e1579fcef84 77b4f35dbc74 "entry.sh init" 2 days ago Up 2 days worker-15002
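For a terser view that is easier to scan or script against, you can use the standard docker formatting flag; this sketch assumes no changes to the default docker CLI:
  docker ps --format 'table {{.Names}}\t{{.Status}}'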
Next
Follow the instructions in Configure the network on the master nodes to ensure that the cluster nodes are connected within the cluster.