Creating the cluster nodes
You can use the Secure Service Container for IBM Cloud Private command line interface (CLI) tool to create all the necessary cluster nodes and provision them with the appropriate network, storage, CPU, and memory resources.
Configuring and installing the CLI tool is a prerequisite for installing IBM Cloud Private in the Secure Service Container for IBM Cloud Private environment.
This procedure is intended for users with the cloud administrator role.
Before you begin
- Check that you have copied the isolated VM archive `ICPIsolatedvm.tar.gz` into the `/config` directory.
- If you have a firewall installed on the x86 server, check that `iptables` on the x86 server is configured to allow network traffic to and from Docker containers. When the command line tool runs inside a Docker container, it must communicate with the remote hosting appliance on the Secure Service Container partitions. The firewall rules for Docker must therefore contain configurations for the `DOCKER-USER` chain. You can use the following commands to configure the firewall rules for Docker containers on the x86 server:

  ```
  iptables -I FORWARD -j DOCKER-USER
  iptables -A DOCKER-USER -j ACCEPT
  ```
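After adjusting the firewall rules, you may want to confirm that the x86 server can actually reach the hosting appliance before running the installer. The following is a minimal sketch of such a reachability check; the appliance address and port in the comment are placeholders, not values defined by this procedure.

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address and port for a hosting appliance):
# can_reach("10.152.151.105", 443)
```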
Procedure
On the x86 server, complete the following steps as a root user.
- Run the command line tool to create the cluster nodes. Note that the command must be run in the parent directory of the `config` directory, for example, `/opt/<installation-directory>`.

  ```
  docker run --network=host --rm -it -v $(pwd)/config:/ssc4icp-cli-installer/config ibmzcontainers/ssc4icp-cli-installer:1.1.2 install
  ```
- After the installation completes, the following directories and files are created under the `config` directory for future reference and use:
  - `Logs` - A directory that contains logs for all operations being performed.
  - `Cluster-status.yaml` - A file that captures the current status of the installation.
  - `DemoCluster` - A directory with your cluster name that contains the following files for the cluster:
    - `cluster-configuration.yaml` - A file that indicates that the cluster configurations in the `ssc4icp-config.yaml` file were applied successfully.
    - `ipsec.conf` - A file that contains the network topology of the cluster.
    - `ipsec.secrets` - A file that contains a randomly generated Pre-Shared Key (PSK) that is used as an authorization token for the IPSec network.
    - An SSH key pair, `ssh_key` and `ssh_key.pub`, that provides SSH access for the IBM Cloud Private installer to all the cluster nodes. In order for the IBM Cloud Private installer to access your master or boot node over SSH and to use the generated SSH key, follow the instructions in the Before you begin section of Deploying IBM Cloud Private.
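Because the generated private key grants SSH access to every cluster node, OpenSSH will refuse to use it unless its file permissions are restricted to the owner. A minimal sketch of tightening the key's permissions; the `config/DemoCluster/ssh_key` path in the comment is assumed from the directory layout above.

```python
import os
import stat

def restrict_key(path: str) -> None:
    """Restrict a private key file to owner read/write only (mode 0600),
    which OpenSSH requires before it will use the key."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

# Usage (path assumed from the generated layout above):
# restrict_key("config/DemoCluster/ssh_key")
# The installer can then connect with, for example:
#   ssh -i config/DemoCluster/ssh_key root@<master-node-ip>
```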
The following `cluster-configuration.yaml` example file is generated based on the cluster configuration specified in the `ssc4icp-config.yaml` file.

```yaml
LPARS:
- containers:
  - cpu: '4'
    icp_storage: 140000M
    internal_network:
      gateway: 192.168.0.1
      ip: 192.168.0.252
      parent: encf900
      subnet: 192.168.0.0/24
    memory: '4098'
    name: worker-15001
    port: '15001'
    root_storage: 60000M
  - cpu: '4'
    icp_storage: 140000M
    internal_network:
      gateway: 192.168.0.1
      ip: 192.168.0.253
      parent: encf900
      subnet: 192.168.0.0/24
    memory: '4098'
    name: worker-15002
    port: '15002'
    root_storage: 60000M
  ipaddress: 10.152.151.105
- containers:
  - cpu: '3'
    icp_storage: 140000M
    internal_network:
      gateway: 192.168.0.1
      ip: 192.168.0.254
      parent: encf900
      subnet: 192.168.0.0/24
    memory: '1024'
    name: proxy-16001
    port: '16001'
    proxy_external_network:
      gateway: 172.16.0.1
      ip: 172.16.0.4
      parent: encf700
      subnet: 172.16.0.0/24
    root_storage: 60000M
  ipaddress: 10.152.151.105
cluster:
  ipsecsecrets: ApcQn3Gmuo4sbwLRZfLxo3hD9MEP21u4HAvOQyoQ2dEYVifWEdE98dIF9Panc2/gJP6nSXQu3NHgASbb/VlT3w==
  masterconfig:
    internal_ips:
    - 192.168.0.251
    subnet: 192.168.0.0/24
  name: DemoCluster
  repoid: ICPIsolatedvm
  sshpubkey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDSBAz0Pf+vPWiZCfLZ/NKIrbvFy9+4iav0ihQJ89zIRrwIasQHKPepRPzXYH0h/3g8iAIKymZLuBl2bSn6/tGNN1stl5nsIdZ5Vr8yKd9a7YAHpBYgzkgq9qcuZHIP5PRJwlrcfwIiCLVdfp73Z2ZCfdzjbfMmd1pb2egv78XlTJLyizxAN1jV7PyVkiJdjFkxKXIqbWfuYMoMYrwSsBB0gjn66KiQlptrbem9hvYkeMX7d+zOLjd46C3F7+0gbSzabE0IScfkZdXbjmN9ldIa+70H1/ruGdIuoMN3+1m7Wjj0Xh4sPyXVkmaqRHvJ/OC/cHkxRweNsztM1GbMkXQXXRhxykYlWNqo/E+U2hPScCDMsf+WU9di4pjYc/JDfVHXFmN/vtk+WFkQqJMwy/hVR5lOjnSrRA1RTU97uRzOsCgzqXpV9vnF9Xr+20RR4Ml/tP6pEHOScEsjA5ztn/PYROEKM/3zaV3O2Px6bWP24SWWvOARqjRipHda6/3GzMylp1bL4DMyvKSJ+m5WlwlIDcrYZ0z2xM2zbzmnjgbDiYWYvu9AgaHSQO8Vdep8wcM0ZO6zXqDT6awPEFBNKkcuxYrv1K5Vf48w0O8saceQSX0VhNuBk4Kf4PWAy30TxJC4KaCq1yf7zE1545ImCjxIKjY2PPieDgNC0rNCyejfVw==
```
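If you need to enumerate the nodes that were created, for example to prepare the IBM Cloud Private hosts file, the generated `cluster-configuration.yaml` can be parsed programmatically. This is a hedged sketch, not part of the CLI tool; it assumes the PyYAML third-party library is available and that the file follows the structure shown above.

```python
import yaml  # PyYAML, third-party: pip install pyyaml

def list_nodes(config_text: str):
    """Return (name, internal_ip, lpar_ip) tuples for every container
    defined in a cluster-configuration.yaml document."""
    doc = yaml.safe_load(config_text)
    nodes = []
    for lpar in doc.get("LPARS", []):
        for container in lpar.get("containers", []):
            nodes.append((container["name"],
                          container["internal_network"]["ip"],
                          lpar.get("ipaddress")))
    return nodes
```

Applied to the example above, this yields `worker-15001`, `worker-15002`, and `proxy-16001` with their internal 192.168.0.x addresses and the 10.152.151.105 partition address.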
Note:
- The generated directories and files must not be deleted, because the uninstallation procedure validates those files when resetting the environment.
- The private key `ssh_key` must be protected because it provides SSH access to all of the cluster nodes. Only the cloud administrator who will install IBM Cloud Private should have access to the `ssh_key` file.
- If you run the command line tool again with the same cluster name but different configurations, you must delete the `config/<ClusterName>` directory first.
- The `ipsec.secrets` file contains the IPSec key generated by the command line tool, which is used when creating the network configuration for each node. You can use LUKS (Linux Unified Key Setup) disk encryption to protect the key from unauthorized access by anyone other than the root user on the x86 server.
Next
Follow the instructions in Configure the network on the master nodes to ensure that the cluster nodes are connected within the cluster.