Enhancement: Support for zVM compute nodes to Hosted Control Plane - vswitch FCP/DASD (#227)


- Code changes for supporting zVM for Hosted Control Plane
- Supported network type: vswitch
- Supported disk types: FCP/DASD
- Updated the documentation accordingly

---------

Signed-off-by: root <[email protected]>
Co-authored-by: root <[email protected]>
veera-damisetti and root authored Dec 15, 2023
1 parent 3d66a58 commit b21d3c8
Showing 18 changed files with 305 additions and 40 deletions.
7 changes: 4 additions & 3 deletions docs/run-the-playbooks-for-hypershift.md
@@ -1,13 +1,14 @@
# Run the Playbooks
## Prerequisites
* Running OCP Cluster (Management Cluster)
* KVM host with root user access or user with sudo privileges
* KVM host with root user access, or a user with sudo privileges, if the compute nodes are KVM.
* zVM host (bastion) and nodes if the compute nodes are zVM.

### Network Prerequisites
* DNS entry to resolve api.${cluster}.${domain}, api-int.${cluster}.${domain}, and *.apps.${cluster}.${domain} to a load balancer deployed to redirect incoming traffic to the ingress pods (bastion).
* If using dynamic IPs for the agents, make sure the DHCP server has entries mapping the MAC addresses used in the installation to IPv4 addresses, and that it points those addresses at the nameserver you have configured.
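
As an illustration, a dnsmasq instance could satisfy both prerequisites with entries along these lines. This is a minimal sketch: the cluster name `hcp`, domain `example.com`, and the IPs are placeholders, and the MAC address is the sample value from the variable documentation, not values from this commit.

```
# dnsmasq.conf -- illustrative placeholders only
address=/api.hcp.example.com/192.168.10.1
address=/api-int.hcp.example.com/192.168.10.1
address=/.apps.hcp.example.com/192.168.10.1   # wildcard *.apps entry
# Static leases: map each agent MAC to a fixed IPv4 address...
dhcp-host=52:54:00:ba:d3:f7,192.168.10.21
# ...and hand out the configured nameserver to those leases
dhcp-option=option:dns-server,192.168.10.1
```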
## Note:
* As of now, only macvtap is supported for the Hypershift Agent-based installation.
* As of now, only macvtap is supported for the Hypershift Agent-based installation with KVM compute nodes.

## Step-1: Setup Ansible Vault for Management Cluster Credentials
### Overview
@@ -36,7 +37,7 @@ ansible-vault edit playbooks/secrets.yaml

## Step-2: Initial Setup for Hypershift
* Navigate to the [root folder of the cloned Git repository](https://github.com/IBM/Ansible-OpenShift-Provisioning) in your terminal (`ls` should show [ansible.cfg](https://github.com/IBM/Ansible-OpenShift-Provisioning/blob/main/ansible.cfg)).
* Update all the variables in Section-16 (Hypershift) and Section-3 (File Server: ip, protocol and iso_mount_dir) in [all.yaml](https://github.com/veera-damisetti/Ansible-OpenShift-Provisioning/blob/main/inventories/default/group_vars/all.yaml.template) before running the playbooks.
* Update the variables as per the compute node type (zKVM/zVM) in Section-16 (Hypershift) and Section-3 (File Server: ip, protocol and iso_mount_dir) in [all.yaml](https://github.com/veera-damisetti/Ansible-OpenShift-Provisioning/blob/main/inventories/default/group_vars/all.yaml.template) before running the playbooks.
* The first playbook to run is setup_for_hypershift.yaml, which creates the inventory file for Hypershift and adds the SSH key to the KVM host.

* Run this shell command:
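
The command folded out of this diff is presumably the usual invocation of the setup playbook named above; a sketch:

```
ansible-playbook playbooks/setup_for_hypershift.yaml
```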
11 changes: 10 additions & 1 deletion docs/set-variables-group-vars.md
@@ -198,6 +198,7 @@
## 16 - Hypershift ( Optional )
**Variable Name** | **Description** | **Example**
:--- | :--- | :---
**hypershift.compute_node_type** | Select the compute node type for HCP, either zKVM or zVM | zvm
**hypershift.kvm_host** | IPv4 address of KVM host for hypershift <br /> (kvm host where you want to run all oc commands and create VMs)| 192.168.10.1
**hypershift.kvm_host_user** | User for KVM host | root
**hypershift.bastion_hypershift** | IPv4 address for bastion of Hosted Cluster | 192.168.10.1
@@ -232,15 +233,23 @@
**hypershift.asc.iso_url** | Give URL for ISO image | https://... <br /> ...s390x-live.s390x.iso
**hypershift.asc.root_fs_url** | Give URL for rootfs image | https://... <br /> ... live-rootfs.s390x.img
**hypershift.asc.mce_namespace** | Namespace where your Multicluster Engine Operator is installed. <br /> Recommended Namespace for MCE is 'multicluster-engine'. <br /> Change this only if MCE is installed in other namespace. | multicluster-engine
**hypershift.agents_parms.agents_count** | Number of agents for the hosted cluster <br /> The same number of compute nodes will be attached to the Hosted Control Plane | 2
**hypershift.agents_parms.static_ip_parms.static_ip** | true or false - use static IPs for agents using NMState | true
**hypershift.agents_parms.static_ip_parms.ip** | List of IP addresses for agents | 192.168.10.1
**hypershift.agents_parms.static_ip_parms.interface** | Interface for agents for configuring NMStateConfig | eth0
**hypershift.agents_parms.agents_count** | Number of agents for the hosted cluster <br /> The same number of compute nodes will be attached to the Hosted Control Plane | 2
**hypershift.agents_parms.agent_mac_addr** | List of MAC addresses for the agents. <br /> Configure these in DHCP if you are using dynamic IPs for the agents. | - 52:54:00:ba:d3:f7
**hypershift.agents_parms.disk_size** | Disk size for agents | 100G
**hypershift.agents_parms.ram** | RAM (in MB) for agents | 16384
**hypershift.agents_parms.vcpus** | vCPUs for agents | 4
**hypershift.agents_parms.nameserver** | Nameserver to be used for agents | 192.168.10.1
**hypershift.agents_parms.zvm_parameters.network_mode** | Network mode for zVM nodes <br /> Supported modes: vswitch | vswitch
**hypershift.agents_parms.zvm_parameters.disk_type** | Disk type for zVM nodes <br /> Supported disk types: fcp, dasd | dasd
**hypershift.agents_parms.zvm_parameters.vcpus** | CPUs for each zVM node | 4
**hypershift.agents_parms.zvm_parameters.memory** | RAM (in MB) for each zVM node | 16384
**hypershift.agents_parms.zvm_parameters.nameserver** | Nameserver for compute nodes | 192.168.10.1
**hypershift.agents_parms.zvm_parameters.subnetmask** | Subnet mask for compute nodes | 255.255.255.0
**hypershift.agents_parms.zvm_parameters.gateway** | Gateway for compute nodes | 192.168.10.1
**hypershift.agents_parms.zvm_parameters.nodes** | Set of parameters for zVM nodes <br /> Give the details of each zVM node here |
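
Putting the zVM rows together, a filled-in block in all.yaml might look like the following sketch. Every value is an illustrative placeholder built from the examples above; for `disk_type: fcp` the `dasd` block is replaced by the `lun` list shown in the template later in this commit.

```yaml
hypershift:
  agents_parms:
    agents_count: 2
    zvm_parameters:
      network_mode: vswitch          # only vswitch is supported
      disk_type: dasd                # fcp or dasd
      vcpus: 4
      memory: 16384
      nameserver: 192.168.10.1
      subnetmask: 255.255.255.0
      gateway: 192.168.10.1
      nodes:
        - name: vm1                  # z/VM guest name (placeholder)
          host: zvmhost.example.com  # placeholder
          user: vmadmin              # placeholder
          password: secret           # placeholder
          osa:
            ifname: encbdf0
            id: 0.0.bdf0,0.0.bdf1,0.0.bdf2
            ip: 192.168.10.21
          dasd:
            disk_id: 4a82            # required because disk_type is dasd
```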

## 17 - (Optional) Disconnected cluster setup
**Variable Name** | **Description** | **Example**
68 changes: 65 additions & 3 deletions inventories/default/group_vars/all.yaml.template
@@ -175,6 +175,7 @@ env:
kvm: [ libguestfs, libvirt-client, libvirt-daemon-config-network, libvirt-daemon-kvm, cockpit-machines, libvirt-devel, virt-top, qemu-kvm, python3-lxml, cockpit, lvm2 ]
bastion: [ haproxy, httpd, bind, bind-utils, expect, firewalld, mod_ssl, python3-policycoreutils, rsync ]
hypershift: [ make, jq, git, virt-install ]
zvm: [ git, python3-pip, python3-devel, openssl-devel, rust, cargo, libffi-devel, wget, tar, jq, gcc, make, x3270, python39 ]

# Section 12 - OpenShift Settings
install_config:
@@ -239,13 +240,15 @@ rhcos_live_rootfs: "rhcos-4.12.3-s390x-live-rootfs.s390x.img"
# Section 16 - Hypershift ( Optional )

hypershift:
  compute_node_type: # zKVM or zVM

  kvm_host:
  kvm_host_user:
  bastion_hypershift:
  bastion_hypershift_user:

  create_bastion: true
  networking_device: enc1100
  create_bastion: true
  networking_device: enc1100 # The following parameters are required only if create_bastion is true
  gateway:

  bastion_parms:
@@ -257,6 +260,9 @@
    gateway:
    subnet_mask:

  # Parameters for oc login

  mgmt_cluster_nameserver:
  oc_url:

@@ -291,20 +297,76 @@
    mce_namespace: multicluster-engine # This is the Recommended Namespace for Multicluster Engine operator

  agents_parms:
    agents_count:

    # KVM specific parameters - KVM on s390x

    static_ip_parms:
      static_ip: true
      ip: # Required only if static_ip is true
        #-
        #-
      interface: eth0
    agents_count:
    # If you want to use specific mac addresses, provide them here
    agent_mac_addr:
      #-
    disk_size: 100G
    ram: 16384
    vcpus: 4
    nameserver:

    # zVM specific parameters - s390x

    zvm_parameters:
      network_mode: vswitch # Supported modes: vswitch
      disk_type: # Supported disk types: fcp, dasd
      vcpus: 4
      memory: 16384
      nameserver:
      subnetmask:
      gateway:

      nodes:
        - name:
          host:
          user:
          password:
          osa:
            ifname: encbdf0
            id: 0.0.bdf0,0.0.bdf1,0.0.bdf2
            ip:

          # Required if disk_type is dasd
          dasd:
            disk_id:

          # Required if disk_type is fcp
          lun:
            - id:
              paths:
                - wwpn:
                  fcp:

        - name:
          host:
          user:
          password:
          osa:
            ifname: encbdf0
            id: 0.0.bdf0,0.0.bdf1,0.0.bdf2
            ip:

          dasd:
            disk_id:

          lun:
            - id:
              paths:
                - wwpn:
                  fcp:


# Section 17 - (Optional) Setup disconnected clusters
# Warning: currently, the oc-mirror plugin is officially downloadable to amd64 only.
13 changes: 13 additions & 0 deletions playbooks/create_agents_and_wait_for_install_complete.yaml
@@ -4,6 +4,19 @@
  roles:
    - boot_agents_hypershift

- name: Boot zvm nodes
  hosts: bastion_hypershift
  tasks:
    - name: Install tessia baselib
      import_role:
        name: install_tessia_baselib
      when: hypershift.compute_node_type | lower == 'zvm'

    - name: Start zvm nodes
      include_tasks: ../roles/boot_zvm_nodes_hypershift/tasks/main.yaml
      loop: "{{ range(hypershift.agents_parms.agents_count | int) | list }}"
      when: hypershift.compute_node_type | lower == 'zvm'

- name: Scale Nodepool & Configure Haproxy on bastion for hosted workers
  hosts: bastion_hypershift
  roles:
12 changes: 10 additions & 2 deletions playbooks/create_hosted_cluster.yaml
@@ -8,9 +8,12 @@
    - name: Setting host
      set_fact:
        host: 'kvm_host_hypershift'
      when: hypershift.compute_node_type | lower != 'zvm'

    - name: Install Prereqs on host
      import_role:
        name: install_prerequisites_host_hypershift
      when: hypershift.compute_node_type | lower != 'zvm'

- name: Create macvtap network
  hosts: kvm_host_hypershift
@@ -20,9 +23,12 @@
      set_fact:
        networking:
          device1: "{{ hypershift.networking_device }}"
      when: hypershift.compute_node_type | lower != 'zvm'

    - name: Creating macvtap network
      import_role:
        name: macvtap
      when: hypershift.compute_node_type | lower != 'zvm'

- name: Create bastion for hypershift
  hosts: kvm_host_hypershift
@@ -33,7 +39,9 @@
    - name: Creating Bastion
      include_role:
        name: create_bastion_hypershift
      when: hypershift.create_bastion == true
      when:
        - hypershift.create_bastion == true
        - hypershift.compute_node_type | lower != 'zvm'

- name: Configuring Bastion
  hosts: bastion_hypershift
@@ -67,7 +75,7 @@
    - create_hcp_InfraEnv_hypershift

- name: Download Required images for booting Agents
  hosts: kvm_host_hypershift
  hosts: "{{ 'kvm_host_hypershift' if 'kvm_host_hypershift' in groups['all'] else 'bastion_hypershift' }}"
  become: true
  roles:
    - setup_for_agents_hypershift
1 change: 1 addition & 0 deletions roles/add_hc_workers_to_haproxy_hypershift/tasks/main.yaml
@@ -12,6 +12,7 @@
      mode tcp
      bind {{ hypershift.bastion_hypershift }}:443
      bind {{ hypershift.bastion_hypershift }}:80
    marker: "# console"

- name: Add Hosted Cluster Worker IPs to Haproxy
  lineinfile:
47 changes: 47 additions & 0 deletions roles/boot_zvm_nodes_hypershift/tasks/main.yaml
@@ -0,0 +1,47 @@
---
- name: Creating agents
  block:
    - name: Getting script for booting
      template:
        src: ../templates/boot_nodes.py
        dest: /root/ansible_workdir/boot_nodes.py

    - name: Debug
      debug:
        msg: "Booting agent-{{ item }}"

    - name: Booting zvm node
      shell: |
        python3 /root/ansible_workdir/boot_nodes.py \
        --zvmname "{{ hypershift.agents_parms.zvm_parameters.nodes[item].name }}" \
        --zvmhost "{{ hypershift.agents_parms.zvm_parameters.nodes[item].host }}" \
        --zvmuser "{{ hypershift.agents_parms.zvm_parameters.nodes[item].user }}" \
        --zvmpass "{{ hypershift.agents_parms.zvm_parameters.nodes[item].password }}" \
        --cpu "{{ hypershift.agents_parms.zvm_parameters.vcpus }}" \
        --memory "{{ hypershift.agents_parms.zvm_parameters.memory }}" \
        --kernel 'file:///var/lib/libvirt/images/pxeboot/kernel.img' \
        --initrd 'file:///var/lib/libvirt/images/pxeboot/initrd.img' \
        --cmdline "$(cat /root/ansible_workdir/agent-{{ item }}.parm)"

    - name: Attaching dasd disk
      shell: vmcp attach {{ hypershift.agents_parms.zvm_parameters.nodes[item].dasd.disk_id }} to {{ hypershift.agents_parms.zvm_parameters.nodes[item].name }}
      when: hypershift.agents_parms.zvm_parameters.disk_type | lower == 'dasd'

    - name: Attaching fcp disks
      shell: vmcp attach {{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].fcp.split('.')[-1] }} to {{ hypershift.agents_parms.zvm_parameters.nodes[item].name }}
      when: hypershift.agents_parms.zvm_parameters.disk_type | lower == 'fcp'

    - name: Wait for the agent to come up
      shell: oc get agents -n "{{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }}" --no-headers -o custom-columns=NAME:.metadata.name,APPROVED:.spec.approved | awk '$2 == "false"' | wc -l
      register: agent_count
      until: agent_count.stdout | int == 1
      retries: 40
      delay: 10

    - name: Get the name of agent
      shell: oc get agents -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} --no-headers -o custom-columns=NAME:.metadata.name,APPROVED:.spec.approved | awk '$2 == "false"'
      register: agent_name

    - name: Approve agents
      shell: oc -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} patch agent {{ agent_name.stdout.split(' ')[0] }} -p '{"spec":{"approved":true,"hostname":"compute-{{item}}.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}","installerArgs":"[\"--append-karg\",\"rd.neednet=1\", \"--append-karg\", \"ip={{ hypershift.agents_parms.zvm_parameters.nodes[item].osa.ip }}::{{ hypershift.agents_parms.zvm_parameters.gateway }}:{{ hypershift.agents_parms.zvm_parameters.subnetmask }}:compute-{{ item }}.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}:{{ hypershift.agents_parms.zvm_parameters.nodes[item].osa.ifname }}:none\", \"--append-karg\", \"nameserver={{ hypershift.agents_parms.zvm_parameters.nameserver }}\", \"--append-karg\",\"rd.znet=qeth,{{ hypershift.agents_parms.zvm_parameters.nodes[item].osa.id }},layer2=1\",\"--append-karg\", {% if hypershift.agents_parms.zvm_parameters.disk_type | lower != 'fcp' %}\"rd.dasd=0.0.{{ hypershift.agents_parms.zvm_parameters.nodes[item].dasd.disk_id }}\"{% else %}\"rd.zfcp={{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].fcp}},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].wwpn }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].id }}\"{% endif %}]"}}' --type merge
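
For reference, when disk_type is dasd the installerArgs assembled above render to appended kernel arguments roughly like the following. The IP, gateway, hostname, OSA IDs, and disk ID are the placeholder values used elsewhere in this commit, not output captured from a real run:

```
rd.neednet=1
ip=192.168.10.21::192.168.10.1:255.255.255.0:compute-0.hcp.example.com:encbdf0:none
nameserver=192.168.10.1
rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1
rd.dasd=0.0.4a82
```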

40 changes: 40 additions & 0 deletions roles/boot_zvm_nodes_hypershift/templates/boot_nodes.py
@@ -0,0 +1,40 @@
#!/usr/bin/env python3
from tessia.baselib.hypervisors.zvm.zvm import HypervisorZvm
import argparse

parser = argparse.ArgumentParser(description="Get the environment.")

parser.add_argument("--zvmname", type=str, help="z/VM Hypervisor name", required=True)
parser.add_argument("--zvmhost", type=str, help="z/VM Hostname or IP", required=True)
parser.add_argument("--zvmuser", type=str, help="z/VM user", required=True)
parser.add_argument("--zvmpass", type=str, help="z/VM user password", required=True)
parser.add_argument("--cpu", type=int, help="number of Guest CPUs", required=True)
parser.add_argument("--memory", type=int, help="Guest memory in MB", required=True)
parser.add_argument("--kernel", type=str, help="kernel URI", required=True, default='')
parser.add_argument("--cmdline", type=str, help="kernel cmdline", required=True, default='')
parser.add_argument("--initrd", type=str, help="Initrd URI", required=True, default='')

args = parser.parse_args()

parameters = {
    'transfer-buffer-size': 8000
}

guest_parameters = {
    "boot_method": "network",
    "storage_volumes": [],
    "ifaces": [],
    "netboot": {
        "cmdline": args.cmdline,
        "kernel_uri": args.kernel,
        "initrd_uri": args.initrd,
    }
}

zvm = HypervisorZvm(args.zvmname, args.zvmhost, args.zvmuser, args.zvmpass, parameters)
zvm.login()
print("Logged in")
zvm.start(args.zvmuser, args.cpu, args.memory, guest_parameters)
print("VM Started")
zvm.logoff()
print("Logged out")
20 changes: 15 additions & 5 deletions roles/create_hcp_InfraEnv_hypershift/tasks/main.yaml
@@ -98,29 +98,39 @@
- name: Creating list of mac addresses
  set_fact:
    agent_mac_addr: []
  when: hypershift.agents_parms.static_ip_parms.static_ip == true
  when:
    - hypershift.agents_parms.static_ip_parms.static_ip == true
    - hypershift.compute_node_type | lower != 'zvm'

- name: Getting mac addresses for agents
  set_fact:
    agent_mac_addr: "{{ hypershift.agents_parms.agent_mac_addr }}"
  when: ( hypershift.agents_parms.static_ip_parms.static_ip == true ) and ( hypershift.agents_parms.agent_mac_addr != None )
  when:
    - ( hypershift.agents_parms.static_ip_parms.static_ip == true ) and ( hypershift.agents_parms.agent_mac_addr != None )
    - hypershift.compute_node_type | lower != 'zvm'

- name: Generate mac addresses for agents
  set_fact:
    agent_mac_addr: "{{ agent_mac_addr + ['52:54:00' | community.general.random_mac] }}"
  when: ( hypershift.agents_parms.static_ip_parms.static_ip == true ) and ( hypershift.agents_parms.agent_mac_addr == None )
  when:
    - ( hypershift.agents_parms.static_ip_parms.static_ip == true ) and ( hypershift.agents_parms.agent_mac_addr == None )
    - hypershift.compute_node_type | lower != 'zvm'
  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"

- name: Create NMState Configs
  template:
    src: nmStateConfig.yaml.j2
    dest: /root/ansible_workdir/nmStateConfig-agent-{{ item }}.yaml
  when: hypershift.agents_parms.static_ip_parms.static_ip == true
  when:
    - hypershift.agents_parms.static_ip_parms.static_ip == true
    - hypershift.compute_node_type | lower != 'zvm'
  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"

- name: Deploy NMState Configs
  command: oc apply -f /root/ansible_workdir/nmStateConfig-agent-{{ item }}.yaml
  when: hypershift.agents_parms.static_ip_parms.static_ip == true
  when:
    - hypershift.agents_parms.static_ip_parms.static_ip == true
    - hypershift.compute_node_type | lower != 'zvm'
  loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"

- name: Wait for ISO to generate in InfraEnv
@@ -1,7 +1,7 @@
{% if hypershift.compute_node_type | lower != 'zvm' %}
[kvm_host_hypershift]
kvm_host_hypershift ansible_host={{ hypershift.kvm_host }} ansible_user={{ hypershift.kvm_host_user }} ansible_become_password={{ kvm_host_password }}



{% endif %}
[bastion_hypershift]
bastion_hypershift ansible_host={{ hypershift.bastion_hypershift }} ansible_user={{ hypershift.bastion_hypershift_user }}
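
For the zVM case (compute_node_type set to zvm), the Jinja guard above drops the [kvm_host_hypershift] group entirely, so the rendered inventory reduces to something like this (IP and user are placeholders):

```
[bastion_hypershift]
bastion_hypershift ansible_host=192.168.10.1 ansible_user=root
```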