A couple of years ago, we added support for installations using the on-premise Assisted Installer (and even wrote a blog post on how to start a virtual lab). Around the same time, the OCP developers added a much more streamlined install method to the openshift-install binary itself. It works mostly out of the box, so we just had to integrate it into our supported methods, and it works so well that it has replaced the old assisted on-premise installation method in the DCI OCP agent.
In this post we'll look at how to adjust to the Agent-Based Installer method using the DCI OpenShift Agent. Spoiler alert: very few things change.
Prerequisites
Going by the hardware requirements listed in our official documentation, we can estimate the size of the server that will host our virtualized environment; it depends on how many nodes you want in your test lab:
- Single Node OpenShift
- Minimum 3-node OpenShift cluster
- Split 3+3 node OpenShift cluster
For a multi-node cluster, each VM needs 8 cores and 16 GB of RAM, while the SNO configuration requires one bigger VM (16 cores and 64 GB of RAM) to house the whole thing in a single node. For the purpose of this exercise we'll pick the control-plane-only (3-node) cluster so it fits within a regular-sized server.
Regarding software, our main lab server (our "jumpbox" in DCI lingo) needs RHEL 8 with a valid subscription, EPEL, and the DCI release/repositories.
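If you'd rather prepare the base system by hand instead of via the playbook below, the gist is roughly this (a sketch for RHEL 8, using the same release URLs the playbook uses; substitute your own org ID and activation key):

```
$ sudo subscription-manager register --org <org_id> --activationkey <key>
$ sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
$ sudo dnf install -y https://packages.distributed-ci.io/dci-release.el8.noarch.rpm
```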
Setup DCI Lab
Once we have our basic RHEL 8 system installed, we can proceed with the rest of the setup. We've covered roughly the same process in a few previous posts, but it all boils down to:
- Follow the Virtual Lab Quickstart manually or
- Cheat and use the example ansible playbook
For the purposes of this blog post, we're going to be lazy and take the easy path (since I already did the work for you!).
Tip
Feel free to modify the provided example playbook to suit your needs! There should be a similar playbook in the DCI OCP Agent samples directory, possibly even more up to date.
Let's examine the playbook piece by piece. The first block is where our variables are defined:
```
  6    vars:
  7      rhsm_org_id: "{{ vault_rhsm_org_id }}" # from https://console.redhat.com
  8      rhsm_activation_key: "{{ vault_rhsm_activation_key }}" # from https://console.redhat.com/insights/connector/activation-keys
  9      dci_client_id: "{{ vault_dci_client_id }}" # from https://www.distributed-ci.io/remotecis
 10      dci_api_secret: "{{ vault_dci_api_secret }}" # from https://www.distributed-ci.io/remotecis
 11      dci_cs_url: "https://api.distributed-ci.io/"
 12      insights_tags: "{{ my_insights_tags }}" # a dictionary
 13      extra_rhel_repos:
 14        - "codeready-builder-for-rhel-{{ ansible_distribution_major_version }}-{{ ansible_architecture }}-rpms"
 15        - ansible-2-for-rhel-{{ ansible_distribution_major_version }}-{{ ansible_architecture }}-rpms
 16      third_party_releases:
 17        dci:
 18          url: "https://packages.distributed-ci.io/dci-release.el{{ ansible_distribution_major_version }}.noarch.rpm"
 19          key: https://packages.distributed-ci.io/RPM-GPG-KEY-DCI-2024
 20        epel:
 21          url: "https://dl.fedoraproject.org/pub/epel/epel-release-latest-{{ ansible_distribution_major_version }}.noarch.rpm"
 22          key: "https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-{{ ansible_distribution_major_version }}"
```
A couple of things to note here:
- The RHSM information may not be needed; maybe you have your own way to register and subscribe your systems to RHSM. In my particular case I have these values stored in an ansible vault (see the sketch after this list), but your mileage may vary
- The DCI API client ID and secret should be fetched from https://distributed-ci.io/remotecis
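If you want to keep the sensitive values in a vault like I do, a minimal sketch (vault.yml is just an example file name; the variable names match the vault_* references above):

```
$ ansible-vault create vault.yml
# your editor opens; define the vault_* variables there, e.g.:
#   vault_rhsm_org_id: "..."
#   vault_rhsm_activation_key: "..."
#   vault_dci_client_id: "remoteci/..."
#   vault_dci_api_secret: "..."
```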
Now, the first part of the playbook is basic OS-level setup, like subscribing to RHSM or enabling Insights; no need to go too deep into those until line 56:
```
 56      - name: Add extra repos
 57        ansible.builtin.command:
 58          cmd: subscription-manager repos --enable={{ item }}
 59        changed_when: false
 60        loop: "{{ extra_rhel_repos }}"
```
Here we enable some extra repositories, as defined in the variables at the top. This bit is important because the DCI agent requires ansible <= 2.9.27 (as of February 2025), and the ansible-2-for-rhel repository is what provides that version.
We also install the RPM signing keys and release packages for EPEL and DCI, then finally install the appropriate version of ansible and lock it to avoid accidental upgrades in the future. We don't technically need the lock, but I like to avoid possible headaches when/if I update the packages on my system.
```
 84      - name: Install ansible packages
 85        ansible.builtin.dnf:
 86          name:
 87            - dnf-plugin-versionlock
 88            - ansible-2.9.27
 89          state: present
 90        when: not ansible_check_mode
 91
 92      - name: Lock ansible version
 93        community.general.dnf_versionlock:
 94          name: ansible
 95          state: present
 96        ignore_errors: "{{ ansible_check_mode }}"
```
With that done, we install the dci-openshift-agent package, which has provisions in place to pull in the supported versions of every requirement, as long as the correct repos are configured.
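That step, by the way, amounts to nothing more exotic than an ordinary package install; in shell terms:

```
$ sudo dnf install -y dci-openshift-agent
```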
After that, there's a bit of extra setup added as a convenience: generating an SSH key pair for the dci-openshift-agent user and adding it to authorized_keys so you're able to talk to localhost. In this particular case our jumpbox actually is localhost, so we need to be able to SSH back in as our own user.
```
104      - name: Create ~dci-openshift-agent/.ssh
105        ansible.builtin.file:
106          path: "{{ '~dci-openshift-agent/.ssh' | expanduser }}"
107          state: directory
108          mode: '0750'
109          owner: dci-openshift-agent
110          group: dci-openshift-agent
111
112      - name: Create SSH key
113        vars:
114          _type: rsa
115        community.crypto.openssh_keypair:
116          path: "{{ '~dci-openshift-agent' | expanduser }}/.ssh/id_{{ _type }}"
117          state: present
118          mode: '0600'
119          owner: dci-openshift-agent
120          group: dci-openshift-agent
121          type: "{{ _type }}"
122        register: keypair
123        ignore_errors: "{{ ansible_check_mode }}"
124
125      - name: Keyscan localhost
126        ansible.builtin.command: ssh-keyscan localhost
127        register: localhost_key
128        changed_when: false
129
130      - name: Copy localhost keys to known_hosts
131        ansible.builtin.copy:
132          dest: "{{ '~dci-openshift-agent/.ssh/known_hosts' | expanduser }}"
133          content: "{{ localhost_key.stdout }}"
134          mode: '0640'
135          owner: dci-openshift-agent
136          group: dci-openshift-agent
137        ignore_errors: "{{ ansible_check_mode }}"
138
139      - name: Add generated keypair to authorized_keys
140        ansible.builtin.lineinfile:
141          path: "{{ '~dci-openshift-agent/.ssh/authorized_keys' | expanduser }}"
142          line: "{{ keypair.public_key }}"
143          state: present
144          create: true
145          mode: '0640'
```
Finally, we create the Remote CI authentication file for the OCP agent in /etc/dci-openshift-agent/dcirc.sh and another one for DCI Pipeline in /etc/dci-pipeline/credentials.yml:
```
150      - name: Create dcirc.sh file
151        ansible.builtin.copy:
152          dest: /etc/dci-openshift-agent/dcirc.sh
153          content: |
154            export DCI_CLIENT_ID={{ dci_client_id }}
155            export DCI_API_SECRET={{ dci_api_secret }}
156            export DCI_CS_URL={{ dci_cs_url }}
157          mode: '0644'
158          owner: dci-openshift-agent
159          group: dci-openshift-agent
160
161      - name: Create credentials file
162        ansible.builtin.copy:
163          dest: /etc/dci-pipeline/credentials.yml
164          content: |
165            ---
166            DCI_CLIENT_ID: {{ dci_client_id }}
167            DCI_API_SECRET: {{ dci_api_secret }}
168            DCI_CS_URL: {{ dci_cs_url }}
169          mode: '0644'
```
As you can see, the example playbook follows the documentation step by step and just does a few extra things for convenience.
Here's a short screencast showing the playbook in action; depending on how many things you changed, the output should be similar. Run it, and after a few minutes you should have a general-purpose DCI jumpbox with the appropriate configuration to run the OCP agent.
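For reference, if you went the vault route, the invocation ends up looking something like this (jumpbox-setup.yml is a hypothetical name; use whatever you called your copy of the example playbook):

```
$ ansible-playbook --extra-vars @vault.yml --ask-vault-pass jumpbox-setup.yml
```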
Generating Ansible Inventory
The last piece needed to complete your lab is an inventory file that controls our VMs. Specifically for the ABI method, there's an example installed as part of the OCP agent package. You'll need to execute a tiny playbook on your jumpbox (which should already have the package installed) as the dci-openshift-agent user: there's a samples/abi_on_libvirt/parse-template.yml playbook in the user's home directory, and executing it with the desired configuration will leave you with an ansible inventory that you can put in your /etc/dci-openshift-agent directory. Today we're setting up an inventory with a control-plane-only configuration, so we execute the playbook like so:
```
$ whoami
dci-openshift-agent
$ cd samples/abi_on_libvirt
$ ansible-playbook --inventory dev/controlplane parse-template.yml
```
Tip
Feel free to explore the other inventories and adjust the resulting inventory to your liking, e.g. the number/distribution of nodes, VM sizes, etc.
Once you have a hosts file, copy it to /etc/dci-openshift-agent/hosts.
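A quick sanity check doesn't hurt either; ansible-inventory will confirm the file parses and show you the resulting group layout:

```
$ sudo cp hosts /etc/dci-openshift-agent/hosts
$ ansible-inventory --inventory /etc/dci-openshift-agent/hosts --graph
```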
Installing Underlying VM Infrastructure
Since we're mimicking a real-world physical cluster (and its accompanying provisioning infrastructure) on a single machine with virtual machines, we also need to mimic the usual infrastructure required when installing any OpenShift cluster, namely NTP/DNS/HTTP servers.
There is yet another example playbook and, again, there is probably a more up-to-date version in the OCP agent samples directory.
This playbook leverages quite a few roles from the excellent Red Hat CI OCP Ansible collection with just a tiny bit of extra logic on top.
Based on the inventory you created in the last step (and whatever adjustments you made), this playbook will:
- Install an HTTP store for you (container based)
- Install a local container registry (container based, optional)
- Set up and create a full libvirt environment
- Set up DNS records for your VMs
- Set up an NTP server for your VMs
- Install the Sushy Tools emulator, to control your VMs through a Redfish-like interface (see the quick check after this list)
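Speaking of that last item: once the playbook (below) has run, you can sanity-check the emulator by asking it for the systems it manages. This is a sketch assuming sushy-tools listens on its default port 8000 (pipe through jq, if installed, for readability):

```
$ curl -s http://localhost:8000/redfish/v1/Systems | jq .
```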
I'm not going too deep into this playbook because most of the work is done in the imported roles; suffice it to say that once you're happy with your /etc/dci-openshift-agent/hosts inventory, you can run the playbook like this:
```
$ ansible-playbook --inventory /etc/dci-openshift-agent/hosts vm-infrastructure.yml
```
And here's a short screencast of what (roughly) this playbook should accomplish:
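You can also double-check the DNS part by querying the records the playbook created; a sketch, substituting your actual cluster name and base domain:

```
# the API record and the *.apps wildcard should both resolve
$ dig +short @localhost api.<cluster_name>.<base_domain>
$ dig +short @localhost test.apps.<cluster_name>.<base_domain>
```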
Hint
Please adjust both your inventory and the playbook as necessary; most of the defaults are "fine" but may not suit your environment 100%.
DCI Pipeline
The last step is to create and adjust your pipeline file. An agent-based install pipeline can be as simple as this:
```
- name: my-openshift-abi
  stage: ocp
  ansible_playbook: /usr/share/dci-openshift-agent/dci-openshift-agent.yml
  ansible_cfg: /usr/share/dci-openshift-agent/ansible.cfg
  ansible_inventory: /etc/dci-openshift-agent/hosts
  dci_credentials: /etc/dci-pipeline/credentials.yml
  topic: OCP-4.17
  components:
    - ocp
```
And that's it: save that pipeline.yml wherever you like (since we're referencing absolute paths) and let it run:
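Kicking it off should be a single command; the dci-pipeline tool accepts pipeline files as arguments:

```
$ dci-pipeline /path/to/pipeline.yml
```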
Note
The above screencast is sped up; the actual time to completion will vary depending on your hardware.
Conclusion
If you followed all the steps so far (and hit no bugs along the way), you should end up with a local lab running a (virtual) Red Hat OpenShift cluster. You can now interact with it just like any other OCP cluster, and you can run both the DCI OpenShift and the DCI OpenShift App agents on it, based on the exact same inventory.
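For example, once the install finishes you can point oc at the generated kubeconfig and poke around. A sketch; the exact kubeconfig path depends on where your run stores cluster artifacts:

```
$ export KUBECONFIG=/path/to/kubeconfig   # path depends on your setup
$ oc get nodes
$ oc get clusterversion
```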