SNO deployments with DCI under the hood

SNO Official deployment options

Let’s take a look at the two official methods to deploy SNO:

  1. The official Single Node OpenShift deployment documentation consists of generating a discovery ISO that contains the installation images and basic information about the cluster. This ISO is used to bootstrap the node by attaching it to a server or VM as virtual media (CD-ROM or USB). This method is also known as installation through the Assisted Installer.

  2. Another deployment method is to use Advanced Cluster Management (ACM) with Zero Touch Provisioning (ZTP). This option requires installing the ACM, OpenShift Data Foundation (ODF), and GitOps operators in a running OCP cluster (with at least three nodes). It uses the Assisted Installer mechanisms to install one or multiple separate Single Node OpenShift clusters via virtual media or the Preboot eXecution Environment (PXE).
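As an illustration of the virtual-media approach, attaching a discovery ISO to a server's BMC is commonly done through the standard Redfish VirtualMedia API. The sketch below is hypothetical: the BMC URL, credentials, ISO location, and the `Managers/1/VirtualMedia/CD` resource path all vary by vendor and must be adapted to your hardware.

```shell
# Hypothetical BMC endpoint and ISO location -- replace with your own values.
BMC="https://bmc.example.com"
ISO_URL="http://cache.example.com/discovery.iso"

# Standard Redfish VirtualMedia.InsertMedia payload.
PAYLOAD=$(printf '{"Image": "%s", "Inserted": true}' "$ISO_URL")

# Attach the discovery ISO as virtual media; outside a lab this request
# simply fails to connect, hence the fallback message.
curl -k -u admin:password -X POST \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" \
  --connect-timeout 5 \
  "$BMC/redfish/v1/Managers/1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia" \
  || echo "BMC not reachable (expected outside a lab)"
```

After inserting the media, the server's boot order is typically patched (also via Redfish) to boot from CD once, and the node then boots into the discovery image.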

What DCI does to deploy SNO

SNO deployment via DCI is a very opinionated collection of playbooks that facilitates automation and customization with as little hardware as possible; it can even be used on hardware without virtual media support. It uses the traditional PXE method to perform the installation, as this has proven to be the most compatible method across the platforms our partners have available for testing.

DCI takes care of orchestrating the multiple phases needed to set up the provisioner node, deploy SNO, and run post-deployment operations such as installing operators or running tests. It can install SNO either on libvirt or on baremetal. OCP upgrades can also be executed on SNO nodes deployed via DCI.
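On the provisioner side, a DCI job is driven by a settings file (typically `/etc/dci-openshift-agent/settings.yml`). A minimal sketch might look like the following; the exact key names and values here are assumptions and should be checked against the samples shipped with the DCI OpenShift Agent:

```
# Hypothetical settings sketch -- verify key names against the agent's samples.
dci_topic: OCP-4.14          # OCP release stream to deploy (assumed value)
dci_tags:
  - sno
  - debug
```

The agent reads this file when a job is started and drives all the phases (provisioner setup, deployment, post-deployment) from it.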

Prerequisites

SNO deployments with DCI mainly require the installation of the DCI OpenShift Agent and a provisioner node.

For Virtual SNO these are the main requirements:

For Baremetal SNO the main requirements are the following:

Virtual SNO main aspects

A fully detailed description of the deployment steps can be found here, and an example inventory is provided in the DCI OpenShift Agent. Because it is a virtual deployment, predefined values are configured by default, and the cluster is only accessible from the provisioner node (the host server). The installation proceeds as follows:
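For reference, a libvirt-based SNO inventory is little more than a pointer to the provisioner host plus a handful of cluster-level variables. The fragment below is a hypothetical sketch, not the shipped sample; take the authoritative variable names from the example inventory in the DCI OpenShift Agent:

```
# Hypothetical inventory sketch for SNO on libvirt.
[all:vars]
cluster=my-sno            ; cluster name (assumed variable name)
domain=example.com        ; base DNS domain (assumed variable name)

[provisioner]
provisioner.example.com   ; host server that will run the SNO VM
```

Since the defaults cover networking and VM sizing, this is often all that needs to be filled in for a virtual deployment.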

In summary, the steps listed above are what the SNO documentation describes, but automated through DCI.

Baremetal SNO main aspects

A full picture of the variables, components, and installation requirements is described here. As with the virtual deployment, the important part is the set of variables defined in the inventory; an example inventory is provided in the DCI OpenShift Agent as a reference.
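By way of illustration, the baremetal case adds the target node's management and network details on top of the cluster variables. Again, this is a hypothetical sketch with assumed variable names; the shipped example inventory is the authoritative reference:

```
# Hypothetical inventory sketch for SNO on baremetal.
[sno]
sno-node bmc_address=10.0.0.10 bmc_user=admin bmc_password=secret mac=aa:bb:cc:dd:ee:ff

[provisioner]
provisioner.example.com   ; node that serves PXE/TFTP and the HTTP cache
```

The BMC credentials let DCI power-cycle the node and force a network boot, while the MAC address ties the node to its PXE boot entry.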

At first glance, the baremetal deployment through DCI looks very different from the official documentation and the virtual installation, but it uses the same bits under the hood. Instead of pulling the live image and building a discovery ISO with an embedded Ignition config, the DCI agent pulls the kernel, ramdisk, and rootfs that usually come inside the live image, and generates a GRUB configuration to boot via iPXE. The GRUB file references the images stored in the TFTP service on the provisioner node, while the Ignition file is stored in an HTTP cache service.
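Concretely, the generated boot configuration ends up looking something like the fragment below. The hostnames and file names are placeholders for the TFTP and HTTP cache services on the provisioner node, but the kernel arguments (`coreos.live.rootfs_url`, `ignition.config.url`) are the standard RHCOS live-boot parameters:

```
# Sketch of a generated GRUB boot entry (paths and hostnames are placeholders).
menuentry 'Install RHCOS for Single Node OpenShift' {
  linux rhcos-live-kernel-x86_64 ip=dhcp rd.neednet=1 \
    coreos.live.rootfs_url=http://provisioner.example.com:8080/rhcos-live-rootfs.img \
    ignition.config.url=http://provisioner.example.com:8080/sno.ign
  initrd rhcos-live-initramfs.x86_64.img
}
```

The node fetches the kernel and initramfs over TFTP, pulls the rootfs and Ignition config over HTTP, and from there the installation is the same bootstrap-in-place flow that the discovery ISO would trigger.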

To summarize, these are the main steps during the deployment:

Conclusions

SNO allows the deployment of an OCP cluster without many complications and without the need for a mini data center. Some of the recommended use scenarios are summarized below. Of course, I invite you to give the deployment via DCI a try and let us know your feedback or comments.