Omni Documentation

Welcome to the Omni user guide! This guide shows you everything from getting started to more advanced deployments with Omni.

What is Omni?

Omni is a Kubernetes management platform that simplifies the creation and management of Kubernetes clusters in any environment, providing a simple, secure, and resilient platform. It automates cluster creation, management, and upgrades, and integrates Kubernetes and Omni access with enterprise identity providers. While Omni does provide a powerful UI, tight integration with Talos Linux means the platform is 100% API-driven from Linux to Kubernetes to Omni.

Simple

Omni automates the creation of a highly available API endpoint, transparently provides secure encryption, and automates Kubernetes and OS upgrades. Omni works just as well on the edge as it does for large data centers.

Omni is also available for license for on-premises installations.

Secure

Omni creates clusters with both Kubernetes and the OS configured for best-practice security. All traffic to Omni is WireGuard-encrypted. Optionally, traffic between the cluster nodes can be encrypted, allowing clusters to span insecure networks. Integration with enterprise identity providers ensures that even admin-level kubeconfig is validated against current user access lists.

Is Omni for me?

Omni is excellent for managing clusters in just about any environment you have. Machines in the cloud, on-premises, at the edge, or at home - they can all be managed with Omni. Unique to Omni, you can even create hybrid clusters consisting of machines in disparate locations around the world.

Some common use cases are:

  • On-premise bare metal clusters that can be scaled up with machines in the cloud
  • Edge clusters that are supported by machines in the data center and/or cloud
  • Mixed cloud
  • Single node edge clusters

Ready to get started?

Sign Up to start your free 2-week trial and start exploring today!

1 - Tutorials

1.1 - Getting Started with Omni

A short guide on setting up a Talos Linux cluster with Omni.

In this Getting Started guide we will create a high availability Kubernetes cluster in Omni. This guide will use UTM/QEMU, but the same process will work with bare metal machines, cloud instances, and edge devices.

Prerequisites

Network access

If your machines have outgoing access, you are all set. At a minimum all machines should have outgoing access to the Wireguard endpoint shown on the Home panel, which lists the IP address and UDP port that machines should be able to reach. Machines need to be able to reach that address both on the UDP port specified, and on TCP port 443.

Some virtual or physical machines

The simplest way to experience Omni is to be able to fire up virtual machines. For this tutorial, we suggest any virtualization platform that can boot off an ISO (UTM, Proxmox, Fusion, etc.), although any cloud platform can also be used with minor adjustments. Bare metal can also be used, of course, but is often slower to boot and not everyone has spare physical servers around.

talosctl

talosctl is the command line tool for issuing API calls and operating system commands to machines in an Omni cluster. It is not required - cluster management is done via the Omni UI or omnictl, but talosctl can be useful to investigate the state of the nodes and explore functionality.

Download talosctl:

curl -sL https://talos.dev/install | sh

You can also download talosctl from within Omni, by selecting the “Download talosctl” button on the right hand side of the Home screen, then selecting the version and platform of talosctl desired. You should rename the downloaded file to talosctl, make it executable, and copy it to a location on your PATH.
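The rename, chmod, and move steps can be sketched as follows. This is illustrative only: the download filename (talosctl-darwin-arm64) and the install directory are assumptions for one platform, and a stub file stands in for the real download so the commands can be tried safely.

```shell
# Illustrative sketch only: a stub file stands in for the real download.
cd "$(mktemp -d)"
mkdir -p bin                        # stand-in for a directory on your PATH
touch talosctl-darwin-arm64         # stand-in for the downloaded binary
mv talosctl-darwin-arm64 talosctl   # rename the downloaded file
chmod +x talosctl                   # make it executable
mv talosctl bin/                    # copy it to a location on your PATH
ls -l bin/talosctl
```

In practice, replace the stub with the file you downloaded and `bin` with a directory such as /usr/local/bin.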

kubectl

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You use kubectl to deploy applications, inspect and manage cluster resources, view logs, etc.

Download kubectl via one of the methods outlined in the documentation.

Omni integrates all operations (for Omni itself, Kubernetes, and Talos Linux) against the authentication configured for Omni (which may be GitHub, Google, enterprise SAML, etc.). Thus, in order to use kubectl with Omni, you need to install the oidc-login plugin per the documentation.

Note: When using Homebrew on Macs with M1 chips, there have been reports of issues with the plugin being installed to the wrong path and not being found. You may find it simpler to download the file from GitHub and manually put the kubelogin binary on your PATH under the name kubectl-oidc_login so that the kubectl plugin mechanism can find it.
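The manual workaround can be sketched like this; the paths are assumptions, and a stub script stands in for the real kubelogin binary. The key point is the filename: kubectl discovers plugins as kubectl-<name> executables on PATH, so the plugin must be installed as kubectl-oidc_login.

```shell
# Sketch with a stub standing in for the downloaded kubelogin binary.
PLUGIN_DIR="$(mktemp -d)"                      # substitute a directory on your PATH
printf '#!/bin/sh\necho kubelogin-stub\n' > "$PLUGIN_DIR/kubelogin"
# kubectl maps the executable name kubectl-oidc_login to "kubectl oidc-login"
mv "$PLUGIN_DIR/kubelogin" "$PLUGIN_DIR/kubectl-oidc_login"
chmod +x "$PLUGIN_DIR/kubectl-oidc_login"
export PATH="$PLUGIN_DIR:$PATH"
command -v kubectl-oidc_login                  # verify kubectl can now find it
```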

omnictl

omnictl is also an optional binary. Almost all cluster operations can be done via the Omni Web UI, but omnictl is used for advanced operations, to integrate Omni into CI workflows, or simply if you prefer a CLI to a UI.

Download omnictl from within Omni: on the Home tab, click the “Download omnictl” button on the right hand side, select the appropriate platform, and the “Download” button. Then ensure to rename the binary, make it executable, and copy to a location on your path. For example:

Downloads % mv omnictl-darwin-arm64 omnictl
Downloads % chmod +x omnictl
Downloads % mv omnictl /usr/local/bin

Download Installation Media

Omni is a BYO Machine platform - the only thing you need to do is boot your machines off an Omni image. The Omni image has the necessary credentials and endpoints built into it, so you can use it to boot all your machines. To download the installation media, go to the Home screen in Omni, and select “Download Installation Media” from the right hand side. Select the appropriate media and platform type - e.g. I will select ISO (arm64), as I am going to boot a virtual machine within UTM on an Apple M1.

Images exist for many platforms, but you will have to follow the specific installation instructions for that platform (which often involve copying the image to S3 type storage, creating a machine image from it, etc.)

Boot machines off the downloaded image

Using your hypervisor, create at least one virtual machine with 2GB of memory; four machines are suggested for a highly available cluster. Have each virtual machine boot off the ISO image you just downloaded, and start the virtual machines.

After a few seconds, the machines should show in the Machines panel of Omni, with the available tag. They will also have tags showing their architecture, memory, cores and other information.

Create Cluster

Click “Clusters” on the left navigation panel, then “Create Cluster” in the top right. You can give your cluster a name, select the version of Talos Linux to install, and the version of Kubernetes. You can also specify any Patches that should be applied in creating your cluster, but in most cases these are not needed to get started. There are other options on this screen - encryption, backups, machine sets, etc - but we will skip those for this tutorial.

In the section headed “Available Machines”, select at least one machine to be the control plane, by clicking CP. (Ideally, you will have 3 control plane nodes.) Select one machine to be a worker, by clicking W0 next to the machine.

Then click Create Cluster. Your cluster is now being created, and you will be taken to the Cluster Overview page. From this page you can download the kubeconfig and talosconfig files for your cluster, by clicking the buttons on the right hand side.

Access Kubernetes

You can query your Kubernetes cluster using normal Kubernetes operations:

kubectl --kubeconfig ./talos-default-kubeconfig.yaml get nodes

Note: you will have to change the referenced kubeconfig file depending on the name of the cluster you created.

The first time you use the kubectl command to query a cluster, a browser window will open requiring you to authenticate with your identity provider (Google or GitHub most commonly). If you get the error error: unknown command "oidc-login" for "kubectl", followed by Unable to connect to the server, then you need to install the oidc-login plugin as noted above.

Access Talos commands

You can explore Talos API commands. Again, the first time you access the Talos API, a browser window will start to authenticate your request. The downloaded talosconfig file for the cluster includes the Omni endpoint, so you do not need to specify endpoints, just nodes.

talosctl --talosconfig ./talos-default-talosconfig.yaml --nodes 10.5.0.2 get members

In the above example, change the name of the talosconfig file if you changed the cluster name from the default, and replace the node IP with the actual IP or name of one of the nodes you created (shown in Omni).

Explore Omni

Now you have a complete cluster, with a high-availability Kubernetes API endpoint running on the Omni infrastructure, where all authentication is tied in to your enterprise identity provider. It’s a good time to explore all that Omni can offer, including other areas of the UI such as:

  • etcd backup and restores
  • simple cluster upgrades of Kubernetes and Operating System
  • proxying of workload HTTP access
  • simple scaling up and down of clusters
  • the concept of Machine Sets, that let you manage your infrastructure by classes

And if you want to manage your clusters and infrastructure declaratively, as code, check out Cluster Templates.

Destroy the Cluster

When you are all done, you can remove the cluster by clicking “Destroy Cluster”, in the bottom right of the Cluster Overview panel. This will wipe the machines and return them to the Available state.

Cluster example

We have a complete example of a managed cluster complete with a monitoring stack and application management. It can be found in our community contrib repo.

Components

The contrib example includes:

Use

You will need to copy the contents of the omni directory to a git repository that can be accessed by the cluster you create, update the ArgoCD ApplicationSet template to reference your new git repo, and regenerate the ArgoCD bootstrap patch.

sed -i 's|https://github.com/siderolabs/contrib.git|<your-git-repo>|' apps/argocd/argocd/bootstrap-app-set.yaml
kustomize build apps/argocd/argocd | yq -i 'with(.cluster.inlineManifests.[] | select(.name=="argocd"); .contents=load_str("/dev/stdin"))' infra/patches/argocd.yaml

With these changes made you should commit the new values and push them to the git repo.

Next you should register your machines with Omni (see guides for AWS, GCP, Azure, Hetzner, and bare metal) and create machine classes to match your hardware. By default, the example cluster template is configured to use 3 instances of a machine class named omni-contrib-controlplane, and all instances that match a machine class called omni-contrib-workers. You can modify these settings in cluster-template.yaml, but keep in mind that for Rook/Ceph to work you will need to use at least 3 instances with additional block devices for storage.

Once machines are registered you can create the cluster using the cluster template.

omnictl template sync --file infra/cluster-template.yaml

This should create the cluster as described, bootstrap ArgoCD, and begin installing applications from your repo. Depending on your infrastructure, it should take 5-10 mins for the cluster to come fully online with all applications working and healthy. Monitoring can be viewed directly from Omni using the workload proxy feature, with links to Grafana and Hubble found on the left-hand side of the Omni UI.

1.2 - Upgrading Omni Clusters

A guide to keeping your clusters up to date with Omni.

Introduction

Omni makes keeping your cluster up-to-date easy - which is good, as it is important to stay current with Talos Linux and Kubernetes releases, to ensure you are not exposed to already fixed security issues and bugs. Keeping your clusters up-to-date involves updating both the underlying operating system (Talos Linux) and Kubernetes.

Upgrading the Operating System

In order to update the Talos Linux version of all nodes in a cluster, navigate to the overview of the cluster you wish to update. (For example, click the cluster name in the Clusters panel.) If newer Talos Linux versions are available, there will be an indication in the far right, where the current cluster Talos version is listed. Clicking that icon, or the “Update Talos” button in the lower right, will allow you to select the new version of Talos Linux that should be deployed across all nodes of the cluster.

Select the new version, and then “Upgrade” (or “Downgrade”, if you are selecting an older version than currently deployed.) (Omni will ensure that the Kubernetes version running in the cluster is compatible with the selected version of Talos Linux.)

Note: the recommended upgrade path is to upgrade to the latest patch release of each intermediate minor release.
For example, if upgrading from Talos 1.5.0 to Talos 1.6.2, the recommended upgrade path would be:

  • upgrade from 1.5.0 to the latest patch of 1.5 - to v1.5.5
  • upgrade from v1.5.5 to the latest patch of 1.6 - to v1.6.2

Omni will then cycle through all nodes in the cluster, safely updating them to the selected version of Talos Linux. Omni will update the control plane nodes first. (Omni ensures the etcd cluster is healthy and will remain healthy after the node being updated leaves the etcd cluster, before allowing a control plane node to be upgraded.)

Omni will drain and cordon each node, update the OS, and then un-cordon the node. Omni always updates nodes with the Talos Linux flag --preserve=true, keeping ephemeral data.

NOTE: If any of your workloads are sensitive to being shut down ungracefully, be sure to use the lifecycle.preStop Pod spec.
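For example, a minimal Pod spec with a preStop hook might look like this (the names, image, and sleep duration are placeholders; a real hook would drain connections or flush state before exit):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-app                  # placeholder name
spec:
  terminationGracePeriodSeconds: 60   # allow time for the hook plus shutdown
  containers:
    - name: app
      image: example.com/app:latest   # placeholder image
      lifecycle:
        preStop:
          exec:
            # give the process time to finish in-flight work before SIGTERM
            command: ["/bin/sh", "-c", "sleep 10"]
```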

Kubernetes Upgrades

As with the Talos Linux version, Omni will notify you on the right hand side of the cluster overview if there is a new version of Kubernetes available. You may click either the Upgrade icon next to the Kubernetes version, or the Update Kubernetes button on the lower right of the cluster overview. Kubernetes upgrades are done non-disruptively to workloads and are run in several phases:

  • Images for new Kubernetes components are pre-pulled to the nodes to minimize downtime and test for image availability.
  • New static pod definitions are rendered on the configuration update which is picked up by the kubelet. The command waits for the change to propagate to the API server state.
  • The command updates the kube-proxy daemonset with the new image version.
  • On every node in the cluster, the kubelet version is updated.

Note: The upgrade operation never deletes any resources from the cluster: obsolete resources should be deleted manually.

Applying changed Kubernetes Manifests

Unlike the Talos Linux command talosctl upgrade-k8s, Omni does not automatically apply updates to Kubernetes bootstrap manifests on a Kubernetes upgrade. This is to prevent Omni overwriting changes to the bootstrap manifests that you applied manually. (Talos Linux has a --dry-run feature on the upgrade command that shows you changes before the upgrade - Omni shows you the changes after the upgrade, but before they are applied.) Thus, after each Kubernetes upgrade, it is recommended to examine the Bootstrap Manifests of the cluster (as shown in the left hand navigation) and apply the changes, if they are appropriate.

Locking nodes

Omni allows you to control which nodes are upgraded during Talos or Kubernetes upgrade operations. You can lock nodes, which prevents them from receiving configuration updates, upgrades and downgrades. This allows you to ensure that new versions of Talos Linux or Kubernetes, or new config patches, are rolled out in a safe and controlled manner. If you cannot do a blue/green deployment with different clusters, you can roll out a new Kubernetes or Talos Linux release, or config patch, to just some of the nodes in your cluster. Once you have validated your applications perform correctly on the new versions, you can unlock all the nodes, and allow them to be updated also.

Note: you cannot lock control plane nodes, as it is not supported to have the Kubernetes version of a worker higher than that of the control plane nodes in a cluster - this may result in API version incompatibility.

To lock a node, simply select the Lock icon to the right of the node on the Cluster Overview screen, or use the omnictl cluster machine lock command. Upgrade and config patch operations will apply to all other nodes in the cluster, but locked nodes will retain their configuration at the time of locking. Unlock the nodes to allow pending cluster updates to complete.

1.3 - Installing Airgapped Omni

A tutorial on installing Omni in an airgapped environment.

Prerequisites

  • DNS server
  • NTP server
  • TLS certificates

Installed on the machine running Omni:

  • genuuid
    • Used to generate a unique account ID for Omni.
  • Docker
    • Used for running the suite of applications
  • Wireguard
    • Used by Siderolink

Overview

Gathering Dependencies

In this package, we will be installing:

  • Gitea
  • Keycloak
  • Omni

To keep everything organized, I am using the following directory structure to store all the dependencies and I will move them to the airgapped network all at once.

NOTE: The empty directories will be used for the persistent data volumes when we deploy these apps in Docker.

airgap
├── certs
├── gitea
├── keycloak
├── omni
└── registry
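The layout above can be created in one step (the airgap root directory name is just the convention used in this tutorial):

```shell
# Create the empty directory layout used throughout this tutorial.
mkdir -p airgap/certs airgap/gitea airgap/keycloak airgap/omni airgap/registry
ls airgap
```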

Generate Certificates

TLS Certificates

This tutorial will involve configuring all of the applications to be accessed via HTTPS with signed .pem certificates generated with certbot. There are many methods of configuring TLS certificates; this guide will not cover how to generate your own, but there are many resources available online to help with this subject if you do not have certificates already.

Omni Certificate

Omni uses etcd to store the data for our installation and we need to give it a private key to use for encryption of the etcd database.

  1. First, generate a GPG key.
gpg --quick-generate-key "Omni (Used for etcd data encryption) how-to-guide@siderolabs.com" rsa4096 cert never

This will generate a new GPG key pair with the specified properties.

What’s going on here?

  • quick-generate-key allows us to quickly generate a new GPG key pair.
  • "Omni (Used for etcd data encryption) how-to-guide@siderolabs.com" is the user ID associated with the key, which generally consists of a real name, a comment, and an email address for the user.
  • rsa4096 specifies the algorithm type and key size.
  • cert means this key can be used to certify other keys.
  • never specifies that this key will never expire.
  2. Add an encryption subkey

We will use the fingerprint of this key to create an encryption subkey.

To find the fingerprint of the key we just created, run:

gpg --list-secret-keys

Next, run the following command to create the encryption subkey, replacing $FPR with your own key's fingerprint.

gpg --quick-add-key $FPR rsa4096 encr never

In this command:

  • $FPR is the fingerprint of the key we are adding the subkey to.
  • rsa4096 and encr specify that the new subkey will be an RSA encryption key with a size of 4096 bits.
  • never means this subkey will never expire.
  3. Export the secret key

Lastly, we’ll export this key into an ASCII-formatted file so Omni can use it.

gpg --export-secret-key --armor how-to-guide@siderolabs.com > certs/omni.asc
  • --armor is an option which creates the output in ASCII format. Without it, the output would be binary.

Save this file to the certs directory in our package.

Create the app.ini File

Gitea uses a configuration file named app.ini, which we can pre-configure with the information needed to run Gitea and bypass the initial startup page. When we start the container, we will mount this file as a volume using Docker.

Create the app.ini file

vim gitea/app.ini

Replace the DOMAIN, SSH_DOMAIN, and ROOT_URL values with your own hostname:

APP_NAME=Gitea: Git with a cup of tea
RUN_MODE=prod
RUN_USER=git
I_AM_BEING_UNSAFE_RUNNING_AS_ROOT=false

[server]
CERT_FILE=cert.pem
KEY_FILE=key.pem
APP_DATA_PATH=/data/gitea
DOMAIN=${GITEA_HOSTNAME}
SSH_DOMAIN=${GITEA_HOSTNAME}
HTTP_PORT=3000
ROOT_URL=https://${GITEA_HOSTNAME}:3000/
HTTP_ADDR=0.0.0.0
PROTOCOL=https
LOCAL_ROOT_URL=https://localhost:3000/

[database]
PATH=/data/gitea/gitea.db
DB_TYPE=sqlite3
HOST=localhost:3306
NAME=gitea
USER=root
PASSWD=

[security]
; This value tells Gitea not to run the initial configuration wizard on start up
INSTALL_LOCK=true

NOTE: If running this in a production environment, you will also want to configure the database settings for a production database. This configuration will use an internal sqlite database in the container.

Gathering Images

Next we will gather all the images needed for installing Gitea, Keycloak, and Omni, plus the images Omni will need for creating and installing Talos.

I’ll be using the following images for the tutorial:

Gitea

  • docker.io/gitea/gitea:1.19.3

Keycloak

  • quay.io/keycloak/keycloak:21.1.1

Omni

  • ghcr.io/siderolabs/omni:v0.11.0
    • Contact Us if you would like the image used to deploy Omni in an airgapped or on-prem environment.
  • ghcr.io/siderolabs/imager:v1.4.5
    • Pull this image to match the version of Talos you would like to use.

Talos

  • ghcr.io/siderolabs/flannel:v0.21.4
  • ghcr.io/siderolabs/install-cni:v1.4.0-1-g9b07505
  • docker.io/coredns/coredns:1.10.1
  • gcr.io/etcd-development/etcd:v3.5.9
  • registry.k8s.io/kube-apiserver:v1.27.2
  • registry.k8s.io/kube-controller-manager:v1.27.2
  • registry.k8s.io/kube-scheduler:v1.27.2
  • registry.k8s.io/kube-proxy:v1.27.2
  • ghcr.io/siderolabs/kubelet:v1.27.2
  • ghcr.io/siderolabs/installer:v1.4.5
  • registry.k8s.io/pause:3.6

NOTE: The Talos images needed may be found using the command talosctl images. If you do not have talosctl installed, you may find the instructions on how to install it here.

Package the images

  1. Pull the images to load them locally into Docker.
  • Run the following command for each of the images listed above, except for the Omni image, which will be provided to you as an archive file.
sudo docker pull registry/repository/image-name:tag
  2. Verify that all of the images have been downloaded.
sudo docker image ls
  3. Save all of the images into an archive file.
  • All of the images can be saved as a single archive file, which can be loaded all at once on our airgapped machine, with the following command.
docker save -o image-tarfile.tar \
  list \
  of \
  images

Here is an example of the command used for the images in this tutorial:

docker save -o registry/all_images.tar \
  docker.io/gitea/gitea:1.19.3 \
  quay.io/keycloak/keycloak:21.1.1 \
  ghcr.io/siderolabs/imager:v1.4.5 \
  ghcr.io/siderolabs/flannel:v0.21.4 \
  ghcr.io/siderolabs/install-cni:v1.4.0-1-g9b07505 \
  docker.io/coredns/coredns:1.10.1 \
  gcr.io/etcd-development/etcd:v3.5.9 \
  registry.k8s.io/kube-apiserver:v1.27.2 \
  registry.k8s.io/kube-controller-manager:v1.27.2 \
  registry.k8s.io/kube-scheduler:v1.27.2 \
  registry.k8s.io/kube-proxy:v1.27.2 \
  ghcr.io/siderolabs/kubelet:v1.27.2 \
  ghcr.io/siderolabs/installer:v1.4.5 \
  registry.k8s.io/pause:3.6

Move Dependencies

Now that we have all the packages necessary for the airgapped deployment of Omni, we’ll create a compressed archive file and move it to our airgapped network.

The directory structure should look like this now:

airgap
├── certs
│   ├── fullchain.pem
│   ├── omni.asc
│   └── privkey.pem
├── gitea
│   └── app.ini
├── keycloak
├── omni
└── registry
    ├── omni-image.tar # Provided to you by Sidero Labs
    └── all_images.tar

Create a compressed archive file to move to our airgap machine.

cd ../
tar czvf omni-airgap.tar.gz airgap/

Now I will use scp to move this file to my machine which does not have internet access. Use whatever method you prefer to move this file.

scp omni-airgap.tar.gz $USERNAME@$AIRGAP_MACHINE:/home/$USERNAME/

Lastly, I will log in to my airgapped machine and extract the compressed archive file in the home directory:

cd ~/
tar xzvf omni-airgap.tar.gz

Log in to the Airgapped Machine

From here on out, the rest of the tutorial will take place from the airgapped machine we will be installing Omni, Keycloak, and Gitea on.

Gitea

Gitea will be used as a container registry for storing our images, but it also provides many other features, including Git hosting, Large File Storage, and the ability to store packages for many different package types. For more information on what you can use Gitea for, visit their documentation.

Install Gitea

Load the images we moved over. This will load all the images into Docker on the airgapped machine.

docker load -i registry/omni-image.tar
docker load -i registry/all_images.tar

Run Gitea using Docker:

  • The app.ini file is already configured and mounted below with the -v argument.
sudo docker run -it \
    -v $PWD/certs/privkey.pem:/data/gitea/key.pem \
    -v $PWD/certs/fullchain.pem:/data/gitea/cert.pem \
    -v $PWD/gitea/app.ini:/data/gitea/conf/app.ini \
    -v $PWD/gitea/data/:/data/gitea/ \
    -p 3000:3000 \
    gitea/gitea:1.19.3

You may now log in at https://${GITEA_HOSTNAME}:3000 to begin configuring Gitea to store all the images needed for Omni and Talos.

Gitea setup

This is just the bare minimum setup to run Omni. Gitea has many additional configuration options and security measures to use in accordance with your industry’s security standards. More information on the configuration of Gitea can be found at https://docs.gitea.com/.

Create a user

Click the Register button at the top right corner. The first user created will be an admin; permissions can be adjusted afterwards if you like.

Create organizations

After registering an admin user, create the organizations, which will act as the package repositories for storing images. Create the following organizations:

  • siderolabs
  • keycloak
  • coredns
  • etcd-development
  • registry-k8s-io-proxy

NOTE: If you are using self-signed certs and would like to push images to your local Gitea using Docker, you will also need to configure your certs.d directory as described at https://docs.docker.com/engine/security/certificates/.

Push Images to Gitea

Now that all of our organizations have been created, we can push the images we loaded into our Gitea for deploying Keycloak, Omni, and storing images used by Talos.

For all of the images loaded, we first need to tag them for our Gitea.

sudo docker tag original-image:tag ${GITEA_HOSTNAME}:3000/organization/image-name:tag

For example, if I am tagging the kube-proxy image, it will look like this:

NOTE: Don’t forget to tag all of the images from registry.k8s.io to go to the registry-k8s-io-proxy organization created in Gitea.

docker tag registry.k8s.io/kube-proxy:v1.27.2 ${GITEA_HOSTNAME}:3000/registry-k8s-io-proxy/kube-proxy:v1.27.2

Finally, push all the images into Gitea.

docker push ${GITEA_HOSTNAME}:3000/registry-k8s-io-proxy/kube-proxy:v1.27.2
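Tagging and pushing each image by hand is tedious. The loop below is a sketch, not an official Omni tool; the GITEA_HOSTNAME value and the image list are assumptions. It computes the Gitea-side name for each upstream image, mapping registry.k8s.io to the registry-k8s-io-proxy organization and keeping the first path component as the organization for everything else, then echoes the docker commands (drop the echo to execute them).

```shell
GITEA_HOSTNAME=gitea.example.com   # assumption: replace with your Gitea hostname

# Compute the Gitea-side tag for an upstream image reference.
retag() {
  img="$1"
  host="${img%%/*}"                # registry host, e.g. registry.k8s.io
  rest="${img#*/}"                 # org/repository:tag, e.g. siderolabs/kubelet:v1.27.2
  if [ "$host" = "registry.k8s.io" ]; then
    # registry.k8s.io images go to the registry-k8s-io-proxy organization
    echo "${GITEA_HOSTNAME}:3000/registry-k8s-io-proxy/${rest}"
  else
    # other images keep their organization (siderolabs, keycloak, coredns, ...)
    echo "${GITEA_HOSTNAME}:3000/${rest}"
  fi
}

# Dry run over a couple of the images from this tutorial.
for img in ghcr.io/siderolabs/kubelet:v1.27.2 registry.k8s.io/pause:3.6; do
  target="$(retag "$img")"
  echo docker tag "$img" "$target"
  echo docker push "$target"
done
```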

Keycloak

Install Keycloak

The image used for Keycloak is already loaded into Gitea, and there are no files to stage before starting it, so run the following command to start it, replacing KEYCLOAK_HOSTNAME and GITEA_HOSTNAME with your own hostnames.

sudo docker run -it \
    -p 8080:8080 \
    -p 8443:8443 \
    -v $PWD/certs/fullchain.pem:/etc/x509/https/tls.crt \
    -v $PWD/certs/privkey.pem:/etc/x509/https/tls.key \
    -v $PWD/keycloak/data:/opt/keycloak/data \
    -e KEYCLOAK_ADMIN=admin \
    -e KEYCLOAK_ADMIN_PASSWORD=admin \
    -e KC_HOSTNAME=${KEYCLOAK_HOSTNAME} \
    -e KC_HTTPS_CERTIFICATE_FILE=/etc/x509/https/tls.crt \
    -e KC_HTTPS_CERTIFICATE_KEY_FILE=/etc/x509/https/tls.key \
    ${GITEA_HOSTNAME}:3000/keycloak/keycloak:21.1.1 \
    start

Once Keycloak is installed, you can reach it in your browser at https://${KEYCLOAK_HOSTNAME}:8443.

Configuring Keycloak

For details on configuring Keycloak as a SAML Identity Provider to be used with Omni, follow this guide: Configuring Keycloak SAML

Omni

With Keycloak and Gitea installed and configured, we’re ready to start up Omni and start creating and managing clusters.

Install Omni

To install Omni, first generate a UUID to pass to Omni when we start it.

export OMNI_ACCOUNT_UUID=$(uuidgen)

Next run the following command, replacing hostnames for Omni, Gitea, or Keycloak with your own.

sudo docker run \
  --net=host \
  --cap-add=NET_ADMIN \
  -v $PWD/etcd:/_out/etcd \
  -v $PWD/certs/fullchain.pem:/fullchain.pem \
  -v $PWD/certs/privkey.pem:/privkey.pem \
  -v $PWD/certs/omni.asc:/omni.asc \
  ${GITEA_HOSTNAME}:3000/siderolabs/omni:v0.11.0 \
    --account-id=${OMNI_ACCOUNT_UUID} \
    --name=omni \
    --cert=/fullchain.pem \
    --key=/privkey.pem \
    --siderolink-api-cert=/fullchain.pem \
    --siderolink-api-key=/privkey.pem \
    --private-key-source=file:///omni.asc \
    --event-sink-port=8091 \
    --bind-addr=0.0.0.0:443 \
    --siderolink-api-bind-addr=0.0.0.0:8090 \
    --k8s-proxy-bind-addr=0.0.0.0:8100 \
    --advertised-api-url=https://${OMNI_HOSTNAME}:443/ \
    --siderolink-api-advertised-url=https://${OMNI_HOSTNAME}:8090/ \
    --siderolink-wireguard-advertised-addr=${OMNI_HOSTNAME}:50180 \
    --advertised-kubernetes-proxy-url=https://${OMNI_HOSTNAME}:8100/ \
    --auth-auth0-enabled=false \
    --auth-saml-enabled \
    --talos-installer-registry=${GITEA_HOSTNAME}:3000/siderolabs/installer \
    --talos-imager-image=${GITEA_HOSTNAME}:3000/siderolabs/imager:v1.4.5 \
    --kubernetes-registry=${GITEA_HOSTNAME}:3000/siderolabs/kubelet \
    --auth-saml-url "https://${KEYCLOAK_HOSTNAME}:8443/realms/omni/protocol/saml/descriptor"

What’s going on here:

  • --auth-auth0-enabled=false tells Omni not to use Auth0.
  • --auth-saml-enabled enables SAML authentication.
  • --talos-installer-registry, --talos-imager-image and --kubernetes-registry allow you to set the default images used by Omni to point to your local repository.
  • --auth-saml-url is the URL we saved earlier in the configuration of Keycloak.
    • --auth-saml-metadata may also be used if you would like to pass it as a file instead of a URL and can be used if using self-signed certificates for Keycloak.

Creating a cluster

Guides on creating a cluster on Omni can be found here:

Because we’re working in an airgapped environment we will need the following values added to our cluster configs so they know where to pull images from. More information on the Talos MachineConfig.registries can be found here.

NOTE: In this example, cluster discovery is also disabled. You may also configure cluster discovery on your network. More information on the Discovery Service can be found here.

machine:
  registries:
    mirrors:
      docker.io:
        endpoints:
          - https://${GITEA_HOSTNAME}:3000
      gcr.io:
        endpoints:
          - https://${GITEA_HOSTNAME}:3000
      ghcr.io:
        endpoints:
          - https://${GITEA_HOSTNAME}:3000
      registry.k8s.io:
        endpoints:
          - https://${GITEA_HOSTNAME}:3000/v2/registry-k8s-io-proxy
        overridePath: true
cluster:
  discovery:
    enabled: false

Specifics on patching machines can be found here:

Conclusion

With Omni, Gitea, and Keycloak set up, you are ready to start managing and installing Talos clusters on your network! The suite of applications installed in this tutorial is an example of how an airgapped environment can be set up to make the most of the Kubernetes clusters on your network. Other container registries or authentication providers may also be used in a similar setup, but this suite was chosen to give you a starting point and an example of what your environment could look like.

1.4 - Using SAML and ACLs

A tutorial on using SAML and ACLs in Omni.

Using SAML and ACLs for fine-grained access control

In this tutorial we will use SAML and ACLs to control fine-grained access to Kubernetes clusters.

Let’s assume that at our organization:

  • We run a Keycloak instance as the SAML identity provider.
  • Have our Omni instance already configured to use Keycloak as the SAML identity provider.
  • Our Omni instance has 2 types of clusters:
    • Staging clusters with the name prefix staging-: staging-1, staging-2, etc.
    • Production clusters with the name prefix prod-: prod-1, prod-2, etc.
  • We want the users with the SAML role omni-cluster-admin to have full access to all clusters.
  • We want the users with the SAML role omni-cluster-support to have full access to staging clusters and read-only access to production clusters.

Sign in as the initial SAML User

If our Omni instance has no users yet, the initial user who signs in via SAML will be automatically assigned to the Omni Admin role.

We sign in as the user admin@example.org and get the Omni Admin role.

Configuring the AccessPolicy

We need to configure the ACL to assign the omni-cluster-support role to the users with the SAML role omni-cluster-support and the omni-cluster-admin role to the users with the SAML role omni-cluster-admin.

Create the following YAML file acl.yaml:

metadata:
  namespace: default
  type: AccessPolicies.omni.sidero.dev
  id: access-policy
spec:
  usergroups:
    support:
      users:
        - labelselectors:
            - saml.omni.sidero.dev/role/omni-cluster-support=
    admin:
      users:
        - labelselectors:
            - saml.omni.sidero.dev/role/omni-cluster-admin=
  clustergroups:
    staging:
      clusters:
        - match: staging-*
    production:
      clusters:
        - match: prod-*
    all:
      clusters:
        - match: "*"
  rules:
    - users:
        - group/support
      clusters:
        - group/staging
      role: Operator
    - users:
        - group/support
      clusters:
        - group/production
      role: Reader
      kubernetes:
        impersonate:
          groups:
            - read-only
    - users:
        - group/admin
      clusters:
        - group/all
      role: Operator
  tests:
    - name: support engineer has Operator access to staging cluster
      user:
        name: support-eng@example.org
        labels:
          saml.omni.sidero.dev/role/omni-cluster-support: ""
      cluster:
        name: staging-1
      expected:
        role: Operator
    - name: support engineer has Reader access to prod cluster and impersonates read-only group
      user:
        name: support-eng@example.org
        labels:
          saml.omni.sidero.dev/role/omni-cluster-support: ""
      cluster:
        name: prod-1
      expected:
        role: Reader
        kubernetes:
          impersonate:
            groups:
              - read-only
    - name: admin has Operator access to staging cluster
      user:
        name: admin-1@example.org
        labels:
          saml.omni.sidero.dev/role/omni-cluster-admin: ""
      cluster:
        name: staging-1
      expected:
        role: Operator
    - name: admin has Operator access to prod cluster
      user:
        name: admin-1@example.org
        labels:
          saml.omni.sidero.dev/role/omni-cluster-admin: ""
      cluster:
        name: prod-1
      expected:
        role: Operator

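The match fields use glob-style patterns. As an illustration only (not Omni's actual matcher), Python's fnmatch reproduces how cluster names map onto the cluster groups defined above:

```python
from fnmatch import fnmatch

# Cluster-group patterns from the AccessPolicy above.
CLUSTER_GROUPS = {"staging": "staging-*", "production": "prod-*", "all": "*"}

def groups_for(cluster_name: str) -> set:
    """Return every cluster group whose pattern matches the cluster name."""
    return {group for group, pattern in CLUSTER_GROUPS.items()
            if fnmatch(cluster_name, pattern)}

print(groups_for("staging-1"))  # staging plus the catch-all group
print(groups_for("prod-1"))     # production plus the catch-all group
```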
As the admin user admin@example.org, apply this ACL using omnictl:

$ omnictl apply -f acl.yaml

Accessing the Clusters

Now, in an incognito window, log in as a support engineer, cluster-support-1@example.org. Since the user is not assigned to any Omni role yet, they cannot use Omni Web.

Download omnictl and omniconfig from the UI, and try to list the clusters by using it:

$ omnictl --omniconfig ./support-omniconfig.yaml get cluster
NAMESPACE   TYPE   ID   VERSION
Error: rpc error: code = PermissionDenied desc = failed to validate: 1 error occurred:
	* rpc error: code = PermissionDenied desc = unauthorized: access denied: insufficient role: "None"

You won’t be able to list the clusters because the user is not assigned to any Omni role.

Now try to get the cluster staging-1:

$ omnictl --omniconfig ./support-omniconfig.yaml get cluster staging-1
NAMESPACE   TYPE      ID          VERSION
default     Cluster   staging-1   5

You can get the cluster staging-1 because the ACL allows the user to access the cluster.

Finally, try to delete the cluster staging-1:

$ omnictl --omniconfig ./support-omniconfig.yaml delete cluster staging-1
torn down Clusters.omni.sidero.dev staging-1
destroyed Clusters.omni.sidero.dev staging-1

The operation will succeed, because the ACL allows Operator-level access to the cluster for the user.

Try to do the same operations with the cluster prod-1:

$ omnictl --omniconfig ./support-omniconfig.yaml get cluster prod-1
NAMESPACE   TYPE      ID          VERSION
default     Cluster   prod-1   5

$ omnictl --omniconfig ./support-omniconfig.yaml delete cluster prod-1
Error: rpc error: code = PermissionDenied desc = failed to validate: 1 error occurred:
	* rpc error: code = PermissionDenied desc = unauthorized: access denied: insufficient role: "Reader"

The user will be able to get the cluster but not delete it, because the ACL allows only Reader-level access to the cluster for the user.

If you do the same operations as the admin user, you’ll notice that you are able to both get and delete staging and production clusters.

Assigning Omni roles to Users

If you want to allow SAML users to use Omni Web, you need to assign them at least the Reader role. As the admin, sign in to Omni Web and assign the role Reader to both cluster-support-1@example.org and cluster-admin-1@example.org.

Now, as the support engineer, you can sign out & sign in again to Omni Web and see the clusters staging-1 and prod-1 in the UI.

2 - How-to guides

2.1 - Register a Bare Metal Machine (ISO)

A guide on how to register bare metal machines with Omni using an ISO.

This guide shows you how to register a bare metal machine with Omni by booting from an ISO.

Dashboard

Upon logging in you will be presented with the Omni dashboard.

Download the ISO

First, download the ISO from the Omni portal by clicking on the “Download Installation Media” button. Now, click on the “Options” dropdown menu and search for the “ISO” option. Notice there are two options: one for amd64 and another for arm64. Select the appropriate option for the machine you are registering, then click the “Download” button.

Write the ISO to a USB Stick

First, plug the USB drive into your local machine. Now, find the device path for your USB drive and write the ISO to the USB drive.

On macOS:

diskutil list
...
/dev/disk2 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   *31.9 GB    disk2
...

In this example disk2 is the USB drive.

dd if=<path to ISO> of=/dev/disk2 conv=fdatasync
On Linux:

$ lsblk
...
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdb      8:0    0 39.1G  0 disk
...

In this example sdb is the USB drive.

dd if=<path to ISO> of=/dev/sdb conv=fdatasync

Boot the Machine

Now that we have our bootable USB drive, plug it into the machine you are registering. Once the machine is booting you will notice logs from Talos Linux on the console stating that it is reachable over an IP address.

Conclusion

Navigate to the “Machines” menu in the sidebar. You should now see a machine listed.

You now have a bare metal machine registered with Omni and ready to provision.

2.2 - Register a Bare Metal Machine (PXE/iPXE)

A guide on how to register a bare metal machine with Omni using PXE/iPXE.

This guide shows you how to register a bare metal machine with Omni by PXE/iPXE booting.

Copy the Required Kernel Parameters

Upon logging in you will be presented with the Omni dashboard. Click the “Copy Kernel Parameters” button and save the value for later.

Download the PXE/iPXE Assets

Download vmlinuz and initramfs.xz from the release of your choice (Talos Linux 1.2.6 or greater is required), and place them in /var/lib/matchbox/assets.

Create the Profile

Place the following in /var/lib/matchbox/profiles/default.json:

{
  "id": "default",
  "name": "default",
  "boot": {
    "kernel": "/assets/vmlinuz",
    "initrd": ["/assets/initramfs.xz"],
    "args": [
      "initrd=initramfs.xz",
      "init_on_alloc=1",
      "slab_nomerge",
      "pti=on",
      "console=tty0",
      "console=ttyS0",
      "printk.devkmsg=on",
      "talos.platform=metal",
      "siderolink.api=<your siderolink.api value>",
      "talos.events.sink=<your talos.events.sink value>",
      "talos.logging.kernel=<your talos.logging.kernel value>"
    ]
  }
}

Update siderolink.api, talos.events.sink, and talos.logging.kernel with the kernel parameters copied from the dashboard.

Create the Group

Place the following in /var/lib/matchbox/groups/default.json:

{
  "id": "default",
  "name": "default",
  "profile": "default"
}
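The JSON files can be sanity-checked with python3 before booting machines. A quick sketch, using a temporary directory to stand in for the real matchbox paths (/var/lib/matchbox/profiles/ and /var/lib/matchbox/groups/):

```shell
# Validate a matchbox group file with Python's JSON parser.
# A temp directory stands in for /var/lib/matchbox/groups/ in this sketch.
mkdir -p /tmp/matchbox-check
cat > /tmp/matchbox-check/default.json <<'EOF'
{
  "id": "default",
  "name": "default",
  "profile": "default"
}
EOF

python3 -m json.tool /tmp/matchbox-check/default.json > /dev/null \
  && echo "default.json: valid JSON"
```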

Once your machine is configured to PXE boot using your tool of choice, power the machine on.

Conclusion

Navigate to the “Machines” menu in the sidebar. You should now see a machine listed.

You now have a bare metal machine registered with Omni and ready to provision.

2.3 - Register a GCP Instance

A guide on how to register a GCP instance with Omni.

This guide shows you how to register a GCP instance with Omni.

Dashboard

Upon logging in you will be presented with the Omni dashboard.

Download the Image

First, download the GCP image from the Omni portal by clicking on the “Download Installation Media” button. Now, click on the “Options” dropdown menu and search for the “GCP” option. Notice there are two options: one for amd64 and another for arm64. Select the appropriate option for the machine you are registering, then click the “Download” button.

Upload the Image

In the Google Cloud console, navigate to Buckets under the Cloud Storage menu, and create a new bucket with the default settings. Click on the bucket in the Google Cloud console, click Upload Files, and select the image downloaded from the Omni console.

Convert the Image

In the Google Cloud console select Images under the Compute Engine menu, and then Create Image. Name your image (e.g. Omni-talos-1.2.6), then select the Source as Cloud Storage File. Click Browse in the Cloud Storage File field and navigate to the bucket you created. Select the image you uploaded. Leave the rest of the options at their default and click Create at the bottom.

Create a GCP Instance

In Google Cloud console select VM instances under the Compute Engine menu. Now select Create Instance. Name your instance, and select a region and zone. Under “Machine Configuration”, ensure your instance has at least 4GB of memory. In the Boot Disk section, select Change and then select Custom Images. Select the image created in the previous steps. Now, click Create at the bottom to create your instance.

Conclusion

Navigate to the “Machines” menu in the sidebar. You should now see a machine listed.

You now have a GCP machine registered with Omni and ready to provision.

2.4 - Register an AWS EC2 Instance

A guide on how to register an AWS EC2 instance with Omni.

This guide shows you how to register an AWS EC2 instance with Omni.

Set your AWS region

REGION="us-west-2"

Creating the subnet

First, we need to know what VPC to create the subnet on, so let’s describe the VPCs in the region where we want to create the Omni machines.

$ aws ec2 describe-vpcs --region $REGION
{
    "Vpcs": [
        {
            "CidrBlock": "172.31.0.0/16",
            "DhcpOptionsId": "dopt-0238fea7541672af0",
            "State": "available",
            "VpcId": "vpc-04ea926270c55d724",
            "OwnerId": "753518523373",
            "InstanceTenancy": "default",
            "CidrBlockAssociationSet": [
                {
                    "AssociationId": "vpc-cidr-assoc-0e518f7ac9d02907d",
                    "CidrBlock": "172.31.0.0/16",
                    "CidrBlockState": {
                        "State": "associated"
                    }
                }
            ],
            "IsDefault": true
        }
    ]
}

Note the VpcId (vpc-04ea926270c55d724).

Now, create a subnet on that VPC with a CIDR block that is within the CIDR block of the VPC. In the above example, as the VPC has a CIDR block of 172.31.0.0/16, we can use 172.31.128.0/20.
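Whether a candidate block actually sits inside the VPC's CIDR can be double-checked with Python's ipaddress module (purely illustrative; any subnet calculator does the same):

```python
import ipaddress

vpc = ipaddress.ip_network("172.31.0.0/16")        # CidrBlock from describe-vpcs
candidate = ipaddress.ip_network("172.31.128.0/20")

# subnet_of() is True when every address of `candidate` lies within `vpc`.
print(candidate.subnet_of(vpc))  # True
```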

$ aws ec2 create-subnet \
    --vpc-id vpc-04ea926270c55d724 \
    --region us-west-2 \
    --cidr-block 172.31.128.0/20
{
    "Subnet": {
        "AvailabilityZone": "us-west-2c",
        "AvailabilityZoneId": "usw2-az3",
        "AvailableIpAddressCount": 4091,
        "CidrBlock": "172.31.128.0/20",
        "DefaultForAz": false,
        "MapPublicIpOnLaunch": false,
        "State": "available",
        "SubnetId": "subnet-04f4d6708a2c2fb0d",
        "VpcId": "vpc-04ea926270c55d724",
        "OwnerId": "753518523373",
        "AssignIpv6AddressOnCreation": false,
        "Ipv6CidrBlockAssociationSet": [],
        "SubnetArn": "arn:aws:ec2:us-west-2:753518523373:subnet/subnet-04f4d6708a2c2fb0d",
        "EnableDns64": false,
        "Ipv6Native": false,
        "PrivateDnsNameOptionsOnLaunch": {
            "HostnameType": "ip-name",
            "EnableResourceNameDnsARecord": false,
            "EnableResourceNameDnsAAAARecord": false
        }
    }
}

Note the SubnetID (subnet-04f4d6708a2c2fb0d).

Create the Security Group

$ aws ec2 create-security-group \
    --region $REGION \
    --group-name omni-aws-sg \
    --description "Security Group for Omni EC2 instances"
{
    "GroupId": "sg-0b2073b72a3ca4b03"
}

Note the GroupId (sg-0b2073b72a3ca4b03).

Allow all internal traffic within the same security group, so that Kubernetes applications can talk to each other on different machines:

aws ec2 authorize-security-group-ingress \
    --region $REGION \
    --group-name omni-aws-sg \
    --protocol all \
    --port 0 \
    --source-group omni-aws-sg

Creating the bootable AMI

To do so, log in to your Omni account, and, from the Omni overview page, select “Download Installation Media”. Select “AWS AMI (amd64)” or “AWS AMI (arm64)”, as appropriate for your desired EC2 instances. (Most are amd64.) Click “Download”, and the AMI will be downloaded to your local machine.

Extract the downloaded aws-amd64.tar.gz, then copy the disk.raw file to S3. We need to create a bucket, copy the image file to it, import it as a snapshot, and register an AMI from it.

Create S3 bucket

REGION="us-west-2"
aws s3api create-bucket \
    --bucket <bucket name> \
    --create-bucket-configuration LocationConstraint=$REGION \
    --acl private

Copy image file to the bucket

aws s3 cp disk.raw s3://<bucket name>/omni-aws.raw

Import the image as a snapshot

$ aws ec2 import-snapshot \
    --region $REGION \
    --description "Omni AWS" \
    --disk-container "Format=raw,UserBucket={S3Bucket=<bucket name>,S3Key=omni-aws.raw}"
{
    "Description": "Omni AWS",
    "ImportTaskId": "import-snap-1234567890abcdef0",
    "SnapshotTaskDetail": {
        "Description": "Omni AWS",
        "DiskImageSize": "0.0",
        "Format": "RAW",
        "Progress": "3",
        "Status": "active",
        "StatusMessage": "pending",
        "UserBucket": {
            "S3Bucket": "<bucket name>",
            "S3Key": "omni-aws.raw"
        }
    }
}

Check the status of the import with:

$ aws ec2 describe-import-snapshot-tasks \
    --region $REGION \
    --import-task-ids import-snap-1234567890abcdef0
{
    "ImportSnapshotTasks": [
        {
            "Description": "Omni AWS",
            "ImportTaskId": "import-snap-1234567890abcdef0",
            "SnapshotTaskDetail": {
                "Description": "Omni AWS",
                "DiskImageSize": "705638400.0",
                "Format": "RAW",
                "Progress": "42",
                "Status": "active",
                "StatusMessage": "downloading/converting",
                "UserBucket": {
                    "S3Bucket": "<bucket name>",
                    "S3Key": "omni-aws.raw"
                }
            }
        }
    ]
}

Once the Status is completed, note the SnapshotId (snap-0298efd6f5c8d5cff).

Register the Image

$ aws ec2 register-image \
    --region $REGION \
    --block-device-mappings "DeviceName=/dev/xvda,VirtualName=talos,Ebs={DeleteOnTermination=true,SnapshotId=snap-0298efd6f5c8d5cff,VolumeSize=4,VolumeType=gp2}" \
    --root-device-name /dev/xvda \
    --virtualization-type hvm \
    --architecture x86_64 \
    --ena-support \
    --name omni-aws-ami
{
    "ImageId": "ami-07961b424e87e827f"
}

Note the ImageId (ami-07961b424e87e827f).

Create EC2 instances from the AMI

Now, using the AMI we created, along with the security group created above, provision EC2 instances:

 aws ec2 run-instances \
    --region  $REGION \
    --image-id ami-07961b424e87e827f \
    --count 1 \
    --instance-type t3.small   \
    --subnet-id subnet-04f4d6708a2c2fb0d \
    --security-group-ids sg-0b2073b72a3ca4b03 \
    --associate-public-ip-address \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=omni-aws-ami}]" \
    --instance-market-options '{"MarketType":"spot"}'

2.5 - Register an Azure Instance

A guide on how to register an Azure instance with Omni.

This guide shows you how to register an Azure instance with Omni.

Dashboard

Upon logging in you will be presented with the Omni dashboard.

Download the Image

Download the Azure image from the Omni portal by clicking on the “Download Installation Media” button. Click on the “Options” dropdown menu and search for the “Azure” option. Notice there are two options: one for amd64 and another for arm64. Select the appropriate architecture for the machine you are registering, then click the “Download” button.

Once downloaded to your local machine, untar with tar -xvf /path/to/image.

Upload the Image

In the Azure console, navigate to Storage accounts, and create a new storage account. Once the account is provisioned, navigate to the resource and click Upload. In the Upload Blob form, select Create New container, and name your container (e.g. omni-may-2023). Now click Browse for Files, and select the disk.vhd file that you uncompressed above, then select Upload.

We’ll make use of the following environment variables throughout the setup. Edit the variables below with your correct information.

# Storage account to use
export STORAGE_ACCOUNT="StorageAccountName"

# Storage container to upload to
export STORAGE_CONTAINER="StorageContainerName"

# Resource group name
export GROUP="ResourceGroupName"

# Location
export LOCATION="centralus"

# Get storage account connection string based on info above
export CONNECTION=$(az storage account show-connection-string \
                    -n $STORAGE_ACCOUNT \
                    -g $GROUP \
                    -o tsv)

You can upload the image you uncompressed to blob storage with:

az storage blob upload \
  --connection-string $CONNECTION \
  --container-name $STORAGE_CONTAINER \
  -f /path/to/extracted/disk.vhd \
  -n omni-azure.vhd

Convert the Image

In the Azure console select Images, and then Create. Select a Resource Group, Name your image (e.g. omni-may), and set the OS type to Linux. Now Browse to the storage blob created above, navigating to the container with the uploaded disk.vhd. Select “Standard HDD” for account type, then click Review and Create, then Create.

Now that the image is present in our blob storage, we’ll register it.

az image create \
  --name omni \
  --source https://$STORAGE_ACCOUNT.blob.core.windows.net/$STORAGE_CONTAINER/omni-azure.vhd \
  --os-type linux \
  -g $GROUP

Create an Azure Instance

Creating an instance requires setting the os-disk-size property, which is easiest to achieve via the CLI:

az vm create \
    --name azure-worker \
    --image omni \
    -g $GROUP \
    --admin-username talos \
    --generate-ssh-keys \
    --verbose \
    --os-disk-size-gb 20

Conclusion

In the Omni UI, navigate to the “Machines” menu in the sidebar. You should now see the Azure machine that was created listed as an available machine, registered with Omni and ready to provision.

2.6 - Expose an HTTP Service from a Cluster

A guide on how to expose an HTTP service from a cluster for external access.

This guide shows you how to expose an HTTP Kubernetes Service to be accessible from Omni Web.

Enabling Workload Service Proxying Feature

You first need to enable the workload service proxying feature on the cluster you want to expose Services from.

If you are creating a new cluster, you can enable the feature by checking the checkbox in the “Cluster Features” section:

If you have an existing cluster, simply check the checkbox in the features section of the cluster overview page:

If you are using cluster templates, you can enable the feature by adding the following to the cluster template YAML:

features:
  enableWorkloadProxy: true

You will notice that the “Exposed Services” section will appear on the left menu for the cluster the feature is enabled on.

Exposing a Kubernetes Service

Let’s install a simple Nginx deployment and service to expose it.

Create the following nginx.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: workload-proxy-example-nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: workload-proxy-example-nginx
  template:
    metadata:
      labels:
        app: workload-proxy-example-nginx
    spec:
      containers:
        - name: workload-proxy-example-nginx
          image: nginx:stable-alpine-slim
---
apiVersion: v1
kind: Service
metadata:
  name: workload-proxy-example-nginx
  namespace: default
  annotations:
    omni-kube-service-exposer.sidero.dev/port: "50080"
    omni-kube-service-exposer.sidero.dev/label: Sample Nginx
    omni-kube-service-exposer.sidero.dev/icon: H4sICB0B1mQAA25naW54LXN2Z3JlcG8tY29tLnN2ZwBdU8ly2zAMvfcrWPZKwiTANWM5015yyiHdDr1kNLZsa0axvKix8/cFJbvNdCRCEvEAPDxQ8/vLSydem+Op7XeVtGCkaHbLftXuNpX8Pax1kveL+UetxY9919erZiWG/k58+/kgvjb7Xonz+Qyn182RP2DZvyjx0OyaYz30x38o8dhemqP43vfdSWi9+DDnCHFuV8O2ksmY/UWKbdNutsPfz9e2OX/pL5U0wghCvqVgqrtTJbfDsL+bzUrhM0F/3MzQGDPjlHIxH9qhaxbrtmueh7d987zbtLvLfDZtz/f1sBWrSj5aD9klhVswwdfWgLNJXR+GL6sgRwSP6QmRd53yELzCCMmRShCjqyFmLOsWwCiIKS01GJOUA0qZHQUby5ZXlsAGjkv8wmuK00A+gDfxoD1DSREQOm0teBdVgOA4wqdY1i0i+AiG4lOGbFEhg7icZWJIgCMz+It1DA/hYDQXScxVjyyohpCprBt7SswylJze49htVNxQjk6xDuSXTAs12OQgUGLWMRenLj4pTsNb11SSde/uPhmbA2U5e6c3qxBiEdhTOhhO77CIwxvJ55p7NVlN1owX+xkOJhUb3M1OTuShAZpQIoK72mtcSF5bwExLoxECjsqzssgIzdMLB2IdiPViApHbsTwhH1KNkIgFHO2tTOB54pjfXu3k4QLechmK9lCGzfm9s0XbQtmWfqa4NB0Oo1lzVtUsx6wjKxtYBcKSMkJOyGzJBbYxBM0aBypZfdBRJyDCz0zNRjXZKw0D/J75KFApFvPVTt73kv/6b0Lr9bqMp/wziz8W9M/pAwQAAA==
spec:
  selector:
    app: workload-proxy-example-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80

Apply it to the cluster:

kubectl apply -f nginx.yaml

Note the following annotations on the Service:

omni-kube-service-exposer.sidero.dev/port: "50080"
omni-kube-service-exposer.sidero.dev/label: Sample Nginx
omni-kube-service-exposer.sidero.dev/icon: H4sICB0B1mQAA25naW54LXN2Z3JlcG8tY29tLnN2ZwBdU8ly2zAMvfcrWPZKwiTANWM5015yyiHdDr1kNLZsa0axvKix8/cFJbvNdCRCEvEAPDxQ8/vLSydem+Op7XeVtGCkaHbLftXuNpX8Pax1kveL+UetxY9919erZiWG/k58+/kgvjb7Xonz+Qyn182RP2DZvyjx0OyaYz30x38o8dhemqP43vfdSWi9+DDnCHFuV8O2ksmY/UWKbdNutsPfz9e2OX/pL5U0wghCvqVgqrtTJbfDsL+bzUrhM0F/3MzQGDPjlHIxH9qhaxbrtmueh7d987zbtLvLfDZtz/f1sBWrSj5aD9klhVswwdfWgLNJXR+GL6sgRwSP6QmRd53yELzCCMmRShCjqyFmLOsWwCiIKS01GJOUA0qZHQUby5ZXlsAGjkv8wmuK00A+gDfxoD1DSREQOm0teBdVgOA4wqdY1i0i+AiG4lOGbFEhg7icZWJIgCMz+It1DA/hYDQXScxVjyyohpCprBt7SswylJze49htVNxQjk6xDuSXTAs12OQgUGLWMRenLj4pTsNb11SSde/uPhmbA2U5e6c3qxBiEdhTOhhO77CIwxvJ55p7NVlN1owX+xkOJhUb3M1OTuShAZpQIoK72mtcSF5bwExLoxECjsqzssgIzdMLB2IdiPViApHbsTwhH1KNkIgFHO2tTOB54pjfXu3k4QLechmK9lCGzfm9s0XbQtmWfqa4NB0Oo1lzVtUsx6wjKxtYBcKSMkJOyGzJBbYxBM0aBypZfdBRJyDCz0zNRjXZKw0D/J75KFApFvPVTt73kv/6b0Lr9bqMp/wziz8W9M/pAwQAAA==

To expose a service, only the omni-kube-service-exposer.sidero.dev/port annotation is required.

Its value must be a port that is unused on the nodes, such as by other exposed Services.

The annotation omni-kube-service-exposer.sidero.dev/label can be set to a human-friendly name to be displayed on the Omni Web left menu.

If not set, the default name of <service-name>.<service-namespace> will be used.

The annotation omni-kube-service-exposer.sidero.dev/icon can be set to render an icon for this service on the Omni Web left menu.

If set, valid values are:

  • Either a base64-encoded SVG
  • Or a base64-encoded GZIP of an SVG

To encode an SVG file icon.svg to be used for the annotation, you can use the following command:

gzip -c icon.svg | base64
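To double-check an encoded icon, the value can be decoded back and compared with the original file. A small round-trip sketch (file names are illustrative; on macOS, base64 -d may be spelled -D):

```shell
# Round-trip check: gzip + base64 the SVG, then decode and compare.
printf '<svg xmlns="http://www.w3.org/2000/svg"></svg>' > icon.svg

gzip -c icon.svg | base64 > icon.b64
base64 -d icon.b64 | gunzip > icon.decoded.svg

cmp icon.svg icon.decoded.svg && echo "round-trip OK"
```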

Accessing the Exposed Service

You will notice that the Service you annotated will appear under the “Exposed Services” section in Omni Web, on the left menu when the cluster is selected.

Clicking it will render the Service in Omni.

2.7 - Create an Omni Service Account

A guide on how to create an Omni service account.

This guide shows you how to create an Omni service account.

You will need omnictl installed and configured to follow this guide. If you haven’t done so already, follow the omnictl guide.

Creating the Service Account

To create an Omni service account, use the following command:

omnictl serviceaccount create <sa-name>

The output of this command will print OMNI_ENDPOINT and OMNI_SERVICE_ACCOUNT_KEY.

Export these variables with the printed values:

export OMNI_ENDPOINT=<output from above command>
export OMNI_SERVICE_ACCOUNT_KEY=<output from above command>

You can now use omnictl with the generated service account.

2.8 - Create a Service Account Kubeconfig

A guide on how to create a service account kubeconfig in Omni.

This guide shows you how to create a service account kubeconfig in Omni.

You need omnictl installed and configured to follow this guide. If you haven’t done so already, follow the omnictl guide.

You also need to have a cluster created in Omni to follow this guide.

Creating the Service Account Kubeconfig

To create a service account kubeconfig, run the following command:

omnictl kubeconfig --service-account -c <cluster> --user <user> <path to kubeconfig>

This command will create a service account token with the given username and obtain a kubeconfig file for the given cluster and username.

You can now use kubectl with the generated kubeconfig.

2.9 - Scale Down a Cluster

A guide on how to scale down a cluster with Omni.

This guide shows you how to delete machines in a cluster with Omni.

Upon logging in, click the “Clusters” menu item on the left, then the name of the cluster you wish to delete nodes from. Click the “Nodes” menu item on the left. Now, select “Destroy” from the menu under the ellipsis:

The cluster will now scale down.

2.10 - Scale Up a Cluster

A guide on how to scale up a cluster with Omni.

This guide shows you how to add machines to a cluster with Omni. Upon logging in, click the “Clusters” menu item on the left, then the name of the cluster you wish to add nodes to. From the “Cluster Overview” tab, click the “Add Machines” button in the sidebar on the right.

From the list of available machines that is shown, identify the machine or machines you wish to add, and then click “ControlPlane” or “Worker”, to add the machine with that role. You may add multiple machines in one operation. Click “Add Machines” when all machines have been selected to be added.

The cluster will now scale up.

2.11 - Register a Hetzner Server

A guide on how to register a Hetzner server with Omni.

This guide shows you how to register a Hetzner server with Omni.

Dashboard

Upon logging in you will be presented with the Omni dashboard.

Download the Hetzner Image

First, download the Hetzner image from the Omni portal by clicking on the “Download Installation Media” button. Now, click on the “Options” dropdown menu and search for the “Hetzner” option. Notice there are two options: one for amd64 and another for arm64. Select the appropriate option for the machine you are registering, then click the “Download” button.

Place the following in the same directory as the downloaded installation media and name the file hcloud.pkr.hcl:

packer {
  required_plugins {
    hcloud = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/hcloud"
    }
  }
}

locals {
  image = "<path to downloaded installation media>"
}

source "hcloud" "talos" {
  rescue       = "linux64"
  image        = "debian-11"
  location     = "hel1"
  server_type  = "cx11"
  ssh_username = "root"

  snapshot_name = "Omni Image"
}

build {
  sources = ["source.hcloud.talos"]

  provisioner "file" {
    source = "${local.image}"
    destination = "/tmp/talos.raw.xz"
  }

  provisioner "shell" {
    inline = [
      "xz -d -c /tmp/talos.raw.xz | dd of=/dev/sda && sync",
    ]
  }
}

Now, run the following:

export HCLOUD_TOKEN=${TOKEN}
packer init .
packer build .

Take note of the image ID produced by running this command.

Create a Server

hcloud context create talos

hcloud server create --name omni-talos-1 \
    --image <image ID> \
    --type cx21 --location <location>

Conclusion

Navigate to the “Machines” menu in the sidebar. You should now see a machine listed.

You now have a Hetzner server registered with Omni and ready to provision.

2.12 - Restore Etcd of a Cluster Managed by Cluster Templates to an Earlier Snapshot

A guide on how to restore a cluster’s etcd to an earlier snapshot.

This guide shows you how to restore a cluster’s etcd to an earlier snapshot. This is useful when you need to revert a cluster to an earlier state.

This tutorial has the following requirements:

  • The CLI tool omnictl must be installed and configured.
  • The cluster which you want to restore must still exist (not deleted from Omni) and have backups in the past.
  • The cluster must be managed using cluster templates (not via the UI).

Finding the Cluster’s UUID

To find the cluster’s UUID, run the following command, replacing my-cluster with the name of your cluster:

omnictl get clusteruuid my-cluster

The output will look like this:

NAMESPACE   TYPE          ID              VERSION   UUID
default     ClusterUUID   my-cluster      1         bb874758-ee54-4d3b-bac3-4c8349737298

Note the UUID column, which contains the cluster’s UUID.

Finding the Snapshot to Restore

List the available snapshots for the cluster:

omnictl get etcdbackup -l omni.sidero.dev/cluster=my-cluster

The output will look like this:

NAMESPACE   TYPE         ID                         VERSION     CREATED AT                         SNAPSHOT
external    EtcdBackup   my-cluster-1701184522   undefined   {"nanos":0,"seconds":1701184522}   FFFFFFFF9A99FBF6.snapshot
external    EtcdBackup   my-cluster-1701184515   undefined   {"nanos":0,"seconds":1701184515}   FFFFFFFF9A99FBFD.snapshot
external    EtcdBackup   my-cluster-1701184500   undefined   {"nanos":0,"seconds":1701184500}   FFFFFFFF9A99FC0C.snapshot

The SNAPSHOT column contains the snapshot name which you will need to restore the cluster. Let’s assume you want to restore the cluster to the snapshot FFFFFFFF9A99FBFD.snapshot.
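The CREATED AT column holds a raw timestamp (seconds since the Unix epoch). For illustration, the seconds value can be turned into a readable UTC time with Python:

```python
from datetime import datetime, timezone

# "seconds" value taken from the CREATED AT column of the chosen backup.
created = datetime.fromtimestamp(1701184515, tz=timezone.utc)
print(created.isoformat())  # 2023-11-28T15:15:15+00:00
```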

Deleting the Existing Control Plane

To restore the cluster, we first need to delete the existing control plane of the cluster. This will take the cluster into the non-bootstrapped state. Only then can we create the new control plane with the restored etcd.

Use the following command to delete the control plane, replacing my-cluster with the name of your cluster:

omnictl delete machineset my-cluster-control-planes

Creating the Restore Template

Edit your cluster template manifest template-manifest.yaml, edit the list of control plane machines for your needs, and add the bootstrapSpec section to the control plane, with cluster UUID and the snapshot name we found above:

kind: Cluster
name: my-cluster
kubernetes:
  version: v1.28.2
talos:
  version: v1.5.5
---
kind: ControlPlane
machines:
  - 430d882a-51a8-48b3-ae00-90c5b0b5b0b0
  - e865efbc-25a1-4436-bcd9-0a431554e328
  - 820c2b44-568c-461e-91aa-c2fc228c0b2e
bootstrapSpec:
  clusterUUID: bb874758-ee54-4d3b-bac3-4c8349737298 # The cluster UUID we found above
  snapshot: FFFFFFFF9A99FBFD.snapshot # The snapshot name we found above
---
kind: Workers
machines:
  - 18308f52-b833-4376-a7c8-1cb9de2feafd
  - 79f8db4d-3b6b-49a7-8ac4-aa5d2287f706

Syncing the Template

To sync the template, run the following command:

omnictl cluster template sync -f template-manifest.yaml
omnictl cluster template status -f template-manifest.yaml

After the sync, your cluster will be restored to the snapshot you specified.

Restarting Kubelet on Worker Nodes

To ensure a healthy cluster operation, the kubelet needs to be restarted on all worker nodes.

Get the IDs of the worker nodes:

omnictl get clustermachine -l omni.sidero.dev/role-worker,omni.sidero.dev/cluster=my-cluster

The output will look like this:

NAMESPACE   TYPE             ID                                     VERSION
default     ClusterMachine   26b87860-38b4-400f-af72-bc8d26ab6cd6   3
default     ClusterMachine   2f6af2ad-bebb-42a5-b6b0-2b9397acafbc   3
default     ClusterMachine   5f93376a-95f6-496c-b4b7-630a0607ac7f   3
default     ClusterMachine   c863ccdf-cdb7-4519-878e-5484a1be119a   3

Gather the IDs in this output, and issue a kubelet restart on them using talosctl:

talosctl -n 26b87860-38b4-400f-af72-bc8d26ab6cd6 service kubelet restart
talosctl -n 2f6af2ad-bebb-42a5-b6b0-2b9397acafbc service kubelet restart
talosctl -n 5f93376a-95f6-496c-b4b7-630a0607ac7f service kubelet restart
talosctl -n c863ccdf-cdb7-4519-878e-5484a1be119a service kubelet restart
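With more worker nodes, repeating the command gets tedious; the restarts can be scripted in a loop. This is a sketch using the example node IDs from the output above; the echo lets you review the generated commands, and removing it runs them for real:

```shell
# Print the talosctl restart command for each worker node ID;
# drop the "echo" to execute the restarts.
for node in \
  26b87860-38b4-400f-af72-bc8d26ab6cd6 \
  2f6af2ad-bebb-42a5-b6b0-2b9397acafbc \
  5f93376a-95f6-496c-b4b7-630a0607ac7f \
  c863ccdf-cdb7-4519-878e-5484a1be119a; do
  echo talosctl -n "$node" service kubelet restart
done
```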

2.13 - File an Issue

A guide on how to file an issue for Omni.

This guide shows you how to file an issue for Omni.

Click on the “Report an issue” button in the header:

Now, click on the “New issue” button:

Choose the issue type, fill in the details, and submit the issue.

2.14 - Install talosctl

A guide on how to install talosctl.

This guide shows you how to install talosctl.

Run the following:

curl -sL https://talos.dev/install | sh

You now have talosctl installed.

2.15 - Manage Access Policies (ACLs)

A guide on how to manage Omni ACLs.

This guide will show how to give the user support@example.com full access to the staging cluster but limited access to the production cluster.

Create an AccessPolicy resource

Create a local file acl.yaml:

metadata:
  namespace: default
  type: AccessPolicies.omni.sidero.dev
  id: access-policy
spec:
  rules:
    - users:
        - support@example.com
      clusters:
        - staging
      role: Operator
      kubernetes:
        impersonate:
          groups:
            - system:masters
    - users:
        - support@example.com
      clusters:
        - production
      role: Reader
      kubernetes:
        impersonate:
          groups:
            - my-app-read-only
  tests:
    - name: support engineer has full access to staging cluster
      user:
        name: support@example.com
      cluster:
        name: staging
      expected:
        role: Operator
        kubernetes:
          impersonate:
            groups:
              - system:masters
    - name: support engineer has read-only access to my-app namespace in production cluster
      user:
        name: support@example.com
      cluster:
        name: production
      expected:
        role: Reader
        kubernetes:
          impersonate:
            groups:
              - my-app-read-only

As an Omni admin, apply this ACL using omnictl:

omnictl apply -f acl.yaml

When users interact with Omni API or UI, they will be assigned to the role specified in the ACL.

When users access the Kubernetes cluster through Omni, they will have the groups specified in the ACL.

Kubernetes RBAC then can be used to grant permissions to these groups.

Create Kubernetes RBAC resources

Locally, create rbac.yaml with a Namespace called my-app, and a Role & RoleBinding to give access to the my-app-read-only group:

apiVersion: v1
kind: Namespace
metadata:
  name: my-app
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
  namespace: my-app
rules:
  - apiGroups: ["", "extensions", "apps", "batch", "autoscaling"]
    resources: ["*"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-only
subjects:
  - kind: Group
    name: my-app-read-only
    apiGroup: rbac.authorization.k8s.io

As the cluster admin, apply the manifests to the Kubernetes cluster production:

kubectl apply -f rbac.yaml

Test the access

Try to access the cluster with a kubeconfig generated by the user support@example.com:

kubectl get pods -n my-app

The user should be able to list pods in the my-app namespace because of the Role and RoleBinding created above.

Try to list pods in another namespace:

kubectl get pod -n default

The user should not be able to list pods in namespace default.

2.16 - Create Etcd Backups

A guide on how to create cluster etcd backups using Omni.

CLI

First of all, check the current overall status of the cluster backup subsystem:

omnictl get etcdbackupoverallstatus

If you have a freshly created Omni instance, the output will be similar to this:

NAMESPACE   TYPE                      ID                          VERSION   CONFIGURATION NAME   CONFIGURATION ERROR   LAST BACKUP STATUS   LAST BACKUP ERROR   LAST BACKUP TIME   CONFIGURATION ATTEMPT
ephemeral   EtcdBackupOverallStatus   etcdbackup-overall-status   1         s3                   not initialized

The combination of the CONFIGURATION NAME and CONFIGURATION ERROR fields displays the current backup store configuration status. Omni currently supports two backup stores, local and s3, configured during Omni initialization. The output above indicates that the backup store is set to use the s3 store; however, the s3 configuration itself has not yet been added, so the CONFIGURATION ERROR field shows not initialized. The remaining fields are empty because no backups have been created yet.

S3 configuration

To use S3 as the backup storage, you will first need to configure the S3 credentials for Omni to use. This can be done by creating an EtcdBackupS3Configs.omni.sidero.dev resource in Omni. Below is an example for MinIO S3:

metadata:
  namespace: default
  type: EtcdBackupS3Configs.omni.sidero.dev
  id: etcd-backup-s3-conf
spec:
  bucket: mybucket
  region: us-east-1
  endpoint: http://127.0.0.1:9000
  accesskeyid: access
  secretaccesskey: secret123
  sessiontoken: ""

Let’s go through the fields:

  • bucket - the name of the S3 bucket for storing backups. This is the only field required in all cases.
  • region - the region of the S3 bucket. If not provided, Omni will use the default region.
  • endpoint - the S3 endpoint. If not provided, Omni will use the default AWS S3 endpoint.
  • accesskeyid and secretaccesskey - the credentials to access the S3 bucket. If not provided, Omni will assume it runs in an EC2 instance with an IAM role that has access to the specified S3 bucket.
  • sessiontoken - the session token (if any) for accessing the S3 bucket.

Save it as <file-name>.yaml and apply using omnictl apply -f <file-name>.yaml. During resource creation, Omni will validate the provided credentials by attempting to list the objects in the bucket. It will return an error if the validation fails and will not update the resource.

Let’s get our overall status again and check the output:

NAMESPACE   TYPE                      ID                          VERSION   CONFIGURATION NAME   CONFIGURATION ERROR   LAST BACKUP STATUS   LAST BACKUP ERROR   LAST BACKUP TIME   CONFIGURATION ATTEMPT
ephemeral   EtcdBackupOverallStatus   etcdbackup-overall-status   2         s3

Note that the CONFIGURATION ERROR field is now empty, indicating that the provided configuration is valid.

Manual backup

Now, let’s create a manual backup. To do that, we need to create a resource:

metadata:
  namespace: ephemeral
  type: EtcdManualBackups.omni.sidero.dev
  id: <your-cluster-name>
spec:
  backupat:
    seconds: <unix-timestamp>
    nanos: 0

The <unix-timestamp> should be no more than one minute in the future or in the past. The easiest way to get the current timestamp is to simply invoke date +%s in your shell. The nanos field should always be 0.
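Filling in the timestamp by hand is error-prone, so a small heredoc can generate the resource with the current time. This is a sketch; my-cluster and the output file name are placeholders to adjust:

```shell
# Write the manual-backup resource with the current Unix timestamp filled in.
# "my-cluster" is a placeholder; substitute your cluster name.
cat > manual-backup.yaml <<EOF
metadata:
  namespace: ephemeral
  type: EtcdManualBackups.omni.sidero.dev
  id: my-cluster
spec:
  backupat:
    seconds: $(date +%s)
    nanos: 0
EOF
```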

After you save the resource as <file-name>.yaml, apply it using omnictl apply -f <file-name>.yaml. In a few seconds, you can check the status of the backup:

omnictl get etcdbackupstatus -o yaml

This command prints the per-cluster backup status. The output will be similar to this:

metadata:
  namespace: ephemeral
  type: EtcdBackupStatuses.omni.sidero.dev
  id: <cluster-name>
  version: 1
  owner: EtcdBackupController
  phase: running
spec:
  status: 1
  error: ""
  lastbackuptime:
    seconds: 1702166400
    nanos: 985220192
  lastbackupattempt:
    seconds: 1702166400
    nanos: 985220192

You can also get the overall status of the backup subsystem, where the output will be similar to this:

metadata:
  namespace: ephemeral
  type: EtcdBackupOverallStatuses.omni.sidero.dev
  id: etcdbackup-overall-status
  version: 3
  owner: EtcdBackupOverallStatusController
  phase: running
spec:
  configurationname: s3
  configurationerror: ""
  lastbackupstatus:
    status: 1
    error: ""
    lastbackuptime:
      seconds: 1702166400
      nanos: 985220192
    lastbackupattempt:
      seconds: 1702166400
      nanos: 985220192

Automatic backup

Omni also supports automatic backups. You can enable this feature by directly editing the cluster resource Clusters.omni.sidero.dev or by using cluster templates. Let’s explore both approaches.

Cluster templates

Enabling automatic backups using cluster templates is quite straightforward. First, you’ll need a template that resembles the following:

kind: Cluster
name: talos-default
kubernetes:
  version: v1.28.2
talos:
  version: v1.5.5
features:
  backupConfiguration:
    interval: 1h
---
kind: ControlPlane
machines:
  - 1dd4397b-37f1-4196-9c37-becef670b64a
---
kind: Workers
machines:
  - 0d1f01c3-0a8a-4560-8745-bb792e3dfaad
  - a0f29661-cd2d-4e25-a6c9-da5ca4c48d58

This is a minimal example of a cluster template for a cluster with a single-node control plane and two worker nodes. Your machine UUIDs will likely differ, as will your Kubernetes and Talos versions. You will need both versions, as well as the cluster name, in your cluster template; to obtain them, refer to the clustermachinestatus and cluster resources.

In this example, we are going to set the backup interval for the cluster to one hour. Save this template as <file-name>.yaml. Before applying this change, we want to ensure that no automatic backup is enabled for this cluster. To do that, let’s run the following command:

omnictl cluster template -f <file-name>.yaml diff

The Omni response will resemble the following:

--- Clusters.omni.sidero.dev(default/talos-default)
+++ Clusters.omni.sidero.dev(default/talos-default)
@@ -19,4 +19,7 @@
   features:
     enableworkloadproxy: false
     diskencryption: false
-  backupconfiguration: null
+  backupconfiguration:
+    interval:
+      seconds: 3600
+      nanos: 0

Now that we have verified that Omni does not already have an automatic backup enabled, we will apply the change:

omnictl cluster template -f <file-name>.yaml sync

If you didn’t have any backups previously, Omni will not wait for an hour and will immediately create a fresh backup. You can verify this by running the following command:

omnictl get etcdbackup --selector omni.sidero.dev/cluster=talos-default

Keep in mind that to obtain the backup status, you will need to use the label selector omni.sidero.dev/cluster along with your cluster name. In this example it is talos-default.

NAMESPACE   TYPE         ID                         VERSION     CREATED AT
external    EtcdBackup   talos-default-1702166400   undefined   {"nanos":0,"seconds":1702166400}

Cluster resource

Another way to enable automatic backups is by directly editing the cluster resource. To do this, first retrieve the cluster resource from Omni:

omnictl get cluster talos-default -o yaml
metadata:
  namespace: default
  type: Clusters.omni.sidero.dev
  id: talos-default
  finalizers:
    - KubernetesUpgradeStatusController
    - TalosUpgradeStatusController
    - SecretsController
    - ClusterController
spec:
  installimage: ""
  kubernetesversion: 1.28.2
  talosversion: 1.5.5
  features:
    enableworkloadproxy: false
    diskencryption: false

Add fields related to the backup configuration while preserving the existing fields:

metadata:
  namespace: default
  type: Clusters.omni.sidero.dev
  id: talos-default
  finalizers:
    - KubernetesUpgradeStatusController
    - TalosUpgradeStatusController
    - SecretsController
    - ClusterController
spec:
  installimage: ""
  kubernetesversion: 1.28.2
  talosversion: 1.5.5
  features:
    enableworkloadproxy: false
    diskencryption: false
  backupconfiguration:
    interval:
      seconds: 3600
      nanos: 0

Save it to the file and apply using omnictl apply -f <file-name>.yaml. You will get the output similar to the one above for the cluster template.

2.17 - Create a Machine Class

A guide on how to create a machine class.

This guide shows you how to create a machine class.

First, click the “Machine Classes” section button in the sidebar.

Next, click the “Create Machine Class” button.

Add machine query conditions by typing them manually in the input box.

Clicking a label in the machine list will add it to the input box.

Clicking “+” will add blocks that match machines using the boolean OR operator.

Name the machine class.

Click “Create Machine Class”.

Create a file called machine-class.yaml with the following content:

metadata:
  namespace: default
  type: MachineClasses.omni.sidero.dev
  id: test
spec:
  matchlabels:
    # matches machines with amd64 architecture and more than 2 CPUs
    - omni.sidero.dev/arch: amd64, omni.sidero.dev/cpus > 2

Create the machine class:

omnictl apply -f machine-class.yaml

2.18 - Create a Cluster

A guide on how to create a cluster.

This guide shows you how to create a cluster from registered machines.

First, click the “Clusters” section button in the sidebar. Next, click the “Create Cluster” button.

Select the role for each machine you would like to create a cluster from. Now that each machine has a role, choose the install disk from the dropdown menu for each machine. Finally, click “Create Cluster”.

Create a file called cluster.yaml with the following content:

kind: Cluster
name: example
kubernetes:
  version: v1.26.0
talos:
  version: v1.3.2
---
kind: ControlPlane
machines:
  - <control plane machine UUID>
---
kind: Workers
machines:
  - <worker machine UUID>
---
kind: Machine
name: <control plane machine UUID>
install:
  disk: /dev/<disk>
---
kind: Machine
name: <worker machine UUID>
install:
  disk: /dev/<disk>

Now, validate the document:

omnictl cluster template validate -f cluster.yaml

Create the cluster:

omnictl cluster template sync -f cluster.yaml --verbose

Finally, wait for the cluster to be up:

omnictl cluster template status -f cluster.yaml

Create a file called cluster.yaml with the following content:

kind: Cluster
name: example
kubernetes:
  version: v1.28.0
talos:
  version: v1.5.4
---
kind: ControlPlane
machineClass:
  name: control-planes
  size: 1
---
kind: Workers
machineClass:
  name: workers
  size: 1
---
kind: Workers
name: secondary
machineClass:
  name: secondary-workers
  size: unlimited

Be sure to create the machine classes control-planes, workers, and secondary-workers beforehand; see the machine class how-to.

Now, validate the document:

omnictl cluster template validate -f cluster.yaml

Create the cluster:

omnictl cluster template sync -f cluster.yaml --verbose

Finally, wait for the cluster to be up:

omnictl cluster template status -f cluster.yaml

2.19 - Enable Disk Encryption

A guide on how to enable Omni KMS assisted disk encryption for a cluster.

First, click the “Clusters” section button in the sidebar. Next, click the “Create Cluster” button.

Select a Talos version >= 1.5.0 and check the “Enable Encryption” checkbox.

Create a file called cluster.yaml with the following content:

kind: Cluster
name: example
kubernetes:
  version: v1.27.3
talos:
  version: v1.5.0
features:
  diskEncryption: true
---
kind: ControlPlane
machines:
  - <control plane machine UUID>
---
kind: Workers
machines:
  - <worker machine UUID>
---
kind: Machine
name: <control plane machine UUID>
install:
  disk: /dev/<disk>
---
kind: Machine
name: <worker machine UUID>
install:
  disk: /dev/<disk>

Now, validate the document:

omnictl cluster template validate -f cluster.yaml

Create the cluster:

omnictl cluster template sync -f cluster.yaml --verbose

Finally, wait for the cluster to be up:

omnictl cluster template status -f cluster.yaml

2.20 - Add a User to Omni with SAML Enabled

A guide on how to add a user to Omni with SAML authentication enabled.

This guide shows you how to create a user in an Omni instance with SAML authentication enabled.

Enable new user access to Omni in your SAML identity provider.

Have the new user log in once from their account.

Log into Omni using another account with Admin permissions.

Find the newly added user in the list of users.

Now, select “Edit User” from the menu under the ellipsis:

Change its role to Reader, Operator or Admin:

Next, click “Update User”:

2.21 - Auto-assign roles to SAML users

A guide on how to assign Omni roles to SAML users automatically.

This guide shows you how to configure your Omni instance, so that new users logging in with SAML authentication are automatically assigned to a role based on their SAML role attributes.

Create the file assign-operator-to-engineers-label.yaml for the SAMLLabelRule resource, with the following content:

metadata:
  namespace: default
  type: SAMLLabelRules.omni.sidero.dev
  id: assign-operator-to-engineers-label
spec:
  assignroleonregistration: Operator
  matchlabels:
    - saml.omni.sidero.dev/role/engineers

As an admin, create it on your Omni instance using omnictl:

omnictl apply -f assign-operator-to-engineers-label.yaml

This will create a resource that assigns the Operator role to any user that logs in with SAML and has the SAML attribute Role with the value engineers.

Log in to Omni as a new SAML user with the SAML attribute with name Role and value engineers.

This will cause the user created on the Omni side to be labeled as saml.omni.sidero.dev/role/engineers.

This label will match the SAMLLabelRule resource we created above, and the user will automatically be assigned the Operator role.

2.22 - Create a Patch For Cluster Control Planes

A guide on how to create a config patch for the control plane of a cluster.

This guide shows you how to create a patch for the control plane machine set of a cluster.

Upon logging in, click the “Clusters” menu item on the left. Now, select “Config Patches” from the menu under the ellipsis:

Next, click “Create Patch”:

Pick the “Control Planes” option from the “Patch Target” dropdown:

Type in the desired config patch:

Click “Save” to create the config patch:

2.23 - Create a Patch For Cluster Machines

A guide on how to create a config patch for a machine in a cluster.

This guide shows you how to create a patch for a machine in a cluster.

Upon logging in, click the “Clusters” menu item on the left. Now, select “Config Patches” from the menu under the ellipsis:

Next, click “Create Patch”:

Pick the specific machine from the “Patch Target” dropdown:

Type in the desired config patch:

Click “Save” to create the config patch:

2.24 - Create a Patch For Cluster Workers

A guide on how to create a config patch for the worker machine set of a cluster.

This guide shows you how to create a patch for the worker machine set of a cluster.

Upon logging in, click the “Clusters” menu item on the left. Now, select “Config Patches” from the menu under the ellipsis:

Next, click “Create Patch”:

Pick the “Workers” option from the “Patch Target” dropdown:

Type in the desired config patch:

Click “Save” to create the config patch:

2.25 - Export a Cluster Template from a Cluster Created in the UI

A guide on how to export a cluster template from a cluster created in the UI.

This guide shows you how to export a cluster template from a cluster created in the UI. This is useful when you want to switch a cluster from being manually managed to being managed by cluster templates (i.e. via the CLI, to be used in CI automation).

Exporting the Cluster Template

To export a cluster, run the following command:

omnictl cluster template export -c my-cluster -o my-cluster-exported-template.yaml

It will export the template for the cluster with name my-cluster into the file my-cluster-exported-template.yaml.

If you inspect the exported template, you will see an output like the following:

kind: Cluster
name: my-cluster
labels:
  my-label: my-value
kubernetes:
  version: v1.27.8
talos:
  version: v1.5.5
---
kind: ControlPlane
machines:
  - 1e3133f4-fb7a-4b62-bd4f-b792e2df24e2
  - 5439f561-f09e-4259-8788-9ab835bb9922
  - 63564547-c9cb-4a30-a54a-8f95a29d66a5
---
kind: Workers
machines:
  - 4b46f512-55d0-482c-ac48-cd916b62b74e
patches:
  - idOverride: 500-04e39280-4b36-435e-bedc-75c4ab340a80
    annotations:
      description: Enable verbose logging for kubelet
      name: kubelet-verbose-log
    inline:
      machine:
        kubelet:
          extraArgs:
            v: "4"
---
kind: Machine
name: 1e3133f4-fb7a-4b62-bd4f-b792e2df24e2
install:
  disk: /dev/vda
---
kind: Machine
name: 4b46f512-55d0-482c-ac48-cd916b62b74e
---
kind: Machine
name: 5439f561-f09e-4259-8788-9ab835bb9922
---
kind: Machine
name: 63564547-c9cb-4a30-a54a-8f95a29d66a5

Using the Exported Cluster Template to Manage the Cluster

You can now use this template to manage the cluster - edit the template as needed and sync it using the CLI:

omnictl cluster template sync -f my-cluster-exported-template.yaml

Check the sync status:

omnictl cluster template status -f my-cluster-exported-template.yaml

2.26 - Create a Hybrid Cluster

A guide on how to create a hybrid cluster.

This guide shows you how to create and configure a cluster consisting of machines that are any combination of bare metal, cloud virtual machines, on-premise virtual machines, or SBCs, using KubeSpan, which enables Kubernetes to communicate securely with machines in the cluster on different networks.

Refer to the general guide on creating a cluster to get started. To create a hybrid cluster, apply the following cluster patch by clicking on “Config Patches” and navigating to the “Cluster” tab:

machine:
  network:
    kubespan:
      enabled: true

2.27 - Use Kubectl With Omni

This guide shows you how to use kubectl with an Omni-managed cluster.

Navigate to the clusters page by clicking on the “Clusters” button in the sidebar.

Click the three dots in the cluster’s item to access the options menu.

Click “Download kubeconfig”.

Alternatively you can click on the cluster and download the kubeconfig from the cluster dashboard.

Install the oidc-login plugin per the official documentation: https://github.com/int128/kubelogin.

2.28 - Install and Configure Omnictl

A guide on installing and configuring omnictl for Omni.

This guide shows you how to install and configure omnictl.

Download omnictl and omniconfig from the Omni dashboard.

Add the downloaded omniconfig.yaml to the default location to use it with omnictl:

mkdir -p ~/.config/omni
cp omniconfig.yaml ~/.config/omni/config

If you would like to merge the omniconfig.yaml with an existing configuration, use the following command:

omnictl config merge ./omniconfig.yaml

List the contexts to verify that the omniconfig was added:

$ omnictl config contexts
CURRENT   NAME         URL
          ...
          example      https://example.omni.siderolabs.io/
          ...

Run omnictl for the first time to perform initial authentication using a web browser:

omnictl get clusters

If the browser window does not open automatically, it can be opened manually by copying and pasting the URL into a web browser:

BROWSER=echo omnictl get clusters

2.29 - Deploy Omni On-prem

This guide shows you how to deploy Omni on-prem. This guide assumes that Omni will be deployed on an Ubuntu machine. Small differences should be expected when using a different OS.

For SAML integration sections, this guide assumes Azure AD will be the provider for SAML.

Prereqs

There are several prerequisites for deploying Omni on-prem.

Install Docker

Install Docker according to the Ubuntu installation guide here.

Generate Certs

On-prem Omni will require valid SSL certificates. This means that self-signed certs will not work as of the time of this writing. Generating certificates is left as an exercise to the user, but here is a rough example that was tested using DigitalOcean’s DNS integration with certbot to generate certificates. The process should be very similar for other providers like Route53.

# Install certbot
$ sudo snap install --classic certbot

# Allow for root access
$ sudo snap set certbot trust-plugin-with-root=ok

# Install DNS provider
$ sudo snap install certbot-dns-<provider>

# Create creds file with API tokens
$ echo '<creds example>' > creds.ini

# Create certs for desired domain
$ sudo certbot certonly --dns-<provider> -d <domain name for onprem omni>

Configure Authentication

Auth0

First, you will need an Auth0 account.

On the account level, configure “Authentication - Social” to allow GitHub and Google login.

Create an Auth0 application of the type “single page web application”.

Configure the Auth0 application with the following:

  • Allowed callback URLs: https://<domain name for onprem omni>
  • Allowed web origins: https://<domain name for onprem omni>
  • Allowed logout URLs: https://<domain name for onprem omni>

Disable username/password auth on “Authentication - Database - Applications” tab.

Enable GitHub and Google login on the “Authentication - Social” tab.

Enable email access in the GitHub settings.

Take note of the following information from the Auth0 application:

  • Domain
  • Client ID

SAML Identity Providers

Create Etcd Encryption Key

Generate a GPG key:

gpg --quick-generate-key "Omni (Used for etcd data encryption) how-to-guide@siderolabs.com" rsa4096 cert never

Find the fingerprint of the generated key:

gpg --list-secret-keys

Using the fingerprint, add an encryption subkey and export:

gpg --quick-add-key <fingerprint> rsa4096 encr never
gpg --export-secret-key --armor how-to-guide@siderolabs.com > omni.asc

Generate UUID

It is important to generate a unique ID for this Omni deployment. It will also be necessary to use this same UUID each time you “docker run” your Omni instance.

Generate a UUID with:

export OMNI_ACCOUNT_UUID=$(uuidgen)

Deploy Omni

Running Omni is a simple docker run, with some slight differences in flags for Auth0 vs. SAML authentication.

Auth0

docker run \
  --net=host \
  --cap-add=NET_ADMIN \
  -v $PWD/etcd:/_out/etcd \
  -v <path to TLS certificate>:/tls.crt \
  -v <path to TLS key>:/tls.key \
  -v $PWD/omni.asc:/omni.asc \
  ghcr.io/siderolabs/omni:<tag> \
    --account-id=${OMNI_ACCOUNT_UUID} \
    --name=onprem-omni \
    --cert=/tls.crt \
    --key=/tls.key \
    --siderolink-api-cert=/tls.crt \
    --siderolink-api-key=/tls.key \
    --private-key-source=file:///omni.asc \
    --event-sink-port=8091 \
    --bind-addr=0.0.0.0:443 \
    --siderolink-api-bind-addr=0.0.0.0:8090 \
    --k8s-proxy-bind-addr=0.0.0.0:8100 \
    --advertised-api-url=https://<domain name for onprem omni>/ \
    --siderolink-api-advertised-url=https://<domain name for onprem omni>:8090/ \
    --siderolink-wireguard-advertised-addr=<ip address of the host running Omni>:50180 \
    --advertised-kubernetes-proxy-url=https://<domain name for onprem omni>:8100/ \
    --auth-auth0-enabled=true \
    --auth-auth0-domain=<Auth0 domain> \
    --auth-auth0-client-id=<Auth0 client ID> \
    --initial-users=<email address>

Configuration options are available in the help menu (--help).

SAML

docker run \
  --net=host \
  --cap-add=NET_ADMIN \
  -v $PWD/etcd:/_out/etcd \
  -v <path to full chain TLS certificate>:/tls.crt \
  -v <path to TLS key>:/tls.key \
  -v $PWD/omni.asc:/omni.asc \
  ghcr.io/siderolabs/omni:<tag> \
    --account-id=${OMNI_ACCOUNT_UUID} \
    --name=onprem-omni \
    --cert=/tls.crt \
    --key=/tls.key \
    --siderolink-api-cert=/tls.crt \
    --siderolink-api-key=/tls.key \
    --private-key-source=file:///omni.asc \
    --event-sink-port=8091 \
    --bind-addr=0.0.0.0:443 \
    --siderolink-api-bind-addr=0.0.0.0:8090 \
    --k8s-proxy-bind-addr=0.0.0.0:8100 \
    --advertised-api-url=https://<domain name for onprem omni>/ \
    --siderolink-api-advertised-url=https://<domain name for onprem omni>:8090/ \
    --siderolink-wireguard-advertised-addr=<ip address of the host running Omni>:50180 \
    --advertised-kubernetes-proxy-url=https://<domain name for onprem omni>:8100/ \
    --auth-saml-enabled=true \
    --auth-saml-url=<app federation metadata url copied during Azure AD setup>

Configuration options are available in the help menu (--help).

2.30 - Back Up On-prem Omni Database

This guide shows you how to back up the database of an on-prem Omni instance.

Omni uses etcd as its database.

There are 2 operating modes for etcd: embedded and external.

When Omni is run with the --etcd-embedded=true flag, it configures the embedded etcd server to listen on the addresses specified by the --etcd-endpoints flag (http://localhost:2379 by default).

On the same host where Omni is running (in Docker, --network=host needs to be used), you can use the etcdctl command to back up the database:

etcdctl --endpoints http://localhost:2379 snapshot save snapshot.db

The command will save the snapshot of the database to the snapshot.db file.

It is recommended to periodically (e.g. with a cron job) take snapshots and store them in a safe location, like an S3 bucket.
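Such a cron job could be sketched as follows, assuming the aws CLI is installed and configured; the bucket name my-omni-backups and the /var/backups path are placeholders to adjust (note that % must be escaped as \% inside crontab command fields):

```shell
# /etc/cron.d/omni-etcd-backup -- take a snapshot daily at 02:00 and
# upload it to S3. Bucket name and paths are example placeholders.
0 2 * * * root etcdctl --endpoints http://localhost:2379 snapshot save /var/backups/omni-etcd.db && aws s3 cp /var/backups/omni-etcd.db s3://my-omni-backups/omni-etcd-$(date +\%F).db
```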

2.31 - Configure Keycloak for Omni

  1. Log in to Keycloak.

  2. Create a realm.

  • In the upper left corner of the page, select the dropdown where it says master

  • Fill in the realm name and select create

  3. Find the realm metadata.
  • In the realm settings, there is a link to the metadata needed for SAML under Endpoints.
    • Copy the link or save the data to a file. It will be needed for the installation of Omni.

  4. Create a client.
  • Select the Clients tab on the left

  • Fill in the General Settings as shown in the example below. Replace the hostname in the example with your own Omni hostname or IP.
    • Client type
    • Client ID
    • Name

  • Fill in the Login settings as shown in the example below. Replace the hostname in the example with your own Omni hostname or IP.
    • Root URL
    • Valid redirect URIs
    • Master SAML Processing URL

  • Modify the Signature and Encryption settings.
    • Sign documents: off
    • Sign assertions: on

  • Set the Client signature required value to off.

  • Modify Client Scopes

  • Select Add predefined mapper.

  • The following mappers need to be added, because Omni uses these attributes for assigning permissions.
    • X500 email
    • X500 givenName
    • X500 surname

  • Add a new user (optional)
    • If Keycloak is being used as an Identity Provider, users can be created here.

  • Enter the user information and set the Email verified to Yes

  • Set a password for the user.

2.32 - Configure Entra ID for Omni

In the Azure portal, click “Enterprise Applications”.

Click “New Application” and search for “Entra SAML Toolkit”.

Name this application something more meaningful if desired and click “Create”.

Under the “Manage” section of the application, select “Single sign-on”, then “SAML” as the single sign-on method.

In section 1 of this form, enter identifier, reply, and sign on URLs that match the following and save:

  • Identifier (Entity ID): https://<domain name for omni>/saml/metadata
  • Reply URL (Assertion Consumer Service URL): https://<domain name for omni>/saml/acs
  • Sign on URL: https://<domain name for omni>/login

From section 3, copy the “App Federation Metadata Url” for later use.

Again, under the “Manage” section of the application, select “Users and groups”.

Add any users or groups you wish to give access to your Omni environment here.

2.33 - Configure Okta for Omni

  1. Log in to Okta
  2. Create a new App Integration
  3. Select “SAML 2.0”
  4. Give the Application a recognisable name (we suggest simply “Omni”)
  5. Set the SAML Settings and Attribute Statements as shown below:

  6. Click “Next” and optionally fill out the Feedback, then click “Finish”

Once that is complete, you should now be able to open the “Assignments” tab for the application you just created and manage your users and access as usual.

3 - Reference

3.1 - Cluster Templates

Reference documentation for cluster templates.

Cluster templates are parsed, validated, and converted to Omni resources, which are then created or updated via the Omni API. Omni guarantees backward compatibility for cluster templates, so the same template can be used with any future version of Omni.

All referenced files in machine configuration patches should be stored relative to the current working directory.

Structure

The Cluster Template is a YAML file consisting of multiple documents, with each document having a kind field that specifies the type of the document. Some documents might also have a name field that specifies the name (ID) of the document.

kind: Cluster
name: example
labels:
  my-label: my-value
kubernetes:
  version: v1.26.0
talos:
  version: v1.3.2
features:
  diskencryption: true
patches:
  - name: kubespan-enabled
    inline:
      machine:
        network:
          kubespan:
            enabled: true
---
kind: ControlPlane
machines:
  - 27c16241-96bf-4f17-9579-ea3a6c4a3ca8
  - 4bd92fba-998d-4ef3-ab43-638b806dd3fe
  - 8fdb574a-a252-4d7d-94f0-5cdea73e140a
---
kind: Workers
machines:
  - b885f565-b64f-4c7a-a1ac-d2c8c2781373
  - a54f21dc-6e48-4fc1-96aa-3d7be5e2612b
---
kind: Workers
name: xlarge
machines:
  - 1f721dee-6dbb-4e71-9832-226d73da3841
---
kind: Machine
name: 27c16241-96bf-4f17-9579-ea3a6c4a3ca8
---
kind: Machine
name: 4bd92fba-998d-4ef3-ab43-638b806dd3fe
install:
  disk: /dev/vda
---
kind: Machine
name: 8fdb574a-a252-4d7d-94f0-5cdea73e140a
install:
  disk: /dev/vda
---
kind: Machine
name: b885f565-b64f-4c7a-a1ac-d2c8c2781373
install:
  disk: /dev/vda
---
kind: Machine
name: a54f21dc-6e48-4fc1-96aa-3d7be5e2612b
locked: true
install:
  disk: /dev/vda
---
kind: Machine
name: 1f721dee-6dbb-4e71-9832-226d73da3841
install:
  disk: /dev/vda

Each cluster template should have exactly one document of kind: Cluster, exactly one of kind: ControlPlane, and any number of kind: Workers documents with different names.

Every Machine document must be referenced by either a ControlPlane or Workers document.
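A typical workflow with such a template can be sketched as follows (the file name cluster-template.yaml is an example): validate the template offline, preview the resulting resource changes, then sync it. Each of these omnictl subcommands is covered in the omnictl CLI reference.

```shell
# Example workflow; cluster-template.yaml is a placeholder file name.
TEMPLATE=cluster-template.yaml

if command -v omnictl >/dev/null 2>&1; then
  omnictl cluster template validate -f "$TEMPLATE"  # offline structural validation
  omnictl cluster template diff -f "$TEMPLATE"      # preview resource changes
  omnictl cluster template sync -f "$TEMPLATE"      # create/update/delete resources
else
  echo "omnictl is not installed; skipping"
fi
```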

Document Types

Cluster

The Cluster document specifies the cluster configuration, labels, defines the cluster name and base component versions.

kind: Cluster
name: example
labels:
  my-label: my-value
annotations:
  my-annotation: my-value
kubernetes:
  version: v1.26.1
talos:
  version: v1.3.3
features:
  enableWorkloadProxy: true
  diskEncryption: true
  backupConfiguration:
    interval: 1h
patches:
  - file: patches/example-patch.yaml
Field | Type | Description
kind | string | Cluster
name | string | Cluster name: only letters, digits and - and _ are allowed. The cluster name is used as a key by all other documents, so if the cluster name changes, a new cluster will be created.
labels | map[string]string | Labels to be applied to the cluster.
annotations | map[string]string | Annotations to be applied to the cluster.
kubernetes.version | string | Kubernetes version to use, vA.B.C.
talos.version | string | Talos version to use, vA.B.C.
features.enableWorkloadProxy | boolean | Whether to enable the workload proxy feature. Defaults to false.
features.diskEncryption | boolean | Whether to enable disk encryption. Defaults to false.
features.backupConfiguration.interval | string | Cluster etcd backup interval. Must be a valid Go duration. A zero (0) interval disables automatic backups.
patches | array | List of patches to apply to the cluster.

ControlPlane

The ControlPlane document specifies the control plane configuration, defines the number of control plane nodes, and the list of machines to use.

As control plane machines run an etcd cluster, it is recommended to use a number of machines for the control plane that can achieve a stable quorum (i.e. an odd number: 1, 3, 5, and so on). Changing the set of machines in the control plane will trigger a rolling scale-up/scale-down of the control plane.

The control plane should have at least one machine, but at least 3 machines are recommended for high availability.

kind: ControlPlane
labels:
  my-label: my-value
annotations:
  my-annotation: my-value
machines:
  - 27c16241-96bf-4f17-9579-ea3a6c4a3ca8
  - 4bd92fba-998d-4ef3-ab43-638b806dd3fe
  - 8fdb574a-a252-4d7d-94f0-5cdea73e140a
patches:
  - file: patches/example-controlplane-patch.yaml
Field | Type | Description
kind | string | ControlPlane
labels | map[string]string | Labels to be applied to the control plane machine set.
annotations | map[string]string | Annotations to be applied to the control plane machine set.
machines | array | List of machine IDs to use for control plane nodes (mutually exclusive with machineClass).
patches | array | List of patches to apply to the machine set.
machineClass | MachineClass | Machine Class configuration (mutually exclusive with machines).

Workers

The Workers document specifies the worker configuration, defines the number of worker nodes, and the list of machines to use.

kind: Workers
name: workers
labels:
  my-label: my-value
annotations:
  my-annotation: my-value
machines:
  - b885f565-b64f-4c7a-a1ac-d2c8c2781373
updateStrategy:
  rolling:
    maxParallelism: 3
deleteStrategy:
  type: Rolling
  rolling:
    maxParallelism: 5
patches:
  - file: patches/example-workers-patch.yaml
Field | Type | Description
kind | string | Workers
name | string | Worker machine set name: only letters, digits and - and _ are allowed. Defaults to workers when omitted. Must be unique and must not be control-planes.
labels | map[string]string | Labels to be applied to the worker machine set.
annotations | map[string]string | Annotations to be applied to the worker machine set.
machines | array | List of machine IDs to use as worker nodes in the machine set (mutually exclusive with machineClass).
patches | array | List of patches to apply to the machine set.
machineClass | MachineClass | Machine Class configuration (mutually exclusive with machines).
updateStrategy | UpdateStrategy | Update strategy for the machine set. Defaults to type: Rolling with maxParallelism: 1.
deleteStrategy | UpdateStrategy | Delete strategy for the machine set. Defaults to type: Unset.

MachineClass

The MachineClass section of the Control Plane or the Workers defines the rule for picking the machines in the machine set.

kind: Workers
name: workers
machineClass:
  name: worker-class
  size: 2
Field | Type | Description
name | string | Name of the machine class to use.
size | number | Number of machines to pick from the matching machine class.

UpdateStrategy

The UpdateStrategy section of the Workers defines the update and/or the delete strategy for the machine set.

kind: Workers
name: workers
updateStrategy:
  rolling:
    maxParallelism: 3
deleteStrategy:
  type: Rolling
  rolling:
    maxParallelism: 5
Field | Type | Description
type | string | Strategy type. Can be Rolling or Unset. Defaults to Rolling for updateStrategy and Unset for deleteStrategy. When Unset, all updates and/or deletes will be applied at once.
rolling.maxParallelism | number | Maximum number of machines to update and/or delete in parallel. Only used when the type is Rolling. Defaults to 1.

Machine

The Machine document specifies the install disk and machine-specific configuration patches. They are optional, but every Machine document must be referenced by either a ControlPlane or Workers document.

kind: Machine
name: 27c16241-96bf-4f17-9579-ea3a6c4a3ca8
labels:
  my-label: my-value
annotations:
  my-annotation: my-value
locked: false
install:
  disk: /dev/vda
patches:
  - file: patches/example-machine-patch.yaml
Field | Type | Description
kind | string | Machine
name | string | Machine ID.
labels | map[string]string | Labels to be applied to the machine set node.
annotations | map[string]string | Annotations to be applied to the machine set node.
locked | boolean | Whether the machine should be marked as locked. Can be true only if the machine is used as a worker.
install.disk | string | Disk to install Talos on. Matters only for Talos running from ISO or iPXE.
patches | array | List of patches to apply to the machine.

Common Fields

patches

The patches field is a list of machine configuration patches to apply to a cluster, a machine set, or an individual machine. Config patches modify the configuration before it is applied to each machine in the cluster. Changing a configuration patch modifies the machine configuration, which is then automatically reapplied to the machine.

patches:
  - file: patches/example-patch.yaml
  - name: kubespan-enabled
    inline:
      machine:
        network:
          kubespan:
            enabled: true
  - idOverride: 950-set-env-vars
    labels:
      my-label: my-value
    annotations:
      my-annotation: my-value
    inline:
      machine:
        env:
          MY_ENV_VAR: my-value
Field | Type | Description
file | string | Path to the patch file. Path is relative to the current working directory when executing omnictl. File should contain Talos machine configuration strategic patch.
name | string | Name of the patch. Required for inline patches when idOverride is not set, optional for file patches (default name will be based on the file path).
idOverride | string | Override the config patch ID, so it won’t be generated from the name or file.
labels | map[string]string | Labels to be applied to the config patch.
annotations | map[string]string | Annotations to be applied to the config patch.
inline | object | Inline patch containing Talos machine configuration strategic patch.

A configuration patch may be either inline or file based. Inline patches are useful for small changes, file-based patches are useful for more complex changes, or changes shared across multiple clusters.

3.2 - Access Policies (ACLs)

Reference documentation for ACLs.

ACLs control fine-grained access of users to resources; they are validated, stored, and evaluated as an AccessPolicy resource in Omni.

At the moment, only Kubernetes cluster access (group impersonation) is supported.

Structure

AccessPolicy

The AccessPolicy is a single resource containing a set of user groups, a set of cluster groups, a list of matching rules and a list of tests.

metadata:
  namespace: default
  type: AccessPolicies.omni.sidero.dev
  id: access-policy
spec:
  usergroups:
    # match level-1 users by fnmatch expression
    level-1:
      users:
        - match: level-1*
    # match level-2 users by label selectors
    level-2:
      users:
        - labelselectors:
            - level=2
    # match level-3 users by explicit list
    level-3:
      users:
        - name: admin1@example.com
        - name: admin2@example.com
  clustergroups:
    dev:
      clusters:
        - match: dev-*
    staging:
      clusters:
        - match: staging-*
        - match: preprod-*
    production:
      clusters:
        - match: prod-*
  rules:
    - users:
        - group/level-1
      clusters:
        - group/dev
      role: Operator
    - users:
        - group/level-1
      clusters:
        - group/staging
      role: Reader
      kubernetes:
        impersonate:
          groups:
            - read-only
    - users:
        - group/level-2
      clusters:
        - group/dev
        - group/staging
      role: Operator
    - users:
        - group/level-2
      clusters:
        - group/production
      role: Reader
      kubernetes:
        impersonate:
          groups:
            - read-only
    - users:
        - group/level-3
      clusters:
        - group/dev
        - group/staging
        - group/production
      role: Admin
    # simple rule - without links to user or cluster groups
    - users:
        - vault-admin@example.com
      clusters:
        - vault
      role: Admin
  tests:
    # level-1 tests
    - name: level-1 engineer has Operator access to dev cluster
      user:
        name: level-1-a@example.com
      cluster:
        name: dev-cluster-1
      expected:
        role: Operator
    - name: level-1 engineer has read-only access to staging cluster
      user:
        name: level-1-b@example.com
      cluster:
        name: staging-cluster-1
      expected:
        role: Reader
        kubernetes:
          impersonate:
            groups:
              - read-only
    - name: level-1 engineer has no access to production cluster
      user:
        name: level-1-c@example.com
      cluster:
        name: production-cluster-1
      expected:
        role: None
        kubernetes:
          impersonate:
            groups: []
    # level-2 tests
    - name: level-2 engineer has Operator access to staging cluster
      user:
        name: something@example.com
        labels:
          level: "2"
      cluster:
        name: preprod-cluster-1
      expected:
        role: Operator
    - name: level-2 engineer has read-only access to prod cluster
      user:
        name: something@example.com
        labels:
          level: "2"
      cluster:
        name: prod-cluster-1
      expected:
        role: Reader
        kubernetes:
          impersonate:
            groups:
              - read-only
    # level-3 tests
    - name: level-3 engineer has admin access to prod cluster
      user:
        name: admin1@example.com
      cluster:
        name: prod-cluster-1
      expected:
        role: Admin
    # vault-admin tests
    - name: vault-admin has admin access to vault
      user:
        name: vault-admin@example.com
      cluster:
        name: vault
      expected:
        role: Admin
Field | Type | Description
metadata.namespace | string | Always set to default.
metadata.type | string | AccessPolicies.omni.sidero.dev.
metadata.id | string | Always set to access-policy.
spec.usergroups | map[string]UserGroup | Map of user group names to user group definitions.
spec.clustergroups | map[string]ClusterGroup | Map of cluster group names to cluster group definitions.
spec.rules | array | List of rules to match.
spec.tests | array | List of tests to run when the resource is created or updated.
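Assuming the AccessPolicy document above is saved as access-policy.yaml (an example file name), it can be applied with omnictl apply; a dry run previews the change first.

```shell
POLICY=access-policy.yaml  # example file name for the AccessPolicy document

if command -v omnictl >/dev/null 2>&1; then
  omnictl apply --dry-run -f "$POLICY"  # preview the change without applying it
  omnictl apply -f "$POLICY"            # create or update the AccessPolicy resource
else
  echo "omnictl is not installed; skipping"
fi
```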

UserGroup

A UserGroup is a group of users.

users:
  - name: user1@example.com
  - name: user2@example.com
Field | Type | Description
users | array | List of Users.

User

A User is a single user.

name: user1@example.com
match: user1*
labelselectors:
  - level=1
Field | Type | Description
name | string | User identity used to authenticate to Omni.
match | string | fnmatch expression to match user identities.
labelselectors | array | List of label selector strings.

Note: name, match and labelselectors are mutually exclusive. Only one of them can be set to a non-zero value.

ClusterGroup

A ClusterGroup is a group of clusters.

clusters:
  - name: cluster-1
  - name: cluster-2
Field | Type | Description
clusters | array | List of Clusters.

Cluster

A Cluster is a single cluster.

name: cluster-1
match: cluster-1*
Field | Type | Description
name | string | Cluster name (ID).
match | string | fnmatch expression to match cluster names (IDs).

Note: name and match are mutually exclusive. Only one of them can be set to a non-zero value.

Rule

A Rule grants a role and Kubernetes impersonation groups to a set of users on a set of clusters.

The reserved prefix group/ is used to reference a user group in users or a cluster group in clusters.

users:
  - user1@example.com
  - group/user-group-1
clusters:
  - cluster1
  - group/cluster-group-1
role: Operator
kubernetes:
  impersonate:
    groups:
      - system:masters
      - another-impersonation-group
Field | Type | Description
users | array | List of Users or UserGroups.
clusters | array | List of Clusters or ClusterGroups.
role | enum | Role to grant to the user.
kubernetes.impersonate.groups | array | List of strings representing Kubernetes impersonation groups.

Role

A Role is the role to grant to the user.

Possible values: None, Reader, Operator, Admin.

Test

A Test is a single test case.

Test cases are run when the resource is created or updated, and if any of them fail, the operation is rejected.

name: support engineer has full access to staging cluster
user:
  name: support1@example.com
cluster:
  name: staging-cluster-1
expected:
  role: Operator
  kubernetes:
    impersonate:
      groups:
        - system:masters
Field | Type | Description
name | string | Human-friendly test case name.
user | TestUser | User identity to use in the test.
cluster | TestCluster | Cluster to use in the test.
expected | Expected | Expected result.

TestUser

A TestUser is the user identity to use in a test case.

name: user1@example.com
labels:
  level: "1"
Field | Type | Description
name | string | User identity to use in the test.
labels | map[string]string | Map of label names to label values.

TestCluster

A TestCluster is the cluster to use in a test case.

name: cluster-1
Field | Type | Description
name | string | Cluster name (ID).

Expected

An Expected is the expected result of a test case.

role: Operator
kubernetes:
  impersonate:
    groups:
      - system:masters
      - another-impersonation-group
Field | Type | Description
role | enum | Role to grant to the user.
kubernetes.impersonate.groups | array | List of strings representing Kubernetes impersonation groups.

3.3 - omnictl CLI

omnictl CLI tool reference.

omnictl apply

Create or update resource using YAML file as an input

omnictl apply [flags]

Options

  -d, --dry-run       Dry run, implies verbose
  -f, --file string   Resource file to load and apply
  -h, --help          help for apply
  -v, --verbose       Verbose output

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

  • omnictl - A CLI for accessing Omni API.

omnictl cluster delete

Delete all cluster resources.

Synopsis

Delete all resources related to the cluster. The command waits for the cluster to be fully destroyed.

omnictl cluster delete cluster-name [flags]

Options

      --destroy-disconnected-machines   removes all disconnected machines which are part of the cluster from Omni
  -d, --dry-run                         dry run
  -h, --help                            help for delete
  -v, --verbose                         verbose output (show diff for each resource)

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster kubernetes manifest-sync

Sync Kubernetes bootstrap manifests from Talos controlplane nodes to Kubernetes API.

Synopsis

Sync Kubernetes bootstrap manifests from Talos controlplane nodes to Kubernetes API. Bootstrap manifests might be updated with Talos version update, Kubernetes upgrade, and config patching. Talos never updates or deletes Kubernetes manifests, so this command fills the gap to keep manifests up-to-date.

omnictl cluster kubernetes manifest-sync cluster-name [flags]

Options

      --dry-run   don't actually sync manifests, just print what would be done (default true)
  -h, --help      help for manifest-sync

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster kubernetes upgrade-pre-checks

Run Kubernetes upgrade pre-checks for the cluster.

Synopsis

Verify that upgrading Kubernetes version is available for the cluster: version compatibility, deprecated APIs, etc.

omnictl cluster kubernetes upgrade-pre-checks cluster-name [flags]

Options

  -h, --help        help for upgrade-pre-checks
      --to string   target Kubernetes version for the planned upgrade

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster kubernetes

Cluster Kubernetes management subcommands.

Synopsis

Commands to manage Kubernetes in the cluster (manifest sync, upgrade pre-checks).

Options

  -h, --help   help for kubernetes

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster machine lock

Lock the machine

Synopsis

When locked, no config updates, upgrades and downgrades will be performed on the machine.

omnictl cluster machine lock machine-id [flags]

Options

  -h, --help   help for lock

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster machine unlock

Unlock the machine

Synopsis

Removes locked annotation from the machine.

omnictl cluster machine unlock machine-id [flags]

Options

  -h, --help   help for unlock

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster machine

Machine related commands.

Synopsis

Commands to manage cluster machines.

Options

  -h, --help   help for machine

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster status

Show cluster status, wait for the cluster to be ready.

Synopsis

Shows the current cluster status and, if the terminal supports it, watches the status as it updates. The command waits for the cluster to be ready by default.

omnictl cluster status cluster-name [flags]

Options

  -h, --help            help for status
  -q, --quiet           suppress output
  -w, --wait duration   wait timeout, if zero, report current status and exit (default 5m0s)

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster template delete

Delete all cluster template resources from Omni.

Synopsis

Delete all resources related to the cluster template. This command requires API access.

omnictl cluster template delete [flags]

Options

      --destroy-disconnected-machines   removes all disconnected machines which are part of the cluster from Omni
  -d, --dry-run                         dry run
  -f, --file string                     path to the cluster template file.
  -h, --help                            help for delete
  -v, --verbose                         verbose output (show diff for each resource)

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster template diff

Show diff in resources if the template is synced.

Synopsis

Query existing resources for the cluster and compare them with the resources generated from the template. This command requires API access.

omnictl cluster template diff [flags]

Options

  -f, --file string   path to the cluster template file.
  -h, --help          help for diff

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster template export

Export a cluster template from an existing cluster on Omni.

Synopsis

Export a cluster template from an existing cluster on Omni. This command requires API access.

omnictl cluster template export cluster-name [flags]

Options

  -c, --cluster string   cluster name
  -f, --force            overwrite output file if it exists
  -h, --help             help for export
  -o, --output string    output file (default: stdout)

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster template render

Render a cluster template to a set of resources.

Synopsis

Validate template contents, convert to resources and output resources to stdout as YAML. This command is offline (doesn’t access API).

omnictl cluster template render [flags]

Options

  -f, --file string   path to the cluster template file.
  -h, --help          help for render

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster template status

Show template cluster status, wait for the cluster to be ready.

Synopsis

Shows the current cluster status and, if the terminal supports it, watches the status as it updates. The command waits for the cluster to be ready by default.

omnictl cluster template status [flags]

Options

  -f, --file string     path to the cluster template file.
  -h, --help            help for status
  -q, --quiet           suppress output
  -w, --wait duration   wait timeout, if zero, report current status and exit (default 5m0s)

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster template sync

Apply the template to Omni.

Synopsis

Query existing resources for the cluster and compare them with the resources generated from the template, create/update/delete resources as needed. This command requires API access.

omnictl cluster template sync [flags]

Options

  -d, --dry-run       dry run
  -f, --file string   path to the cluster template file.
  -h, --help          help for sync
  -v, --verbose       verbose output (show diff for each resource)

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster template validate

Validate a cluster template.

Synopsis

Validate that the template contains valid structures and produces no warnings. This command is offline (doesn’t access API).

omnictl cluster template validate [flags]

Options

  -f, --file string   path to the cluster template file.
  -h, --help          help for validate

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster template

Cluster template management subcommands.

Synopsis

Commands to render, validate, and manage cluster templates.

Options

  -h, --help   help for template

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl cluster

Cluster-related subcommands.

Synopsis

Commands to destroy clusters and manage cluster templates.

Options

  -h, --help   help for cluster

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl completion bash

Generate the autocompletion script for bash

Synopsis

Generate the autocompletion script for the bash shell.

This script depends on the ‘bash-completion’ package. If it is not installed already, you can install it via your OS’s package manager.

To load completions in your current shell session:

source <(omnictl completion bash)

To load completions for every new session, execute once:

Linux:

omnictl completion bash > /etc/bash_completion.d/omnictl

macOS:

omnictl completion bash > $(brew --prefix)/etc/bash_completion.d/omnictl

You will need to start a new shell for this setup to take effect.

omnictl completion bash

Options

  -h, --help              help for bash
      --no-descriptions   disable completion descriptions

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl completion fish

Generate the autocompletion script for fish

Synopsis

Generate the autocompletion script for the fish shell.

To load completions in your current shell session:

omnictl completion fish | source

To load completions for every new session, execute once:

omnictl completion fish > ~/.config/fish/completions/omnictl.fish

You will need to start a new shell for this setup to take effect.

omnictl completion fish [flags]

Options

  -h, --help              help for fish
      --no-descriptions   disable completion descriptions

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl completion powershell

Generate the autocompletion script for powershell

Synopsis

Generate the autocompletion script for powershell.

To load completions in your current shell session:

omnictl completion powershell | Out-String | Invoke-Expression

To load completions for every new session, add the output of the above command to your PowerShell profile.

omnictl completion powershell [flags]

Options

  -h, --help              help for powershell
      --no-descriptions   disable completion descriptions

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl completion zsh

Generate the autocompletion script for zsh

Synopsis

Generate the autocompletion script for the zsh shell.

If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once:

echo "autoload -U compinit; compinit" >> ~/.zshrc

To load completions in your current shell session:

source <(omnictl completion zsh)

To load completions for every new session, execute once:

Linux:

omnictl completion zsh > "${fpath[1]}/_omnictl"

macOS:

omnictl completion zsh > $(brew --prefix)/share/zsh/site-functions/_omnictl

You will need to start a new shell for this setup to take effect.

omnictl completion zsh [flags]

Options

  -h, --help              help for zsh
      --no-descriptions   disable completion descriptions

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl completion

Generate the autocompletion script for the specified shell

Synopsis

Generate the autocompletion script for omnictl for the specified shell. See each sub-command’s help for details on how to use the generated script.

Options

  -h, --help   help for completion

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl config add

Add a new context

omnictl config add <context> [flags]

Options

      --basic-auth string   basic auth credentials
  -h, --help                help for add
      --identity string     identity to use for authentication
      --url string          URL of the server (default "grpc://127.0.0.1:8080")

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl config basic-auth

Set the basic auth credentials

omnictl config basic-auth <username> <password> [flags]

Options

  -h, --help   help for basic-auth

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl config context

Set the current context

omnictl config context <context> [flags]

Options

  -h, --help   help for context

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl config contexts

List defined contexts

omnictl config contexts [flags]

Options

  -h, --help   help for contexts

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl config identity

Set the auth identity for the current context

omnictl config identity <identity> [flags]

Options

  -h, --help   help for identity

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl config info

Show information about the current context

omnictl config info [flags]

Options

  -h, --help   help for info

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl config merge

Merge additional contexts from another client configuration file

Synopsis

Contexts with the same name are renamed while merging configs.

omnictl config merge <from> [flags]

Options

  -h, --help   help for merge

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl config new

Generate a new client configuration file

omnictl config new [<path>] [flags]

Options

      --basic-auth string   basic auth credentials
  -h, --help                help for new
      --identity string     identity to use for authentication
      --url string          URL of the server (default "grpc://127.0.0.1:8080")

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl config url

Set the URL for the current context

omnictl config url <url> [flags]

Options

  -h, --help   help for url

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl config

Manage the client configuration file (omniconfig)

Options

  -h, --help   help for config

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl delete

Delete a specific resource by ID or all resources of the type.

Synopsis

Similar to ‘kubectl delete’, ‘omnictl delete’ initiates resource deletion and waits for the operation to complete.

omnictl delete <type> [<id>] [flags]

Options

      --all                Delete all resources of the type.
  -h, --help               help for delete
  -n, --namespace string   The resource namespace. (default "default")
  -l, --selector string    Selector (label query) to filter on, supports '=' and '==' (e.g. -l key1=value1,key2=value2)

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

  • omnictl - A CLI for accessing the Omni API.

omnictl download

Download installer media

Synopsis

This command downloads installer media from the server

It accepts one argument: the name of the image to download. The name can be one of the following:

 * iso - downloads the latest ISO image
 * AWS AMI (amd64), Vultr (arm64), Raspberry Pi 4 Model B - full image name
 * oracle, aws, vmware - platform name
 * rockpi_4, rock64 - board name

To get the full list of available images, look at the output of the following command:

omnictl get installationmedia -o yaml

The download command tries to match the passed string in this order:

* name
* profile

By default, it downloads the amd64 image if multiple images are available for the same name.

For example, to download the latest ISO image for arm64, run:

omnictl download iso --arch arm64

To download the latest Vultr image, run:

omnictl download "vultr"

To download the latest Radxa ROCK PI 4 image, run:

omnictl download "rockpi_4"

omnictl download <image name> [flags]

Options

      --arch string                  Image architecture to download (amd64, arm64) (default "amd64")
  -h, --help                         help for download
      --initial-labels stringArray   Bake initial labels into the generated installation media
      --output string                Output file or directory, defaults to current working directory (default ".")

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

  • omnictl - A CLI for accessing the Omni API.

omnictl get

Get a specific resource or list of resources.

Synopsis

Similar to ‘kubectl get’, ‘omnictl get’ returns a set of resources from Omni. To get a list of all available resource definitions, issue ‘omnictl get rd’.

omnictl get <type> [<id>] [flags]

Options

  -h, --help                     help for get
      --id-match-regexp string   Match resource ID against a regular expression.
  -n, --namespace string         The resource namespace. (default "default")
  -o, --output string            Output format (json, table, yaml, jsonpath). (default "table")
  -l, --selector string          Selector (label query) to filter on, supports '=' and '==' (e.g. -l key1=value1,key2=value2)
  -w, --watch                    Watch the resource state.

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

  • omnictl - A CLI for accessing the Omni API.

omnictl kubeconfig

Download the admin kubeconfig of a cluster

Synopsis

Download the admin kubeconfig of a cluster. If the --merge flag is set (the default), the config will be merged with ~/.kube/config, or with [local-path] if specified. Otherwise, the kubeconfig will be written to the current working directory, or to [local-path] if specified.
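The destination-path logic described above can be sketched in shell (a sketch only: MERGE and LOCAL_PATH stand in for the --merge flag and the optional [local-path] argument, and the kubeconfig file name in the non-merge case is assumed for illustration):

```shell
# Sketch of where the kubeconfig lands, per the rules above.
# MERGE mirrors --merge (default true); LOCAL_PATH mirrors [local-path].
MERGE=true
LOCAL_PATH=""

if [ "$MERGE" = true ]; then
  # Merge target: [local-path] if given, otherwise ~/.kube/config.
  TARGET="${LOCAL_PATH:-$HOME/.kube/config}"
else
  # Write target: [local-path] if given, otherwise the current working
  # directory ("kubeconfig" is a hypothetical file name here).
  TARGET="${LOCAL_PATH:-$PWD/kubeconfig}"
fi

echo "$TARGET"
```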

omnictl kubeconfig [local-path] [flags]

Options

  -c, --cluster string              cluster to use
  -f, --force                       force overwrite of kubeconfig if already present, force overwrite on kubeconfig merge
      --force-context-name string   force context name for kubeconfig merge
      --groups strings              group to be used in the service account token (groups). only used when --service-account is set to true (default [system:masters])
  -h, --help                        help for kubeconfig
  -m, --merge                       merge with existing kubeconfig (default true)
      --service-account             create a service account type kubeconfig instead of an OIDC-authenticated user type
      --ttl duration                ttl for the service account token. only used when --service-account is set to true (default 8760h0m0s)
      --user string                 user to be used in the service account token (sub). required when --service-account is set to true

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

  • omnictl - A CLI for accessing the Omni API.

omnictl machine-logs

Get logs for a machine

Synopsis

Get logs for the provided machine ID.

omnictl machine-logs machineID [flags]

Options

  -f, --follow              specify if the logs should be streamed
  -h, --help                help for machine-logs
      --log-format string   log format (raw, omni, dmesg) to display (default is to display in raw format) (default "raw")
      --tail int32          lines of log file to display (default is to show from the beginning) (default -1)

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

  • omnictl - A CLI for accessing the Omni API.

omnictl serviceaccount create

Create a service account

omnictl serviceaccount create <name> [flags]

Options

  -h, --help            help for create
  -r, --role string     role of the service account. only used when --use-user-role=false
  -t, --ttl duration    TTL for the service account key (default 8760h0m0s)
  -u, --use-user-role   use the role of the creating user. if true, --role is ignored (default true)

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl serviceaccount destroy

Destroy a service account

omnictl serviceaccount destroy <name> [flags]

Options

  -h, --help   help for destroy

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl serviceaccount list

List service accounts

omnictl serviceaccount list [flags]

Options

  -h, --help   help for list

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl serviceaccount renew

Renew a service account by registering a new public key to it

omnictl serviceaccount renew <name> [flags]

Options

  -h, --help           help for renew
  -t, --ttl duration   TTL for the service account key (default 8760h0m0s)

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl serviceaccount

Manage service accounts

Options

  -h, --help   help for serviceaccount

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

omnictl talosconfig

Download the admin talosconfig of a cluster

Synopsis

Download the admin talosconfig of a cluster. If the --merge flag is set (the default), the config will be merged with ~/.talos/config, or with [local-path] if specified. Otherwise, the talosconfig will be written to the current working directory, or to [local-path] if specified.

omnictl talosconfig [local-path] [flags]

Options

      --admin            get admin talosconfig (DEBUG-ONLY)
  -c, --cluster string   cluster to use
  -f, --force            force overwrite of talosconfig if already present
  -h, --help             help for talosconfig
  -m, --merge            merge with existing talosconfig (default true)

Options inherited from parent commands

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.

SEE ALSO

  • omnictl - A CLI for accessing the Omni API.

omnictl

A CLI for accessing the Omni API.

Options

      --context string             The context to be used. Defaults to the selected context in the omniconfig file.
  -h, --help                       help for omnictl
      --insecure-skip-tls-verify   Skip TLS verification for the Omni GRPC and HTTP API endpoints.
      --omniconfig string          The path to the omni configuration file. Defaults to 'OMNICONFIG' env variable if set, otherwise the config directory according to the XDG specification.
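The --omniconfig default resolution described above can be sketched in shell (the exact file name under the XDG config directory is an assumption for illustration, not omnictl's documented path):

```shell
# Resolve the config path the way the --omniconfig help text describes:
# the OMNICONFIG env variable if set, otherwise the XDG config directory.
# The "omni/config" file name under that directory is assumed here.
XDG_BASE="${XDG_CONFIG_HOME:-$HOME/.config}"
CONFIG="${OMNICONFIG:-$XDG_BASE/omni/config}"
echo "$CONFIG"
```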

SEE ALSO

4 - Explanation

4.1 - Machine Registration

Machine registration is built on top of the extremely fast WireGuard® technology built into Linux. A technology dubbed SideroLink builds on WireGuard to provide a fully automated way of setting up and maintaining a WireGuard tunnel between Omni and each registered machine. Once the secure tunnel is established between a machine and Omni, the machine can be managed from nearly anywhere in the world.

The SideroLink network is an overlay network used for the data and management planes within Omni. The sole requirements are that your machine has egress to port 443 and to the WireGuard port assigned to your account.

4.2 - Omni KMS Disk Encryption

Starting with version 1.5.0, Talos supports KMS (Key Management Server) disk encryption key types. KMS keys are randomly generated on the Talos node and then sealed using the KMS server. The sealed key is stored in the LUKS2 metadata. To decrypt a disk, the Talos node needs to communicate with the KMS server and unseal the key. The KMS server endpoint is defined in the key configuration.

If the Cluster resource has disk encryption enabled, Omni creates a config patch for each cluster machine and sets the key’s KMS endpoint to the Omni gRPC API. Each disk encryption key is sealed using an AES-256 key managed by Omni:

  • Omni generates a random AES-256 key for each machine when it is allocated.
  • When the machine is wiped, the encryption key is deleted.
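The seal/unseal round trip can be illustrated with a self-contained sketch that uses openssl as a stand-in for Omni's KMS. This is illustrative only: the commands, cipher mode, and key handling here are assumptions for demonstration, not Omni's actual implementation.

```shell
# Illustrative only: openssl stands in for the KMS seal/unseal round trip.
DISK_KEY=$(openssl rand -hex 32)   # random key generated on the node
KMS_KEY=$(openssl rand -hex 32)    # AES-256 key held by the KMS (Omni)

# "Seal" the disk key under the KMS key; only SEALED would be stored
# in the LUKS2 metadata.
SEALED=$(echo "$DISK_KEY" | openssl enc -aes-256-cbc -pbkdf2 -a -pass "pass:$KMS_KEY")

# To unlock the disk, the node asks the KMS to "unseal" the stored blob.
UNSEALED=$(echo "$SEALED" | openssl enc -d -aes-256-cbc -pbkdf2 -a -pass "pass:$KMS_KEY")

[ "$DISK_KEY" = "$UNSEALED" ] && echo "round trip ok"
```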

4.3 - Authentication and Authorization

Auth0

GitHub

To log in with GitHub, you must use your primary verified email.

SAML

Security Assertion Markup Language (SAML) is an open standard that allows identity providers (IdP) to pass authorization credentials to service providers (SP). Omni plays the role of the service provider.

To enable SAML for your account, please submit a ticket in Zendesk, or reach out to us in the #omni channel in Slack.

SAML alters Omni user management:

  • Users are automatically created on first login to Omni:
  • the first user gets the Admin role;
  • any subsequently created user gets the None role.
  • An Admin can change other users’ roles.
  • Manually creating or deleting a user is not possible.
  • Omni gets user attributes from the SAML assertion and adds them as labels to the Identity resource with the saml.omni.sidero.dev/ prefix.
  • ACLs can be used to adjust fine-grained permissions instead of changing user roles.