Reference Deployment Guide for Kubernetes Cluster Installation over Mellanox Ethernet Solution with Tectonic Installer

Version 7

    In this document we demonstrate the deployment procedure for a Kubernetes cluster over a Mellanox Ethernet solution using the Tectonic Installer.

    This document is partially based on the document "Install Tectonic on Baremetal with Terraform".

     

     

    References

     

    Setup Overview

    Equipment

     

     

     

    Network design

     

    The following illustration shows an example configuration. This example does not cover the Internet route configuration and external DNS.

     

     

     

    Server Wiring (Optional, if using a dual-port card)

     

    In our reference setup we wire the first port to the Ethernet switch and leave the second port unused.
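
    Once the cable is connected, the link state of the first port can be checked from the host (a quick check; enp4s0 is the interface name used later in our setup):

    $ sudo ethtool enp4s0 | grep "Link detected"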

     

    Network Configuration

     

    Our setup uses four servers.

    Each server is connected to the SN2100 switch by a 100GbE copper cable.

    The deployment server has a static IP configuration. The remaining servers get IP addresses from the DHCP service installed on the deployment server.

     

     

    Deployment Guide

     

    Prerequisites

    Installation requires the following components:

    1. Provisioner node
    2. Cluster license and pull secret (can be downloaded from your Tectonic Account, which is free for up to 10 nodes)
    3. Tectonic Installer with a specific version of Terraform
    4. Matchbox Terraform Provider
    5. Matchbox v0.6+ service, which matches bare-metal machines to profiles that PXE boot and provision Container Linux clusters
    6. PXE network boot environment with DHCP, TFTP, and DNS services provided by dnsmasq
    7. DNS records for the Kubernetes controller(s) and Tectonic Ingress worker(s).
    8. SSH keypair whose private key is present in your system's ssh-agent.

     

     

    Provisioner node

     

    As the provisioner node we used a physical server running the latest Ubuntu 16.04 Server operating system.

    Assign a static IP to the Mellanox network interface (enp4s0) as in the example below:

    $ cat /etc/network/interfaces

    source /etc/network/interfaces.d/*

    auto lo

    iface lo inet loopback

    auto enp4s0
    iface enp4s0 inet static

         address 10.215.215.1      

         netmask 255.255.255.0      

         broadcast 10.215.215.255

         dns-nameservers 8.8.8.8
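
    After editing /etc/network/interfaces, apply the configuration and verify the address (a minimal sketch; restarting networking assumes no other active connections depend on this interface):

    $ sudo systemctl restart networking
    $ ip addr show enp4s0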

    Ubuntu firewall - UFW (Optional, if enabled)

    Open the following ports (22, 80, 443, 67, 69/udp, 5355/udp, 8080, 8081) to allow access to the deployment services.

    For example, to allow SSH:

    #ufw allow 22
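
    The remaining ports from the list above can be opened the same way (a sketch following the same ufw syntax):

    #ufw allow 80
    #ufw allow 443
    #ufw allow 67
    #ufw allow 69/udp
    #ufw allow 5355/udp
    #ufw allow 8080
    #ufw allow 8081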

            

    Cluster license and pull secret

     

    Please download the cluster license and pull secret from your Tectonic Account to the provisioner node. In our case these are the files /opt/tectonic-license.txt (license) and /opt/config.json (pull secret).

     

     

    Tectonic Installer installation and configuration

    The installation and configuration procedure for Tectonic Installer on the provisioner node is provided below.

     

    • Run the following commands to download and extract Tectonic Installer

    $ curl -O https://releases.tectonic.com/tectonic-1.6.4-tectonic.1.tar.gz

     

    $ tar xzvf tectonic-1.6.4-tectonic.1.tar.gz

    $ cd tectonic

    • Initialize and configure Terraform. Start by setting the INSTALLER_PATH to the location of your platform's Tectonic installer. The platform should be linux.

    $ export INSTALLER_PATH=$(pwd)/tectonic-installer/linux/installer

    $ export PATH=$PATH:$(pwd)/tectonic-installer/linux

    • Make a copy of the Terraform configuration file for our system. Do not share this configuration file as it is specific to your machine.

    $ sed "s|<PATH_TO_INSTALLER>|$INSTALLER_PATH|g" terraformrc.example > .terraformrc

    $ export TERRAFORM_CONFIG=$(pwd)/.terraformrc

    • Our configuration file is provided below:

    $ cat .terraformrc

     

    providers {

      matchbox = "/opt/go/bin/terraform-provider-matchbox"

    }
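
    • Verify that the Terraform binary bundled with the installer is picked up from PATH (a quick check; it assumes the bundled binary in tectonic-installer/linux, which the PATH export above adds):

    $ terraform version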

     

    Install Matchbox Terraform Provider

    Go is required to build the Matchbox Terraform Provider. Install Go 1.8 or later, then execute the following commands on the provisioner node to complete the installation:

    # sudo su -

    # cd /tmp/

    # wget https://storage.googleapis.com/golang/go1.8.3.linux-amd64.tar.gz

    # tar -xvf go1.8.3.linux-amd64.tar.gz

    # mv go /usr/local

    # export PATH=$PATH:/usr/local/go/bin

    # go version

    # export GOROOT=/usr/local/go

    # mkdir /opt/go && chmod 777 /opt/go

    # export GOPATH=/opt/go

    # go get -u github.com/coreos/terraform-provider-matchbox

    # cd /opt/go/src/github.com/coreos/terraform-provider-matchbox/

    # make build
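
    Before continuing, confirm that the provider binary referenced in .terraformrc exists (a sanity check; if make build placed the binary elsewhere, copy or link it to this path):

    # ls -l /opt/go/bin/terraform-provider-matchbox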

     

     

    Matchbox service installation

     

    A detailed installation guide is provided here: https://github.com/coreos/matchbox/blob/master/Documentation/deployment.md.

    In our case we used a local service deployment.

    The guide for installing and configuring the matchbox service on the provisioner node is provided below:

    • Download the latest matchbox release to the deployment host.

    $ wget https://github.com/coreos/matchbox/releases/download/v0.6.1/matchbox-v0.6.1-linux-amd64.tar.gz
    $ wget https://github.com/coreos/matchbox/releases/download/v0.6.1/matchbox-v0.6.1-linux-amd64.tar.gz.asc

    • Untar the archive

    $ tar xzvf matchbox-v0.6.1-linux-amd64.tar.gz
    $ cd matchbox-v0.6.1-linux-amd64

    • Pre-built binaries are available for generic Linux distributions. Copy the matchbox static binary to an appropriate location on the host.
    $ sudo cp matchbox /usr/local/bin
    • Set up User/Group. The matchbox service should be run by a non-root user with access to the matchbox data directory (/var/lib/matchbox). Create a matchbox user and group.

    $ sudo useradd -U matchbox

    $ sudo mkdir -p /var/lib/matchbox/assets

    $ sudo chown -R matchbox:matchbox /var/lib/matchbox

    • Create service

    $ sudo cp contrib/systemd/matchbox-local.service /etc/systemd/system/matchbox.service

    • Customize matchbox service by editing the systemd unit or adding a systemd dropin. Find the complete set of matchbox flags and environment variables at config.

    $ sudo vim /etc/systemd/system/matchbox.service

     

    [Unit]

    Description=CoreOS matchbox Server

    Documentation=https://github.com/coreos/matchbox

     

    [Service]

    User=matchbox

    Group=matchbox

    Environment="MATCHBOX_ADDRESS=0.0.0.0:8080"

    Environment="MATCHBOX_RPC_ADDRESS=0.0.0.0:8081"

    ExecStart=/usr/local/bin/matchbox

     

    # systemd.exec

    ProtectHome=yes

    ProtectSystem=full

     

    [Install]

    WantedBy=multi-user.target

    • In order to generate TLS Certificates please navigate to the scripts/tls directory.
    $ cd scripts/tls
    • Export SAN to set the Subject Alt Names which should be used in the certificates. Provide the fully qualified domain name or IP (discouraged) where Matchbox will be installed.
    $ export SAN=DNS.1:node1.kub.cloudx.mlnx,IP.1:10.215.215.1
    • Generate a ca.crt, server.crt, server.key, client.crt and client.key.
    $ ./cert-gen
    • Move TLS credentials to the matchbox server's default location.

    $ sudo mkdir -p /etc/matchbox

    $ sudo cp ca.crt server.crt server.key /etc/matchbox

    • Save client.crt, client.key and ca.crt for later use (e.g. ~/.matchbox).

       

    • Start the matchbox service and enable it if you'd like it to start on every boot.

    $ sudo systemctl daemon-reload

    $ sudo systemctl start matchbox

    $ sudo systemctl enable matchbox

    • Download the CoreOS image assets used for PXE boot (run from the matchbox release directory; the version must match tectonic_metal_cl_version set later in terraform.tfvars):

    $ ./scripts/get-coreos stable 1353.8.0 .

     

    • Move the images to /var/lib/matchbox/assets

     

    $ sudo cp -r coreos /var/lib/matchbox/assets
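
    • Verify that the matchbox service and the copied assets are reachable (a quick sanity check; the asset path assumes the CoreOS version downloaded above):

    $ sudo systemctl status matchbox
    $ curl http://10.215.215.1:8080
    $ curl http://10.215.215.1:8080/assets/coreos/1353.8.0/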

     

     

    PXE network boot environment with DHCP, TFTP, and DNS services

    Dnsmasq provides the PXE network boot environment with DHCP, TFTP and DNS services for the cluster deployment. A detailed network setup guide is provided here: https://coreos.com/matchbox/docs/latest/network-setup.html.

    Below is our guide for creating a DHCP/TFTP/DNS network boot environment that works with the matchbox service to PXE and iPXE boot and provision machines.

    To install dnsmasq, run the command below:

    # apt-get install dnsmasq

     

    The example configuration below describes a DHCP server with a dynamic range of 10.215.215.2-10.215.215.10, three static IP assignments by MAC address, chainloading to the matchbox iPXE boot script, and the DNS domain kub.cloudx.mlnx. DNS service is provided by the DNS server 10.7.136.5.

    $cat /etc/dnsmasq.conf
    interface=enp4s0
    dhcp-range=10.215.215.2,10.215.215.10,60m

    enable-tftp
    tftp-root=/tftpboot

    resolv-file=/etc/resolv.dnsmasq.conf

    # if request comes from older PXE ROM, chainload to iPXE (via TFTP)
    dhcp-boot=tag:!ipxe,undionly.kpxe
    # if request comes from iPXE user class, set tag "ipxe"
    dhcp-userclass=set:ipxe,iPXE
    # point ipxe tagged requests to the matchbox iPXE boot script (via HTTP)
    dhcp-boot=tag:ipxe,http://10.215.215.1:8080/boot.ipxe
    dhcp-host=24:8A:07:2F:DA:AE,10.215.215.2
    dhcp-host=24:8A:07:2F:DA:26,10.215.215.3
    dhcp-host=24:8A:07:2F:DA:8E,10.215.215.4

    domain=kub.cloudx.mlnx

    # verbose
    log-queries
    log-dhcp

    # static DNS assignments
    address=/node1.kub.cloudx.mlnx/10.215.215.1

    # (optional) disable DNS and specify alternate
    # port=0
    dhcp-option=6,10.7.136.5
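
    The dhcp-boot lines above chainload legacy PXE clients to iPXE over TFTP, so undionly.kpxe must be present in the TFTP root, and the resolv-file referenced above must exist. A minimal sketch (the /tftpboot path and the 10.7.136.5 upstream server match the configuration above):

    # mkdir -p /tftpboot
    # wget -O /tftpboot/undionly.kpxe http://boot.ipxe.org/undionly.kpxe
    # echo "nameserver 10.7.136.5" > /etc/resolv.dnsmasq.conf
    # systemctl restart dnsmasq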

     

    DNS records for the Kubernetes controller(s) and Tectonic Ingress worker(s)

           

    For our deployment, the following two DNS records are required by the Tectonic Installer. We created two DNS records:

    cluster.kub.cloudx.mlnx as a CNAME of the controller node name

    tectonic.kub.cloudx.mlnx as a CNAME of at least one worker node name
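
    If these names are served by the dnsmasq instance above rather than by an external DNS server, one way to create them is with cname entries in /etc/dnsmasq.conf (a sketch; it assumes dnsmasq can resolve the target node names, e.g. from its DHCP leases or /etc/hosts):

    cname=cluster.kub.cloudx.mlnx,node2.kub.cloudx.mlnx
    cname=tectonic.kub.cloudx.mlnx,node3.kub.cloudx.mlnx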

     

     

    ssh-agent configuration

     

    The Tectonic Installer will add the installer machine's public SSH key to all machines in the cluster. The key must be present in the installing machine's ssh-agent, and it is used to configure the nodes.

    Check whether a key already exists in the ssh-agent:

    $ ssh-add -l

    If no key is listed, add one to the agent:

    $ ssh-add Path/ToYour/KeyFile
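
    If no ssh-agent is running in the current shell, a typical sequence is (a sketch; the key path is only an example):

    $ eval "$(ssh-agent -s)"
    $ ssh-add ~/.ssh/id_rsa
    $ ssh-add -l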

    If you do not want to use ssh-agent, you must add a private_key parameter to the connection sections in platforms/metal/remote.tf as below:

     

    $ cat platforms/metal/remote.tf

     

    resource "null_resource" "kubeconfig" {

      count = "${length(var.tectonic_metal_controller_domains) + length(var.tectonic_metal_worker_domains)}"

     

      connection {

        type    = "ssh"

        host    = "${element(concat(var.tectonic_metal_controller_domains, var.tectonic_metal_worker_domains), count.index)}"

        user    = "core"

        timeout = "60m"

        private_key = "${file("/root/.ssh/id_rsa")}"

      }

     

      provisioner "file" {

        content     = "${module.bootkube.kubeconfig}"

        destination = "$HOME/kubeconfig"

      }

     

      provisioner "remote-exec" {

        inline = [

          "sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",

        ]

      }

    }

     

    resource "null_resource" "bootstrap" {

      # Without depends_on, this remote-exec may start before the kubeconfig copy.

      # Terraform only does one task at a time, so it would try to bootstrap

      # Kubernetes and Tectonic while no Kubelets are running. Ensure all nodes

      # receive a kubeconfig before proceeding with bootkube and tectonic.

      depends_on = ["null_resource.kubeconfig"]

     

      connection {

        type    = "ssh"

        host    = "${element(var.tectonic_metal_controller_domains, 0)}"

        user    = "core"

        timeout = "60m"

        private_key = "${file("/root/.ssh/id_rsa")}"

      }

     

      provisioner "file" {

        source      = "./generated"

        destination = "$HOME/tectonic"

      }

     

      provisioner "remote-exec" {

        inline = [

          "sudo mkdir -p /opt",

          "sudo rm -rf /opt/tectonic",

          "sudo mv /home/core/tectonic /opt/",

          "sudo systemctl start ${var.tectonic_vanilla_k8s ? "bootkube" : "tectonic"}",

        ]

      }

    }

     

     

    Customize the deployment

     

    Follow the steps below to customize your deployment:

     

    • Get the modules that Terraform will use to create the cluster resources:

     

    $ terraform get ./platforms/metal

     

    • Create a build directory to hold your customizations and copy the example file into it:

    $ export CLUSTER=my-cluster

    $ mkdir -p build/${CLUSTER}

    $ cp examples/terraform.tfvars.metal build/${CLUSTER}/terraform.tfvars

    • An example of our deployment configuration parameters is provided below; the values we updated for our environment are filled in:

    $ cat /opt/tectonic/build/my-cluster/terraform.tfvars

     

    // The e-mail address used to login as the admin user to the Tectonic Console.

    //

    // Note: This field MUST be set manually prior to creating the cluster.

    tectonic_admin_email = "admin@***.com"

     

    // The bcrypt hash of admin user password to login to the Tectonic Console.

    // Use the bcrypt-hash tool (https://github.com/coreos/bcrypt-tool/releases/tag/v1.0.0) to generate it.

    //

    // Note: This field MUST be set manually prior to creating the cluster.

    tectonic_admin_password_hash = "***"

     

    // The base DNS domain of the cluster.

    //

    // Example: `openstack.dev.coreos.systems`.

    //

    // Note: This field MUST be set manually prior to creating the cluster.

    // This applies only to cloud platforms.

    tectonic_base_domain = "kub.cloudx.mlnx"

     

    // (optional) The content of the PEM-encoded CA certificate, used to generate Tectonic Console's server certificate.

    // If left blank, a CA certificate will be automatically generated.

    // tectonic_ca_cert = ""

     

    // (optional) The content of the PEM-encoded CA key, used to generate Tectonic Console's server certificate.

    // This field is mandatory if `tectonic_ca_cert` is set.

    // tectonic_ca_key = ""

     

    // (optional) The algorithm used to generate tectonic_ca_key.

    // The default value is currently recommended.

    // This field is mandatory if `tectonic_ca_cert` is set.

    // tectonic_ca_key_alg = "RSA"

     

    // The Container Linux update channel.

    //

    // Examples: `stable`, `beta`, `alpha`

    tectonic_cl_channel = "stable"

     

    // This declares the IP range to assign Kubernetes pod IPs in CIDR notation.

    tectonic_cluster_cidr = "10.216.0.0/16"

     

    // The name of the cluster.

    // If used in a cloud-environment, this will be prepended to `tectonic_base_domain` resulting in the URL to the Tectonic console.

    //

    // Note: This field MUST be set manually prior to creating the cluster.

    // Warning: Special characters in the name like '.' may cause errors on OpenStack platforms due to resource name constraints.

    tectonic_cluster_name = "cloudx"

     

    // (optional) The path of the file containing the CA certificate for TLS communication with etcd.

    //

    // Note: This works only when used in conjunction with an external etcd cluster.

    // If set, the variables `tectonic_etcd_servers`, `tectonic_etcd_client_cert_path`, and `tectonic_etcd_client_key_path` must also be set.

    // tectonic_etcd_ca_cert_path = ""

     

    // (optional) The path of the file containing the client certificate for TLS communication with etcd.

    //

    // Note: This works only when used in conjunction with an external etcd cluster.

    // If set, the variables `tectonic_etcd_servers`, `tectonic_etcd_ca_cert_path`, and `tectonic_etcd_client_key_path` must also be set.

    // tectonic_etcd_client_cert_path = ""

     

    // (optional) The path of the file containing the client key for TLS communication with etcd.

    //

    // Note: This works only when used in conjunction with an external etcd cluster.

    // If set, the variables `tectonic_etcd_servers`, `tectonic_etcd_ca_cert_path`, and `tectonic_etcd_client_cert_path` must also be set.

    // tectonic_etcd_client_key_path = ""

     

    // The number of etcd nodes to be created.

    // If set to zero, the count of etcd nodes will be determined automatically.

    //

    // Note: This is currently only supported on AWS.

    tectonic_etcd_count = "1"

     

    // (optional) List of external etcd v3 servers to connect with (hostnames/IPs only).

    // Needs to be set if using an external etcd cluster.

    //

    // Example: `["etcd1", "etcd2", "etcd3"]`

    // tectonic_etcd_servers = ""

     

    // If set to true, experimental Tectonic assets are being deployed.

    tectonic_experimental = true

     

    // The path to the tectonic licence file.

    //

    // Note: This field MUST be set manually prior to creating the cluster unless `tectonic_vanilla_k8s` is set to `true`.

    tectonic_license_path = "/opt/tectonic-license.txt"

     

    // The number of master nodes to be created.

    // This applies only to cloud platforms.

    tectonic_master_count = "1"

     

    // CoreOS kernel/initrd version to PXE boot.

    // Must be present in Matchbox assets and correspond to `tectonic_cl_channel`.

    //

    // Example: `1298.7.0`

    tectonic_metal_cl_version = "1353.8.0"

     

    // The domain name which resolves to controller node(s)

    //

    // Example: `cluster.example.com`

    tectonic_metal_controller_domain = "cluster.kub.cloudx.mlnx"

     

    // Ordered list of controller domain names.

    //

    // Example: `["node2.example.com", "node3.example.com"]`

    tectonic_metal_controller_domains = ["node2.kub.cloudx.mlnx"]

     

    // Ordered list of controller MAC addresses for matching machines.

    //

    // Example: `["52:54:00:a1:9c:ae"]`

    tectonic_metal_controller_macs = ["38:63:bb:33:48:ec"]

     

    // Ordered list of controller names.

    //

    // Example: `["node1"]`

    tectonic_metal_controller_names = ["node2.kub.cloudx.mlnx"]

     

    // The domain name which resolves to Tectonic Ingress (i.e. worker node(s))

    //

    // Example: `tectonic.example.com`

    tectonic_metal_ingress_domain = "tectonic.kub.cloudx.mlnx"

     

    // The content of the Matchbox CA certificate to trust.

    //

    // Example:

    // ```

    // <<EOD

    // -----BEGIN CERTIFICATE-----

    // MIIFDTCCAvWgAwIBAgIJAIuXq10k2OFlMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV

    // ...

    // Od27a+1We/P5ey7WRlwCfuEcFV7nYS/qMykYdQ9fxHSPgTPlrGrSwKstaaIIqOkE

    // kA==

    // -----END CERTIFICATE-----

    // EOD

    // ```

    tectonic_metal_matchbox_ca = <<EOD

    -----BEGIN CERTIFICATE-----

    *****

    -----END CERTIFICATE-----

    EOD

     

    // The content of the Matchbox client TLS certificate.

    //

    // Example:

    // ```

    // <<EOD

    // -----BEGIN CERTIFICATE-----

    // MIIEYDCCAkigAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UEAwwHZmFr

    // ...

    // jyXQv9IZPMTwOndF6AVLH7l1F0E=

    // -----END CERTIFICATE-----

    // EOD

    // ```

    tectonic_metal_matchbox_client_cert = <<EOD

    -----BEGIN CERTIFICATE-----

    *****

    -----END CERTIFICATE-----

    EOD

     

    // The content of the Matchbox client TLS key.

    //

    // Example:

    // ```

    // <<EOD

    // -----BEGIN RSA PRIVATE KEY-----

    // MIIEpQIBAAKCAQEAr8S7x/tAS6W+aRW3X833OvNfxXjUJAiRkUV85Raln7tqVcTG

    // ...

    // Pikk0rvNVB/vrPeVjAdGY9TJC/vpz3om92DRDmUifu8rCFxIHE0GrQ0=

    // -----END RSA PRIVATE KEY-----

    // EOD

    // ```

    tectonic_metal_matchbox_client_key = <<EOD

    -----BEGIN RSA PRIVATE KEY-----

    *****

    -----END RSA PRIVATE KEY-----

    EOD

     

    // Matchbox HTTP read-only URL.

    //

    // Example: `e.g. http://matchbox.example.com:8080`

    tectonic_metal_matchbox_http_url = "http://10.215.215.1:8080"

     

    // The Matchbox gRPC API endpoint.

    //

    // Example: `matchbox.example.com:8081`

    tectonic_metal_matchbox_rpc_endpoint = "10.215.215.1:8081"

     

    // Ordered list of worker domain names.

    //

    // Example: `["node2.example.com", "node3.example.com"]`

    tectonic_metal_worker_domains = ["node3.kub.cloudx.mlnx", "node4.kub.cloudx.mlnx"]

     

    // Ordered list of worker MAC addresses for matching machines.

    //

    // Example: `["52:54:00:b2:2f:86", "52:54:00:c3:61:77"]`

    tectonic_metal_worker_macs = ["38:63:bb:33:58:5c", "38:63:bb:33:29:9c"]

     

    // Ordered list of worker names.

    //

    // Example: `["node2", "node3"]`

    tectonic_metal_worker_names = ["node3.kub.cloudx.mlnx", "node4.kub.cloudx.mlnx"]

     

    // The path to the pull secret file in JSON format.

    //

    // Note: This field MUST be set manually prior to creating the cluster unless `tectonic_vanilla_k8s` is set to `true`.

    tectonic_pull_secret_path = "/opt/config.json"

     

    // This declares the IP range to assign Kubernetes service cluster IPs in CIDR notation.

    tectonic_service_cidr = "10.223.0.0/16"

     

    // SSH public key to use as an authorized key.

    //

    // Example: `ssh-rsa AAAB3N...`

    tectonic_ssh_authorized_key = "ssh-rsa **** root@server"

     

    // If set to true, a vanilla Kubernetes cluster will be deployed, omitting any Tectonic assets.

    tectonic_vanilla_k8s = false

     

    // The number of worker nodes to be created.

    // This applies only to cloud platforms.

    tectonic_worker_count = "2"

     

     

    Deploy the cluster

     

    • Test out the plan before deploying everything:

    $ terraform plan -var-file=build/${CLUSTER}/terraform.tfvars platforms/metal

     

    • Next, deploy the cluster:
    $ terraform apply -var-file=build/${CLUSTER}/terraform.tfvars platforms/metal

     

    This will write machine profiles and matcher groups to the matchbox service.

    Power on the hosts set to boot from the network adapter and wait about 30-40 minutes (in our deployment) until all tasks complete.
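
    Bootstrap progress can be followed on the controller node once it has booted (a sketch; it assumes the SSH key configured earlier and the bootkube/tectonic services started by the bootstrap step):

    $ ssh core@node2.kub.cloudx.mlnx 'journalctl -f -u bootkube -u tectonic'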

     

    • After the deployment finishes, you can access the cluster:

     

    # export KUBECONFIG=generated/auth/kubeconfig

    # kubectl cluster-info

     

    Kubernetes master is running at https://cluster.kub.cloudx.mlnx:443

    Heapster is running at https://cluster.kub.cloudx.mlnx:443/api/v1/proxy/namespaces/kube-system/services/heapster

    KubeDNS is running at https://cluster.kub.cloudx.mlnx:443/api/v1/proxy/namespaces/kube-system/services/kube-dns

    # kubectl get pod --all-namespaces=true -o wide

     

    NAMESPACE         NAME                                              READY     STATUS    RESTARTS   AGE       IP             NODE

    kube-system       etcd-operator-4083686351-mhmr3                    1/1       Running   5          3d        10.216.0.19    node2.kub.cloudx.mlnx

    kube-system       heapster-1767220581-4bggw                         2/2       Running   2          3d        10.216.1.19    node4.kub.cloudx.mlnx

    kube-system       kube-apiserver-f8g8b                              1/1       Running   9          3d        10.215.215.2   node2.kub.cloudx.mlnx

    kube-system       kube-controller-manager-1158569765-hq387          1/1       Running   5          3d        10.216.0.22    node2.kub.cloudx.mlnx

    kube-system       kube-controller-manager-1158569765-ptdfb          1/1       Running   4          3d        10.216.0.23    node2.kub.cloudx.mlnx

    kube-system       kube-dns-2431531914-v4m46                         3/3       Running   6          3d        10.216.0.21    node2.kub.cloudx.mlnx

    kube-system       kube-etcd-0000                                    1/1       Running   2          3d        10.215.215.2   node2.kub.cloudx.mlnx

    kube-system       kube-etcd-network-checkpointer-x5crk              1/1       Running   2          3d        10.215.215.2   node2.kub.cloudx.mlnx

    kube-system       kube-flannel-5h3tp                                2/2       Running   4          3d        10.215.215.4   node4.kub.cloudx.mlnx

    kube-system       kube-flannel-cq3jx                                2/2       Running   4          3d        10.215.215.2   node2.kub.cloudx.mlnx

    kube-system       kube-flannel-thtxc                                2/2       Running   6          3d        10.215.215.3   node3.kub.cloudx.mlnx

    kube-system       kube-proxy-1g313                                  1/1       Running   2          3d        10.215.215.2   node2.kub.cloudx.mlnx

    kube-system       kube-proxy-3gt66                                  1/1       Running   1          3d        10.215.215.4   node4.kub.cloudx.mlnx

    kube-system       kube-proxy-w8rfm                                  1/1       Running   2          3d        10.215.215.3   node3.kub.cloudx.mlnx

    kube-system       kube-scheduler-1310662694-4b5gl                   1/1       Running   5          3d        10.216.0.20    node2.kub.cloudx.mlnx

    kube-system       kube-scheduler-1310662694-kw8t8                   1/1       Running   4          3d        10.216.0.24    node2.kub.cloudx.mlnx

    kube-system       pod-checkpointer-2kjz8                            1/1       Running   2          3d        10.215.215.2   node2.kub.cloudx.mlnx

    kube-system       pod-checkpointer-2kjz8-node2.kub.cloudx.mlnx      1/1       Running   2          3d        10.215.215.2   node2.kub.cloudx.mlnx

    tectonic-system   container-linux-update-agent-ds-jd5cj             1/1       Running   117        3d        10.216.1.20    node4.kub.cloudx.mlnx

    tectonic-system   container-linux-update-agent-ds-m4ppm             1/1       Running   113        3d        10.216.2.24    node3.kub.cloudx.mlnx

    tectonic-system   container-linux-update-agent-ds-mrb7x             1/1       Running   109        3d        10.216.0.18    node2.kub.cloudx.mlnx

    tectonic-system   container-linux-update-operator-886082281-h37jg   1/1       Running   1          1d        10.216.1.21    node4.kub.cloudx.mlnx

    tectonic-system   default-http-backend-784379803-j8rfn              1/1       Running   0          1d        10.216.1.31    node4.kub.cloudx.mlnx

    tectonic-system   kube-state-metrics-959915017-2qz77                1/1       Running   0          1d        10.216.1.22    node4.kub.cloudx.mlnx

    tectonic-system   kube-version-operator-3749920060-x8p3c            1/1       Running   2          1d        10.216.1.25    node4.kub.cloudx.mlnx

    tectonic-system   node-agent-71v6f                                  1/1       Running   1          3d        10.216.1.18    node4.kub.cloudx.mlnx

    tectonic-system   node-agent-p2kvc                                  1/1       Running   2          3d        10.216.0.25    node2.kub.cloudx.mlnx

    tectonic-system   node-agent-psmzp                                  1/1       Running   2          3d        10.216.2.25    node3.kub.cloudx.mlnx

    tectonic-system   node-exporter-4l2bm                               1/1       Running   1          3d        10.215.215.4   node4.kub.cloudx.mlnx

    tectonic-system   node-exporter-t5lf7                               1/1       Running   2          3d        10.215.215.2   node2.kub.cloudx.mlnx

    tectonic-system   node-exporter-z8fbj                               1/1       Running   2          3d        10.215.215.3   node3.kub.cloudx.mlnx

    tectonic-system   prometheus-k8s-0                                  2/2       Running   0          1d        10.216.1.32    node4.kub.cloudx.mlnx

    tectonic-system   prometheus-operator-3021942885-bq3ks              1/1       Running   0          1d        10.216.1.23    node4.kub.cloudx.mlnx

    tectonic-system   tectonic-channel-operator-2230218413-jgwvf        1/1       Running   1          1d        10.216.1.24    node4.kub.cloudx.mlnx

    tectonic-system   tectonic-console-554101123-94l73                  1/1       Running   6          1d        10.216.1.29    node4.kub.cloudx.mlnx

    tectonic-system   tectonic-console-554101123-ms03h                  1/1       Running   5          1d        10.216.1.28    node4.kub.cloudx.mlnx

    tectonic-system   tectonic-identity-1964251573-v8mpq                1/1       Running   3          1d        10.216.1.30    node4.kub.cloudx.mlnx

    tectonic-system   tectonic-ingress-controller-6mr9d                 1/1       Running   6          3d        10.215.215.3   node3.kub.cloudx.mlnx

    tectonic-system   tectonic-ingress-controller-s1crg                 1/1       Running   2          3d        10.215.215.4   node4.kub.cloudx.mlnx

    tectonic-system   tectonic-prometheus-operator-2797150806-zxcpg     1/1       Running   0          1d        10.216.1.26    node4.kub.cloudx.mlnx

    tectonic-system   tectonic-stats-emitter-2611508926-pc888           2/2       Running   0          1d        10.216.1.27    node4.kub.cloudx.mlnx