Default Kubernetes cluster templates in CREODIAS Cloud

In this article we shall list Kubernetes cluster templates available on CREODIAS and explain the differences among them.

What We Are Going To Cover

  • List available templates on your cloud

  • Explain the difference between calico and cilium network drivers

  • How to choose a proper template

  • Overview and benefits of localstorage templates

  • Example of creating a cluster from a localstorage template using HMD and HMAD flavors

Prerequisites

No. 1 Account

You need a CREODIAS hosting account with access to the Horizon interface: https://horizon.cloudferro.com.

No. 2 Private and public keys

To create a cluster, you will need an available SSH key pair. If you do not have one already, follow this article to create it in the OpenStack dashboard: How to create key pair in OpenStack Dashboard on CREODIAS.
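
If you prefer the command line, you can also create or upload a key pair with the OpenStack CLI. This is a minimal sketch; the key name mykey is an example:

# Upload an existing public key under the name "mykey"
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

# Or let OpenStack generate a new pair and save the private key locally
openstack keypair create mykey > mykey.pem
chmod 600 mykey.pem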

No. 3 Documentation for standard templates

Documentation for all 1.23.16 drivers is here.

Documentation for localstorage templates:

  • k8s-stable-localstorage-1.21.5 (Kubernetes release 1.21)

  • k8s-stable-localstorage-1.22.5 (Kubernetes release 1.22)

  • k8s-stable-localstorage-1.23.5 (Kubernetes release 1.23)

No. 4 How to create Kubernetes clusters

The general procedure is explained in How to Create a Kubernetes Cluster Using CREODIAS OpenStack Magnum.

No. 5 Using vGPU in Kubernetes clusters

If a template name contains “vgpu”, clusters created from it will support vGPU workloads.

To learn how to set up vGPU in Kubernetes clusters on CREODIAS cloud, see Deploying vGPU workloads on CREODIAS Kubernetes.

Templates available on your cloud

The exact number of available default Kubernetes cluster templates depends on the cloud you choose to work with.
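
You can also list the templates from the command line, assuming the OpenStack client with the Magnum (coe) plugin is installed and your cloud credentials are sourced:

# List all cluster templates available in the current cloud
openstack coe cluster template list

# Show the details of one template, including its network driver and labels
openstack coe cluster template show k8s-1.23.16-v1.0.3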

WAW3-1

These are the default Kubernetes cluster templates on WAW3-1 cloud:

../_images/waw3-1-default.png
WAW3-2

Default templates for WAW3-2 cloud:

../_images/waw3-2-default_templates.png
FRA1-2

Default templates for FRA1-2 cloud:

../_images/fra1-2-default-template.png

The converse is also true: you may want to select the cloud you work with according to the type of cluster you want to run. For instance, you would have to select WAW3-1 cloud if you wanted to use vGPU on your cluster.

How to choose a proper template

Standard templates

Standard templates are general in nature and you can use them for any type of Kubernetes cluster. Each will produce a working Kubernetes cluster on CREODIAS OpenStack Magnum hosting. The default network driver is calico. The template that does not specify calico in its name, k8s-1.23.16-v1.0.3, is identical to the template that does, k8s-1.23.16-calico-v1.0.3. Both are placed in the left column in the following table:

calico                          cilium
k8s-1.23.16-v1.0.3              k8s-1.23.16-cilium-v1.0.3
k8s-1.23.16-calico-v1.0.3

Standard templates can also use vGPU hardware if available in the cloud. Using vGPU with Kubernetes clusters is explained in Prerequisite No. 5.

Templates with vGPU

calico vGPU                     cilium vGPU
k8s-1.23.16-vgpu-v1.0.0         k8s-1.23.16-cilium-vgpu-v1.0.0
k8s-1.23.16-calico-vgpu-v1.0.0

Again, the templates in the left column are identical.

If the application does not perform a great many storage operations, a standard template should be sufficient.

You can also dig deeper and choose the template according to the network plugin used.

Network plugins for Kubernetes clusters

Kubernetes cluster templates at CREODIAS cloud use calico or cilium plugins for controlling network traffic. Both are CNI compliant. Calico is the default plugin, meaning that if the template name does not specify the plugin, the calico driver is used. If the template name specifies cilium then, of course, the cilium driver is used.
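
Once a cluster is running, you can check which plugin it uses by listing the system pods; whichever name appears is the active driver:

# Calico and Cilium both run as pods in the kube-system namespace
kubectl get pods -n kube-system | grep -E 'calico|cilium'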

Calico (the default)

Calico uses the BGP protocol to route network packets towards the IP addresses of the pods. Calico can be faster than its competitors, but its most remarkable feature is support for network policies. With those, you can define which pods can send and receive traffic and also manage the security of the network.

Calico can apply policies to multiple types of endpoints, such as pods, virtual machines and host interfaces. It also supports cryptographic identity. Calico policies can be used on their own or together with Kubernetes network policies.
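
As an illustration, the sketch below uses the standard Kubernetes NetworkPolicy API, which Calico enforces, to allow ingress to pods labeled app=backend only from pods labeled app=frontend; the labels and namespace are hypothetical:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend          # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only frontend pods may connect
EOF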

Cilium

Cilium draws its power from a technology called eBPF, which exposes programmable hooks into the network stack of the Linux kernel. eBPF uses those hooks to reprogram Linux runtime behaviour without sacrificing speed or safety. There is also no need to recompile the Linux kernel for it to become aware of events in Kubernetes clusters. In essence, eBPF enables Linux to watch over Kubernetes and react appropriately.

With Cilium, the relationships amongst various cluster parts are as follows:

  • pods in the cluster (as well as the Cilium driver itself) use eBPF instead of interacting with the Linux kernel directly,

  • kubelet uses the Cilium driver through its CNI-compliant interface, and

  • the Cilium driver implements network policies, services and load balancing, flow and policy logging, and computes various metrics.

Using Cilium especially makes sense if you require fine-grained security controls or need to reduce latency in large Kubernetes clusters.
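
On a cluster created from a cilium template, you can query the agent for its health and eBPF state; the command assumes the Cilium DaemonSet keeps its usual default name, cilium:

# Ask the Cilium agent for overall health, eBPF status and policy state
kubectl -n kube-system exec ds/cilium -- cilium status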

Overview and benefits of localstorage templates

Compared to standard templates, the localstorage templates may be a better fit for resource-intensive apps.

NVMe stands for Non-Volatile Memory Express and is a newer storage access and transport protocol for flash and solid-state drives (SSDs). localstorage templates provision the cluster with virtual machine flavors which have NVMe storage available.

Each cluster contains an etcd volume, which serves as its database. Using NVMe storage speeds up access to etcd and that, in turn, speeds up cluster operations.

Applications such as day trading, personal finance, EODATA processing, AI and the like may involve so many transactions that using localstorage templates becomes a viable option.
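
One way to see the difference NVMe makes is to measure fsync latency, which dominates etcd performance. The fio invocation below mirrors the write pattern suggested in the etcd documentation; test-dir is a placeholder for a directory on the disk you want to test:

# Sequential writes with an fdatasync after each one, mimicking etcd's
# write-ahead log; compare fsync percentiles on NVMe vs. network storage
mkdir -p test-dir
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=test-dir --size=22m --bs=2300 --name=etcd-io-test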

In WAW3-1 cloud, virtual machine flavors with NVMe have the prefix of HMD and they are resource-intensive (RAM values below are in MB):

openstack flavor list
+--------------+--------+------+-----------+-------+
| Name         |    RAM | Disk | Ephemeral | VCPUs |
+--------------+--------+------+-----------+-------+
| hmd.xlarge   |  65536 |  200 |         0 |     8 |
| hmd.medium   |  16384 |   50 |         0 |     2 |
| hmd.large    |  32768 |  100 |         0 |     4 |
+--------------+--------+------+-----------+-------+

You would use an HMD flavor mainly for the master node(s) in the cluster.

In WAW3-2 cloud, you would use flavors starting with HMAD instead of HMD.
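
To check which NVMe-backed flavors your cloud actually offers, filter the flavor list or inspect a single flavor:

# Filter flavors by prefix: HMD on WAW3-1, HMAD on WAW3-2
openstack flavor list | grep -i -E 'hmd|hmad'

# Inspect one flavor in detail
openstack flavor show hmd.large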

Example parameters to create a new cluster with localstorage and NVMe

For a general discussion of parameters, see Prerequisite No. 4. What follows is a simplified example, geared toward creating a cluster that uses localstorage. We shall use WAW3-1 with HMD flavors in the example but you can, of course, supply HMAD flavors for WAW3-2 and so on.

The only deviation from the usual procedure is that it is mandatory to add the label etcd_volume_size=0 in the Advanced window. Without it, the localstorage template won’t work.
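
If you prefer the CLI to Horizon, an equivalent creation command might look like the sketch below; the cluster name, node counts and flavors are example values. The Horizon procedure continues after the sketch.

# Create a localstorage cluster from the CLI; the label
# etcd_volume_size=0 keeps etcd on the node's local NVMe disk
openstack coe cluster create mynvmecluster \
    --cluster-template k8s-stable-localstorage-1.23.5 \
    --keypair mykey \
    --master-count 1 \
    --node-count 2 \
    --master-flavor hmd.large \
    --flavor hmd.medium \
    --labels etcd_volume_size=0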

Start creating a cluster with the usual chain of commands Container Infra -> Clusters -> + Create New Cluster.

In the screenshot below, we selected k8s-stable-localstorage-1.23.5 as our localstorage template of choice, in the mandatory field Cluster Template.

For the field Keypair, use an SSH key that you already have; if you do not have one yet, see Prerequisite No. 2 to obtain it.

../_images/create_cluster_details.png

Let master nodes use one of the HMD flavors:

../_images/create_cluster_size.png

Proceed to enter the usual parameters into the Network and Management windows.

The last window, Advanced, is the place to add the label etcd_volume_size=0.

../_images/create_cluster_advanced.png

The result will be a working cluster backed by NVMe storage:

../_images/create_cluster_working.png
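
Once the cluster status reaches CREATE_COMPLETE, you can fetch its kubeconfig and confirm that the nodes are up. Substitute the name you chose for your cluster; mynvmecluster is the example value from the CLI sketch above:

# Check the cluster status
openstack coe cluster show mynvmecluster

# Write a kubeconfig file named "config" into the current directory
openstack coe cluster config mynvmecluster
export KUBECONFIG=$(pwd)/config

# Verify that all nodes are Ready
kubectl get nodes -o wide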