How to create a Kubernetes cluster using the Creodias Managed Kubernetes launcher GUI

Note

Managed Kubernetes is available as an Early Access preview — reach out to sales@creodias.eu to learn how you can take part.

In this tutorial, you will learn how to create a new Kubernetes cluster using the Creodias Managed Kubernetes launcher graphical user interface (GUI). It allows you to create and manage Kubernetes clusters directly from your browser, without installing or configuring any CLI tools. It’s especially useful when you want to

  • quickly deploy a cluster with control plane and worker nodes,

  • enable autoscaling, and

  • download a ready-to-use kubeconfig file

all through a visual, guided process.

This article walks you through each step using real examples from the CREODIAS cloud platform.

Note

This tutorial results in a fully functional Kubernetes cluster with optional worker node pools and kubectl access configured on your local machine.

What we are going to cover

  • Creating a new cluster: name, Kubernetes version, and control plane nodes

  • Adding node pools, with optional autoscaling and advanced settings

  • Accessing the cluster using kubectl

  • Editing, scaling, and deleting node pools

  • Upgrading to a newer version of Kubernetes

  • Browsing cluster resources and initiating backups

  • Deleting a cluster

Prerequisites

No. 1 Account

You need a CREODIAS hosting account with access to the Horizon interface: https://horizon.cloudferro.com.

No. 2 Quotas and Resources

Be aware of the available resources in your cloud. Refer to the article on Dashboard Overview – Project Quotas and Flavor Limits on CREODIAS.

If the available resources are insufficient for the cluster you want to create, consider this three-step approach:

  • First, create a control plane.

  • Then, reach out to Support (see Helpdesk and Support) to extend the quota for this cluster.

  • Finally, add worker nodes by creating a node pool.

No. 3 Installation of kubectl

You will access the cluster using kubectl. The standard installation methods for kubectl are described on the Install Tools page of the official Kubernetes website.
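For example, on Linux you can install the latest stable kubectl release using the standard upstream method (adjust linux/amd64 to your platform if needed):

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version --client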

In this article, you will learn how to point kubectl to the managed Kubernetes cluster you want to work with.

No. 4 Sharing networks

It is possible to share networks between cluster pods and a VM outside of the cluster. See /kubernetes_managed/Accessing-OpenStack-resources-from-Creodias-Managed-Kubernetes-using-shared-networks

Create New Cluster

When you first encounter the Creodias Managed Kubernetes launcher screen, it will look like this:

../_images/kubernetes-launcher-gui-1.png

Initial Creodias Managed Kubernetes launcher interface.

Click the Create cluster button. A form will appear on the screen, allowing you to enter data for the new cluster.

../_images/kubernetes-launcher-gui-3.png

Define Cluster Name and Kubernetes Version

Cluster Name

Enter an appropriate name for your cluster. If this field is left empty, the system will automatically generate a cluster name.

Kubernetes Version

At the time of writing, there are two available versions: 1.29.13 and 1.30.10. It is recommended to always use the latest version; however, you can upgrade an older version by clicking the Upgrade button in the Details cluster view (explained later in the article).

Add Control Plane Nodes

Flavors

Select the flavor for the virtual machines in the cluster’s control plane. There are 13 available flavors for the control plane:

../_images/machine-specs-1.png

Size

Choose 1, 3, or 5 control plane nodes.

Tip

For production-grade reliability, choose 3 control plane nodes to achieve high availability. Use 1 only for development or testing.

Add Node Pools

You may want to define the properties of worker nodes right away (you can change them later, when the cluster is running).

Click the Node pools button to create worker nodes for the cluster. Enter the required information in the form:

../_images/kubernetes-launcher-gui-5.png

Node pool creation form.

Node Pool Name

If left empty, the name will be generated automatically.

Flavor

Choose a flavor based on your needs. For minimal setups, eo2a.large consumes fewer resources than larger flavors.

../_images/select_flavor_for_node_pool.png

Select flavor for node pool.

Autoscale

Enable this option to allow the cluster to automatically add or remove nodes based on demand.

The cluster autoscaler adds or removes nodes based on pending pods. This complements the Horizontal Pod Autoscaler (HPA), which adjusts the number of pods inside existing nodes.

Tip

The cluster autoscaler is most effective when combined with pod resource limits and requests. Make sure your workloads define them correctly.
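For instance, here is a minimal sketch of a Deployment whose pods declare requests and limits; the name resource-demo and the nginx image are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: resource-demo
  template:
    metadata:
      labels:
        app: resource-demo
    spec:
      containers:
      - name: web
        image: nginx
        resources:
          requests:        # what the scheduler reserves; drives autoscaler decisions
            cpu: 100m
            memory: 128Mi
          limits:          # hard caps enforced at runtime
            cpu: 250m
            memory: 256Mi
EOF

If pending pods cannot fit within the requests available on existing nodes, the cluster autoscaler adds a node (up to the pool's maximum).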

Size of Node Pool

Start with 1 node if unsure; you can resize later.

Advanced Settings

In Advanced Settings, you can:

  • Assign Kubernetes labels

  • Apply taints

  • Specify OpenStack shared network IDs

../_images/add_advanced_settings_345.png

Define labels, taints and/or add OpenStack shared networks.

Warning

It is only possible to define labels and taints while creating a node pool.

Taints and labels are outside the scope of this article. For sharing networks, see Prerequisite No. 4.
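Although taints and labels themselves are not covered here, you can check what a node ended up with once the cluster is running (replace <node-name> with a real node name):

kubectl get nodes --show-labels
kubectl describe node <node-name> | grep -i taints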

To finish creating the node pool, click Add node pool. This brings you back to the cluster creation form:

../_images/preparation_for_cluster_creation.png

Example setup with 1 control plane and 1 worker node.

Click Create cluster to start creating the cluster.

Creating the Cluster

The status will change to CREATING.

../_images/kubernetes-launcher-gui-7.png

Cluster creation in progress. Status shows CREATING.

Once the creation starts, you see a list of existing Kubernetes clusters.

Cluster List View

Cluster List View will show up if there is at least one cluster present.

After the cluster has been created, its status will become RUNNING.

../_images/kubernetes-launcher-gui-10.png

Cluster status changes to RUNNING when ready.

Single Cluster View – Cluster Details

Click on the cluster name in the list to open its Details view:

../_images/kubernetes-upgrade-12.png

Cluster details view.

Access the Cluster Using kubectl

To connect kubectl to the cluster, download its kubeconfig file by clicking the Get kubeconfig button. A file named <clustername>_config.yaml will download; in this case, it will be called networktest_config.yaml.

To configure kubectl:

export KUBECONFIG=networktest_config.yaml

Assuming that the folder already exists, you can also place the config file in a “centralized” folder:

export KUBECONFIG=$HOME/kubeconfigs/networktest_config.yaml
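If that folder does not exist yet, create it and move the downloaded file there first:

mkdir -p "$HOME/kubeconfigs"
mv networktest_config.yaml "$HOME/kubeconfigs/"
export KUBECONFIG="$HOME/kubeconfigs/networktest_config.yaml"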


If you get an error like "Unable to connect to the server", verify the config path, your network access, and that the cluster is RUNNING.

To verify access:

kubectl get nodes -o wide

This is the output of one such cluster:

../_images/kubernetes-upgrade-13.png

One-node control plane cluster using kubectl.

The cluster is running and kubectl is working.

Single Cluster View – Node Pools

How to add a node pool

Click the Node Pools option to edit an existing node pool or add a new one. The following image shows the list of existing node pools (only one at the moment), along with the Create node pool button:

../_images/kubernetes-upgrade-15.png

Example node pool list.

To create another node pool, click Create node pool. The process is identical to the one described in the first part of the article.

How to edit a node pool

It is also possible to change the parameters of an existing node pool by clicking the pen icon on the right side of the n1 row.

../_images/edit_node_pool_333.png

Node pool editing screen.

When editing, you cannot change the name or the flavor of the node pool, but you can set the number of nodes in the pool in two ways:

Define a range

Turn the Autoscale option on and two new fields appear in the form:

../_images/kubernetes-upgrade-8.png

Autoscaling enabled with min/max limits.

Redefine fixed size of node pool

Enter the required number in the size field and click Save changes.

../_images/kubernetes-upgrade-9.png

Scale worker nodes manually. Status temporarily changes to SCALING.

When editing, the Advanced settings only let you change the attached shared networks; node labels and taints cannot be changed.

Delete a Node Pool

Click the trash can icon next to the node pool.

Upgrading to a Newer Version of Kubernetes

You can upgrade your cluster to a newer Kubernetes version while it is live, as long as it is not already on the latest available version.

During the upgrade process, nodes are updated gradually to minimize disruption. Workloads are also rescheduled across available nodes to ensure the continuous operation of your applications.

Our current system supports two versions of Kubernetes: 1.29.13 and 1.30.10. To illustrate the process, let us first create a cluster with the older version.

Click the Create cluster button and enter the following information on screen:

../_images/previous_version_to_upgrade.png

The second cluster, previous, is now in the CREATING status. Eventually, it will turn to RUNNING; then click on the cluster name row to open its config page.

../_images/upgrade_button_active.png

Notice that the Upgrade to 1.30.10 button is active and click on it. An explanation now appears on screen:

../_images/kubernetes-upgrade-5.png

Click on Upgrade and the process will start:

../_images/upgrading_process_will_start.png

The status in cluster view will also change to UPGRADING:

../_images/list_cluster_view_upgrading.png

Finally, the cluster has been upgraded to 1.30.10 and is RUNNING normally.

After upgrade, the Upgrade button will become inactive.

Note

New Kubernetes versions are released frequently. Always check which version is the most recent available.

The downloaded kubeconfig remains valid across upgrades.
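You can also confirm the version from the command line; after the upgrade, the VERSION column should report v1.30.10 for every node:

kubectl get nodes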

Cluster resources

The Resources section is a central place not only to browse what is running, but also to validate, debug, and audit the state of your Kubernetes cluster.

Click on Resources to see the main resource categories, e.g. Namespaces, Nodes, Workloads, Storage:

../_images/namespaces_0987.png

Each category can be expanded to display applicable resources; in the image above, it is showing available Namespaces.

When clicking on a specific instance of a resource in the right-hand table, you can access the JSON representation of this resource. Here is what a typical JSON screen might look like:

../_images/some_other_screenshot_resources.png
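The same data is available through kubectl; for example, to print the JSON representation of a namespace:

kubectl get namespace default -o json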

Cluster backup

The single cluster view also provides a button to initiate a backup of the cluster. See the article dedicated to the topic: /kubernetes_managed/Managed-Kubernetes-Backups-on-Creodias.

Deleting a Cluster

To delete a cluster, click the trash can icon in its row. Then confirm that you want to delete it:

../_images/both_clusters_running.png

Confirm deletion using the trash can icon.

../_images/kubernetes-launcher-gui-12.png

Cluster enters DELETING state until removal completes.

Deleting a cluster also takes a couple of minutes.

Now there is one cluster fewer:

../_images/the_cluster_has_been_deleted_twice.png

One cluster deleted, one RUNNING.

Cluster Development Checklist

Before proceeding with deployments, make sure:

  • The cluster is in RUNNING state.

  • You can run kubectl get nodes successfully (see the sketch after this list).

  • At least one node pool with worker nodes is defined.
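A quick way to verify these points from the terminal; a minimal sketch (the control-plane label below is the standard kubeadm one and may differ on your cluster):

kubectl get nodes
kubectl get nodes -l '!node-role.kubernetes.io/control-plane'

The first command should list all nodes as Ready; the second should return at least one worker node.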

What To Do Next

With your cluster created and kubectl configured, you can start deploying pods, creating services, and so on.
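For example, a minimal first deployment and service (the name hello and the nginx image are illustrative):

kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80
kubectl get pods,services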

You can also create a Managed Kubernetes cluster using the CLI:

https://github.com/CloudFerro/cf-mkcli

or with a dedicated Terraform provider:

https://registry.terraform.io/providers/CloudFerro/cloudferro/latest