Kubernetes

Bamboozle's managed Kubernetes service (KaaS) lets you deploy production-ready Kubernetes clusters without managing the control plane yourself. You choose the location, version, node sizes, and scaling behaviour — Bamboozle handles the rest.

Navigate to Cloud → Kubernetes to manage your clusters.


Creating a cluster

Click + Create Kubernetes Cluster to open the creation form.

Location

Choose the datacenter where your cluster will run:

  • UAE, Dubai (DX1)
  • UAE, Fujairah (FJ1)
  • Austria, Vienna (VIE2)

All nodes in the cluster are created in the selected region.

Kubernetes version

Select the Kubernetes version for your cluster. Bamboozle offers recent stable releases (e.g. v1.29.3). Choose the latest version unless you have a specific version requirement.

Control Plane

The control plane runs the cluster's core components: the API server, scheduler, and controller manager.

High Availability

When enabled, the control plane is deployed across three master nodes in Active/Active mode. This eliminates the control plane as a single point of failure and is recommended for production workloads.

When disabled, a single master node is used — suitable for development or testing.

Master Flavor

Select the hardware flavor for your master node(s). For production clusters with High Availability enabled, this flavor is applied to all three master nodes.

Recommended minimum for production: General_Compute_2 (1 vCPU, 2 GB RAM) or larger depending on cluster size.

Worker Nodes

Worker nodes run your application workloads. Select a flavor from the list — options range from small general-purpose instances to large compute-optimised and high-frequency nodes.

Available worker flavors include General Compute, Compute Optimised, High Frequency, and AI/GPU variants. Worker node count is set separately (see Autoscaling below).

Autoscaling

When enabled, the cluster automatically adjusts the number of worker nodes based on CPU and memory usage. The cluster scales out when demand increases and scales back in when resources are no longer needed.

When disabled, the cluster runs a fixed number of worker nodes.

tip

Enable autoscaling for production clusters to handle variable traffic without manual intervention.

Cluster network

By default, a new network is created for the cluster automatically. Check Choose an existing network if you want to deploy the cluster into a private network you have already set up in your project.

Boot Volume

Each node boots from a dedicated volume.

  • Storage Policy: the volume type/tier to use for node boot disks.
  • Storage size (GiB): disk size per node. Default is 20 GiB; increase for workloads that store data locally.

Floating IP

When enabled, each node receives a public IP address, making nodes directly reachable from the internet. Enable this if you need direct SSH access to worker nodes or if your workloads require public IPs per node.

For most clusters, a single load balancer or ingress controller is sufficient and individual node public IPs are not needed.

SSH Key

Select an existing SSH key from your project to install on all cluster nodes. This allows direct SSH access to nodes for debugging.

If you don't have a key yet, click + Add SSH Public Key or + Generate SSH Key.
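If you prefer to generate the key pair locally and paste the public half in via + Add SSH Public Key, a minimal sketch (the file path and comment are examples, not required values):

```shell
# Generate an ed25519 key pair with no passphrase (-N '').
ssh-keygen -t ed25519 -f ./bamboozle_k8s -N '' -C "bamboozle-k8s"

# The .pub file is what you paste into the Bamboozle form.
cat ./bamboozle_k8s.pub
```

Keep the private half (`./bamboozle_k8s`) safe; it is what you will use to SSH to the nodes later.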

Labels

Optionally add key/value labels to your cluster nodes (e.g. env=production, team=backend). Labels are interpreted by Kubernetes and can be used for node selection in pod scheduling rules.

Format: key1=value1, key2=value2
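It can help to sanity-check the label string before submitting it. A small sketch (the helper name is ours; the character rules follow standard Kubernetes label syntax):

```shell
# Hypothetical helper: validate a comma-separated label list
# of the form "key1=value1, key2=value2".
validate_labels() {
  local re='^[a-zA-Z0-9]([a-zA-Z0-9._-]*[a-zA-Z0-9])?=[a-zA-Z0-9]([a-zA-Z0-9._-]*[a-zA-Z0-9])?$'
  local IFS=','
  local pair
  for pair in $1; do
    pair="${pair# }"                       # trim the space after each comma
    [[ $pair =~ $re ]] || { echo "invalid: $pair"; return 1; }
  done
  echo "ok"
}

validate_labels "env=production, team=backend"   # prints "ok"
```

Once the cluster is up, the same labels can be used for selection, e.g. `kubectl get nodes -l env=production`.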

Name

Give your cluster a recognisable name (e.g. k8s-project-production). Use lowercase letters, numbers, and hyphens.

Click + Create Cluster to provision the cluster. This typically takes a few minutes.


Connecting to your cluster

Once the cluster status shows Active, download the kubeconfig file from the cluster detail page and configure kubectl:

export KUBECONFIG=/path/to/your-cluster-kubeconfig.yaml
kubectl get nodes

All nodes should show Ready status.
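To script that check, you can count Ready entries in the STATUS column of the kubectl output. A sketch using awk, with sample lines standing in for a live cluster's `kubectl get nodes --no-headers` output:

```shell
# Count nodes whose STATUS column (field 2) reads Ready.
count_ready() {
  awk '$2 == "Ready" { n++ } END { print n+0 }'
}

# Against a live cluster: kubectl get nodes --no-headers | count_ready
# Sample output for illustration:
printf '%s\n' \
  'master-1  Ready     control-plane  10m  v1.29.3' \
  'worker-1  Ready     <none>         8m   v1.29.3' \
  'worker-2  NotReady  <none>         1m   v1.29.3' | count_ready
# prints 2
```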


Managing a cluster

From the cluster list, click a cluster name to open its management view. From here you can:

  • View cluster status, node count, and Kubernetes version
  • Download the kubeconfig file
  • Scale worker nodes up or down (if autoscaling is disabled)
  • Upgrade the Kubernetes version
  • Delete the cluster

Deleting a cluster

Open the cluster and click Delete. This permanently removes all nodes and the control plane. Persistent volumes attached to workloads are not automatically deleted — clean these up manually if no longer needed.

warning

Deleting a cluster is irreversible. Ensure all important data has been backed up or migrated before proceeding.
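Before deleting, it can help to record which PersistentVolumes exist so you can locate the underlying volumes in your project afterwards, once kubectl access is gone. A sketch that pulls the NAME column from `kubectl get pv` output (the helper name is ours; sample lines stand in for a live cluster, assuming the default `kubectl get pv` table layout):

```shell
# Print the NAME column of each PersistentVolume, skipping the header,
# to build an inventory before the cluster is deleted.
pv_names() {
  awk 'NR > 1 { print $1 }'
}

# Against a live cluster: kubectl get pv | pv_names
printf '%s\n' \
  'NAME      CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM' \
  'pvc-1111  10Gi      RWO           Delete          Bound   default/data-0' \
  'pvc-2222  10Gi      RWO           Retain          Bound   default/data-1' | pv_names
# prints:
# pvc-1111
# pvc-2222
```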
