
Autoscaling Down to Zero Nodes

Updated on 01 October 2025

Automatically scaling a node group down to zero saves resources when the nodes are not in use. This is convenient for one-off tasks (for example, Jobs) or for staging environments that sit idle at night.

Scaling down to zero nodes is a special case of autoscaling. Therefore, the same principles, limitations, and requirements described for regular autoscaling also apply here.

Requirements

For scaling down to zero to work, the cluster must also contain at least one other group with 1–2 permanently active nodes. These nodes are needed to run Kubernetes system components.

Pod Configuration

For the autoscaler to launch nodes in the desired group, specify the ID of that group in your manifest using nodeSelector or nodeAffinity.

How to Find the Group ID

  1. Go to the Kubernetes section and click on the cluster.
  2. Open the Resources tab.
  3. Click the three dots next to the group and select Edit group.
  4. The group ID will be shown in the URL, for example:
https://hostman.com/my/kubernetes/1048329/54289/edit

Here:

  • 1048329: cluster ID
  • 54289: node group ID

Example with nodeSelector:

nodeSelector:
  k8s.hostman.com/cluster-node-group-id: "54289"

Example with nodeAffinity:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: k8s.hostman.com/cluster-node-group-id
          operator: In
          values:
          - "54289"

When Scaling Down to Zero Won’t Work

The autoscaler will not be able to remove the last node in a group in the following cases:

  • The pod carries the annotation cluster-autoscaler.kubernetes.io/safe-to-evict: "false" (see the sketch after this list).
  • The pods cannot be rescheduled onto other nodes due to scheduler restrictions.
  • Evicting the pods would violate a PodDisruptionBudget.
  • The pod is not managed by a controller (Deployment, StatefulSet, Job, ReplicaSet).
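
For reference, here is a minimal sketch of a Deployment whose pod carries the safe-to-evict annotation (the names here are hypothetical). With this annotation in place, the autoscaler will not evict the pod, so the node it runs on cannot be scaled down:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinned-app            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pinned-app
  template:
    metadata:
      labels:
        app: pinned-app
      annotations:
        # Tells the cluster autoscaler never to evict this pod
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]   # placeholder long-running workload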

Practical Example

In this example, we’ll create a node group with autoscaling down to zero enabled, run a Job in it, and see how the cluster automatically creates a node to run the task and deletes it once the task is complete.

Prerequisites

An existing Kubernetes cluster with at least one node group.
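
Before starting, you can confirm that kubectl is pointed at the right cluster:

kubectl cluster-info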

Creating a Node Group with Zero Autoscaling

  1. Go to the Kubernetes section and click on the cluster.
  2. Open the Resources tab.
  3. Click Add group.
  4. Select the worker node configuration.
  5. Enable the Autoscaling toggle and set the minimum number of nodes to 0.

After the group is created, one node will appear and will be automatically deleted if no user pods are running on it.

Now the cluster has two groups:

  • A group with active nodes that do not scale down to zero.
  • A group with autoscaling down to zero enabled. In this example, its ID is 54289.

Checking Existing Nodes

Run the command:

kubectl get nodes

Example output:

NAME                  STATUS   ROLES    AGE   VERSION
worker-192.168.0.25   Ready    <none>   21h   v1.33.3+k0s
worker-192.168.0.8    Ready    <none>   22h   v1.33.3+k0s

Creating a Job

Create a file named job.yaml with the following content:

apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  ttlSecondsAfterFinished: 30
  template:
    metadata:
      name: hello-job
    spec:
      restartPolicy: Never
      nodeSelector:
        k8s.hostman.com/cluster-node-group-id: "54289"
      containers:
      - name: hello
        image: busybox
        command:
          - sh
          - -c
          - 'i=0; while [ $i -lt 10 ]; do echo "Hello from job"; sleep 30; i=$((i+1)); done'
        resources:
          requests:
            cpu: "50m"
            memory: "32Mi"
          limits:
            cpu: "100m"
            memory: "64Mi"

This Job runs a busybox container that writes a message to the log 10 times at 30-second intervals, so it takes about five minutes to finish. The ttlSecondsAfterFinished: 30 field deletes the completed Job 30 seconds after it finishes.

Note: In the nodeSelector section, we specify the node group ID (54289).

Apply the manifest:

kubectl apply -f job.yaml
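
kubectl confirms the creation:

job.batch/hello-job created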

Check the list of pods:

kubectl get pod

Example output:

NAME              READY   STATUS    RESTARTS   AGE
hello-job-s7ktd   0/1     Pending   0          4s

The pod is in Pending status because there are no nodes in the group yet. Go to the Resources section in the management panel; you’ll see that a new node is being created in the autoscaling group.
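
You can also inspect the scheduling events from the command line (the pod name will differ in your cluster):

kubectl describe pod hello-job-s7ktd

The Events section should show why the pod could not be scheduled and, once the autoscaler reacts, a scale-up event for the group.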

After the node is created, check the nodes again:

kubectl get nodes

Example output:

NAME                  STATUS   ROLES    AGE   VERSION
worker-192.168.0.25   Ready    <none>   21h   v1.33.3+k0s
worker-192.168.0.6    Ready    <none>   7m    v1.33.3+k0s
worker-192.168.0.8    Ready    <none>   22h   v1.33.3+k0s

worker-192.168.0.6 is the new node created for the Job.

Check the pod again:

kubectl get pod

Example output:

NAME              READY   STATUS    RESTARTS   AGE
hello-job-s7ktd   1/1     Running   0          5m30s

Now the pod is running.
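
While it runs, you can stream the Job's output (kubectl resolves the Job to its pod):

kubectl logs -f job/hello-job

Example output:

Hello from job
Hello from job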

Job Completion and Node Deletion

After the Job completes, the node where it was running will be tainted. View the taint with:

kubectl describe node worker-192.168.0.6

Look for the line:

Taints:  DeletionCandidateOfClusterAutoscaler=1755679271:PreferNoSchedule

This means the autoscaler has marked the node as a deletion candidate; the PreferNoSchedule effect tells the scheduler to avoid placing new pods on it. Two minutes after the taint is applied, the node is removed.

Check it with:

kubectl get nodes
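
Once the node has been deleted, the list again contains only the permanent nodes, for example:

NAME                  STATUS   ROLES    AGE   VERSION
worker-192.168.0.25   Ready    <none>   22h   v1.33.3+k0s
worker-192.168.0.8    Ready    <none>   23h   v1.33.3+k0s

You can also add the -w flag (kubectl get nodes -w) to watch the node disappear in real time.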