This page shows you how to autoscale your clusters. To learn about how the cluster autoscaler works, refer to Overview of cluster autoscaling.
Cluster autoscaling resizes the number of nodes in a given node pool based on the demands of your workloads. You specify minReplicas and maxReplicas values for each node pool in your cluster.
For an individual node pool, minReplicas must be ≥ 1. However, the total number of untainted user cluster nodes at any given time must be at least 3. This means that the sum of the minReplicas values for all autoscaled node pools, plus the sum of the replicas values for all non-autoscaled node pools, must be at least 3.
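For example, here is a hypothetical layout with one autoscaled pool and one non-autoscaled pool (the pool names are illustrative) that satisfies this rule, because 2 + 1 = 3:
nodePools:
- name: autoscaled-pool        # hypothetical name
  replicas: 3
  autoscaling:
    minReplicas: 2             # the autoscaler never shrinks this pool below 2 nodes
    maxReplicas: 5
- name: static-pool            # hypothetical name; no autoscaling field
  replicas: 1                  # 2 (minReplicas) + 1 (replicas) = 3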
Create a user cluster with autoscaling
To create a user cluster with autoscaling, add the autoscaling field to the nodePools section in the user cluster configuration file.
nodePools:
- name: pool-1
  ...
  replicas: 3
  ...
  autoscaling:
    minReplicas: 1
    maxReplicas: 5
This configuration creates a node pool with 3 replicas, and applies autoscaling with the minimum node pool size as 1 and the maximum node pool size as 5.
The minReplicas value must be ≥ 1.
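After saving the configuration file, create the cluster as usual. A sketch of the create command, assuming the ADMIN_CLUSTER_KUBECONFIG and USER_CLUSTER_CONFIG placeholders used elsewhere on this page:
gkectl create cluster --config USER_CLUSTER_CONFIG \
  --kubeconfig ADMIN_CLUSTER_KUBECONFIG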
Add a node pool with autoscaling
To add a node pool with autoscaling to an existing cluster:
Edit the user cluster configuration file to add a new node pool, and include the autoscaling field. Adapt the values of minReplicas and maxReplicas as needed.
nodePools:
- name: my-new-node-pool
  ...
  replicas: 3
  ...
  autoscaling:
    minReplicas: 1
    maxReplicas: 5
Run the following command:
gkectl update cluster --config USER_CLUSTER_CONFIG \
  --kubeconfig ADMIN_CLUSTER_KUBECONFIG
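To confirm that the nodes from the new pool have joined the cluster, one hedged check is to list the nodes with the user cluster kubeconfig (USER_KUBECONFIG is the same placeholder used in the config map command later on this page):
kubectl --kubeconfig USER_KUBECONFIG get nodes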
Enable an existing node pool for autoscaling
To enable autoscaling for a node pool in an existing cluster:
Edit a specific nodePool in the user cluster configuration file, and include the autoscaling field. Adapt the values of minReplicas and maxReplicas as needed.
nodePools:
- name: my-existing-node-pool
  ...
  replicas: 3
  ...
  autoscaling:
    minReplicas: 1
    maxReplicas: 5
Run the following command:
gkectl update cluster --config USER_CLUSTER_CONFIG \
  --kubeconfig ADMIN_CLUSTER_KUBECONFIG
Disable autoscaling for an existing node pool
To disable autoscaling for a specific node pool:
Edit the user cluster configuration file and remove the autoscaling field for that node pool.
Run the gkectl update cluster command.
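For example, a hypothetical node pool entry after the autoscaling field has been removed; the pool then keeps the fixed size given by replicas:
nodePools:
- name: my-existing-node-pool
  ...
  replicas: 3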
Check cluster autoscaler behavior
You can determine what the cluster autoscaler is doing in several ways.
Check cluster autoscaler logs
First, find the name of the cluster autoscaler Pod. Run this command, replacing USER_CLUSTER_NAME with the user cluster name:
kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get pods -n USER_CLUSTER_NAME | grep cluster-autoscaler
To check the logs of the cluster autoscaler Pod, run this command, replacing POD_NAME with the name of the Pod found in the previous step:
kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG logs POD_NAME \
  --container cluster-autoscaler -n USER_CLUSTER_NAME
Check the configuration map
The cluster autoscaler publishes the kube-system/cluster-autoscaler-status
configuration map. To see this map, run this command:
kubectl --kubeconfig USER_KUBECONFIG get configmap cluster-autoscaler-status -n kube-system -o yaml
Check cluster autoscaler events
You can check cluster autoscaler events in the following locations:
- On pods (particularly those that cannot be scheduled, or on underutilized nodes)
- On nodes
- On the kube-system/cluster-autoscaler-status config map
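For example, one way to see the events recorded on the status config map is kubectl describe, assuming the same USER_KUBECONFIG placeholder used above:
kubectl --kubeconfig USER_KUBECONFIG describe configmap cluster-autoscaler-status -n kube-system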
Troubleshooting
See the following troubleshooting information for cluster autoscaler:
- You might be experiencing one of the limitations for cluster autoscaler.
- If you are having problems with downscaling your cluster, see Pod scheduling and disruption. You might have to add a PodDisruptionBudget for the kube-system Pods. For more information about manually adding a PodDisruptionBudget for the kube-system Pods, see the Kubernetes cluster autoscaler FAQ. A minimal PodDisruptionBudget sketch appears after this list.
- When scaling down, cluster autoscaler respects scheduling and eviction rules
set on Pods. These restrictions can prevent a node from being deleted by the
autoscaler. A node's deletion could be prevented if it contains a Pod with any
of these conditions:
- The Pod's affinity or anti-affinity rules prevent rescheduling.
- The Pod has local storage.
- The Pod is not managed by a Controller such as a Deployment, StatefulSet, Job or ReplicaSet.
For more information about cluster autoscaler and preventing disruptions, see the following questions in the Kubernetes cluster autoscaler FAQ:
- How does scale-down work?
- Does Cluster autoscaler work with PodDisruptionBudget in scale-down?
- What types of Pods can prevent Cluster autoscaler from removing a node?
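As referenced in the troubleshooting list above, here is a minimal PodDisruptionBudget sketch for a kube-system workload. The metadata name, selector label, and maxUnavailable value are illustrative; adapt them to the kube-system Pods you actually need to protect.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-kube-system-pdb     # hypothetical name
  namespace: kube-system
spec:
  maxUnavailable: 1                 # allow at most one matching Pod to be evicted at a time
  selector:
    matchLabels:
      k8s-app: example-component    # hypothetical label; must match your Pods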