Deploy fleet packages

This page explains how to use Config Sync fleet packages to deploy Kubernetes resources across clusters that are registered to a fleet. After you create and deploy a fleet package, whenever you add a new cluster to your fleet, the Kubernetes configuration in the Git repository referenced by the fleet package automatically deploys to the new cluster.

A FleetPackage is a declarative API for deploying raw Kubernetes manifests to a fleet of clusters. Any Kubernetes resources that you want to deploy with a fleet package must already be fully hydrated (WET).

Before you begin

  1. Create, or make sure you have access to, a Git repository with the Kubernetes resources that you want to deploy across a fleet.

  2. Install and initialize the Google Cloud CLI, which provides the gcloud and nomos commands. If you use Cloud Shell, the Google Cloud CLI is pre-installed. If you previously installed the Google Cloud CLI, get the latest version by running gcloud components update.

  3. Enable the Google Kubernetes Engine (GKE) Enterprise edition API, the ConfigManagement API, and the ConfigDelivery API:

    gcloud services enable anthos.googleapis.com anthosconfigmanagement.googleapis.com configdelivery.googleapis.com
    
  4. Set a default location:

    gcloud config set config_delivery/location us-central1
    
  5. Ensure that your clusters are registered to a fleet. To confirm registration, you can list your fleet memberships, as shown in the check after this list.

  6. Use Cloud Build repositories to create a connection to a supported provider like GitHub or GitLab.
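
To confirm that your clusters are registered (step 5), you can list the memberships in your fleet host project, where PROJECT_ID is your fleet host project:

    gcloud container fleet memberships list --project=PROJECT_ID

Each cluster that you want to target with a fleet package should appear in the output.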

Review cluster requirements

To use Config Sync, your cluster must meet the following requirements:

  • Must be a Google Kubernetes Engine (GKE) Enterprise edition supported platform and version.

  • If you use GKE clusters, ensure that Workload Identity is enabled. Autopilot clusters have Workload Identity enabled by default.

  • Must have the correct metric writing permissions so that Config Sync can send metrics to Cloud Monitoring.

  • If you want to auto-upgrade the Config Sync version, ensure your GKE cluster is enrolled in a release channel. Config Sync treats a cluster not using a GKE release channel as using the Stable release channel.

  • If you want to use a private GKE cluster, configure Cloud NAT to permit egress from private GKE nodes. For details, see Example GKE setup. Alternatively, you can enable Private Google Access to connect to the set of external IP addresses used by Google APIs and services.

  • If you want to use an IAM service account when you grant Config Sync access to your source of truth, you must include the read-only scope for Cloud Source Repositories in the access scopes for the nodes in the cluster.

    You can add the read-only scope by including cloud-source-repos-ro in the --scopes list specified at cluster creation time, or by using the cloud-platform scope at cluster creation time. For example:

    gcloud container clusters create CLUSTER_NAME --scopes=cloud-platform
    

    You cannot modify access scopes after you create a node pool. However, you can create a new node pool with the proper access scope while using the same cluster. The default gke-default scope does not include cloud-source-repos-ro.
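
    For example, you could add such a node pool with a command like the following. NEW_POOL_NAME is a placeholder, and depending on your setup you might also need to pass your cluster's zone or region:

    gcloud container node-pools create NEW_POOL_NAME \
        --cluster=CLUSTER_NAME \
        --scopes=gke-default,cloud-source-repos-ro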

  • If you have strict VPC firewall requirements that block unnecessary traffic, you need to create firewall rules to permit the following traffic on public GKE clusters (an example rule sketch follows the list):

    • TCP: Allow ingress and egress on ports 53 and 443

    • UDP: Allow egress on port 53
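
    For example, firewall rules like the following would permit this traffic. The rule names and NETWORK_NAME are placeholders; narrow the source and destination ranges to match your security policy:

    gcloud compute firewall-rules create allow-config-sync-egress \
        --network=NETWORK_NAME --direction=EGRESS --action=ALLOW \
        --rules=tcp:53,tcp:443,udp:53 --destination-ranges=0.0.0.0/0

    gcloud compute firewall-rules create allow-config-sync-ingress \
        --network=NETWORK_NAME --direction=INGRESS --action=ALLOW \
        --rules=tcp:53,tcp:443 --source-ranges=SOURCE_RANGE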

    If you don't include these rules, Config Sync doesn't sync correctly, with nomos status reporting the following error:

    Error: KNV2004: unable to sync repo Error in the git-sync container

    You can skip these steps if you use a private GKE cluster.

  • Config Sync must run on an amd64 node pool. Config Sync backend component container images are built, distributed, and tested only for the amd64 machine architecture. If a Config Sync component is scheduled on an Arm node, it experiences an exec format error and crashes.

    If you have Arm nodes in your cluster, add one or more amd64 nodes to your cluster. If you're not using a GKE cluster, also add a taint to your arm64 nodes so that Pods without a matching toleration aren't scheduled onto them. GKE Arm nodes already have a default taint, so you don't need to add one.
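
    For non-GKE clusters, a taint like the following keeps Pods without a matching toleration off your Arm nodes. This sketch assumes your Arm nodes carry the standard kubernetes.io/arch=arm64 label; the taint key and effect shown here are just one choice:

    kubectl taint nodes -l kubernetes.io/arch=arm64 arch=arm64:NoSchedule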

If your cluster is an Autopilot cluster, you should also be aware that Autopilot adjusts container resource requirements to fit within its constraints. Because of these adjustments, for Autopilot clusters, Config Sync:

  • adjusts user-specified resource override limits to match requests.
  • only applies overrides when at least one resource request is higher than the corresponding adjusted output declared in the annotation, or at least one resource request is lower than the corresponding input declared in the annotation.

To prepare your environment for Config Sync fleet packages, complete the following steps:

  1. Grant the required IAM roles to the user registering the cluster.

  2. Ensure that your clusters are enrolled in the Rapid release channel so that they run a version of Config Sync that supports fleet packages.
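
    For example, assuming an existing zonal cluster named CLUSTER_NAME, you can enroll it in the Rapid channel with:

    gcloud container clusters update CLUSTER_NAME \
        --zone=ZONE \
        --release-channel=rapid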

Install Config Sync

You can install Config Sync with either the Google Cloud console or Google Cloud CLI.

Console

To install Config Sync, all clusters must be registered to a fleet. When you install Config Sync in the Google Cloud console, selecting individual clusters automatically registers those clusters to your fleet.

  1. In the Google Cloud console, go to the Config page under the Features section.

    Go to Config

  2. Click Install Config Sync.

  3. Select Auto-upgrades (Preview).

  4. Under Installation options, select Install Config Sync on entire fleet (recommended).

  5. Click Install Config Sync. In the Settings tab, after a few minutes, you should see Enabled in the Status column for the clusters in your fleet.

gcloud

  1. Enable the ConfigManagement fleet feature:

    gcloud beta container fleet config-management enable
    
  2. To enable Config Sync with auto-upgrades, create a file named apply-spec.yaml with the following content:

    applySpecVersion: 1
    spec:
      upgrades: auto
      configSync:
        enabled: true
    
  3. Apply the apply-spec.yaml file:

    gcloud beta container fleet config-management apply \
        --membership=MEMBERSHIP_NAME \
        --config=apply-spec.yaml \
        --project=PROJECT_ID
    

    Replace the following:

    • MEMBERSHIP_NAME: the fleet membership name that you chose when you registered your cluster. To find the membership name, run the gcloud container fleet memberships list command.
    • PROJECT_ID: the project ID of the fleet host project.
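
    To check the state of the installation across your fleet, you can run:

    gcloud beta container fleet config-management status \
        --project=PROJECT_ID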

Create a service account for Cloud Build

Fleet packages use Cloud Build to fetch the Kubernetes resources from your Git repository and deploy them to your clusters. Cloud Build requires a service account that has the permissions to run this job. To create the service account and grant the required permissions, complete the following steps:

  1. Create the service account:

    gcloud iam service-accounts create "SERVICE_ACCOUNT_NAME"
    

    Replace SERVICE_ACCOUNT_NAME with a name for the service account.

  2. Add an IAM policy binding for the Resource Bundle Publisher role:

    gcloud projects add-iam-policy-binding PROJECT_ID \
       --member="serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com" \
       --role='roles/configdelivery.resourceBundlePublisher'
    
  3. Add an IAM policy binding for the Logs Writer role:

    gcloud projects add-iam-policy-binding PROJECT_ID \
       --member="serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com" \
       --role='roles/logging.logWriter'
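
    To confirm that both role bindings are in place, you can inspect the project's IAM policy, filtered to the new service account:

    gcloud projects get-iam-policy PROJECT_ID \
        --flatten="bindings[].members" \
        --filter="bindings.members:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com" \
        --format="table(bindings.role)"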
    

Create a fleet package

To create a fleet package, you define a FleetPackage spec that points to the Cloud Build-connected repository containing your Kubernetes resources. Then you apply the FleetPackage, which fetches the resources from Git and deploys them across the fleet.

  1. Create a file named fleetpackage-spec.yaml with the following content:

    resourceBundleSelector:
      cloudBuildRepository:
        name: projects/PROJECT_ID/locations/us-central1/connections/CONNECTION_NAME/repositories/REPOSITORY_NAME
        tag: TAG
        serviceAccount: projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
        path: CONFIG_FILE_PATH
    target:
      fleet:
        project: projects/PROJECT_ID
    rolloutStrategy:
      rolling:
        maxConcurrent: MAX_CLUSTERS
    

    Replace the following:

    • CONNECTION_NAME: the name that you chose when you connected your Git host to Cloud Build. You can view all Cloud Build connections in your project by running gcloud builds connections list or by opening the Repositories page in the Google Cloud console:

      Open the Repositories page

    • REPOSITORY_NAME: the name of your repository.

    • TAG: the Git tag of your repository.

    • CONFIG_FILE_PATH: the path to your Kubernetes resources in the repository. If your files are in the root of the repository, you can omit this field.

    • MAX_CLUSTERS: the maximum number of clusters to deploy Kubernetes resources to at one time. For example, if you set this to 1, resource bundles deploy to one cluster at a time.

      For a complete list of all fields you can configure, see FleetPackage fields.
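
    As an illustration, a filled-in spec might look like the following. Every value here (my-project, my-connection, my-repo, v1.0.0, configs/, fleet-packages) is a hypothetical placeholder, and the heredoc is just one way to write the file:

    cat > fleetpackage-spec.yaml <<EOF
    resourceBundleSelector:
      cloudBuildRepository:
        name: projects/my-project/locations/us-central1/connections/my-connection/repositories/my-repo
        tag: v1.0.0
        serviceAccount: projects/my-project/serviceAccounts/fleet-packages@my-project.iam.gserviceaccount.com
        path: configs/
    target:
      fleet:
        project: projects/my-project
    rolloutStrategy:
      rolling:
        maxConcurrent: 1
    EOF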

  2. Create the fleet package:

    gcloud alpha container fleet packages create FLEET_PACKAGE_NAME \
        --source=fleetpackage-spec.yaml \
        --project=PROJECT_ID
    

    Replace FLEET_PACKAGE_NAME with a name for your fleet package rollout.

  3. Verify that the fleet package was created:

    gcloud alpha container fleet packages list
    

    The output lists the status of the build trigger. If the build trigger is successful, the fleet package starts rolling out the Kubernetes resources across your fleet.

  4. Confirm that the Kubernetes resources are deployed on your clusters:

    gcloud alpha container fleet packages describe FLEET_PACKAGE_NAME --show-cluster-status
    

Now that you've deployed a fleet package, when you add a new cluster to your fleet, the Kubernetes resources defined in the fleet package automatically deploy to the new cluster.

Update a fleet package

You can update a fleet package to change settings such as the rollout strategy. To update the Kubernetes resources that the fleet package deploys, update the tag field to pull from a different Git tag.

To update a fleet package, complete the following steps:

  1. Update your FleetPackage spec.
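
    For example, to roll out resources from a newer Git tag, change the tag field in fleetpackage-spec.yaml; the tag values shown here are hypothetical:

    resourceBundleSelector:
      cloudBuildRepository:
        tag: v1.1.0  # previously v1.0.0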

  2. Update the fleet package to start a new rollout:

    gcloud alpha container fleet packages update FLEET_PACKAGE_NAME \
        --source=fleetpackage-spec.yaml \
        --project=PROJECT_ID
    

    It can take a few minutes before the change is picked up and starts rolling out to your clusters.

Use labels to deploy to different clusters

Labels are key-value pairs that you attach to objects. For fleet packages, the labels must be attached to the fleet membership. For more information about labels, see the Kubernetes documentation for labels and selectors.

You can use labels to deploy a fleet package to a subset of clusters in your fleet.

Add membership labels

To add a membership label, complete the following steps:

  1. Get a list of memberships in the fleet:

    gcloud container fleet memberships list
    
  2. Add a label to the membership:

    gcloud container fleet memberships update MEMBERSHIP_NAME \
        --update-labels=KEY=VALUE
    

    Replace the following:

    • MEMBERSHIP_NAME: the name of the cluster registered to the fleet.
    • KEY and VALUE: the label to add to the membership. If a label exists, its value is modified. Otherwise, a new label is created. Keys must start with a lowercase character and contain only hyphens (-), underscores (_), lowercase characters, and numbers. Values must contain only hyphens (-), underscores (_), lowercase characters, and numbers.

    Repeat this command for each membership to which you want to add a label.
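
    For example, to tag a hypothetical membership named us-east-cluster with the country label used later on this page:

    gcloud container fleet memberships update us-east-cluster \
        --update-labels=country=us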

Deploy to a subset of clusters

You can deploy to a subset of clusters by specifying the target.fleet.selector.matchLabels field with your key-value pair. For example, if you set matchLabels as country: "us", the fleet package service deploys your resources only to clusters with the label country that matches "us".

To deploy a fleet package to a subset of clusters, complete the following steps:

  1. Create or update your FleetPackage spec with the label selector:

    resourceBundleSelector:
      cloudBuildRepository:
        name: projects/PROJECT_ID/locations/us-central1/connections/CONNECTION_NAME/repositories/REPOSITORY_NAME
        tag: TAG
        serviceAccount: projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
        path: CONFIG_FILE_PATH
    target:
      fleet:
        project: projects/PROJECT_ID
        selector:
          matchLabels:
            KEY: "VALUE"
    rolloutStrategy:
      rolling:
        maxConcurrent: MAX_CLUSTERS
    
  2. Create or update the fleet package:

    Create a fleet package

    gcloud alpha container fleet packages create FLEET_PACKAGE_NAME \
        --source=fleetpackage-spec.yaml \
        --project=PROJECT_ID
    

    Update a fleet package

    gcloud alpha container fleet packages update FLEET_PACKAGE_NAME \
        --source=fleetpackage-spec.yaml \
        --project=PROJECT_ID
    

Deploy variant resources to clusters

Variants are different versions of a resource. These resources might have different values depending on the location, project, or name of the cluster. You can deploy variant resources to different clusters by specifying the variantsPattern and variantNameTemplate fields.

You can use membership labels or other membership metadata like location, project, or name to match variants.
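
For example, suppose the directory at CONFIG_FILE_PATH contains one manifest file per region (a hypothetical layout):

    configs/
    ├── us.yaml   # matched by variantsPattern "*.yaml"
    └── eu.yaml

Each file that matches variantsPattern defines a variant, and the variant name template determines which variant each target cluster receives.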

To deploy a fleet package with variants, complete the following steps:

  1. Create or update your FleetPackage spec with the variant details:

    resourceBundleSelector:
      cloudBuildRepository:
        name: projects/PROJECT_ID/locations/us-central1/connections/CONNECTION_NAME/repositories/REPOSITORY_NAME
        tag: TAG
        serviceAccount: projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
        path: CONFIG_FILE_PATH
        variantsPattern: VARIANT_PATTERN
    target:
      fleet:
        project: projects/PROJECT_ID
    rolloutStrategy:
      rolling:
        maxConcurrent: MAX_CLUSTERS
    variantSelector:
      variantNameTemplate: VARIANT_NAME_TEMPLATE
    

    Replace the following:

    • VARIANT_PATTERN: the pattern for the variant, for example "*.yaml" or "us-*".
    • VARIANT_NAME_TEMPLATE: a template string that refers to variables containing cluster membership metadata, such as location, project, name, or a label, to determine the name of the variant for a target cluster. For more examples, see FleetPackage fields.
  2. Create or update the fleet package:

    Create a fleet package

    gcloud alpha container fleet packages create FLEET_PACKAGE_NAME \
        --source=fleetpackage-spec.yaml \
        --project=PROJECT_ID
    

    Update a fleet package

    gcloud alpha container fleet packages update FLEET_PACKAGE_NAME \
        --source=fleetpackage-spec.yaml \
        --project=PROJECT_ID
    

Delete a fleet package

Deleting a fleet package also deletes the following resources:

  • The Kubernetes resources deployed on your clusters
  • The fleet package rollout history

To delete a fleet package, run the following command:

gcloud alpha container fleet packages delete FLEET_PACKAGE_NAME --force

Troubleshoot

To find methods for diagnosing and resolving errors related to Cloud Build, see Troubleshooting build errors.

What's next