This page explains how to use Config Sync and Terraform to dynamically create team-scoped resources across a fleet of clusters. Config Sync extends the capabilities of fleet team management to create and manage infrastructure and cluster configurations across your fleets.
This guide assumes that you are already familiar with fleet team management concepts like team scopes and fleet namespaces. For more information, see the fleet team management overview.
For an end-to-end tutorial with sample configurations, see the fleet tenancy tutorial in the sample repository.
For a list of fields supported for Config Sync in Terraform, see the Terraform reference documentation for GKE fleet features.
Example workflow
You're a platform administrator who wants to dynamically create resources across a fleet of clusters where different teams have different needs. For example, you might want to apply a `NetworkPolicy` to your Backend team's namespaces, but not your Frontend team's namespaces.
In this scenario, the procedure for creating team-scoped resources across your fleet is as follows:
- Choose or create the fleet where you want to manage resources for teams.
- Set up your source of truth. The source of truth contains the `NamespaceSelector` objects that you use to select fleet-level namespaces in your team scopes, and any resources (like a `NetworkPolicy`) that you want to sync across these namespaces.
- Create the fleet-level default configuration for Config Sync. Config Sync uses these default settings when syncing from the source of truth created in the previous step. These settings apply to any new clusters created in the fleet.
- Create clusters in your fleet.
- Create your Frontend and Backend team scopes and namespaces so that Config Sync can detect and reconcile resources in your namespaces.
After you complete these steps, Config Sync creates and applies the `NetworkPolicy` based on the `NamespaceSelector` to the Backend team's namespaces. If you change or add any resources, Config Sync continuously detects and applies any changes to your configuration files, team scopes, fleet namespaces, and fleet members.
Pricing
Config Sync and fleet team management features are available only to users who have enabled GKE Enterprise. For more information about GKE Enterprise pricing, see the GKE Pricing page.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:

  gcloud init

- Create or select a Google Cloud project.
  - Create a Google Cloud project:

    gcloud projects create PROJECT_ID

    Replace `PROJECT_ID` with a name for the Google Cloud project you are creating.
  - Select the Google Cloud project that you created:

    gcloud config set project PROJECT_ID

    Replace `PROJECT_ID` with your Google Cloud project name.
- Create, or have access to, a source of truth (either a Git repository or an OCI image) where you can store your configuration files. The examples in this guide use a Git repository.
Required roles
To get the permissions that you need to create team resources for your fleet, ask your administrator to grant you the following IAM roles on your project:
- Managing fleet resources: Fleet Admin (formerly GKE Hub Admin) (`roles/gkehub.admin`)
- Creating GKE clusters: Kubernetes Engine Cluster Admin (`roles/container.clusterAdmin`)
- Enabling GKE Enterprise: Service Usage Admin (`roles/serviceusage.serviceUsageAdmin`)
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Acquire user credentials
To run the Terraform commands in this guide in your local environment, run the following command to acquire new user credentials:
gcloud auth application-default login
Set up your fleet
In this section, you create your fleet and enable the required services.
To set up your fleet, complete the following steps:
Create a directory for the fleet configuration Terraform files. To that directory, add a `main.tf` file and a `variables.tf` file.

In the `variables.tf` file, add the following variables:
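The variable definitions aren't reproduced on this page. The following is a minimal sketch, assuming that only the project ID is needed; it is referenced as `var.project` in `main.tf` and set through the `TF_VAR_project` environment variable that you export later in this procedure:

```hcl
# Project ID for the fleet. Set through the TF_VAR_project environment variable.
variable "project" {
  type = string
}
```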
In the `main.tf` file, add the following resources:
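The resource definitions aren't reproduced on this page. The following is a sketch of what this `main.tf` could contain, based on the later step that creates the fleet, enables the APIs, and creates a service account; the exact service list and the service account name are assumptions:

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">=5.16.0"
    }
  }
}

provider "google" {
  project = var.project
}

# Enable the services that the fleet and Config Sync rely on.
# This service list is an assumption; adjust it to your needs.
resource "google_project_service" "services" {
  for_each = toset([
    "anthos.googleapis.com",
    "anthosconfigmanagement.googleapis.com",
    "container.googleapis.com",
    "gkehub.googleapis.com",
  ])
  service            = each.value
  disable_on_destroy = false
}

# Create the fleet in the project.
resource "google_gke_hub_fleet" "default" {
  depends_on = [google_project_service.services]
}

# Hypothetical service account; the original configuration's account
# name and purpose aren't shown on this page.
resource "google_service_account" "default" {
  account_id = "fleet-tutorial-sa"
}
```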
Export the `PROJECT_ID` variable:

export TF_VAR_project=PROJECT_ID
Replace `PROJECT_ID` with the project ID where you want to create your fleet.

Initialize Terraform in the directory that you created:
terraform init
Check that the changes you propose with Terraform match the expected plan:
terraform plan
Create the fleet, enable the APIs, and create the service account:
terraform apply
It can sometimes take a few minutes to enable all of the services.
Set up your source of truth
In this section, you add configuration files to a source of truth.
You need a `NamespaceSelector` object for each team scope that you want to use. For example, if you have Frontend and Backend teams, you must create a `NamespaceSelector` object for each team. The `NamespaceSelector` object selects all or some of the namespaces within a team scope. You can add additional team resources to your source of truth, like a `NetworkPolicy`. When you create these resources, you reference the `NamespaceSelector` so that Config Sync can deploy and sync those resources dynamically across namespaces.
To set up your source of truth, complete the following steps:
In your source of truth, create a directory for the configuration files that you want Config Sync to sync from.
For each team, create a `NamespaceSelector` object in your configuration directory:

```yaml
apiVersion: configmanagement.gke.io/v1
kind: NamespaceSelector
metadata:
  name: NAMESPACE_SELECTOR_NAME
spec:
  mode: dynamic
  selector:
    matchLabels:
      fleet.gke.io/fleet-scope: SCOPE_NAME
```
Replace the following:
- `NAMESPACE_SELECTOR_NAME`: the name for the `NamespaceSelector` object, for example `backend-scope`.
- `SCOPE_NAME`: the name of your team scope, for example `backend`.
Any namespaces that are part of a fleet namespace automatically have the label `fleet.gke.io/fleet-scope: SCOPE_NAME`. The `NamespaceSelector` selects all fleet namespaces of a team scope using that label. For more examples of how to include or exclude namespaces, see the `NamespaceSelector` examples.

Create any objects that you want to sync across namespaces.
To sync an object only to a particular team, set the following annotation in that object's metadata:
```yaml
annotations:
  configmanagement.gke.io/namespace-selector: NAMESPACE_SELECTOR_NAME
```
For example, a `NetworkPolicy` for the Backend team might resemble the following:
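The example policy isn't reproduced on this page. The following is a minimal sketch, assuming the `backend-scope` selector name from the earlier step and a hypothetical deny-all-ingress policy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: be-deny-all
  annotations:
    # Ties this object to the Backend team's NamespaceSelector.
    configmanagement.gke.io/namespace-selector: backend-scope
spec:
  # An empty podSelector matches every Pod in the selected namespaces.
  podSelector: {}
  policyTypes:
  - Ingress
```

Because the annotation references `backend-scope`, Config Sync applies this policy to every fleet namespace in the Backend team scope, and to no others.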
Create fleet-level defaults for Config Sync
In this section, you create fleet-level defaults for Config Sync, which applies the same Config Sync configuration to all clusters created in your fleet.
To create a fleet-level default configuration for Config Sync, complete the following steps:
Create a directory for the fleet-default configuration Terraform files. To that directory, add a `main.tf` file and a `variables.tf` file.

In the `variables.tf` file, add the following variables:
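As with the fleet configuration, the variable list isn't shown on this page; a minimal sketch, assuming only the project ID is needed:

```hcl
# Project ID where the fleet and its default configuration live.
variable "project" {
  type = string
}
```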
In the `main.tf` file, add the following resource to configure Config Sync's settings:

Git
```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">=5.16.0"
    }
  }
}

provider "google" {
  project = var.project
}

resource "google_gke_hub_feature" "feature" {
  name     = "configmanagement"
  location = "global"
  provider = google
  fleet_default_member_config {
    configmanagement {
      version = "VERSION"
      config_sync {
        source_format = "unstructured"
        git {
          sync_repo   = "REPO"
          sync_branch = "BRANCH"
          policy_dir  = "DIRECTORY"
          secret_type = "SECRET"
        }
      }
    }
  }
}
```
Replace the following:
- `VERSION`: (optional) the Config Sync version number. Must be set to version 1.17.0 or later. If left blank, the default is the latest version.
- `REPO`: the URL to the repository containing your configuration files.
- `BRANCH`: the repository branch, for example `main`.
- `DIRECTORY`: the path within the Git repository that represents the top level of the repository you want to sync.
- `SECRET`: the secret authentication type.
For a full list of settings supported in the Config Sync `git` block, see the Terraform reference documentation for GKE hub features.

OCI
```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">=5.16.0"
    }
  }
}

provider "google" {
  project = var.project
}

resource "google_gke_hub_feature" "feature" {
  name     = "configmanagement"
  location = "global"
  provider = google
  fleet_default_member_config {
    configmanagement {
      version = "VERSION"
      config_sync {
        source_format = "unstructured"
        oci {
          sync_repo   = "REPO"
          policy_dir  = "DIRECTORY"
          secret_type = "SECRET"
        }
      }
    }
  }
}
```
Replace the following:
- `VERSION`: (optional) the Config Sync version number. Must be set to version 1.17.0 or later. If left blank, the default is the latest version.
- `REPO`: the URL to the OCI image repository containing configuration files.
- `DIRECTORY`: the absolute path of the directory containing the resources you want to sync. Leave blank to use the root directory.
- `SECRET`: the secret authentication type.
For a full list of settings supported in the Config Sync `oci` block, see the Terraform reference documentation for GKE hub features.

As an example, the following `main.tf` file configures Config Sync to sync from a Git repository and syncs all of the objects present in the `config` directory:
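The example file isn't reproduced on this page. The following sketch fills in the placeholders from the `git` block above with illustrative values; the repository URL is hypothetical, and `secret_type = "none"` assumes a public repository:

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">=5.16.0"
    }
  }
}

provider "google" {
  project = var.project
}

resource "google_gke_hub_feature" "feature" {
  name     = "configmanagement"
  location = "global"
  provider = google
  fleet_default_member_config {
    configmanagement {
      # Omitting `version` defaults to the latest Config Sync version.
      config_sync {
        source_format = "unstructured"
        git {
          # Hypothetical repository; substitute your own source of truth.
          sync_repo   = "https://github.com/example-org/fleet-tenancy-config"
          sync_branch = "main"
          policy_dir  = "config"
          secret_type = "none"
        }
      }
    }
  }
}
```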
Initialize Terraform in the directory that you created:

terraform init
Check that the changes you propose with Terraform match the expected plan:
terraform plan
Create the default fleet member configurations:
terraform apply
Create clusters in your fleet
In this section, you create a shared cluster configuration and then create clusters in your fleet.
To create and register new clusters to your fleet, complete the following steps:
Create a directory for the cluster configuration Terraform files. To that directory, add a `main.tf` file and a `variables.tf` file.

In the `variables.tf` file, add the following variables:
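The variable list isn't shown on this page; a minimal sketch, assuming only the project ID is needed here (the `location` and `cluster_name` variables are declared in `cluster.tf` in the next step):

```hcl
# Project ID where the clusters are created.
variable "project" {
  type = string
}
```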
Create a `cluster.tf` file that contains default values used across all your clusters, such as your project and fleet IDs:

```hcl
variable "location" {
  type = string
}

variable "cluster_name" {
  type = string
}

data "google_project" "project" {
  provider = google
}

resource "google_container_cluster" "cluster" {
  provider           = google
  name               = var.cluster_name
  location           = var.location
  initial_node_count = 3
  project            = data.google_project.project.project_id
  fleet {
    project = data.google_project.project.project_id
  }
  workload_identity_config {
    workload_pool = "${data.google_project.project.project_id}.svc.id.goog"
  }
  deletion_protection = false
}
```
In the `main.tf` file, add the following resources:

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">=5.16.0"
    }
  }
}

provider "google" {
  project = var.project
}

module "MODULE_NAME" {
  source       = "CLUSTER_CONFIGURATION_FILEPATH"
  cluster_name = "CLUSTER_NAME"
  location     = "CLUSTER_LOCATION"
}
```
Replace the following:
- `MODULE_NAME`: the name that you want to give the cluster module. `MODULE_NAME` and `CLUSTER_NAME` can be the same value, for example `us-east-cluster`.
- `CLUSTER_CONFIGURATION_FILEPATH`: the relative path to the `cluster.tf` file that you created.
- `CLUSTER_NAME`: the name of your cluster. `MODULE_NAME` and `CLUSTER_NAME` can be the same value, for example `us-east-cluster`.
- `CLUSTER_LOCATION`: the location of your cluster, for example `us-east1`.
You can create as many clusters as you want. As an example, the following `main.tf` file creates 3 clusters in different regions:
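The example file isn't reproduced on this page. The following sketch assumes that the `cluster.tf` file from the previous step lives in a `./cluster` subdirectory (Terraform module sources are directories), and uses hypothetical cluster names and regions:

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">=5.16.0"
    }
  }
}

provider "google" {
  project = var.project
}

module "us-east-cluster" {
  source       = "./cluster"
  cluster_name = "us-east-cluster"
  location     = "us-east1"
}

module "us-west-cluster" {
  source       = "./cluster"
  cluster_name = "us-west-cluster"
  location     = "us-west1"
}

module "us-central-cluster" {
  source       = "./cluster"
  cluster_name = "us-central-cluster"
  location     = "us-central1"
}
```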
Initialize Terraform in the directory that you created:

terraform init
Check that the changes you propose with Terraform match the expected plan:
terraform plan
Create the clusters:
terraform apply
Configure team scopes and fleet namespaces
In this section, you create your team scopes and associate your clusters with those scopes. Then, in each scope, you create the fleet namespaces that you require (for example, one for each team), and Config Sync creates the resources across your namespaces.
To configure team scopes and namespaces, complete the following steps:
Create a directory for the team scope and namespace configuration Terraform files. To that directory, add a `main.tf` file and a `variables.tf` file.

In the `variables.tf` file, add the following variables:
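The variable list isn't shown on this page; a minimal sketch, assuming only the project ID is needed:

```hcl
# Project ID that contains the fleet.
variable "project" {
  type = string
}
```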
In the `main.tf` file, add the following resources:

Add the provider information:
```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">=5.16.0"
    }
  }
}

provider "google" {
  project = var.project
}
```
Add the team scope resource:
resource "google_gke_hub_scope" "scope" { provider = google for_each = toset([ "SCOPE_NAME", "SCOPE_NAME_2", ]) scope_id = each.value }
Replace the following:
- `SCOPE_NAME`: the name of your team scope, for example `backend`.
- `SCOPE_NAME_2`: an additional team scope if you created one.
You can add as many team scopes as you need. When a fleet namespace is created in the cluster, the namespace is automatically labeled with `fleet.gke.io/fleet-scope: SCOPE_NAME`, which lets Config Sync select namespaces based on the `NamespaceSelector` labels present when syncing Kubernetes resources.

As an example, a team scope Terraform resource that includes a scope for both the Frontend and Backend teams might resemble the following:
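The example isn't reproduced on this page. A minimal sketch, assuming `backend` and `frontend` as the scope names:

```hcl
resource "google_gke_hub_scope" "scope" {
  provider = google
  for_each = toset([
    "backend",
    "frontend",
  ])
  scope_id = each.value
}
```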
Add a fleet membership binding for each cluster that you want to apply to a team scope:
resource "google_gke_hub_membership_binding" "membership-binding" { provider = google for_each = { MEMBERSHIP_BINDING_NAME = { membership_binding_id = "MEMBERSHIP_BINDING_ID" scope = google_gke_hub_scope.scope["SCOPE_NAME"].name membership_id = "CLUSTER_NAME" location = "CLUSTER_LOCATION" } MEMBERSHIP_BINDING_NAME_2 = { membership_binding_id = "MEMBERSHIP_BINDING_ID_2" scope = google_gke_hub_scope.scope["SCOPE_NAME_2"].name membership_id = "CLUSTER_NAME_2" location = "CLUSTER_LOCATION_2" } } membership_binding_id = each.value.membership_binding_id scope = each.value.scope membership_id = each.value.membership_id location = each.value.location depends_on = [google_gke_hub_scope.scope] }
Replace the following:
- `MEMBERSHIP_BINDING_NAME`: the membership binding name, for example `us-east-backend`.
- `MEMBERSHIP_BINDING_ID`: the membership binding ID. This can be the same value as `MEMBERSHIP_BINDING_NAME`.
- `SCOPE_NAME`: the label selector that you gave your team scope when you created a `NamespaceSelector`, for example `backend`.
- `CLUSTER_NAME`: the name of the cluster that you created when you created clusters, for example `us-east-cluster`.
- `CLUSTER_LOCATION`: the cluster location, for example `us-east1`.
You need to define a fleet membership binding for each cluster. If you don't bind a cluster to a team scope, the fleet namespaces for that scope are not created on that cluster. For example, if you have three clusters in the regions `us-east1`, `us-west1`, and `us-central1`, but the `us-central1` cluster is only for the Frontend team, your membership binding resource would resemble the following:
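The example isn't reproduced on this page. A sketch, assuming the three clusters and the two scopes named earlier; the `us-central1` cluster is bound only to the Frontend scope:

```hcl
resource "google_gke_hub_membership_binding" "membership-binding" {
  provider = google
  for_each = {
    "us-east-backend" = {
      membership_binding_id = "us-east-backend"
      scope                 = google_gke_hub_scope.scope["backend"].name
      membership_id         = "us-east-cluster"
      location              = "us-east1"
    }
    "us-east-frontend" = {
      membership_binding_id = "us-east-frontend"
      scope                 = google_gke_hub_scope.scope["frontend"].name
      membership_id         = "us-east-cluster"
      location              = "us-east1"
    }
    "us-west-backend" = {
      membership_binding_id = "us-west-backend"
      scope                 = google_gke_hub_scope.scope["backend"].name
      membership_id         = "us-west-cluster"
      location              = "us-west1"
    }
    "us-west-frontend" = {
      membership_binding_id = "us-west-frontend"
      scope                 = google_gke_hub_scope.scope["frontend"].name
      membership_id         = "us-west-cluster"
      location              = "us-west1"
    }
    # us-central1 serves only the Frontend team, so it has no backend binding.
    "us-central-frontend" = {
      membership_binding_id = "us-central-frontend"
      scope                 = google_gke_hub_scope.scope["frontend"].name
      membership_id         = "us-central-cluster"
      location              = "us-central1"
    }
  }
  membership_binding_id = each.value.membership_binding_id
  scope                 = each.value.scope
  membership_id         = each.value.membership_id
  location              = each.value.location
  depends_on            = [google_gke_hub_scope.scope]
}
```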
resource "google_gke_hub_namespace" "fleet_namespace" { provider = google for_each = { FLEET_NAMESPACE = { scope_id = "SCOPE_NAME" scope_namespace_id = "FLEET_NAMESPACE_ID" scope = google_gke_hub_scope.scope["SCOPE_NAME"].name } FLEET_NAMESPACE_2 = { scope_id = "SCOPE_NAME" scope_namespace_id = "FLEET_NAMESPACE_ID_2" scope = google_gke_hub_scope.scope["SCOPE_NAME"].name } } scope_namespace_id = each.value.scope_namespace_id scope_id = each.value.scope_id scope = each.value.scope depends_on = [google_gke_hub_scope.scope] }
Replace the following:
- `FLEET_NAMESPACE`: the name that you want to give the namespace, for example `backend-a`.
- `SCOPE_NAME`: the label selector that you gave your team scope when you created a `NamespaceSelector`, for example `backend`.
- `FLEET_NAMESPACE_ID`: the namespace ID. This can be the same value as `FLEET_NAMESPACE`.
For example, if you wanted both the Frontend and Backend teams to have two namespaces each, your fleet namespace resource might resemble the following:
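The example isn't reproduced on this page. A sketch, assuming namespaces `backend-a`, `backend-b`, `frontend-a`, and `frontend-b`:

```hcl
resource "google_gke_hub_namespace" "fleet_namespace" {
  provider = google
  for_each = {
    "backend-a" = {
      scope_id           = "backend"
      scope_namespace_id = "backend-a"
      scope              = google_gke_hub_scope.scope["backend"].name
    }
    "backend-b" = {
      scope_id           = "backend"
      scope_namespace_id = "backend-b"
      scope              = google_gke_hub_scope.scope["backend"].name
    }
    "frontend-a" = {
      scope_id           = "frontend"
      scope_namespace_id = "frontend-a"
      scope              = google_gke_hub_scope.scope["frontend"].name
    }
    "frontend-b" = {
      scope_id           = "frontend"
      scope_namespace_id = "frontend-b"
      scope              = google_gke_hub_scope.scope["frontend"].name
    }
  }
  scope_namespace_id = each.value.scope_namespace_id
  scope_id           = each.value.scope_id
  scope              = each.value.scope
  depends_on         = [google_gke_hub_scope.scope]
}
```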
Initialize Terraform in the directory that you created:
terraform init
Check that the changes you propose with Terraform match the expected plan:
terraform plan
Create the fleet scopes and namespaces:
terraform apply
After you create fleet scopes and namespaces, Config Sync detects those new namespaces and their scopes, selects resources in the fleet namespaces, and reconciles them with your configuration files.
You can check that your resources are applied to the correct cluster by using `nomos status`, or by visiting the Config Sync Packages tab in the Google Cloud console and changing the View by radio button to Cluster.
Config Sync syncs your resources across namespaces based on your team scopes, according to the configuration stored in your source of truth. Whenever you add a new resource, as long as you include the correct `NamespaceSelector` annotation, Config Sync automatically reconciles that resource across your team namespaces.
If you want to apply Config Sync settings to your existing clusters, see the instructions for Configuring fleet-level defaults in the Config Sync installation guide.
What's next
- Learn more about setting up teams for your fleet.