A Google Kubernetes Engine (GKE) cluster consists of a control plane and worker machines called nodes. You can run your containerized Kubernetes workloads in a GKE cluster. Nodes are the worker machines that run your containerized applications and other workloads, and the control plane is the unified endpoint for your cluster. For more information, see GKE cluster architecture.
The Kubernetes API server runs on the control plane, allowing you to interact with Kubernetes objects in the cluster through Kubernetes API calls. Objects are persistent entities in the Kubernetes system and represent the state of your cluster. For more information, in the Kubernetes documentation, see Objects in Kubernetes, and the API Overview which links to the "Kubernetes API reference" pages.
This document shows you how to use the Kubernetes API connector in a workflow to make requests to the Kubernetes service endpoint hosted on a GKE cluster's control plane. For example, you can use the connector to create Kubernetes Deployments, run Jobs, manage Pods, or access deployed apps through a proxy. For more information, see the Kubernetes API Connector Overview.
Before you begin
Before you proceed with the tasks in this document, make sure that you have completed the following prerequisites.
Enable APIs
Before you can access Kubernetes API objects using the Kubernetes API connector, you must enable the following APIs:
- Google Kubernetes Engine API: to build and manage container-based applications using GKE
- Workflows API: to manage workflow definitions and executions; enabling the Workflows API automatically enables the Workflow Executions API
Console
Enable the APIs:
gcloud
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Enable the APIs:
gcloud services enable container.googleapis.com workflows.googleapis.com
Create a service account
Create a user-managed service account that
acts as the identity of your workflow, and grant it the
Kubernetes Engine Developer
(roles/container.developer
) role so that the workflow can access Kubernetes
API objects inside clusters.
Console
In the Google Cloud console, go to the Service accounts page.
Select a project and then click Create service account.
In the Service account name field, enter a name. The Google Cloud console fills in the Service account ID field based on this name.
In the Service account description field, enter a description. For example, Service account for Kubernetes API.
Click Create and continue.
In the Select a role list, filter for, and select the Kubernetes Engine Developer role.
Click Continue.
To finish creating the account, click Done.
gcloud
Create the service account:
gcloud iam service-accounts create SERVICE_ACCOUNT_NAME
Replace SERVICE_ACCOUNT_NAME with the name of the service account.
Grant the container.developer role to your service account:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/container.developer
Replace PROJECT_ID with your Google Cloud project ID.
Note that you can use both IAM and Kubernetes role-based access control (RBAC) to control access to your GKE cluster:
- IAM is not specific to Kubernetes; it provides identity management for multiple Google Cloud products, and operates primarily at the level of the Google Cloud project.
- Kubernetes RBAC is a core component of Kubernetes and lets you create and grant roles (sets of permissions) for any object or type of object within the cluster. If you primarily use GKE, and need fine-grained permissions for every object and operation within your cluster, Kubernetes RBAC is the best choice. (A minimal RBAC sketch follows this list.)
For more information, see Access control.
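For illustration only (the tasks in this document rely on IAM, not RBAC), a minimal Kubernetes RBAC sketch might grant read access to Pods in the default namespace to a Google service account; the account name below is a placeholder, not one created in this document.
# Sketch only: a Role granting read access to Pods, bound to a placeholder service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]            # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: workflow-sa@my-project.iam.gserviceaccount.com   # placeholder identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io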
Create a GKE cluster
To use the Kubernetes API connector, you must have already created a public or private GKE cluster. In a private cluster, nodes only have internal IP addresses, which means that nodes and Pods are isolated from the internet by default. For more information, see Private clusters.
You can also specify the mode of operation, which offers different levels of flexibility, responsibility, and control. For example, you can create an Autopilot cluster, a mode of operation in GKE in which Google manages your cluster configuration, including your nodes, scaling, security, and other preconfigured settings. For more information, see Choose a GKE mode of operation.
If you have not yet created a GKE cluster, you can deploy a containerized web server application to a GKE cluster. Or, to try out the instructions in this document, you can create an Autopilot cluster by completing the following steps.
Console
In the Google Cloud console, go to the Kubernetes clusters page.
Click Create.
If you are asked to select a cluster mode, select Autopilot.
In the Cluster basics section, complete the following:
- Enter the Name for your cluster, such as hello-cluster.
- Select a region for your cluster, such as us-central1.
Click Next: Networking.
In the IPv4 network access section, to create a cluster with a publicly accessible endpoint, choose Public cluster.
For all the other settings, accept the defaults.
Click Create.
It might take several minutes for the creation of the cluster to complete. Once the cluster is created, a checkmark indicates that it is running.
gcloud
Run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
    --location=LOCATION \
    --project=PROJECT_ID
Replace the following:
- CLUSTER_NAME: the name of your GKE cluster, such as hello-cluster
- LOCATION: the region for your cluster, such as us-central1
- PROJECT_ID: your Google Cloud project ID
It might take several minutes for the creation of the cluster to complete. Once the cluster is created, the output should be similar to the following:
Creating cluster hello-cluster...done.
Created [https://container.googleapis.com/v1/projects/MY_PROJECT
/zones/us-central1/clusters/hello-cluster].
[...]
STATUS: RUNNING
Use the connector to send an HTTP request
You can use the Kubernetes API connector to send an HTTP request to a
GKE cluster's control plane. For example, the following workflow
creates a Deployment named nginx-deployment
in the specified Kubernetes
cluster. The Deployment describes a required state; in this case, to run three
Pods with the nginx:1.14.2
image and expose their service on port 80. (If not
specified, the project
and location
default to that of the workflow.)
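A minimal sketch of such a workflow definition follows, assuming the gke.request connector function accepts cluster_id, project, location, method, path, and body arguments; check the argument names and the Kubernetes API path against the gke.request reference page before relying on them.
# Sketch only: creates an nginx Deployment through the Kubernetes API connector.
main:
  steps:
    - create_deployment:
        call: gke.request
        args:
          cluster_id: "CLUSTER_NAME"
          project: "PROJECT_ID"     # optional; defaults to the workflow's project
          location: "LOCATION"      # optional; defaults to the workflow's location
          method: "POST"
          path: "/apis/apps/v1/namespaces/default/deployments"
          body:
            apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: nginx-deployment
              labels:
                app: nginx
            spec:
              replicas: 3
              selector:
                matchLabels:
                  app: nginx
              template:
                metadata:
                  labels:
                    app: nginx
                spec:
                  containers:
                    - name: nginx
                      image: nginx:1.14.2
                      ports:
                        - containerPort: 80
        result: response
    - return_result:
        return: ${response}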
For more information, see the reference page for the Kubernetes API connector function, gke.request.
Note the following:
- The path field corresponds to the Kubernetes API method path. For more information, in the Kubernetes documentation, see the API Overview which links to the "Kubernetes API reference" pages.
- You can catch and handle HTTP request errors in your workflow (a minimal sketch follows this list). For more information, see Workflow errors.
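As an illustration of catching such errors, the following sketch wraps a read request (listing Pods in the default namespace) in a try/except block, logs the error, and then re-raises it; the API path and argument names are assumptions to adapt to your cluster.
# Sketch only: wrap a connector call in try/except and log failures.
main:
  steps:
    - list_pods:
        try:
          call: gke.request
          args:
            cluster_id: "CLUSTER_NAME"
            method: "GET"
            path: "/api/v1/namespaces/default/pods"
          result: pods
        except:
          as: e
          steps:
            - log_error:
                call: sys.log
                args:
                  severity: ERROR
                  text: ${"Kubernetes API request failed: " + json.encode_to_string(e)}
            - reraise:
                raise: ${e}
    - return_pods:
        return: ${pods}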
Deploy your workflow
Before executing a workflow, you must create and deploy it.
Console
In the Google Cloud console, go to the Workflows page.
Click Create.
Enter a name for the new workflow, such as kubernetes-api-request.
In the Region list, select us-central1.
Select the Service account you previously created.
Click Next.
In the workflow editor, enter the following definition for your workflow:
YAML
JSON
Replace the following:
- CLUSTER_NAME: the name of your GKE cluster, such as hello-cluster
- PROJECT_ID: your Google Cloud project ID
- LOCATION: the region for your cluster, such as us-central1
Click Deploy.
gcloud
Create a source code file for your workflow:
touch kubernetes-api-request.JSON_OR_YAML
Replace JSON_OR_YAML with yaml or json depending on the format of your workflow.
In a text editor, copy the following workflow to your source code file:
YAML
JSON
Replace the following:
- CLUSTER_NAME: the name of your GKE cluster, such as hello-cluster
- LOCATION: the region for your cluster, such as us-central1
Deploy the workflow:
gcloud workflows deploy kubernetes-api-request \
    --source=kubernetes-api-request.JSON_OR_YAML \
    --location=LOCATION \
    --service-account=SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
Execute your workflow
After successfully deploying your workflow, you can execute it. Executing a workflow runs the current workflow definition associated with the workflow.
Console
In the Google Cloud console, go to the Workflows page.
On the Workflows page, select your workflow to go to its details page.
On the Workflow details page, click Execute.
Click Execute again.
View the results of the workflow in the Output pane.
If successful, the execution state should be Succeeded and the body of the response is returned.
gcloud
Execute the workflow:
gcloud workflows run kubernetes-api-request \
    --location=LOCATION
If successful, the state should be SUCCEEDED and the body of the response is returned.
Use the connector to run a Kubernetes Job
You can use the Kubernetes API connector to deploy and run a Kubernetes Job in a GKE cluster. The following workflow creates a Kubernetes Job that runs a Bash script that iterates through a sequence of numbers. The workflow waits for up to 90 seconds for the Kubernetes Job to complete; otherwise, an error is raised. If the Job completes, it is then deleted.
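A minimal sketch of such a workflow follows. It uses only the gke.request function (confirmed by this document) together with polling, rather than the dedicated connector Job functions referenced later in this section; the batch API paths, the container image, the result structure, and the 10-second polling interval are assumptions you may need to adapt.
# Sketch only: creates a Kubernetes Job, polls it for up to 90 seconds,
# deletes it on success, and raises an error on timeout.
main:
  steps:
    - create_job:
        call: gke.request
        args:
          cluster_id: "CLUSTER_NAME"
          location: "LOCATION"
          method: "POST"
          path: "/apis/batch/v1/namespaces/default/jobs"
          body:
            apiVersion: batch/v1
            kind: Job
            metadata:
              name: JOB_NAME
            spec:
              backoffLimit: 0
              template:
                spec:
                  restartPolicy: Never
                  containers:
                    - name: counter
                      image: ubuntu
                      command: ["/bin/bash", "-c", "for i in 1 2 3 4 5; do echo $i; done"]
        result: created
    - init_timer:
        assign:
          - elapsed: 0
    - get_job:
        call: gke.request
        args:
          cluster_id: "CLUSTER_NAME"
          location: "LOCATION"
          method: "GET"
          path: "/apis/batch/v1/namespaces/default/jobs/JOB_NAME"
        result: job
    - check_done:
        switch:
          # Assumes the connector returns the parsed Job object as the call result.
          - condition: ${default(map.get(job, ["status", "succeeded"]), 0) > 0}
            next: delete_job
          - condition: ${elapsed >= 90}
            next: fail_timeout
    - backoff:
        call: sys.sleep
        args:
          seconds: 10
    - bump_timer:
        assign:
          - elapsed: ${elapsed + 10}
        next: get_job
    - fail_timeout:
        raise: "Kubernetes Job did not complete within 90 seconds"
    - delete_job:
        call: gke.request
        args:
          cluster_id: "CLUSTER_NAME"
          location: "LOCATION"
          method: "DELETE"
          path: "/apis/batch/v1/namespaces/default/jobs/JOB_NAME"
    - finish:
        return: ${job}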
Note that a Job is considered complete if its status includes a condition type of Complete. For example:
"status": { "conditions": [ { "type": "Complete", "status": "True" } ] }
If the Job fails, a FailedJobError tag is returned. For example:
{
  "tags": ["FailedJobError"],
  "job": {...},
  "message": "Kubernetes job failed"
}
For more information, see the reference pages for the following Kubernetes API connector functions:
Deploy your workflow
Before executing a workflow, you must create and deploy it.
Console
In the Google Cloud console, go to the Workflows page.
Click Create.
Enter a name for the new workflow, such as kubernetes-api-job.
In the Region list, select us-central1.
Select the Service account you previously created.
Click Next.
In the workflow editor, enter the following definition for your workflow:
YAML
JSON
Replace the following:
- LOCATION: the region for your cluster, such as us-central1
- CLUSTER_NAME: the name of your GKE cluster, such as hello-cluster
- JOB_NAME: the name of the Kubernetes Job, such as hello-job
Click Deploy.
gcloud
Create a source code file for your workflow:
touch kubernetes-api-job.JSON_OR_YAML
Replace JSON_OR_YAML with yaml or json depending on the format of your workflow.
In a text editor, copy the following workflow to your source code file:
YAML
JSON
Replace the following:
- LOCATION: the region for your cluster, such as us-central1
- CLUSTER_NAME: the name of your GKE cluster, such as hello-cluster
- JOB_NAME: the name of the Kubernetes Job, such as hello-job
Deploy the workflow:
gcloud workflows deploy kubernetes-api-job \
    --source=kubernetes-api-job.JSON_OR_YAML \
    --location=LOCATION \
    --service-account=SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
Execute your workflow
After successfully deploying your workflow, you can execute it. Executing a workflow runs the current workflow definition associated with the workflow.
Console
In the Google Cloud console, go to the Workflows page.
On the Workflows page, select your workflow to go to its details page.
On the Workflow details page, click Execute.
Click Execute again.
The workflow execution might take a couple of minutes.
View the results of the workflow in the Output pane.
The results should be similar to the following:
{
...
},
"status": {
  "completionTime": "2023-10-31T17:04:32Z",
  "conditions": [
    {
      "lastProbeTime": "2023-10-31T17:04:33Z",
      "lastTransitionTime": "2023-10-31T17:04:33Z",
      "status": "True",
      "type": "Complete"
    }
  ],
  "ready": 0,
  "startTime": "2023-10-31T17:04:28Z",
  "succeeded": 1,
  "uncountedTerminatedPods": {}
}
}
gcloud
Execute the workflow:
gcloud workflows run kubernetes-api-job \
    --location=LOCATION
The workflow execution might take a couple of minutes. The results should be similar to the following:
{
...
},
"status": {
"completionTime": "2023-10-31T17:04:32Z",
"conditions": [
{
"lastProbeTime": "2023-10-31T17:04:33Z",
"lastTransitionTime": "2023-10-31T17:04:33Z",
"status": "True",
"type": "Complete"
}
],
"ready": 0,
"startTime": "2023-10-31T17:04:28Z",
"succeeded": 1,
"uncountedTerminatedPods": {}
}
}