Set up an Envoy sidecar service mesh
This configuration is supported for Preview customers, but we do not recommend it for new Cloud Service Mesh users. For more information, see the Cloud Service Mesh overview.
This guide demonstrates how to configure a simple service mesh in your Fleet. The guide includes the following steps:
- Deploying the Envoy sidecar injector into the cluster. The injector injects the Envoy proxy container into application Pods.
- Deploying Gateway API resources that configure the Envoy sidecar in the service mesh to route requests to an example service in the namespace store.
- Deploying a simple client to verify the deployment.
The following diagram shows the configured service mesh.
You can configure only one Mesh in a cluster, because the mesh name in the sidecar injector configuration and the Mesh resource's name must be identical.
Deploy the Envoy sidecar injector
To deploy the sidecar injector, complete the steps in the following sections.
Configure project information
# The project that contains your GKE cluster.
export CLUSTER_PROJECT_ID=YOUR_CLUSTER_PROJECT_NUMBER_HERE

# The name of your GKE cluster.
export CLUSTER=YOUR_CLUSTER_NAME

# The channel of your GKE cluster. Eg: rapid, regular, stable.
export CHANNEL=YOUR_CLUSTER_CHANNEL

# The location of your GKE cluster, Eg: us-central1 for regional GKE cluster,
# us-central1-a for zonal GKE cluster
export LOCATION=ZONE

# The mesh name of the traffic director load balancing API.
export MESH_NAME=YOUR_MESH_NAME

# The project that holds the mesh resources.
export MESH_PROJECT_NUMBER=YOUR_PROJECT_NUMBER_HERE

export TARGET=projects/${MESH_PROJECT_NUMBER}/locations/global/meshes/${MESH_NAME}

gcloud config set project ${CLUSTER_PROJECT_ID}
To find out the MESH_NAME value, assign it as follows, where MESH_NAME is the value of the field metadata.name in the Mesh resource spec:

gketd-MESH_NAME

For example, if the value of metadata.name in the Mesh resource is butterfly-mesh, set the value of MESH_NAME as follows:

export MESH_NAME="gketd-butterfly-mesh"
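For illustration only, the filled-in variables might look like the following; every value here (project ID, cluster name, channel, location, mesh name, and project number) is a hypothetical placeholder that you must replace with your own:

# Hypothetical example values only; substitute your own project, cluster, and mesh details.
export CLUSTER_PROJECT_ID=my-cluster-project
export CLUSTER=gke-1
export CHANNEL=regular
export LOCATION=us-west1-a
export MESH_NAME="gketd-td-mesh"
export MESH_PROJECT_NUMBER=123456789012
export TARGET=projects/${MESH_PROJECT_NUMBER}/locations/global/meshes/${MESH_NAME}

gcloud config set project ${CLUSTER_PROJECT_ID}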
Apply the configurations for Mutating Webhook
The following instructions describe how to apply the MutatingWebhookConfiguration to the cluster. When a Pod is created, the in-cluster admission controller is invoked. The admission controller talks to the managed sidecar injector to add the Envoy container to the Pod.
Apply the following mutating webhook configuration to your cluster.
cat <<EOF | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  labels:
    app: sidecar-injector
  name: td-mutating-webhook
webhooks:
- admissionReviewVersions:
  - v1beta1
  - v1
  clientConfig:
    url: https://meshconfig.googleapis.com/v1internal/projects/${CLUSTER_PROJECT_ID}/locations/${LOCATION}/clusters/${CLUSTER}/channels/${CHANNEL}/targets/${TARGET}:tdInject
  failurePolicy: Fail
  matchPolicy: Exact
  name: namespace.sidecar-injector.csm.io
  namespaceSelector:
    matchExpressions:
    - key: td-injection
      operator: Exists
  reinvocationPolicy: Never
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - pods
    scope: '*'
  sideEffects: None
  timeoutSeconds: 30
EOF
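Optionally, as a quick check that is not part of the original steps, you can confirm that the webhook configuration now exists in the cluster:

# List the mutating webhook configuration that you just applied.
kubectl get mutatingwebhookconfiguration td-mutating-webhook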
If you need to customize the sidecar injector, follow these steps to customize it for your cluster:
Deploy the store service
In this section, you deploy the store service in the mesh.
In the store.yaml file, save the following manifest:

kind: Namespace
apiVersion: v1
metadata:
  name: store
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store
  namespace: store
spec:
  replicas: 2
  selector:
    matchLabels:
      app: store
      version: v1
  template:
    metadata:
      labels:
        app: store
        version: v1
    spec:
      containers:
      - name: whereami
        image: gcr.io/google-samples/whereami:v1.2.20
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
Apply the manifest to gke-1:

kubectl apply -f store.yaml
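As an optional sanity check, which is not part of the original steps, you can confirm that the store Deployment's Pods are running before you create the mesh resources:

# Both store replicas should reach the Running state.
kubectl get pods -n store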
Create a service mesh
In the mesh.yaml file, save the following mesh manifest. The name of the mesh resource must match the mesh name specified in the injector configmap. In this example configuration, the name td-mesh is used in both places:

apiVersion: net.gke.io/v1alpha1
kind: TDMesh
metadata:
  name: td-mesh
  namespace: default
spec:
  gatewayClassName: gke-td
  allowedRoutes:
    namespaces:
      from: All
Apply the mesh manifest to gke-1, which creates a logical mesh with the name td-mesh:

kubectl apply -f mesh.yaml
In the store-route.yaml file, save the following HTTPRoute manifest. The manifest defines an HTTPRoute resource that routes HTTP traffic with the hostname example.com to the Kubernetes service store in the namespace store:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: store-route
  namespace: store
spec:
  parentRefs:
  - name: td-mesh
    namespace: default
    group: net.gke.io
    kind: TDMesh
  hostnames:
  - "example.com"
  rules:
  - backendRefs:
    - name: store
      namespace: store
      port: 8080
Apply the route manifest to gke-1:

kubectl apply -f store-route.yaml
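Optionally, you can confirm that the route exists before validating the mesh; this assumes that the Gateway API CRDs installed in the cluster expose the httproute resource type to kubectl:

# Confirm that the HTTPRoute was created in the store namespace.
kubectl get httproute store-route -n store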
Validate the deployment
Inspect the Mesh status and events to validate that the Mesh and HTTPRoute resources are successfully deployed:

kubectl describe tdmesh td-mesh
The output is similar to the following:
...
Status:
  Conditions:
    Last Transition Time:  2022-04-14T22:08:39Z
    Message:
    Reason:                MeshReady
    Status:                True
    Type:                  Ready
    Last Transition Time:  2022-04-14T22:08:28Z
    Message:
    Reason:                Scheduled
    Status:                True
    Type:                  Scheduled
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  ADD     36s   mc-mesh-controller  Processing mesh default/td-mesh
  Normal  UPDATE  35s   mc-mesh-controller  Processing mesh default/td-mesh
  Normal  SYNC    24s   mc-mesh-controller  SYNC on default/td-mesh was a success
To make sure that sidecar injection is enabled in the default namespace, run the following command:
kubectl get namespace default --show-labels
If sidecar injection is enabled, you see the following in the output:
istio-injection=enabled
If sidecar injection is not enabled, see Enable sidecar injections.
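The exact procedure is described in the page linked above; as a sketch based on the namespaceSelector in the webhook configuration earlier in this guide, which only requires the td-injection key to exist on the namespace, enabling injection could look like the following (the label value itself is arbitrary):

# Label the default namespace so that the mutating webhook matches Pods created in it.
# The webhook's namespaceSelector only checks that the td-injection key exists.
kubectl label namespace default td-injection=enabled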
To verify the deployment, deploy a client Pod that serves as a client to the store service defined previously. In the client.yaml file, save the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: client
  name: client
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: client
  template:
    metadata:
      labels:
        run: client
    spec:
      containers:
      - name: client
        image: curlimages/curl
        command:
        - sh
        - -c
        - while true; do sleep 1; done
Deploy the spec:
kubectl apply -f client.yaml
The sidecar injector running in the cluster automatically injects an Envoy container into the client Pod.
To verify that the Envoy container is injected, run the following command:
kubectl describe pods -l run=client
The output is similar to the following:
...
Init Containers:
  # Istio-init sets up traffic interception for the Pod.
  istio-init:
    ...
  # td-bootstrap-writer generates the Envoy bootstrap file for the Envoy container
  td-bootstrap-writer:
    ...
Containers:
  # client is the client container that runs application code.
  client:
    ...
  # Envoy is the container that runs the injected Envoy proxy.
  envoy:
    ...
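As an optional shortcut, not part of the original steps, you can list just the container names instead of reading the full describe output; the run=client label comes from the client manifest above:

# Expect to see both the "client" and "envoy" containers listed.
kubectl get pods -l run=client -o jsonpath='{.items[0].spec.containers[*].name}'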
After the client Pod is provisioned, send a request from the client Pod to the store service.
Get the name of the client Pod:
CLIENT_POD=$(kubectl get pod -l run=client -o=jsonpath='{.items[0].metadata.name}')

# The VIP where the following request will be sent. Because all requests
# from the client container are redirected to the Envoy proxy sidecar, you
# can use any IP address, including 10.0.0.2, 192.168.0.1, and others.
VIP='10.0.0.1'
Send a request to the store service and output the response headers:
TEST_CMD="curl -v -H 'host: example.com' $VIP"
Execute the test command in the client container:
kubectl exec -it $CLIENT_POD -c client -- /bin/sh -c "$TEST_CMD"
The output is similar to the following:
< Trying 10.0.0.1:80...
< Connected to 10.0.0.1 (10.0.0.1) port 80 (#0)
< GET / HTTP/1.1
< Host: example.com
< User-Agent: curl/7.82.0-DEV
< Accept: */*
<
< Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< content-length: 318
< access-control-allow-origin: *
< server: envoy
< date: Tue, 12 Apr 2022 22:30:13 GMT
<
{
  "cluster_name": "gke-1",
  "zone": "us-west1-a",
  "host_header": "example.com",
  ...
}
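Because the store Deployment runs two replicas, you can optionally send the request several times to watch Envoy distribute traffic between them. This sketch assumes the whereami sample includes Pod-identifying fields, such as pod_name, in its JSON response:

# Send five requests through the sidecar; the responses should come from both store Pods.
for i in 1 2 3 4 5; do
  kubectl exec $CLIENT_POD -c client -- /bin/sh -c "curl -s -H 'host: example.com' $VIP"
  echo
done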