Configure networking and access to your Cloud TPU
This page describes how to set up custom network and access configurations for your Cloud TPU, including:
- Specifying a custom network and subnetwork
- Specifying internal IP addresses
- Enabling SSH access to TPUs
- Attaching a custom service account to your TPU
- Enabling custom SSH methods
Prerequisites
Before you run these procedures, you must install the Google Cloud CLI, create a Google Cloud project, and enable the Cloud TPU API. For instructions, see Set up the Cloud TPU environment.
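As a quick sketch of the last prerequisite, assuming the gcloud CLI is already installed and authenticated, you can set your project and enable the Cloud TPU API from the command line (PROJECT_ID is a placeholder for your project ID):

```shell
# Set the active project for subsequent gcloud commands.
gcloud config set project PROJECT_ID

# Enable the Cloud TPU API for the project.
gcloud services enable tpu.googleapis.com
```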
Specify a custom network and subnetwork
You can optionally specify the network and subnetwork to use for the TPU. If the network is not specified, the TPU is placed in the default network. The subnetwork must be in the same region as the zone where the TPU runs.
Create a network that matches one of the following valid formats:
- https://www.googleapis.com/compute/{version}/projects/{proj-id}/global/networks/{network}
- compute/{version}/projects/{proj-id}/global/networks/{network}
- compute/{version}/projects/{proj-##}/global/networks/{network}
- projects/{proj-id}/global/networks/{network}
- projects/{proj-##}/global/networks/{network}
- global/networks/{network}
- {network}
For more information, see Create and manage VPC networks.
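As one sketch of this step, a custom-mode VPC network can be created with the gcloud CLI (NETWORK is a placeholder name):

```shell
# Create a VPC network in custom subnet mode, so you choose the
# subnetwork's region and IP range yourself in the next step.
gcloud compute networks create NETWORK --subnet-mode=custom
```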
Create a subnetwork that matches one of the following valid formats:
- https://www.googleapis.com/compute/{version}/projects/{proj-id}/regions/{region}/subnetworks/{subnetwork}
- compute/{version}/projects/{proj-id}/regions/{region}/subnetworks/{subnetwork}
- compute/{version}/projects/{proj-##}/regions/{region}/subnetworks/{subnetwork}
- projects/{proj-id}/regions/{region}/subnetworks/{subnetwork}
- projects/{proj-##}/regions/{region}/subnetworks/{subnetwork}
- regions/{region}/subnetworks/{subnetwork}
- {subnetwork}
For more information, see Create and manage VPC networks.
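Continuing the sketch, a subnetwork can then be created in the region that contains your TPU zone (us-central2-b is in region us-central2; the IP range below is an example, not a requirement):

```shell
# Create a subnetwork in the same region as the TPU zone.
gcloud compute networks subnets create SUBNETWORK \
    --network=NETWORK \
    --region=us-central2 \
    --range=10.0.0.0/24
```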
Create the TPU, specifying the custom network and subnetwork:
gcloud
To specify the network and subnetwork using the gcloud CLI, add the --network and --subnetwork flags to your create request:

$ gcloud compute tpus tpu-vm create TPU_NAME \
    --zone=us-central2-b \
    --accelerator-type=v4-8 \
    --version=TPU_SOFTWARE_VERSION \
    --network=NETWORK \
    --subnetwork=SUBNETWORK
curl
To specify the network and subnetwork in a curl call, add the network and subnetwork fields to the request body:

$ curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d "{accelerator_type: 'v4-8', \
    runtime_version: 'tpu-vm-tf-2.18.0-pjrt', \
    network_config: {network: 'NETWORK', subnetwork: 'SUBNETWORK', enable_external_ips: true}, \
    shielded_instance_config: {enable_secure_boot: true}}" \
    https://tpu.googleapis.com/v2/projects/PROJECT_ID/locations/us-central2-b/nodes?node_id=TPU_NAME
Enable internal IP addresses
When you create a TPU, an external IP address is created by default for each TPU VM. If you want your TPU VMs to use internal IP addresses instead, add the --internal-ips flag when you create the TPU.
gcloud
If you are using queued resources:

$ gcloud compute tpus queued-resources create QUEUED_RESOURCE_ID \
    --node-id=TPU_NAME \
    --project=PROJECT_ID \
    --zone=us-central2-b \
    --accelerator-type=v4-8 \
    --runtime-version=TPU_SOFTWARE_VERSION \
    --internal-ips
If you are using the Create Node API:
$ gcloud compute tpus tpu-vm create TPU_NAME \
    --zone=us-central2-b \
    --accelerator-type=v4-8 \
    --version=TPU_SOFTWARE_VERSION \
    --internal-ips
curl
Set the enable_external_ips field to false in the request body:

$ curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d "{accelerator_type: 'v4-8', \
    runtime_version: 'tpu-vm-tf-2.18.0-pjrt', \
    network_config: {enable_external_ips: false}, \
    shielded_instance_config: {enable_secure_boot: true}}" \
    https://tpu.googleapis.com/v2/projects/PROJECT_ID/locations/us-central2-b/nodes?node_id=TPU_NAME
Enable SSH access to TPUs
To connect to TPUs using SSH, you need to either enable external IP addresses for the TPUs, or enable Private Google Access for the subnetwork to which the TPU VMs are connected.
Enable Private Google Access
TPUs that don't have external IP addresses can use Private Google Access to access Google APIs and services. For more information about enabling Private Google Access, see Configure Private Google Access.
After you have configured Private Google Access, connect to the VM using SSH.
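As a sketch of that connection step, the gcloud CLI provides an SSH subcommand for TPU VMs. The flags below are illustrative: with Private Google Access and no external IPs, run this from a host inside the same VPC network, or use IAP TCP forwarding if it is configured for your project.

```shell
# Open an SSH session to the TPU VM named TPU_NAME in us-central2-b.
gcloud compute tpus tpu-vm ssh TPU_NAME --zone=us-central2-b
```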
Attach a custom service account
Each TPU VM has an associated service account it uses to make API requests on your behalf. TPU VMs use this service account to call Cloud TPU APIs and access Cloud Storage and other services. By default, your TPU VM uses the default Compute Engine service account.
The service account must be defined in the same Google Cloud project where you create your TPU VM. Custom service accounts used for TPU VMs must have the TPU Viewer role to call the Cloud TPU API. If the code running in your TPU VM calls other Google Cloud services, it must have the roles necessary to access those services.
For more information about service accounts, see Service accounts.
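To satisfy the TPU Viewer requirement described above, one hedged sketch is to grant the role with an IAM policy binding before creating the TPU (PROJECT_ID and SERVICE_ACCOUNT are placeholders for your project ID and the service account's email address):

```shell
# Grant the TPU Viewer role so the TPU VM's service account
# can call the Cloud TPU API.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SERVICE_ACCOUNT" \
    --role="roles/tpu.viewer"
```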
Use the following commands to specify a custom service account.
gcloud
Use the --service-account flag when creating a TPU:

$ gcloud compute tpus tpu-vm create TPU_NAME \
    --zone=us-central2-b \
    --accelerator-type=TPU_TYPE \
    --version=tpu-vm-tf-2.18.0-pjrt \
    --service-account=SERVICE_ACCOUNT
curl
Set the service_account field in the request body:

$ curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d "{accelerator_type: 'v4-8', \
    runtime_version: 'tpu-vm-tf-2.18.0-pjrt', \
    network_config: {enable_external_ips: true}, \
    shielded_instance_config: {enable_secure_boot: true}, \
    service_account: {email: 'SERVICE_ACCOUNT'}}" \
    https://tpu.googleapis.com/v2/projects/PROJECT_ID/locations/us-central2-b/nodes?node_id=TPU_NAME
Enable custom SSH methods
The default network allows SSH access to all TPU VMs. If you use a network other than the default or you change the default network settings, you need to explicitly enable SSH access by adding a firewall rule:
$ gcloud compute firewall-rules create allow-ssh \
    --network=NETWORK \
    --allow=tcp:22