This page shows how to mount a Cloud Storage bucket as a storage volume, using Cloud Run volume mounts.
Mounting the bucket as a volume in Cloud Run presents the bucket content as files in the container file system. After you mount the bucket as a volume, you access the bucket as if it were a directory on your local file system, using your programming language's file system operations and libraries instead of using Google API Client Libraries.
Memory requirements
Cloud Storage volume mounts use the Cloud Run container memory for the following activities:
- Cloud Storage FUSE caching: by default, Cloud Run uses the stat cache with a time to live (TTL) of 60 seconds. The default maximum size of the stat cache is 32 MB, and the default maximum size of the type cache is 4 MB.
- Reading: Cloud Storage FUSE consumes memory in addition to the stat and type caches, for example a 1 MiB array for every file being read, as well as memory for goroutines.
- Writing: the entire file is staged in Cloud Run memory before the file is written to Cloud Storage.
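Because writes are staged entirely in memory, the memory limit of your service must accommodate the largest file you expect to write, in addition to the caches described above. For example, one way to do this is to raise the limit with the standard memory setting (the 2Gi value here is only illustrative; size it for your workload):
# Illustrative only: raise the instance memory limit to fit your largest writes
gcloud run services update SERVICE \
--memory 2Gi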
Limitations
Since Cloud Run uses Cloud Storage FUSE for this volume mount, there are a few things to keep in mind when mounting a Cloud Storage bucket as a volume:
- Cloud Storage FUSE does not provide concurrency control for multiple writes (file locking) to the same file. When multiple writes try to replace a file, the last write wins and all previous writes are lost.
- Cloud Storage FUSE is not a fully POSIX-compliant file system. For more details, refer to the Cloud Storage FUSE documentation.
Disallowed paths
Cloud Run does not allow you to mount a volume at /dev, /proc, or /sys, or on their subdirectories.
Before you begin
You need a Cloud Storage bucket to mount as the volume.
For optimal read/write performance to Cloud Storage, see Optimizing Cloud Storage FUSE network bandwidth performance.
Required roles
To get the permissions that you need to configure Cloud Storage volume mounts, ask your administrator to grant you the following IAM roles:
- Cloud Run Developer (roles/run.developer) on the Cloud Run service
- Service Account User (roles/iam.serviceAccountUser) on the service identity
To get the permissions that your service identity needs to access the file and Cloud Storage bucket, ask your administrator to grant the service identity the following IAM role:
- Storage Admin (roles/storage.admin)
For more details on Cloud Storage roles and permissions, see IAM for Cloud Storage.
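For example, one way to grant this role to the service identity on a specific bucket (rather than project-wide) is with the following command, where the bucket name and service account email are placeholders for your own values:
# BUCKET_NAME and SERVICE_ACCOUNT_EMAIL are placeholders for your bucket and service identity
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
--member serviceAccount:SERVICE_ACCOUNT_EMAIL \
--role roles/storage.admin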
For a list of IAM roles and permissions that are associated with Cloud Run, see Cloud Run IAM roles and Cloud Run IAM permissions. If your Cloud Run service interfaces with Google Cloud APIs, such as Cloud Client Libraries, see the service identity configuration guide. For more information about granting roles, see deployment permissions and manage access.
Mount a Cloud Storage volume
You can mount multiple buckets at different mount paths. You can also mount a volume to more than one container using the same or different mount paths across containers.
If you are using multiple containers, first specify the volumes, then specify the volume mounts for each container.
Volume mounts require the second generation execution environment. Cloud Run automatically selects the second generation execution environment for your service if no execution environment is explicitly configured.
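If you want to set the execution environment explicitly instead of relying on the automatic selection, one way is to update the service with the execution environment flag, for example:
# Optional: pin the service to the second generation execution environment
gcloud run services update SERVICE \
--execution-environment gen2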
Console
In the Google Cloud console, go to Cloud Run:
Click Deploy container and select Service to configure a new service. If you are configuring an existing service, click the service, then click Edit and deploy new revision.
If you are configuring a new service, fill out the initial service settings page, then click Container(s), volumes, networking, security to expand the service configuration page.
Click the Volumes tab.
- Click Add volume.
- In the Volume type drop-down, select Cloud Storage bucket as the volume type.
- In the Volume name field, enter the name you want to use for the volume.
- Browse and select the Cloud Storage bucket to be used for the volume, or, optionally, create a new bucket.
- If you want to make the bucket read-only, select the Read-only checkbox.
- Click Done.
- Click the Volume Mounts tab.
- Click Mount volume.
- Select the storage volume from the menu.
- Specify the path where you want to mount the volume.
- Click Done.
Click Create or Deploy.
gcloud
To add a volume and mount it:
gcloud run services update SERVICE \
--add-volume name=VOLUME_NAME,type=cloud-storage,bucket=BUCKET_NAME \
--add-volume-mount volume=VOLUME_NAME,mount-path=MOUNT_PATH
Replace:
- SERVICE with the name of your service.
- MOUNT_PATH with the relative path where you are mounting the volume, for example, /mnt/my-volume.
- VOLUME_NAME with any name you want for your volume. The VOLUME_NAME value is used to map the volume to the volume mount.
- BUCKET_NAME with the name of your Cloud Storage bucket.
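For example, assuming a service named my-service, a bucket named my-bucket, and the mount path used in the snippets later on this page (all hypothetical values), the command looks like this:
# Hypothetical names: my-service, my-volume, my-bucket
gcloud run services update my-service \
--add-volume name=my-volume,type=cloud-storage,bucket=my-bucket \
--add-volume-mount volume=my-volume,mount-path=/mnt/my-volume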
To mount your volume as a read-only volume:
--add-volume=name=VOLUME_NAME,type=cloud-storage,bucket=BUCKET_NAME,readonly=true
If you are using multiple containers, first specify your volume(s), then specify the volume mount(s) for each container:
gcloud run services update SERVICE \
--add-volume name=VOLUME_NAME,type=cloud-storage,bucket=BUCKET_NAME \
--container CONTAINER_1 \
--add-volume-mount volume=VOLUME_NAME,mount-path=MOUNT_PATH \
--container CONTAINER_2 \
--add-volume-mount volume=VOLUME_NAME,mount-path=MOUNT_PATH2
YAML
If you are creating a new service, skip this step. If you are updating an existing service, download its YAML configuration:
gcloud run services describe SERVICE --format export > service.yaml
Update as needed.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: SERVICE
spec:
  template:
    metadata:
      annotations:
        run.googleapis.com/execution-environment: gen2
    spec:
      containers:
      - image: IMAGE_URL
        volumeMounts:
        - name: VOLUME_NAME
          mountPath: MOUNT_PATH
      volumes:
      - name: VOLUME_NAME
        csi:
          driver: gcsfuse.run.googleapis.com
          readOnly: IS_READ_ONLY
          volumeAttributes:
            bucketName: BUCKET_NAME
Replace:
- IMAGE_URL with a reference to the container image, for example, us-docker.pkg.dev/cloudrun/container/hello:latest. If you use Artifact Registry, the repository REPO_NAME must already be created. The URL has the shape LOCATION-docker.pkg.dev/PROJECT_ID/REPO_NAME/PATH:TAG.
- MOUNT_PATH with the relative path where you are mounting the volume, for example, /mnt/my-volume.
- VOLUME_NAME with any name you want for your volume. The VOLUME_NAME value is used to map the volume to the volume mount.
- IS_READ_ONLY with True to make the volume read-only, or False to allow writes.
- BUCKET_NAME with the name of the Cloud Storage bucket.
Create or update the service using the following command:
gcloud run services replace service.yaml
Reading and writing to a volume
If you use the Cloud Run volume mount feature, you access a mounted volume using the same libraries in your programming language that you use to read and write files on your local file system.
This is especially useful if you're using an existing container that expects data to be stored on the local file system and uses regular file system operations to access it.
The following snippets assume a volume mount with a mountPath set to /mnt/my-volume.
Node.js
Use the File System module to create a new file or append to an existing file in the volume, /mnt/my-volume:
var fs = require('fs');
fs.appendFileSync('/mnt/my-volume/sample-logfile.txt', 'Hello logs!', { flag: 'a+' });
Python
Write to a file kept in the volume, /mnt/my-volume:
with open("/mnt/my-volume/sample-logfile.txt", "a") as f:
    f.write("Hello logs!\n")
Go
Use the os package to create a new file kept in the volume, /mnt/my-volume:
f, err := os.Create("/mnt/my-volume/sample-logfile.txt")
Java
Use the java.io.File class to create a log file in the volume, /mnt/my-volume:
import java.io.File;

File f = new File("/mnt/my-volume/sample-logfile.txt");
View Volume mounts settings
To view the current Volume mounts settings for your Cloud Run service:
Console
In the Google Cloud console, go to Cloud Run:
Click the service you are interested in to open the Service details page.
Click the Revisions tab.
In the details panel at the right, the Volume mounts setting is listed under the Volumes tab.
gcloud
Use the following command:
gcloud run services describe SERVICE
Locate the Volume mounts setting in the returned configuration.
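If you prefer to inspect the full configuration, you can also export it as YAML (as shown earlier on this page) and look for the volumeMounts and volumes entries under spec.template.spec:
gcloud run services describe SERVICE --format export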
Optimizing Cloud Storage FUSE network bandwidth performance
For better read and write performance, connect your Cloud Run service to a VPC network using Direct VPC and route all outbound traffic through your VPC network. You can do this using any of the following options:
- Enable Private Google Access, making sure to set the vpc-egress parameter to all-traffic (see the example command after this list).
- Use one of the options described in the networking best practices page in example 2, Internal traffic to a Google API.
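As a sketch of the first option, the following command attaches a service to a VPC network and subnet and routes all outbound traffic through it. The network and subnet names are placeholders, and Private Google Access must be enabled on the subnet separately:
# NETWORK and SUBNET are placeholders; enable Private Google Access on the subnet separately
gcloud run services update SERVICE \
--network NETWORK \
--subnet SUBNET \
--vpc-egress all-traffic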
Container startup time and Cloud Storage FUSE mounts
Using Cloud Storage FUSE can slightly increase your Cloud Run container cold start time because the volume mount is started prior to starting the container(s). Your container will start only if Cloud Storage FUSE is successfully mounted.
Note that Cloud Storage FUSE successfully mounts a volume only after establishing a connection to Cloud Storage, so any networking delays can affect container startup time. Similarly, if the connection attempt fails, Cloud Storage FUSE fails to mount and the Cloud Run service fails to start. Also, if Cloud Storage FUSE takes longer than 30 seconds to mount, the Cloud Run service fails to start because Cloud Run allows a total of 30 seconds to perform all mounts.
Cloud Storage FUSE performance characteristics
If you define two volumes, each pointing to a different bucket, two Cloud Storage FUSE processes will be started. The mounts and processes occur in parallel.
Operations using Cloud Storage FUSE are impacted by network bandwidth because Cloud Storage FUSE communicates with Cloud Storage using the Cloud Storage API. Some operations such as listing the content of a bucket can be slow if the network bandwidth is low. Similarly, reading a large file can take time as this is also limited by network bandwidth.
When you write to a bucket, Cloud Storage FUSE fully stages the object in memory. This means that writing large files is limited by the amount of memory available to the container instance (the maximum container memory limit is 32 GiB).
The write is flushed to the bucket only when you perform a close or an fsync: the full object is then uploaded or re-uploaded to the bucket. The only exception to an object being entirely re-uploaded is the case of a file with appended content when the file is 2 MiB or more.
For more information, see the following resources: