Backup

A backup of a Cloud Spanner database.

Field | Description
---|---
createTime | Output only. The time the CreateBackup request is received. If the request does not specify `version_time`, the `version_time` of the backup will be equivalent to the `create_time`.
database | Required for the CreateBackup operation. Name of the database from which this backup was created. This needs to be in the same instance as the backup. Values are of the form `projects/<project>/instances/<instance>/databases/<database>`.
databaseDialect | Output only. The database dialect information for the backup. Enum type; one of: `DATABASE_DIALECT_UNSPECIFIED` (Default value. This value will create a database with the GOOGLE_STANDARD_SQL dialect.); `GOOGLE_STANDARD_SQL` (Google standard SQL.); `POSTGRESQL` (PostgreSQL supported SQL.)
encryptionInfo | Output only. The encryption information for the backup.
expireTime | Required for the CreateBackup operation. The expiration time of the backup, with microseconds granularity, that must be at least 6 hours and at most 366 days from the time the CreateBackup request is processed. Once the `expire_time` has passed, the backup is eligible to be automatically deleted by Cloud Spanner.
maxExpireTime | Output only. The max allowed expiration time of the backup, with microseconds granularity. A backup's expiration time can be configured in multiple APIs: CreateBackup, UpdateBackup, CopyBackup. When updating or copying an existing backup, the expiration time specified must be less than `Backup.max_expire_time`.
name | Output only for the CreateBackup operation. Required for the UpdateBackup operation. A globally unique identifier for the backup which cannot be changed. Values are of the form `projects/<project>/instances/<instance>/backups/<backup>`.
referencingBackups[] | Output only. The names of the destination backups being created by copying this source backup. The backup names are of the form `projects/<project>/instances/<instance>/backups/<backup>`.
referencingDatabases[] | Output only. The names of the restored databases that reference the backup. The database names are of the form `projects/<project>/instances/<instance>/databases/<database>`.
sizeBytes | Output only. Size of the backup in bytes.
state | Output only. The current state of the backup. Enum type; one of: `STATE_UNSPECIFIED` (Not specified.); `CREATING` (The pending backup is still being created. Operations on the backup may fail with FAILED_PRECONDITION in this state.); `READY` (The backup is complete and ready for use.)
versionTime | The backup will contain an externally consistent copy of the database at the timestamp specified by `version_time`. If `version_time` is not specified, the system will set `version_time` to the `create_time` of the backup.
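A minimal JSON sketch of a Backup resource as it might appear in a CreateBackup request body, assembled from the writable fields above; the project, instance, database, and timestamp values are hypothetical placeholders.

```json
{
  "database": "projects/my-project/instances/my-instance/databases/my-db",
  "expireTime": "2025-01-31T00:00:00Z",
  "versionTime": "2025-01-01T00:00:00Z"
}
```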
BackupInfo

Information about a backup.

Field | Description
---|---
backup | Name of the backup.
createTime | The time the CreateBackup request was received.
sourceDatabase | Name of the database the backup was created from.
versionTime | The backup contains an externally consistent copy of `source_database` at the timestamp specified by `version_time`.

BatchCreateSessionsRequest

The request for BatchCreateSessions.

Field | Description
---|---
sessionCount | Required. The number of sessions to be created in this batch call. The API may return fewer than the requested number of sessions. If a specific number of sessions are desired, the client can make additional calls to BatchCreateSessions (adjusting session_count as necessary).
sessionTemplate | Parameters to be applied to each created session.
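A minimal sketch of a BatchCreateSessions request body per the fields above; the count is arbitrary, and the `labels` map on the session template is an assumption about the Session message (not shown in this section).

```json
{
  "sessionCount": 25,
  "sessionTemplate": {
    "labels": { "env": "dev" }
  }
}
```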
BatchCreateSessionsResponse

The response for BatchCreateSessions.

Field | Description
---|---
session[] | The freshly created sessions.

BeginTransactionRequest

The request for BeginTransaction.

Field | Description
---|---
options | Required. Options for the new transaction.
requestOptions | Common options for this request. Priority is ignored for this request. Setting the priority in this request_options struct will not do anything. To set the priority for a transaction, set it on the reads and writes that are part of this transaction instead.

Binding

Associates `members`, or principals, with a `role`.

Field | Description
---|---
condition | The condition that is associated with this binding. If the condition evaluates to `true`, then this binding applies to the current request. If the condition evaluates to `false`, then this binding does not apply to the current request.
members[] | Specifies the principals requesting access for a Google Cloud resource.
role | Role that is assigned to the list of `members`, or principals. For example, `roles/viewer`, `roles/editor`, or `roles/owner`.

ChildLink

Metadata associated with a parent-child relationship appearing in a PlanNode.

Field | Description
---|---
childIndex | The node to which the link points.
type | The type of the link. For example, in Hash Joins this could be used to distinguish between the build child and the probe child, or in the case of the child being an output variable, to represent the tag associated with the output variable.
variable | Only present if the child node is SCALAR and corresponds to an output variable of the parent node. The field carries the name of the output variable.

CommitRequest

The request for Commit.

Field | Description
---|---
mutations[] | The mutations to be executed when this transaction commits. All mutations are applied atomically, in the order they appear in this list.
requestOptions | Common options for this request.
returnCommitStats | If `true`, then statistics related to the transaction will be included in the CommitResponse. Default value is `false`.
singleUseTransaction | Execute mutations in a temporary transaction. Note that unlike commit of a previously-started transaction, commit with a temporary transaction is non-idempotent. That is, if the `CommitRequest` is sent to Cloud Spanner more than once (for instance, due to retries in the application or in the transport library), it is possible that the mutations are applied more than once; if this is undesirable, use BeginTransaction and Commit instead.
transactionId | Commit a previously-started transaction.
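A minimal sketch of a Commit request body built from the fields above; the transaction ID, table, columns, and values are hypothetical, and the `insert` mutation's table/columns/values shape is assumed from the Mutation and Write messages of the Spanner API.

```json
{
  "transactionId": "EXAMPLE_TXN_ID",
  "mutations": [
    {
      "insert": {
        "table": "Singers",
        "columns": ["SingerId", "FirstName"],
        "values": [["1", "Marc"]]
      }
    }
  ],
  "returnCommitStats": true
}
```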
CommitResponse

The response for Commit.

Field | Description
---|---
commitStats | The statistics about this Commit. Not returned by default. For more information, see CommitRequest.return_commit_stats.
commitTimestamp | The Cloud Spanner timestamp at which the transaction committed.

CommitStats

Additional statistics about a commit.

Field | Description
---|---
mutationCount | The total number of mutations for the transaction. Knowing the `mutation_count` value can help you maximize the number of mutations in a transaction and minimize the number of API round trips.

ContextValue

A message representing context for a KeyRangeInfo, including a label, value, unit, and severity.

Field | Description
---|---
label | The label for the context value, e.g. "latency".
severity | The severity of this context. Enum type; one of: `SEVERITY_UNSPECIFIED` (Required default value.); `INFO` (Lowest severity level, "Info".); `WARNING` (Middle severity level, "Warning".); `ERROR` (Severity level signaling an error, "Error".); `FATAL` (Severity level signaling a non-recoverable error, "Fatal".)
unit | The unit of the context value.
value | The value for the context.

CopyBackupEncryptionConfig

Encryption configuration for the copied backup.

Field | Description
---|---
encryptionType | Required. The encryption type of the backup. Enum type; one of: `ENCRYPTION_TYPE_UNSPECIFIED` (Unspecified. Do not use.); `USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION` (The default option for CopyBackup when encryption_config is not specified. For example, if the source backup is using `Customer_Managed_Encryption`, the backup will use the same Cloud KMS key as the source backup.); `GOOGLE_DEFAULT_ENCRYPTION` (Use Google default encryption.); `CUSTOMER_MANAGED_ENCRYPTION` (Use customer managed encryption. If specified, either kms_key_name or kms_key_names must contain valid Cloud KMS key(s).)
kmsKeyName | Optional. The Cloud KMS key that will be used to protect the backup. This field should be set only when encryption_type is `CUSTOMER_MANAGED_ENCRYPTION`.

CopyBackupMetadata

Metadata type for the operation returned by CopyBackup.

Field | Description
---|---
cancelTime | The time at which cancellation of the CopyBackup operation was received. Operations.CancelOperation starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.
name | The name of the backup being created through the copy operation. Values are of the form `projects/<project>/instances/<instance>/backups/<backup>`.
progress | The progress of the CopyBackup operation.
sourceBackup | The name of the source backup that is being copied. Values are of the form `projects/<project>/instances/<instance>/backups/<backup>`.

CopyBackupRequest

The request for CopyBackup.

Field | Description
---|---
backupId | Required. The id of the backup copy. The `backup_id` appended to `parent` forms the full backup URI of the form `projects/<project>/instances/<instance>/backups/<backup>`.
encryptionConfig | Optional. The encryption configuration used to encrypt the backup. If this field is not specified, the backup will use the same encryption configuration as the source backup by default, namely encryption_type = `USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION`.
expireTime | Required. The expiration time of the backup in microsecond granularity. The expiration time must be at least 6 hours and at most 366 days from the `create_time` of the source backup.
sourceBackup | Required. The source backup to be copied. The source backup needs to be in READY state for it to be copied. Once CopyBackup is in progress, the source backup cannot be deleted or cleaned up on expiration until CopyBackup is finished. Values are of the form `projects/<project>/instances/<instance>/backups/<backup>`.
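A minimal sketch of a CopyBackup request body from the fields above; the backup names and the expiration timestamp are hypothetical placeholders.

```json
{
  "backupId": "copy-of-nightly",
  "sourceBackup": "projects/my-project/instances/my-instance/backups/nightly",
  "expireTime": "2025-06-30T00:00:00Z"
}
```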
CreateBackupMetadata

Metadata type for the operation returned by CreateBackup.

Field | Description
---|---
cancelTime | The time at which cancellation of this operation was received. Operations.CancelOperation starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.
database | The name of the database the backup is created from.
name | The name of the backup being created.
progress | The progress of the CreateBackup operation.

CreateDatabaseMetadata

Metadata type for the operation returned by CreateDatabase.

Field | Description
---|---
database | The database being created.

CreateDatabaseRequest

The request for CreateDatabase.

Field | Description
---|---
createStatement | Required. A `CREATE DATABASE` statement, which specifies the ID of the new database.
databaseDialect | Optional. The dialect of the Cloud Spanner Database. Enum type; one of: `DATABASE_DIALECT_UNSPECIFIED` (Default value. This value will create a database with the GOOGLE_STANDARD_SQL dialect.); `GOOGLE_STANDARD_SQL` (Google standard SQL.); `POSTGRESQL` (PostgreSQL supported SQL.)
encryptionConfig | Optional. The encryption configuration for the database. If this field is not specified, Cloud Spanner will encrypt/decrypt all data at rest using Google default encryption.
extraStatements[] | Optional. A list of DDL statements to run inside the newly created database. Statements can create tables, indexes, etc. These statements execute atomically with the creation of the database: if there is an error in any statement, the database is not created.
protoDescriptors | Optional. Proto descriptors used by CREATE/ALTER PROTO BUNDLE statements in 'extra_statements' above. Contains a protobuf-serialized google.protobuf.FileDescriptorSet. To generate it, install and run `protoc` with `--include_imports` and `--descriptor_set_out`.
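A minimal sketch of a CreateDatabase request body using the fields above; the database ID and the table DDL are hypothetical.

```json
{
  "createStatement": "CREATE DATABASE `example-db`",
  "extraStatements": [
    "CREATE TABLE Singers (SingerId INT64 NOT NULL, FirstName STRING(1024)) PRIMARY KEY (SingerId)"
  ],
  "databaseDialect": "GOOGLE_STANDARD_SQL"
}
```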
CreateInstanceConfigMetadata

Metadata type for the operation returned by CreateInstanceConfig.

Field | Description
---|---
cancelTime | The time at which this operation was cancelled.
instanceConfig | The target instance config end state.
progress | The progress of the CreateInstanceConfig operation.

CreateInstanceConfigRequest

The request for CreateInstanceConfig.

Field | Description
---|---
instanceConfig | Required. The InstanceConfig proto of the configuration to create. instance_config.name must be `<parent>/instanceConfigs/<instance_config_id>`.
instanceConfigId | Required. The ID of the instance config to create. Valid identifiers are of the form `custom-[-a-z0-9]*[a-z0-9]` and must be between 2 and 64 characters in length.
validateOnly | An option to validate, but not actually execute, a request, and provide the same response.

CreateInstanceMetadata

Metadata type for the operation returned by CreateInstance.

Field | Description
---|---
cancelTime | The time at which this operation was cancelled. If set, this operation is in the process of undoing itself (which is guaranteed to succeed) and cannot be cancelled again.
endTime | The time at which this operation failed or was completed successfully.
instance | The instance being created.
startTime | The time at which the CreateInstance request was received.

CreateInstanceRequest

The request for CreateInstance.

Field | Description
---|---
instance | Required. The instance to create. The name may be omitted, but if specified must be `<parent>/instances/<instance_id>`.
instanceId | Required. The ID of the instance to create. Valid identifiers are of the form `[a-z][-a-z0-9]*[a-z0-9]` and must be between 2 and 64 characters in length.

CreateSessionRequest

The request for CreateSession.

Field | Description
---|---
session | Required. The session to create.
Database

A Cloud Spanner database.

Field | Description
---|---
createTime | Output only. If exists, the time at which the database creation started.
databaseDialect | Output only. The dialect of the Cloud Spanner Database. Enum type; one of: `DATABASE_DIALECT_UNSPECIFIED` (Default value. This value will create a database with the GOOGLE_STANDARD_SQL dialect.); `GOOGLE_STANDARD_SQL` (Google standard SQL.); `POSTGRESQL` (PostgreSQL supported SQL.)
defaultLeader | Output only. The read-write region which contains the database's leader replicas. This is the same as the value of default_leader database option set using DatabaseAdmin.CreateDatabase or DatabaseAdmin.UpdateDatabaseDdl. If not explicitly set, this is empty.
earliestVersionTime | Output only. Earliest timestamp at which older versions of the data can be read. This value is continuously updated by Cloud Spanner and becomes stale the moment it is queried. If you are using this value to recover data, make sure to account for the time from the moment when the value is queried to the moment when you initiate the recovery.
encryptionConfig | Output only. For databases that are using customer managed encryption, this field contains the encryption configuration for the database. For databases that are using Google default or other types of encryption, this field is empty.
encryptionInfo[] | Output only. For databases that are using customer managed encryption, this field contains the encryption information for the database, such as all Cloud KMS key versions that are in use. The `encryption_status` field inside of each `EncryptionInfo` is not populated.
name | Required. The name of the database. Values are of the form `projects/<project>/instances/<instance>/databases/<database>`.
restoreInfo | Output only. Applicable only for restored databases. Contains information about the restore source.
state | Output only. The current database state. Enum type; one of: `STATE_UNSPECIFIED` (Not specified.); `CREATING` (The database is still being created. Operations on the database may fail with FAILED_PRECONDITION in this state.); `READY` (The database is fully created and ready for use.); `READY_OPTIMIZING` (The database is fully created and ready for use, but is still being optimized for performance and cannot handle full load. In this state, the database still references the backup it was restored from, preventing the backup from being deleted. When optimizations are complete, the full performance of the database will be restored, and the database will transition to READY state.)
versionRetentionPeriod | Output only. The period in which Cloud Spanner retains all versions of data for the database. This is the same as the value of version_retention_period database option set using UpdateDatabaseDdl. Defaults to 1 hour, if not set.

DatabaseRole

A Cloud Spanner database role.

Field | Description
---|---
name | Required. The name of the database role. Values are of the form `projects/<project>/instances/<instance>/databases/<database>/databaseRoles/<role>`.
Delete

Arguments to delete operations.

Field | Description
---|---
keySet | Required. The primary keys of the rows within table to delete. The primary keys must be specified in the order in which they appear in the `PRIMARY KEY()` clause of the table's equivalent DDL statement.
table | Required. The table whose rows will be deleted.

DerivedMetric

A message representing a derived metric.

Field | Description
---|---
denominator | The name of the denominator metric, e.g. "rows".
numerator | The name of the numerator metric, e.g. "latency".

DiagnosticMessage

A message representing the key visualizer diagnostic messages.

Field | Description
---|---
info | Information about this diagnostic message.
metric | The metric.
metricSpecific | Whether this message is specific only to the current metric. By default, diagnostics are shown for all metrics, regardless of which metric is currently selected in the UI. However, occasionally a metric will generate so many messages that the resulting visual clutter becomes overwhelming; in this case, setting this to true will show the diagnostic messages for that metric only when it is the currently selected metric.
severity | The severity of the diagnostic message. Enum type; one of: `SEVERITY_UNSPECIFIED` (Required default value.); `INFO` (Lowest severity level, "Info".); `WARNING` (Middle severity level, "Warning".); `ERROR` (Severity level signaling an error, "Error".); `FATAL` (Severity level signaling a non-recoverable error, "Fatal".)
shortMessage | The short message.
EncryptionConfig

Encryption configuration for a Cloud Spanner database.

Field | Description
---|---
kmsKeyName | The Cloud KMS key to be used for encrypting and decrypting the database. Values are of the form `projects/<project>/locations/<location>/keyRings/<key_ring>/cryptoKeys/<kms_key_name>`.

EncryptionInfo

Encryption information for a Cloud Spanner database or backup.

Field | Description
---|---
encryptionStatus | Output only. If present, the status of a recent encrypt/decrypt call on underlying data for this database or backup. Regardless of status, data is always encrypted at rest.
encryptionType | Output only. The type of encryption. Enum type; one of: `TYPE_UNSPECIFIED` (Encryption type was not specified, though data at rest remains encrypted.); `GOOGLE_DEFAULT_ENCRYPTION` (The data is encrypted at rest with a key that is fully managed by Google. No key version or status will be populated. This is the default state.); `CUSTOMER_MANAGED_ENCRYPTION` (The data is encrypted at rest with a key that is managed by the customer. `kms_key_version` will be populated with the active version of the key, and `encryption_status` may be populated.)
kmsKeyVersion | Output only. A Cloud KMS key version that is being used to protect the database or backup.

ExecuteBatchDmlRequest

The request for ExecuteBatchDml.

Field | Description
---|---
requestOptions | Common options for this request.
seqno | Required. A per-transaction sequence number used to identify this request. This field makes each request idempotent such that if the request is received multiple times, at most one will succeed. The sequence number must be monotonically increasing within the transaction. If a request arrives for the first time with an out-of-order sequence number, the transaction may be aborted. Replays of previously handled requests will yield the same response as the first execution.
statements[] | Required. The list of statements to execute in this batch. Statements are executed serially, such that the effects of statement `i` are visible to statement `i+1`. Each statement must be a DML statement. Execution stops at the first failed statement; the remaining statements are not executed.
transaction | Required. The transaction to use. Must be a read-write transaction. To protect against replays, single-use transactions are not supported. The caller must either supply an existing transaction ID or begin a new transaction.

ExecuteBatchDmlResponse

The response for ExecuteBatchDml. Contains a list of ResultSet messages, one for each DML statement that has successfully executed, in the same order as the statements in the request. If a statement fails, the status in the response body identifies the cause of the failure. To check for DML statements that failed, use the following approach: 1. Check the status in the response message. The google.rpc.Code enum value `OK` indicates that all statements were executed successfully. 2. If the status was not `OK`, check the number of result sets in the response. If the response contains `N` ResultSet messages, then statement `N+1` in the request failed. Example 1: Request: 5 DML statements, all executed successfully. Response: 5 ResultSet messages, with the status `OK`. Example 2: Request: 5 DML statements; the third statement has a syntax error. Response: 2 ResultSet messages, and a syntax error (`INVALID_ARGUMENT`) status. The number of ResultSet messages indicates that the third statement failed, and the fourth and fifth statements were not executed.

Field | Description
---|---
resultSets[] | One ResultSet for each statement in the request that ran successfully, in the same order as the statements in the request. Each ResultSet does not contain any rows. The ResultSetStats in each ResultSet contain the number of rows modified by the statement. Only the first ResultSet in the response contains valid ResultSetMetadata.
status | If all DML statements are executed successfully, the status is `OK`. Otherwise, the error status of the first failed statement.
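A sketch of what an ExecuteBatchDml response might look like when the third of five statements fails, assuming ResultSetStats carries a `rowCountExact` field and that status code 3 corresponds to `INVALID_ARGUMENT`; the row counts and error message are hypothetical.

```json
{
  "resultSets": [
    { "metadata": { "rowType": { "fields": [] } }, "stats": { "rowCountExact": "3" } },
    { "stats": { "rowCountExact": "1" } }
  ],
  "status": { "code": 3, "message": "Syntax error at statement 3" }
}
```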
ExecuteSqlRequest

The request for ExecuteSql and ExecuteStreamingSql.

Field | Description
---|---
paramTypes | It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type `BYTES` and values of type `STRING` both appear in params as JSON strings. In these cases, `param_types` can be used to specify the exact SQL type for some or all of the SQL statement parameters.
params | Parameter names and values that bind to placeholders in the SQL string. A parameter placeholder consists of the `@` character followed by the parameter name (for example, `@firstName`).
partitionToken | If present, results will be restricted to the specified partition previously created using PartitionQuery(). There must be an exact match for the values of fields common to this message and the PartitionQueryRequest message used to create this partition_token.
queryMode | Used to control the amount of debugging information returned in ResultSetStats. If partition_token is set, query_mode can only be set to QueryMode.NORMAL. Enum type; one of: `NORMAL` (The default mode. Only the statement results are returned.); `PLAN` (This mode returns only the query plan, without any results or execution statistics information.); `PROFILE` (This mode returns both the query plan and the execution statistics along with the results.)
queryOptions | Query optimizer configuration to use for the given query.
requestOptions | Common options for this request.
resumeToken | If this request is resuming a previously interrupted SQL statement execution, `resume_token` should be copied from the last PartialResultSet yielded before the interruption. Doing this enables the new SQL statement execution to resume where the last one left off.
seqno | A per-transaction sequence number used to identify this request. This field makes each request idempotent such that if the request is received multiple times, at most one will succeed. The sequence number must be monotonically increasing within the transaction. If a request arrives for the first time with an out-of-order sequence number, the transaction may be aborted. Replays of previously handled requests will yield the same response as the first execution. Required for DML statements. Ignored for queries.
sql | Required. The SQL string.
transaction | The transaction to use. For queries, if none is provided, the default is a temporary read-only transaction with strong concurrency. Standard DML statements require a read-write transaction. To protect against replays, single-use transactions are not supported. The caller must either supply an existing transaction ID or begin a new transaction. Partitioned DML requires an existing Partitioned DML transaction ID.
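A minimal sketch of an ExecuteSql request body with one bound parameter, assuming the TransactionSelector's `singleUse` option and the Type message's `code` field from the Spanner API; the table, column, and value are hypothetical. Note that INT64 values are passed as JSON strings.

```json
{
  "transaction": { "singleUse": { "readOnly": { "strong": true } } },
  "sql": "SELECT FirstName FROM Singers WHERE SingerId = @singerId",
  "params": { "singerId": "1" },
  "paramTypes": { "singerId": { "code": "INT64" } }
}
```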
Expr

Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec.

Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100"

Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email"

Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'"

Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)"

The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.

Field | Description
---|---
description | Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
expression | Textual representation of an expression in Common Expression Language syntax.
location | Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
title | Optional. Title for the expression, i.e. a short string describing its purpose. This can be used, e.g., in UIs which allow entering the expression.
Field

Message representing a single field of a struct.

Field | Description
---|---
name | The name of the field. For reads, this is the column name. For SQL queries, it is the column alias (e.g., "Word" in the query "SELECT 'hello' AS Word").
type | The type of the field.
FreeInstanceMetadata

Free instance specific metadata that is kept even after an instance has been upgraded, for tracking purposes.

Field | Description
---|---
expireBehavior | Specifies the expiration behavior of a free instance. The default of ExpireBehavior is `REMOVE_AFTER_GRACE_PERIOD`. Enum type; one of: `EXPIRE_BEHAVIOR_UNSPECIFIED` (Not specified.); `FREE_TO_PROVISIONED` (When the free instance expires, upgrade the instance to a provisioned instance.); `REMOVE_AFTER_GRACE_PERIOD` (When the free instance expires, disable the instance, and delete it after the grace period passes if it has not been upgraded.)
expireTime | Output only. Timestamp after which the instance will either be upgraded or scheduled for deletion after a grace period. ExpireBehavior is used to choose between upgrading or scheduling the free instance for deletion. This timestamp is set during the creation of a free instance.
upgradeTime | Output only. If present, the timestamp at which the free instance was upgraded to a provisioned instance.
GetDatabaseDdlResponse

The response for GetDatabaseDdl.

Field | Description
---|---
protoDescriptors | Proto descriptors stored in the database. Contains a protobuf-serialized google.protobuf.FileDescriptorSet. For more details, see protobuffer self description.
statements[] | A list of formatted DDL statements defining the schema of the database specified in the request.

GetIamPolicyRequest

Request message for the GetIamPolicy method.

Field | Description
---|---
options | OPTIONAL: A `GetPolicyOptions` object for specifying options to `GetIamPolicy`.

GetPolicyOptions

Encapsulates settings provided to GetIamPolicy.

Field | Description
---|---
requestedPolicyVersion | Optional. The maximum policy version that will be used to format the policy. Valid values are 0, 1, and 3. Requests specifying an invalid value will be rejected. Requests for policies with any conditional role bindings must specify version 3. Policies with no conditional role bindings may specify any valid value or leave the field unset. The policy in the response might use the policy version that you specified, or it might use a lower policy version. For example, if you specify version 3, but the policy has no conditional role bindings, the response uses version 1. To learn which resources support conditions in their IAM policies, see the IAM documentation.
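A minimal sketch of a GetIamPolicy request body that asks for the version-3 policy format, per the option above.

```json
{
  "options": { "requestedPolicyVersion": 3 }
}
```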
IndexedHotKey

A message representing a (sparse) collection of hot keys for specific key buckets.

Field | Description
---|---
sparseHotKeys | A (sparse) mapping from key bucket index to the index of the specific hot row key for that key bucket. The index of the hot row key can be translated to the actual row key via the ScanData.VisualizationData.indexed_keys repeated field.

IndexedKeyRangeInfos

A message representing a (sparse) collection of KeyRangeInfos for specific key buckets.

Field | Description
---|---
keyRangeInfos | A (sparse) mapping from key bucket index to the KeyRangeInfos for that key bucket.

Instance

An isolated set of Cloud Spanner resources on which databases can be hosted.

Field | Description
---|---
config | Required. The name of the instance's configuration. Values are of the form `projects/<project>/instanceConfigs/<configuration>`.
createTime | Output only. The time at which the instance was created.
displayName | Required. The descriptive name for this instance as it appears in UIs. Must be unique per project and between 4 and 30 characters in length.
endpointUris[] | Deprecated. This field is not populated.
freeInstanceMetadata | Free instance metadata. Only populated for free instances.
instanceType | The `InstanceType` of the current instance. Enum type; one of: `INSTANCE_TYPE_UNSPECIFIED` (Not specified.); `PROVISIONED` (Provisioned instances have dedicated resources, standard usage limits and support.); `FREE_INSTANCE` (Free instances provide no guarantee for dedicated resources; [node_count, processing_units] should be 0. They come with stricter usage limits and limited support.)
labels | Cloud Labels are a flexible and lightweight mechanism for organizing cloud resources into groups that reflect a customer's organizational needs and deployment strategies. Cloud Labels can be used to filter collections of resources, to control how resource metrics are aggregated, and as arguments to policy management rules (e.g. route, firewall, load balancing, etc.). Label keys must be between 1 and 63 characters long and must conform to the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`.
name | Required. A unique identifier for the instance, which cannot be changed after the instance is created. Values are of the form `projects/<project>/instances/<instance>`.
nodeCount | The number of nodes allocated to this instance. At most one of either node_count or processing_units should be present in the message. This may be zero in API responses for instances that are not yet in state `READY`.
processingUnits | The number of processing units allocated to this instance. At most one of processing_units or node_count should be present in the message. This may be zero in API responses for instances that are not yet in state `READY`.
state | Output only. The current instance state. For CreateInstance, the state must be either omitted or set to `CREATING`. Enum type; one of: `STATE_UNSPECIFIED` (Not specified.); `CREATING` (The instance is still being created. Resources may not be available yet, and operations such as database creation may not work.); `READY` (The instance is fully created and ready to do work such as creating databases.)
updateTime | Output only. The time at which the instance was most recently updated.
InstanceConfig

A possible configuration for a Cloud Spanner instance. Configurations define the geographic placement of nodes and their replication.

Field | Description
---|---
baseConfig | Base configuration name, e.g. projects//instanceConfigs/nam3, based on which this configuration is created. Only set for user managed configurations.
configType | Output only. Whether this instance config is a Google or user managed configuration. Enum type; one of: `TYPE_UNSPECIFIED` (Unspecified.); `GOOGLE_MANAGED` (Google managed configuration.); `USER_MANAGED` (User managed configuration.)
displayName | The name of this instance configuration as it appears in UIs.
etag | etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of an instance config from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform instance config updates in order to avoid race conditions: an etag is returned in the response which contains instance configs, and systems are expected to put that etag in the request to update instance config to ensure that their change will be applied to the same version of the instance config. If no etag is provided in the call to update instance config, then the existing instance config is overwritten blindly.
freeInstanceAvailability | Output only. Describes whether free instances are available to be created in this instance config. Enum type; one of: `FREE_INSTANCE_AVAILABILITY_UNSPECIFIED` (Not specified.); `AVAILABLE` (Indicates that free instances are available to be created in this instance config.); `UNSUPPORTED` (Indicates that free instances are not supported in this instance config.); `DISABLED` (Indicates that free instances are currently not available to be created in this instance config.); `QUOTA_EXCEEDED` (Indicates that additional free instances cannot be created in this instance config because the project has reached its limit of free instances.)
labels | Cloud Labels are a flexible and lightweight mechanism for organizing cloud resources into groups that reflect a customer's organizational needs and deployment strategies. Cloud Labels can be used to filter collections of resources, to control how resource metrics are aggregated, and as arguments to policy management rules (e.g. route, firewall, load balancing, etc.). Label keys must be between 1 and 63 characters long and must conform to the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`.
leaderOptions[] | Allowed values of the "default_leader" schema option for databases in instances that use this instance configuration.
name | A unique identifier for the instance configuration. Values are of the form `projects/<project>/instanceConfigs/<configuration>`.
optionalReplicas[] | Output only. The available optional replicas to choose from for user managed configurations. Populated for Google managed configurations.
reconciling | Output only. If true, the instance config is being created or updated. If false, there are no ongoing operations for the instance config.
replicas[] | The geographic placement of nodes in this instance configuration and their replication properties.
state | Output only. The current instance config state. Enum type; one of: `STATE_UNSPECIFIED` (Not specified.); `CREATING` (The instance config is still being created.); `READY` (The instance config is fully created and ready to be used to create instances.)

InstanceOperationProgress

Encapsulates progress-related information for a Cloud Spanner long-running instance operation.

Field | Description
---|---
endTime | If set, the time at which this operation failed or was completed successfully.
progressPercent | Percent completion of the operation. Values are between 0 and 100 inclusive.
startTime | Time the request was received.
KeyRange

KeyRange represents a range of rows in a table or index. A range has a start key and an end key. These keys can be open or closed, indicating if the range includes rows with that key. Keys are represented by lists, where the ith value in the list corresponds to the ith component of the table or index primary key. Individual values are encoded as described here.

For example, consider the following table definition:

CREATE TABLE UserEvents ( UserName STRING(MAX), EventDate STRING(10) ) PRIMARY KEY(UserName, EventDate);

The following keys name rows in this table: "Bob", "2014-09-23". Since the UserEvents table's PRIMARY KEY clause names two columns, each UserEvents key has two elements; the first is the UserName, and the second is the EventDate. Key ranges with multiple components are interpreted lexicographically by component using the table or index key's declared sort order. For example, the following range returns all events for user "Bob" that occurred in the year 2015:

"start_closed": ["Bob", "2015-01-01"] "end_closed": ["Bob", "2015-12-31"]

Start and end keys can omit trailing key components. This affects the inclusion and exclusion of rows that exactly match the provided key components: if the key is closed, then rows that exactly match the provided components are included; if the key is open, then rows that exactly match are not included. For example, the following range includes all events for "Bob" that occurred during and after the year 2000:

"start_closed": ["Bob", "2000-01-01"] "end_closed": ["Bob"]

The next example retrieves all events for "Bob":

"start_closed": ["Bob"] "end_closed": ["Bob"]

To retrieve events before the year 2000:

"start_closed": ["Bob"] "end_open": ["Bob", "2000-01-01"]

The following range includes all rows in the table:

"start_closed": [] "end_closed": []

This range returns all users whose UserName begins with any character from A to C:

"start_closed": ["A"] "end_open": ["D"]

This range returns all users whose UserName begins with B:

"start_closed": ["B"] "end_open": ["C"]

Key ranges honor column sort order. For example, suppose a table is defined as follows:

CREATE TABLE DescendingSortedTable ( Key INT64, ... ) PRIMARY KEY(Key DESC);

The following range retrieves all rows with key values between 1 and 100 inclusive:

"start_closed": ["100"] "end_closed": ["1"]

Note that 100 is passed as the start, and 1 is passed as the end, because Key is a descending column in the schema.

Field | Description
---|---
endClosed[] | If the end is closed, then the range includes all rows whose first `len(end_closed)` key columns exactly match `end_closed`.
endOpen[] | If the end is open, then the range excludes rows whose first `len(end_open)` key columns exactly match `end_open`.
startClosed[] | If the start is closed, then the range includes all rows whose first `len(start_closed)` key columns exactly match `start_closed`.
startOpen[] | If the start is open, then the range excludes rows whose first `len(start_open)` key columns exactly match `start_open`.
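The "events for Bob in 2015" range from the description above, written as a complete JSON KeyRange object using the camelCase field names from the table.

```json
{
  "startClosed": ["Bob", "2015-01-01"],
  "endClosed": ["Bob", "2015-12-31"]
}
```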
KeyRangeInfo

A message representing information for a key range (possibly one key).

Field | Description
---|---
contextValues[] | The list of context values for this key range.
endKeyIndex | The index of the end key in indexed_keys.
info | Information about this key range, for all metrics.
keysCount | The number of keys this range covers.
metric | The name of the metric, e.g. "latency".
startKeyIndex | The index of the start key in indexed_keys.
timeOffset | The time offset. This is the time since the start of the time interval.
unit | The unit of the metric. This is an unstructured field and will be mapped as is to the user.
value | The value of the metric.

KeyRangeInfos

A message representing a list of specific information for multiple key ranges.

Field | Description
---|---
infos[] | The list of individual KeyRangeInfos.
totalSize | The total size of the list of all KeyRangeInfos. This may be larger than the number of repeated messages above. If that is the case, this number may be used to determine how many are not being shown.

KeySet

KeySet defines a collection of Cloud Spanner keys and/or key ranges. All the keys are expected to be in the same table or index. The keys need not be sorted in any particular way. If the same key is specified multiple times in the set (for example if two ranges, two keys, or a key and a range overlap), Cloud Spanner behaves as if the key were only specified once.

Field | Description
---|---
all | For convenience, `all` can be set to `true` to indicate that this `KeySet` matches all keys in the table or index.
keys[] | A list of specific keys. Entries in `keys` should have exactly as many elements as there are columns in the primary or index key with which this `KeySet` is used.
ranges[] | A list of key ranges. See KeyRange for more information about key range specifications.
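A minimal sketch of a KeySet that mixes a specific key with a range, reusing the KeyRange shape above; the key values are hypothetical.

```json
{
  "keys": [["Alice", "2019-03-01"]],
  "ranges": [
    { "startClosed": ["Bob"], "endClosed": ["Bob"] }
  ]
}
```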
ListBackupOperationsResponse

The response for ListBackupOperations.

Field | Description
---|---
nextPageToken | `next_page_token` can be sent in a subsequent ListBackupOperations call to fetch more of the matching metadata.
operations[] | The list of matching backup long-running operations. Each operation's name will be prefixed by the backup's name. The operation's metadata field type `metadata.type_url` describes the type of the metadata.

ListBackupsResponse

The response for ListBackups.

Field | Description
---|---
backups[] | The list of matching backups. Backups returned are ordered by `create_time` in descending order, starting from the most recent `create_time`.
nextPageToken | `next_page_token` can be sent in a subsequent ListBackups call to fetch more of the matching backups.

ListDatabaseOperationsResponse

The response for ListDatabaseOperations.

Field | Description
---|---
nextPageToken | `next_page_token` can be sent in a subsequent ListDatabaseOperations call to fetch more of the matching metadata.
operations[] | The list of matching database long-running operations. Each operation's name will be prefixed by the database's name. The operation's metadata field type `metadata.type_url` describes the type of the metadata.

ListDatabaseRolesResponse

The response for ListDatabaseRoles.

Field | Description
---|---
databaseRoles[] | Database roles that matched the request.
nextPageToken | `next_page_token` can be sent in a subsequent ListDatabaseRoles call to fetch more of the matching roles.

ListDatabasesResponse

The response for ListDatabases.

Field | Description
---|---
databases[] | Databases that matched the request.
nextPageToken | `next_page_token` can be sent in a subsequent ListDatabases call to fetch more of the matching databases.

ListInstanceConfigOperationsResponse

The response for ListInstanceConfigOperations.

Field | Description
---|---
nextPageToken | `next_page_token` can be sent in a subsequent ListInstanceConfigOperations call to fetch more of the matching metadata.
operations[] | The list of matching instance config long-running operations. Each operation's name will be prefixed by the instance config's name. The operation's metadata field type `metadata.type_url` describes the type of the metadata.

ListInstanceConfigsResponse

The response for ListInstanceConfigs.

Field | Description
---|---
instanceConfigs[] | The list of requested instance configurations.
nextPageToken | `next_page_token` can be sent in a subsequent ListInstanceConfigs call to fetch more of the matching instance configurations.

ListInstancesResponse

The response for ListInstances.

Field | Description
---|---
instances[] | The list of requested instances.
nextPageToken | `next_page_token` can be sent in a subsequent ListInstances call to fetch more of the matching instances.
unreachable[] | The list of unreachable instances. It includes the names of instances whose metadata could not be retrieved within instance_deadline.

ListOperationsResponse

The response message for Operations.ListOperations.

Field | Description
---|---
nextPageToken | The standard List next-page token.
operations[] | A list of operations that matches the specified filter in the request.

ListScansResponse

The response for the ListScans method.

Field | Description
---|---
nextPageToken | Token to retrieve the next page of results, or empty if there are no more results in the list.
scans[] | Available scans based on the list query parameters.

ListSessionsResponse

The response for ListSessions.

Field | Description
---|---
nextPageToken | `next_page_token` can be sent in a subsequent ListSessions call to fetch more of the matching sessions.
sessions[] | The list of requested sessions.

LocalizedString

A message representing a user-facing string whose value may need to be translated before being displayed.

Field | Description
---|---
args | A map of arguments used when creating the localized message. Keys represent parameter names which may be used by the localized version when substituting dynamic values.
message | The canonical English version of this message. If no token is provided or the front-end has no message associated with the token, this text will be displayed as-is.
token | The token identifying the message, e.g. 'METRIC_READ_CPU'. This should be unique within the service.
Metric

A message representing the actual monitoring data, values for each key bucket over time, of a metric.

Field | Description
---|---
aggregation | The aggregation function used to aggregate each key bucket over time. Enum type; one of: `AGGREGATION_UNSPECIFIED` (Required default value.); `MAX` (Use the maximum of all values.); `SUM` (Use the sum of all values.)
category | The category of the metric, e.g. "Activity", "Alerts", "Reads", etc.
derived | The references to numerator and denominator metrics for a derived metric.
displayLabel | The displayed label of the metric.
hasNonzeroData | Whether the metric has any non-zero data.
hotValue | The value that is considered hot for the metric. On a per-metric basis, hotness signals high utilization and something that might potentially be a cause for concern by the end user. hot_value is used to calibrate and scale visual color scales.
indexedHotKeys | The (sparse) mapping from time index to an IndexedHotKey message, representing those time intervals for which there are hot keys.
indexedKeyRangeInfos | The (sparse) mapping from time interval index to an IndexedKeyRangeInfos message, representing those time intervals for which there are informational messages concerning key ranges.
info | Information about the metric.
matrix | The data for the metric as a matrix.
unit | The unit of the metric.
visible | Whether the metric is visible to the end user.

MetricMatrix

A message representing a matrix of floats.

Field | Description
---|---
rows[] | The rows of the matrix.

MetricMatrixRow

A message representing a row of a matrix of floats.

Field | Description
---|---
cols[] | The columns of the row.

Mutation

A modification to one or more Cloud Spanner rows. Mutations can be applied to a Cloud Spanner database by sending them in a Commit call.

Field | Description
---|---
delete | Delete rows from a table. Succeeds whether or not the named rows were present.
insert | Insert new rows in a table. If any of the rows already exist, the write or transaction fails with error `ALREADY_EXISTS`.
insertOrUpdate | Like insert, except that if the row already exists, then its column values are overwritten with the ones provided. Any column values not explicitly written are preserved. When using insert_or_update, just as when using insert, all `NOT NULL` columns in the table must be given a value.
replace | Like insert, except that if the row already exists, it is deleted, and the column values provided are inserted instead. Unlike insert_or_update, this means any values not explicitly written become `NULL`.
update | Update existing rows in a table. If any of the rows does not already exist, the transaction fails with error `NOT_FOUND`.
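A sketch of a mutations list combining `insertOrUpdate` and `delete`, using the Write shape (table/columns/values) assumed from the Spanner API together with the Delete message documented above; the table, columns, and key values are hypothetical.

```json
[
  {
    "insertOrUpdate": {
      "table": "Singers",
      "columns": ["SingerId", "FirstName"],
      "values": [["1", "Marc"]]
    }
  },
  {
    "delete": {
      "table": "Singers",
      "keySet": { "keys": [["2"]] }
    }
  }
]
```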
Operation

This resource represents a long-running operation that is the result of a network API call.

Field | Description
---|---
done | If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
error | The error result of the operation in case of failure or cancellation.
metadata | Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
name | The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
response | The normal response of the operation in case of success. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`.

OperationProgress

Encapsulates progress-related information for a Cloud Spanner long-running operation.

Field | Description
---|---
endTime | If set, the time at which this operation failed or was completed successfully.
progressPercent | Percent completion of the operation. Values are between 0 and 100 inclusive.
startTime | Time the request was received.

OptimizeRestoredDatabaseMetadata

Metadata type for the long-running operation used to track the progress of optimizations performed on a newly restored database. This long-running operation is automatically created by the system after the successful completion of a database restore, and cannot be cancelled.

Field | Description
---|---
name | Name of the restored database being optimized.
progress | The progress of the post-restore optimizations.

PartialResultSet

Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result sets, large rows, and large values, but are a little trickier to consume.

Field | Description
---|---
chunkedValue | If true, then the final value in values is chunked, and must be combined with more values from subsequent `PartialResultSet`s to obtain a complete field value.
metadata | Metadata about the result set, such as row type information. Only present in the first response.
resumeToken | Streaming calls might be interrupted for a variety of reasons, such as TCP connection loss. If this occurs, the stream of results can be resumed by re-sending the original request and including `resume_token`.
stats | Query plan and execution statistics for the statement that produced this streaming result set. These can be requested by setting ExecuteSqlRequest.query_mode and are sent only once with the last response in the stream. This field will also be present in the last response for DML statements.
values[] | A streamed result set consists of a stream of values, which might be split into many `PartialResultSet` messages to accommodate large rows and values.
Partition

Information returned for each partition returned in a PartitionResponse.

Field | Description
---|---
partitionToken | This token can be passed to Read, StreamingRead, ExecuteSql, or ExecuteStreamingSql requests to restrict the results to those identified by this partition token.

PartitionOptions

Options for a PartitionQueryRequest and PartitionReadRequest.

Field | Description
---|---
maxPartitions | Note: This hint is currently ignored by PartitionQuery and PartitionRead requests. The desired maximum number of partitions to return. For example, this may be set to the number of workers available. The default for this option is currently 10,000. The maximum value is currently 200,000. This is only a hint. The actual number of partitions returned may be smaller or larger than this maximum count request.
partitionSizeBytes | Note: This hint is currently ignored by PartitionQuery and PartitionRead requests. The desired data size for each partition generated. The default for this option is currently 1 GiB. This is only a hint. The actual size of each partition may be smaller or larger than this size request.

PartitionQueryRequest

The request for PartitionQuery.

Field | Description
---|---
paramTypes | It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type `BYTES` and values of type `STRING` both appear in params as JSON strings. In these cases, `param_types` can be used to specify the exact SQL type for some or all of the SQL statement parameters.
params | Parameter names and values that bind to placeholders in the SQL string. A parameter placeholder consists of the `@` character followed by the parameter name (for example, `@firstName`).
partitionOptions | Additional options that affect how many partitions are created.
sql | Required. The query request to generate partitions for. The request will fail if the query is not root partitionable. The query plan of a root partitionable query has a single distributed union operator. A distributed union operator conceptually divides one or more tables into multiple splits, remotely evaluates a subquery independently on each split, and then unions all results. This must not contain DML commands, such as INSERT, UPDATE, or DELETE. Use ExecuteStreamingSql with a PartitionedDml transaction for large, partition-friendly DML operations.
transaction | Read-only snapshot transactions are supported; read/write and single-use transactions are not.
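A minimal sketch of a PartitionQuery request body, assuming an existing read-only snapshot transaction; the SQL and the transaction ID are hypothetical. Note that `maxPartitions` is an int64 and is therefore passed as a JSON string.

```json
{
  "transaction": { "id": "EXAMPLE_TXN_ID" },
  "sql": "SELECT SingerId, FirstName FROM Singers",
  "partitionOptions": { "maxPartitions": "100" }
}
```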
PartitionReadRequest

The request for PartitionRead.

Field | Description
---|---
columns[] | The columns of table to be returned for each row matching this request.
index | If non-empty, the name of an index on table. This index is used instead of the table primary key when interpreting key_set and sorting result rows. See key_set for further information.
keySet | Required. `key_set` identifies the rows to be yielded.
partitionOptions | Additional options that affect how many partitions are created.
table | Required. The name of the table in the database to be read.
transaction | Read-only snapshot transactions are supported; read/write and single-use transactions are not.

PartitionResponse

The response for PartitionQuery or PartitionRead.

Field | Description
---|---
partitions[] | Partitions created by this request.
transaction | Transaction created by this request.

PlanNode

Node information for nodes appearing in a QueryPlan.plan_nodes.

Field | Description
---|---
childLinks[] | List of child node `index`es and their relationship to this parent.
displayName | The display name for the node.
executionStats | The execution statistics associated with the node, contained in a group of key-value pairs. Only present if the plan was returned as a result of a profile query. For example, number of executions, number of rows/time per execution, etc.
index | The `PlanNode`'s index in the node list.
kind | Used to determine the type of node. May be needed for visualizing different kinds of nodes differently. For example, if the node is a SCALAR node, it will have a condensed representation which can be used to directly embed a description of the node in its parent. Enum type; one of: `KIND_UNSPECIFIED` (Not specified.); `RELATIONAL` (Denotes a relational operator node in the expression tree. Relational operators represent iterative processing of rows during query execution. For example, a TableScan operation that reads rows from a table.); `SCALAR` (Denotes a scalar node in the expression tree. Scalar nodes represent non-iterable entities in the query plan. For example, constants or arithmetic operators appearing inside predicate expressions or references to column names.)
metadata | Attributes relevant to the node contained in a group of key-value pairs. For example, a Parameter Reference node could have the following information in its metadata: { "parameter_reference": "param1", "parameter_type": "array" }
shortRepresentation | Condensed representation for SCALAR nodes.
Policy

An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources. A Policy is a collection of bindings. A binding binds one or more members, or principals, to a single role. Principals can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role. For some types of Google Cloud resources, a binding can also specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both. To learn which resources support conditions in their IAM policies, see the IAM documentation.

JSON example: { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')" } } ], "etag": "BwWWja0YfJA=", "version": 3 }

YAML example: bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3

For a description of IAM and its features, see the IAM documentation.

Field | Description
---|---
bindings[] | Associates a list of `members`, or principals, with a `role`.
etag | `etag` is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other.
version | Specifies the format of the policy. Valid values are `0`, `1`, and `3`.
PrefixNode

A message representing a key prefix node in the key prefix hierarchy. For example, Bigtable keyspaces are lexicographically ordered mappings of keys to values, and keys often have a shared prefix structure where users use the keys to organize data (e.g. ///employee). In this case Keysight will possibly use one node for a company and reuse it for all employees that fall under the company, which improves legibility in the UI.

Field | Description
---|---
dataSourceNode | Whether this corresponds to a data_source name.
depth | The depth in the prefix hierarchy.
endIndex | The index of the end key bucket of the range that this node spans.
startIndex | The index of the start key bucket of the range that this node spans.
word | The string represented by the prefix node.

QueryOptions

Query optimizer configuration.

Field | Description
---|---
optimizerStatisticsPackage | An option to control the selection of optimizer statistics package. This parameter allows individual queries to use a different query optimizer statistics package. Specifying `latest` as a value instructs Cloud Spanner to use the latest generated statistics package.
optimizerVersion | An option to control the selection of optimizer version. This parameter allows individual queries to pick different query optimizer versions. Specifying `latest` as a value instructs Cloud Spanner to use the latest supported query optimizer version.

QueryPlan

Contains an ordered list of nodes appearing in the query plan.

Field | Description
---|---
planNodes[] | The nodes in the query plan. Plan nodes are returned in pre-order starting with the plan root. Each PlanNode's `id` corresponds to its index in `plan_nodes`.

ReadOnly

Message type to initiate a read-only transaction.

Field | Description
---|---
exactStaleness | Executes all reads at a timestamp that is `exact_staleness` old.
maxStaleness | Read data at a timestamp >= `NOW - max_staleness`.
minReadTimestamp | Executes all reads at a timestamp >= `min_read_timestamp`.
readTimestamp | Executes all reads at the given timestamp. Unlike other modes, reads at a specific timestamp are repeatable; the same read at the same timestamp always returns the same data. If the timestamp is in the future, the read will block until the specified timestamp, modulo the read's deadline. Useful for large-scale consistent reads such as mapreduces, or for coordinating many reads against a consistent snapshot of the data. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".
returnReadTimestamp | If true, the Cloud Spanner-selected read timestamp is included in the Transaction message that describes the transaction.
strong | Read at a timestamp where all previously committed transactions are visible.
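A minimal sketch of TransactionOptions selecting a stale read-only transaction using the fields above; the staleness value is hypothetical, and durations are assumed to be encoded as strings like "10s" in JSON.

```json
{
  "readOnly": {
    "exactStaleness": "10s",
    "returnReadTimestamp": true
  }
}
```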
ReadRequest

The request for Read and StreamingRead.

Field | Description
---|---
columns[] | Required. The columns of table to be returned for each row matching this request.
index | If non-empty, the name of an index on table. This index is used instead of the table primary key when interpreting key_set and sorting result rows. See key_set for further information.
keySet | Required. `key_set` identifies the rows to be yielded.
limit | If greater than zero, only the first `limit` rows are yielded. If `limit` is zero or is not set, there is no limit on the number of rows yielded.
partitionToken | If present, results will be restricted to the specified partition previously created using PartitionRead(). There must be an exact match for the values of fields common to this message and the PartitionReadRequest message used to create this partition_token.
requestOptions | Common options for this request.
resumeToken | If this request is resuming a previously interrupted read, `resume_token` should be copied from the last PartialResultSet yielded before the interruption. Doing this enables the new read to resume where the last read left off.
table | Required. The name of the table in the database to be read.
transaction | The transaction to use. If none is provided, the default is a temporary read-only transaction with strong concurrency.
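A minimal sketch of a Read request body combining the table, columns, KeySet, and limit fields above; the table and column names are hypothetical.

```json
{
  "table": "Singers",
  "columns": ["SingerId", "FirstName"],
  "keySet": { "all": true },
  "limit": 100
}
```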
ReadWrite
Message type to initiate a read-write transaction. Currently this transaction type has no options.Fields | |
---|---|
readLockMode |
Read lock mode for the transaction.
|
Enum type. Can be one of the following: | |
READ_LOCK_MODE_UNSPECIFIED |
Default value. If the value is not specified, the pessimistic read lock is used. |
PESSIMISTIC |
Pessimistic lock mode. Read locks are acquired immediately on read. |
OPTIMISTIC |
Optimistic lock mode. Locks for reads within the transaction are not acquired on read. Instead the locks are acquired on a commit to validate that read/queried data has not changed since the transaction started. |
ReplicaInfo
(No description provided)Fields | |
---|---|
defaultLeaderLocation |
If true, this location is designated as the default leader location where leader replicas are placed. See the region types documentation for more details.
|
location |
The location of the serving resources, e.g. "us-central1".
|
type |
The type of replica.
|
Enum type. Can be one of the following: | |
TYPE_UNSPECIFIED |
Not specified. |
READ_WRITE |
Read-write replicas support both reads and writes. These replicas: * Maintain a full copy of your data. * Serve reads. * Can vote whether to commit a write. * Participate in leadership election. * Are eligible to become a leader. |
READ_ONLY |
Read-only replicas only support reads (not writes). Read-only replicas: * Maintain a full copy of your data. * Serve reads. * Do not participate in voting to commit writes. * Are not eligible to become a leader. |
WITNESS |
Witness replicas don't support reads but do participate in voting to commit writes. Witness replicas: * Do not maintain a full copy of data. * Do not serve reads. * Vote whether to commit writes. * Participate in leader election but are not eligible to become leader. |
RequestOptions
Common request options for various APIs.Fields | |
---|---|
priority |
Priority for the request.
|
Enum type. Can be one of the following: | |
PRIORITY_UNSPECIFIED |
PRIORITY_UNSPECIFIED is equivalent to PRIORITY_HIGH . |
PRIORITY_LOW |
This specifies that the request is low priority. |
PRIORITY_MEDIUM |
This specifies that the request is medium priority. |
PRIORITY_HIGH |
This specifies that the request is high priority. |
requestTag |
A per-request tag which can be applied to queries or reads, used for statistics collection. Both request_tag and transaction_tag can be specified for a read or query that belongs to a transaction. This field is ignored for requests where it's not applicable (e.g. CommitRequest). Legal characters for
|
transactionTag |
A tag used for statistics collection about this transaction. Both request_tag and transaction_tag can be specified for a read or query that belongs to a transaction. The value of transaction_tag should be the same for all requests belonging to the same transaction. If this request doesn't belong to any transaction, transaction_tag will be ignored. Legal characters for
|
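A quick illustrative sketch (the tag values are arbitrary examples):

```python
# Sketch: RequestOptions attaching a priority and statistics tags.
request_options = {
    "priority": "PRIORITY_MEDIUM",
    "requestTag": "app=concert,env=dev,action=lookup",        # per-read/query tag
    "transactionTag": "app=concert,env=dev,action=checkout",  # repeat on every request in the transaction
}
```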
RestoreDatabaseEncryptionConfig
Encryption configuration for the restored database.Fields | |
---|---|
encryptionType |
Required. The encryption type of the restored database.
|
Enum type. Can be one of the following: | |
ENCRYPTION_TYPE_UNSPECIFIED |
Unspecified. Do not use. |
USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION |
This is the default option when encryption_config is not specified. |
GOOGLE_DEFAULT_ENCRYPTION |
Use Google default encryption. |
CUSTOMER_MANAGED_ENCRYPTION |
Use customer managed encryption. If specified, kms_key_name must contain a valid Cloud KMS key. |
kmsKeyName |
Optional. The Cloud KMS key that will be used to encrypt/decrypt the restored database. This field should be set only when encryption_type is
|
RestoreDatabaseMetadata
Metadata type for the long-running operation returned by RestoreDatabase.Fields | |
---|---|
backupInfo |
Information about the backup used to restore the database.
|
cancelTime |
The time at which cancellation of this operation was received. Operations.CancelOperation starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to
|
name |
Name of the database being created and restored to.
|
optimizeDatabaseOperationName |
If exists, the name of the long-running operation that will be used to track the post-restore optimization process to optimize the performance of the restored database, and remove the dependency on the restore source. The name is of the form
|
progress |
The progress of the RestoreDatabase operation.
|
sourceType |
The type of the restore source.
|
Enum type. Can be one of the following: | |
TYPE_UNSPECIFIED |
No restore associated. |
BACKUP |
A backup was used as the source of the restore. |
RestoreDatabaseRequest
The request for RestoreDatabase.Fields | |
---|---|
backup |
Name of the backup from which to restore. Values are of the form
|
databaseId |
Required. The id of the database to create and restore to. This database must not already exist. The
|
encryptionConfig |
Optional. An encryption configuration describing the encryption type and key resources in Cloud KMS used to encrypt/decrypt the database to restore to. If this field is not specified, the restored database will use the same encryption configuration as the backup by default, namely encryption_type =
|
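Putting the three fields together, here is a hedged sketch of a restore request body (project, instance, and backup names are placeholders):

```python
# Sketch: restore a backup into a new database named "restored-db".
restore_request = {
    "databaseId": "restored-db",  # must not already exist
    "backup": "projects/my-project/instances/my-instance/backups/my-backup",
    "encryptionConfig": {
        # making the documented default explicit: reuse the backup's encryption
        "encryptionType": "USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION",
    },
}
```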
RestoreInfo
Information about the database restore.Fields | |
---|---|
backupInfo |
Information about the backup used to restore the database. The backup may no longer exist.
|
sourceType |
The type of the restore source.
|
Enum type. Can be one of the following: | |
TYPE_UNSPECIFIED |
No restore associated. |
BACKUP |
A backup was used as the source of the restore. |
ResultSet
Results from Read or ExecuteSql.Fields | |
---|---|
metadata |
Metadata about the result set, such as row type information.
|
rows[] |
Each element in
|
stats |
Query plan and execution statistics for the SQL statement that produced this result set. These can be requested by setting ExecuteSqlRequest.query_mode. DML statements always produce stats containing the number of rows modified, unless executed using ExecuteSqlRequest.QueryMode.PLAN. Other fields may or may not be populated, based on the ExecuteSqlRequest.query_mode.
|
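To make the shape concrete, here is a hypothetical ResultSet for a two-column query; row values follow the type encodings in the Type table below (INT64 arrives as a JSON string), and stats appears only when requested via query_mode:

```python
# Sketch: a ResultSet for `SELECT SingerId, FirstName FROM Singers LIMIT 2`.
result_set = {
    "metadata": {
        "rowType": {
            "fields": [
                {"name": "SingerId", "type": {"code": "INT64"}},
                {"name": "FirstName", "type": {"code": "STRING"}},
            ]
        }
    },
    "rows": [
        ["1", "Marc"],       # one list per row, values in rowType field order
        ["2", "Catalina"],
    ],
    # "stats" would carry a queryPlan and/or queryStats when profiling is requested
}
```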
ResultSetMetadata
Metadata about a ResultSet or PartialResultSet.Fields | |
---|---|
rowType |
Indicates the field names and types for the rows in the result set. For example, a SQL query like
|
transaction |
If the read or SQL query began a transaction as a side-effect, the information about the new transaction is yielded here.
|
undeclaredParameters |
A SQL query can be parameterized. In PLAN mode, these parameters can be undeclared. This indicates the field names and types for those undeclared parameters in the SQL query. For example, a SQL query like
|
ResultSetStats
Additional statistics about a ResultSet or PartialResultSet.Fields | |
---|---|
queryPlan |
QueryPlan for the query associated with this result.
|
queryStats |
Aggregated statistics from the execution of the query. Only present when the query is profiled. For example, a query could return the statistics as follows: { "rows_returned": "3", "elapsed_time": "1.22 secs", "cpu_time": "1.19 secs" }
|
rowCountExact |
Standard DML returns an exact count of rows that were modified.
|
rowCountLowerBound |
Partitioned DML does not offer exactly-once semantics, so it returns a lower bound of the rows modified.
|
RollbackRequest
The request for Rollback.Fields | |
---|---|
transactionId |
Required. The transaction to roll back.
|
Scan
Scan is a structure which describes Cloud Key Visualizer scan information.Fields | |
---|---|
details |
Additional information provided by the implementer.
|
endTime |
The upper bound for when the scan is defined.
|
name |
The unique name of the scan, specific to the Database service implementing this interface.
|
scanData |
Output only. Cloud Key Visualizer scan data. Note, this field is not available to the ListScans method.
|
startTime |
The lower bound (inclusive) of the time range over which the scan is defined.
|
ScanData
ScanData contains Cloud Key Visualizer scan data used by the caller to construct a visualization.Fields | |
---|---|
data |
Cloud Key Visualizer scan data. The range of time this information covers is captured via the above time range fields. Note, this field is not available to the ListScans method.
|
endTime |
The upper bound for when the contained data is defined.
|
startTime |
The lower bound (inclusive) of the time range over which the contained data is defined.
|
Session
A session in the Cloud Spanner API.Fields | |
---|---|
approximateLastUseTime |
Output only. The approximate timestamp when the session is last used. It is typically earlier than the actual last use time.
|
createTime |
Output only. The timestamp when the session is created.
|
creatorRole |
The database role which created this session.
|
labels |
The labels for the session. * Label keys must be between 1 and 63 characters long and must conform to the following regular expression:
|
name |
Output only. The name of the session. This is always system-assigned.
|
SetIamPolicyRequest
Request message for SetIamPolicy
method.
Fields | |
---|---|
policy |
REQUIRED: The complete policy to be applied to the
|
ShortRepresentation
Condensed representation of a node and its subtree. Only present for SCALAR
PlanNode(s).
Fields | |
---|---|
description |
A string representation of the expression subtree rooted at this node.
|
subqueries |
A mapping of (subquery variable name) -> (subquery node id) for cases where the
|
Statement
A single DML statement.Fields | |
---|---|
paramTypes |
It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type
|
params |
Parameter names and values that bind to placeholders in the DML string. A parameter placeholder consists of the
|
sql |
Required. The DML string.
|
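The interplay of params and paramTypes is easiest to see in a sketch; because a JSON number cannot carry a full INT64 value, the value is passed as a string and its type is pinned explicitly:

```python
# Sketch: a parameterized DML statement with an explicitly typed parameter.
statement = {
    "sql": "UPDATE Singers SET LastName = @lastName WHERE SingerId = @singerId",
    "params": {"singerId": "42", "lastName": "Ng"},
    "paramTypes": {"singerId": {"code": "INT64"}},  # lastName can be inferred as STRING
}
```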
Status
The Status
type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status
message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
Fields | |
---|---|
code |
The status code, which should be an enum value of google.rpc.Code.
|
details[] |
A list of messages that carry the error details. There is a common set of message types for APIs to use.
|
message |
A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
|
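For example, a failed request might carry a Status like the following sketch (code 9 corresponds to google.rpc.Code.FAILED_PRECONDITION; the message text is illustrative):

```python
# Sketch: a Status payload for a read at a too-old timestamp.
status = {
    "code": 9,  # google.rpc.Code.FAILED_PRECONDITION
    "message": "Read timestamp is older than the version retention period.",
    "details": [],  # typed detail messages, when the API provides them
}
```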
StructType
StructType
defines the fields of a STRUCT type.
Fields | |
---|---|
fields[] |
The list of fields that make up this struct. Order is significant, because values of this struct type are represented as lists, where the order of field values matches the order of fields in the StructType. In turn, the order of fields matches the order of columns in a read request, or the order of fields in the
|
TestIamPermissionsRequest
Request message for TestIamPermissions
method.
Fields | |
---|---|
permissions[] |
REQUIRED: The set of permissions to check for 'resource'. Permissions with wildcards (such as '*', 'spanner.*', 'spanner.instances.*') are not allowed.
|
TestIamPermissionsResponse
Response message for TestIamPermissions
method.
Fields | |
---|---|
permissions[] |
A subset of
|
Transaction
A transaction.Fields | |
---|---|
id |
|
readTimestamp |
For snapshot read-only transactions, the read timestamp chosen for the transaction. Not returned by default: see TransactionOptions.ReadOnly.return_read_timestamp. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example:
|
TransactionOptions
Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one-transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction.

Transaction modes: Cloud Spanner supports three transaction modes:

1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry.
2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner selects a timestamp such that the read is guaranteed to see the effects of all transactions that committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details.
3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed.

For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read and write data in a single database. They may, however, read and write data in different tables within that database.

Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction.

Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns ABORTED, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves.

Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying.

Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error ABORTED. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, SELECT 1) prevents the transaction from becoming idle.

Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: strong (the default), bounded staleness, and exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below.

Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other: if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong.

Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness.

Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other: if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read; in the second phase, reads are executed at the negotiated timestamp. As a result of this two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp.

Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error FAILED_PRECONDITION. You can configure and extend the VERSION_RETENTION_PERIOD of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past.

Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL table-valued function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs.

Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions:

- The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table.
- The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows.
- Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement will be applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as UPDATE table SET column = column + 1 as it could be run multiple times against some rows.
- The partitions are committed automatically; there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows.
- Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql.
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all.

Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table.
Fields | |
---|---|
partitionedDml |
Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires
|
readOnly |
Transaction will not write. Authorization to begin a read-only transaction requires
|
readWrite |
Transaction may write. Authorization to begin a read-write transaction requires
|
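The three modes correspond to the three mutually exclusive fields above; here is a minimal sketch of each (the OPTIMISTIC lock mode is just one possible ReadWrite option):

```python
# 1. Locking read-write, here opting into optimistic read locks.
txn_read_write = {"readWrite": {"readLockMode": "OPTIMISTIC"}}

# 2. Snapshot read-only with a strong bound (also required for change streams).
txn_read_only = {"readOnly": {"strong": True, "returnReadTimestamp": True}}

# 3. Partitioned DML, which takes no further options.
txn_partitioned_dml = {"partitionedDml": {}}
```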
TransactionSelector
This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions.Fields | |
---|---|
begin |
Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction.
|
id |
Execute the read or SQL query in a previously-started transaction.
|
singleUse |
Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query.
|
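A sketch of the three selector forms (the id value is a hypothetical base64 string, since transaction ids are bytes in the proto3 JSON encoding):

```python
# Run in a temporary single-use transaction (cheapest for one-shot queries).
selector_single_use = {"singleUse": {"readOnly": {"strong": True}}}

# Begin a read-write transaction as a side effect of this request; its id is
# returned in ResultSetMetadata.transaction.
selector_begin = {"begin": {"readWrite": {}}}

# Continue a previously started transaction by id.
selector_id = {"id": "dHhuLWlkLWV4YW1wbGU="}  # hypothetical base64 value
```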
Type
Type
indicates the type of a Cloud Spanner value, as might be stored in a table cell or returned from an SQL query.
Fields | |
---|---|
arrayElementType |
If code == ARRAY, then
|
code |
Required. The TypeCode for this type.
|
Enum type. Can be one of the following: | |
TYPE_CODE_UNSPECIFIED |
Not specified. |
BOOL |
Encoded as JSON true or false . |
INT64 |
Encoded as string , in decimal format. |
FLOAT64 |
Encoded as number , or the strings "NaN" , "Infinity" , or "-Infinity" . |
TIMESTAMP |
Encoded as string in RFC 3339 timestamp format. The time zone must be present, and must be "Z" . If the schema has the column option allow_commit_timestamp=true , the placeholder string "spanner.commit_timestamp()" can be used to instruct the system to insert the commit timestamp associated with the transaction commit. |
DATE |
Encoded as string in RFC 3339 date format. |
STRING |
Encoded as string . |
BYTES |
Encoded as a base64-encoded string , as described in RFC 4648, section 4. |
ARRAY |
Encoded as list , where the list elements are represented according to array_element_type. |
STRUCT |
Encoded as list , where list element i is represented according to [struct_type.fields[i]][google.spanner.v1.StructType.fields]. |
NUMERIC |
Encoded as string , in decimal format or scientific notation format. Decimal format: [+-]Digits[.[Digits]] or +-.Digits Scientific notation: [+-]Digits[.[Digits]][ExponentIndicator[+-]Digits] or +-.Digits[ExponentIndicator[+-]Digits] (ExponentIndicator is "e" or "E" ) |
JSON |
Encoded as a JSON-formatted string as described in RFC 7159. The following rules are applied when parsing JSON input: - Whitespace characters are not preserved. - If a JSON object has duplicate keys, only the first key is preserved. - Members of a JSON object are not guaranteed to have their order preserved. - JSON array elements will have their order preserved. |
PROTO |
Encoded as a base64-encoded string , as described in RFC 4648, section 4. |
ENUM |
Encoded as string , in decimal format. |
protoTypeFqn |
If code == PROTO or code == ENUM, then
|
structType |
If code == STRUCT, then
|
typeAnnotation |
The TypeAnnotationCode that disambiguates SQL type that Spanner will use to represent values of this type during query processing. This is necessary for some type codes because a single TypeCode can be mapped to different SQL types depending on the SQL dialect. type_annotation typically is not needed to process the content of a value (it doesn't affect serialization) and clients can ignore it on the read path.
|
Enum type. Can be one of the following: | |
TYPE_ANNOTATION_CODE_UNSPECIFIED |
Not specified. |
PG_NUMERIC |
PostgreSQL compatible NUMERIC type. This annotation needs to be applied to Type instances having NUMERIC type code to specify that values of this type should be treated as PostgreSQL NUMERIC values. Currently this annotation is always needed for NUMERIC when a client interacts with PostgreSQL-enabled Spanner databases. |
PG_JSONB |
PostgreSQL compatible JSONB type. This annotation needs to be applied to Type instances having JSON type code to specify that values of this type should be treated as PostgreSQL JSONB values. Currently this annotation is always needed for JSON when a client interacts with PostgreSQL-enabled Spanner databases. |
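As a sketch of how code, arrayElementType, and structType nest, here is the Type tree for a hypothetical ARRAY<STRUCT<name STRING, age INT64>> column, assuming each StructType field carries a name and a type:

```python
# Sketch: Type describing ARRAY<STRUCT<name STRING, age INT64>>.
array_of_struct = {
    "code": "ARRAY",
    "arrayElementType": {
        "code": "STRUCT",
        "structType": {
            "fields": [
                {"name": "name", "type": {"code": "STRING"}},
                {"name": "age", "type": {"code": "INT64"}},
            ]
        },
    },
}
```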
UpdateDatabaseDdlMetadata
Metadata type for the operation returned by UpdateDatabaseDdl.Fields | |
---|---|
commitTimestamps[] |
Reports the commit timestamps of all statements that have succeeded so far, where
|
database |
The database being modified.
|
progress[] |
The progress of the UpdateDatabaseDdl operations. Currently, only index creation statements will have a continuously updating progress. For non-index creation statements,
|
statements[] |
For an update this list contains all the statements. For an individual statement, this list contains only that statement.
|
throttled |
Output only. When true, indicates that the operation is throttled, e.g., due to resource constraints. When resources become available the operation will resume and this field will be false again.
|
UpdateDatabaseDdlRequest
Enqueues the given DDL statements to be applied, in order but not necessarily all at once, to the database schema at some point (or points) in the future. The server checks that the statements are executable (syntactically valid, name tables that exist, etc.) before enqueueing them, but they may still fail upon later execution (e.g., if a statement from another batch of statements is applied first and it conflicts in some way, or if there is some data-related problem like aNULL
value in a column to which NOT NULL
would be added). If a statement fails, all subsequent statements in the batch are automatically cancelled. Each batch of statements is assigned a name which can be used with the Operations API to monitor progress. See the operation_id field for more details.
Fields | |
---|---|
operationId |
If empty, the new update request is assigned an automatically-generated operation ID. Otherwise,
|
protoDescriptors |
Optional. Proto descriptors used by CREATE/ALTER PROTO BUNDLE statements. Contains a protobuf-serialized google.protobuf.FileDescriptorSet. To generate it, install and run
|
statements[] |
Required. DDL statements to be applied to the database.
|
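A hedged sketch of a batch of DDL statements; statements run in order, and the operationId (hypothetical here) names the resulting long-running operation for later polling:

```python
# Sketch: enqueue two DDL statements as one named batch.
update_ddl_request = {
    "statements": [
        "CREATE TABLE Singers ("
        "  SingerId INT64 NOT NULL,"
        "  FirstName STRING(1024)"
        ") PRIMARY KEY (SingerId)",
        "CREATE INDEX SingersByFirstName ON Singers(FirstName)",
    ],
    "operationId": "add-singers-schema",  # optional; auto-generated if empty
}
```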
UpdateInstanceConfigMetadata
Metadata type for the operation returned by UpdateInstanceConfig.Fields | |
---|---|
cancelTime |
The time at which this operation was cancelled.
|
instanceConfig |
The desired instance config after updating.
|
progress |
The progress of the UpdateInstanceConfig operation.
|
UpdateInstanceConfigRequest
The request for UpdateInstanceConfigRequest.Fields | |
---|---|
instanceConfig |
Required. The user instance config to update, which must always include the instance config name. Otherwise, only fields mentioned in update_mask need be included. To prevent conflicts of concurrent updates, etag can be used.
|
updateMask |
Required. A mask specifying which fields in InstanceConfig should be updated. The field mask must always be specified; this prevents any future fields in InstanceConfig from being erased accidentally by clients that do not know about them. Only display_name and labels can be updated.
|
validateOnly |
An option to validate, but not actually execute, a request, and provide the same response.
|
UpdateInstanceMetadata
Metadata type for the operation returned by UpdateInstance.Fields | |
---|---|
cancelTime |
The time at which this operation was cancelled. If set, this operation is in the process of undoing itself (which is guaranteed to succeed) and cannot be cancelled again.
|
endTime |
The time at which this operation failed or was completed successfully.
|
instance |
The desired end state of the update.
|
startTime |
The time at which the UpdateInstance request was received.
|
UpdateInstanceRequest
The request for UpdateInstance.Fields | |
---|---|
fieldMask |
Required. A mask specifying which fields in Instance should be updated. The field mask must always be specified; this prevents any future fields in Instance from being erased accidentally by clients that do not know about them.
|
instance |
Required. The instance to update, which must always include the instance name. Otherwise, only fields mentioned in field_mask need be included.
|
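For instance, a sketch that changes only one field of a hypothetical instance; per the FieldMask JSON encoding, the mask is a comma-separated string of field paths, and nodeCount is assumed here purely for illustration:

```python
# Sketch: resize an instance; only masked fields are read from `instance`.
update_instance_request = {
    "instance": {
        "name": "projects/my-project/instances/my-instance",  # always required
        "nodeCount": 3,  # assumed Instance field, for illustration only
    },
    "fieldMask": "nodeCount",
}
```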
VisualizationData
(No description provided)Fields | |
---|---|
dataSourceEndToken |
The token signifying the end of a data_source.
|
dataSourceSeparatorToken |
The token delimiting a datasource name from the rest of a key in a data_source.
|
diagnosticMessages[] |
The list of messages (info, alerts, ...)
|
endKeyStrings[] |
We discretize the entire keyspace into buckets. Each bucket covers an inclusive key range from k(i) to k(n); in this case k(n) is the end key for that range. end_key_string is the collection of all such end keys.
|
hasPii |
Whether this scan contains PII.
|
indexedKeys[] |
Keys of key ranges that contribute significantly to a given metric. These can be thought of as heavy hitters.
|
keySeparator |
The token delimiting the key prefixes.
|
keyUnit |
The unit for the key: e.g. 'key' or 'chunk'.
|
Enum type. Can be one of the following: | |
KEY_UNIT_UNSPECIFIED |
Required default value |
KEY |
Each entry corresponds to one key |
CHUNK |
Each entry corresponds to a chunk of keys |
metrics[] |
The list of data objects for each metric.
|
prefixNodes[] |
The list of extracted key prefix nodes used in the key prefix hierarchy.
|
Write
Arguments to insert, update, insert_or_update, and replace operations.Fields | |
---|---|
columns[] |
The names of the columns in table to be written. The list of columns must contain enough columns to allow Cloud Spanner to derive values for all primary key columns in the row(s) to be modified.
|
table |
Required. The table whose rows will be written.
|
values[] |
The values to be written.
|
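A sketch of a Write as it might appear inside a Mutation's insert field of a Commit request (table and values are hypothetical; INT64 primary keys travel as JSON strings):

```python
# Sketch: insert two rows into a hypothetical Singers table.
mutation = {
    "insert": {
        "table": "Singers",
        "columns": ["SingerId", "FirstName", "LastName"],
        "values": [
            ["1", "Marc", "Richards"],    # one list per row, in column order
            ["2", "Catalina", "Smith"],
        ],
    }
}
```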