TableAsync(client: google.cloud.bigtable.data._async.client.BigtableDataClientAsync, instance_id: str, table_id: str, app_profile_id: typing.Optional[str] = None, *, default_read_rows_operation_timeout: float = 600, default_read_rows_attempt_timeout: float | None = 20, default_mutate_rows_operation_timeout: float = 600, default_mutate_rows_attempt_timeout: float | None = 60, default_operation_timeout: float = 60, default_attempt_timeout: float | None = 20, default_read_rows_retryable_errors: typing.Sequence[type[Exception]] = (<class 'google.api_core.exceptions.DeadlineExceeded'>, <class 'google.api_core.exceptions.ServiceUnavailable'>, <class 'google.api_core.exceptions.Aborted'>), default_mutate_rows_retryable_errors: typing.Sequence[type[Exception]] = (<class 'google.api_core.exceptions.DeadlineExceeded'>, <class 'google.api_core.exceptions.ServiceUnavailable'>), default_retryable_errors: typing.Sequence[type[Exception]] = (<class 'google.api_core.exceptions.DeadlineExceeded'>, <class 'google.api_core.exceptions.ServiceUnavailable'>))
Main Data API surface
A Table object maintains table_id and app_profile_id context, and passes them with each call
Methods
TableAsync
TableAsync(client: google.cloud.bigtable.data._async.client.BigtableDataClientAsync, instance_id: str, table_id: str, app_profile_id: typing.Optional[str] = None, *, default_read_rows_operation_timeout: float = 600, default_read_rows_attempt_timeout: float | None = 20, default_mutate_rows_operation_timeout: float = 600, default_mutate_rows_attempt_timeout: float | None = 60, default_operation_timeout: float = 60, default_attempt_timeout: float | None = 20, default_read_rows_retryable_errors: typing.Sequence[type[Exception]] = (<class 'google.api_core.exceptions.DeadlineExceeded'>, <class 'google.api_core.exceptions.ServiceUnavailable'>, <class 'google.api_core.exceptions.Aborted'>), default_mutate_rows_retryable_errors: typing.Sequence[type[Exception]] = (<class 'google.api_core.exceptions.DeadlineExceeded'>, <class 'google.api_core.exceptions.ServiceUnavailable'>), default_retryable_errors: typing.Sequence[type[Exception]] = (<class 'google.api_core.exceptions.DeadlineExceeded'>, <class 'google.api_core.exceptions.ServiceUnavailable'>))
Initialize a Table instance
Must be created within an async context (running event loop)
Exceptions

| Type | Description |
|---|---|
| RuntimeError | if called outside of an async context (no running event loop) |
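For illustration, a minimal sketch of creating a client and table inside a running event loop; the project, instance, and table ids are placeholders, and the deferred import assumes the google-cloud-bigtable async data API is installed:

```python
import asyncio

async def connect_example() -> None:
    # Import deferred so this sketch can be defined without the client
    # library installed; assumes the google-cloud-bigtable async data API.
    from google.cloud.bigtable.data import BigtableDataClientAsync

    # Both the client and the table must be created inside a running event
    # loop; constructing them outside one raises RuntimeError.
    async with BigtableDataClientAsync(project="my-project") as client:
        async with client.get_table("my-instance", "my-table") as table:
            print(table.table_id)

# Run with: asyncio.run(connect_example())
```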
__aenter__
__aenter__()
Implement async context manager protocol
Ensures the registration task has time to run, so that gRPC channels will be warmed for the specified instance
__aexit__
__aexit__(exc_type, exc_val, exc_tb)
Implement async context manager protocol
Unregisters this instance from the client, so that gRPC channels will no longer be warmed
bulk_mutate_rows
bulk_mutate_rows(
mutation_entries: list[google.cloud.bigtable.data.mutations.RowMutationEntry],
*,
operation_timeout: (
float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.MUTATE_ROWS,
attempt_timeout: (
float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.MUTATE_ROWS,
retryable_errors: typing.Union[
typing.Sequence[type[Exception]],
google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
] = TABLE_DEFAULT.MUTATE_ROWS
)
Applies mutations for multiple rows in a single batched request.
Each individual RowMutationEntry is applied atomically, but separate entries may be applied in arbitrary order (even for entries targeting the same row). In total, the entries can contain at most 100,000 individual mutations.
Idempotent entries (i.e., entries whose mutations all have explicit timestamps) will be retried on failure. Non-idempotent entries will not, and will be reported in a raised exception group.
Exceptions

| Type | Description |
|---|---|
| MutationsExceptionGroup | if one or more mutations fail; contains details about any failed entries in .exceptions |
| ValueError | if invalid arguments are provided |
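A hedged sketch of a batched write; the ids, family, and qualifier are placeholders, and the explicit timestamps are what make each entry idempotent and therefore retryable:

```python
import asyncio
import time

async def bulk_write_example() -> None:
    # Deferred imports; assumes the google-cloud-bigtable async data API.
    from google.cloud.bigtable.data import (
        BigtableDataClientAsync,
        RowMutationEntry,
        SetCell,
    )

    async with BigtableDataClientAsync(project="my-project") as client:
        async with client.get_table("my-instance", "my-table") as table:
            # Millisecond-granularity timestamp, in microseconds. Explicit
            # timestamps make each entry idempotent, so failures are retried.
            ts_us = int(time.time() * 1000) * 1000
            entries = [
                RowMutationEntry(
                    row_key=f"user#{i}".encode(),
                    mutations=[SetCell("stats", "score", 100 + i, timestamp_micros=ts_us)],
                )
                for i in range(3)
            ]
            await table.bulk_mutate_rows(entries)

# Run with: asyncio.run(bulk_write_example())
```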
check_and_mutate_row
check_and_mutate_row(
row_key: str | bytes,
predicate: google.cloud.bigtable.data.row_filters.RowFilter | None,
*,
true_case_mutations: typing.Optional[
typing.Union[
google.cloud.bigtable.data.mutations.Mutation,
list[google.cloud.bigtable.data.mutations.Mutation],
]
] = None,
false_case_mutations: typing.Optional[
typing.Union[
google.cloud.bigtable.data.mutations.Mutation,
list[google.cloud.bigtable.data.mutations.Mutation],
]
] = None,
operation_timeout: (
float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.DEFAULT
) -> bool
Mutates a row atomically based on the output of a predicate filter
Non-idempotent operation: will not be retried
Exceptions

| Type | Description |
|---|---|
| google.api_core.exceptions.GoogleAPIError | exceptions from the gRPC call |
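A sketch of a conditional write, assuming `ColumnQualifierRegexFilter` is available in the data client's row_filters module; ids, family, and qualifiers are placeholders:

```python
import asyncio

async def conditional_write_example() -> bool:
    # Deferred imports; assumes the google-cloud-bigtable async data API.
    from google.cloud.bigtable.data import BigtableDataClientAsync, SetCell
    from google.cloud.bigtable.data.row_filters import ColumnQualifierRegexFilter

    async with BigtableDataClientAsync(project="my-project") as client:
        async with client.get_table("my-instance", "my-table") as table:
            # If any cell in the row matches the predicate, the true-case
            # mutation is applied; otherwise the false-case mutation is.
            # The return value reports whether the predicate matched.
            matched = await table.check_and_mutate_row(
                b"user#1",
                predicate=ColumnQualifierRegexFilter(b"status"),
                true_case_mutations=SetCell("stats", "status", b"active"),
                false_case_mutations=SetCell("stats", "status", b"new"),
            )
            return matched

# Run with: asyncio.run(conditional_write_example())
```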
close
close()
Called to close the Table instance and release any resources held by it.
mutate_row
mutate_row(
row_key: str | bytes,
mutations: (
list[google.cloud.bigtable.data.mutations.Mutation]
| google.cloud.bigtable.data.mutations.Mutation
),
*,
operation_timeout: (
float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.DEFAULT,
attempt_timeout: (
float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.DEFAULT,
retryable_errors: typing.Union[
typing.Sequence[type[Exception]],
google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
] = TABLE_DEFAULT.DEFAULT
)
Mutates a row atomically.
Cells already present in the row are left unchanged unless explicitly changed by a mutation.
Idempotent operations (i.e., all mutations have an explicit timestamp) will be retried on server failure. Non-idempotent operations will not.
Exceptions

| Type | Description |
|---|---|
| google.api_core.exceptions.DeadlineExceeded | raised after the operation timeout expires; chained with a RetryExceptionGroup containing all GoogleAPIError exceptions from any retries that failed |
| google.api_core.exceptions.GoogleAPIError | raised on non-idempotent operations that cannot be safely retried |
| ValueError | if invalid arguments are provided |
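A minimal single-row write sketch; ids, family, and qualifier are placeholders, and the explicit timestamp makes the mutation idempotent:

```python
import asyncio
import time

async def single_write_example() -> None:
    # Deferred imports; assumes the google-cloud-bigtable async data API.
    from google.cloud.bigtable.data import BigtableDataClientAsync, SetCell

    async with BigtableDataClientAsync(project="my-project") as client:
        async with client.get_table("my-instance", "my-table") as table:
            # An explicit millisecond-granularity timestamp (in microseconds)
            # makes this mutation idempotent, so it is retried on failure.
            ts_us = int(time.time() * 1000) * 1000
            await table.mutate_row(
                b"user#1",
                SetCell("stats", "last_login", b"2024-01-01", timestamp_micros=ts_us),
            )

# Run with: asyncio.run(single_write_example())
```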
mutations_batcher
mutations_batcher(
*,
flush_interval: float | None = 5,
flush_limit_mutation_count: int | None = 1000,
flush_limit_bytes: int = 20971520,
flow_control_max_mutation_count: int = 100000,
flow_control_max_bytes: int = 104857600,
batch_operation_timeout: (
float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.MUTATE_ROWS,
batch_attempt_timeout: (
float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.MUTATE_ROWS,
batch_retryable_errors: typing.Union[
typing.Sequence[type[Exception]],
google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
] = TABLE_DEFAULT.MUTATE_ROWS
) -> google.cloud.bigtable.data._async.mutations_batcher.MutationsBatcherAsync
Returns a new mutations batcher instance.
Can be used to iteratively add mutations that are flushed as a group, to avoid excess network calls
Returns

| Type | Description |
|---|---|
| MutationsBatcherAsync | a MutationsBatcherAsync context manager that can batch requests |
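A sketch of batcher usage; ids, family, and qualifier are placeholders. Mutations accumulate in the batcher and are flushed according to the flush settings, with a final flush on context exit:

```python
import asyncio

async def batcher_example() -> None:
    # Deferred imports; assumes the google-cloud-bigtable async data API.
    from google.cloud.bigtable.data import (
        BigtableDataClientAsync,
        RowMutationEntry,
        SetCell,
    )

    async with BigtableDataClientAsync(project="my-project") as client:
        async with client.get_table("my-instance", "my-table") as table:
            # Entries are flushed in batches based on flush_interval and the
            # flush_limit settings; any remainder is flushed on exit.
            async with table.mutations_batcher(flush_interval=2) as batcher:
                for i in range(100):
                    await batcher.append(
                        RowMutationEntry(f"row#{i}", SetCell("stats", "n", i))
                    )

# Run with: asyncio.run(batcher_example())
```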
read_modify_write_row
read_modify_write_row(
row_key: str | bytes,
rules: (
google.cloud.bigtable.data.read_modify_write_rules.ReadModifyWriteRule
| list[google.cloud.bigtable.data.read_modify_write_rules.ReadModifyWriteRule]
),
*,
operation_timeout: (
float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.DEFAULT
) -> google.cloud.bigtable.data.row.Row
Reads and modifies a row atomically according to input ReadModifyWriteRules, and returns the contents of all modified cells
The new value for the timestamp is the greater of the existing timestamp or the current server time.
Non-idempotent operation: will not be retried
Exceptions

| Type | Description |
|---|---|
| google.api_core.exceptions.GoogleAPIError | exceptions from the gRPC call |
| ValueError | if invalid arguments are provided |

Returns

| Type | Description |
|---|---|
| Row | a Row containing cell data that was modified as part of the operation |
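A sketch of an atomic counter increment; ids, family, and qualifier are placeholders, and the assumption that the returned Row supports integer indexing into its cells follows the data client's list-like Row interface:

```python
import asyncio

async def increment_example() -> int:
    # Deferred imports; assumes the google-cloud-bigtable async data API.
    from google.cloud.bigtable.data import BigtableDataClientAsync
    from google.cloud.bigtable.data.read_modify_write_rules import IncrementRule

    async with BigtableDataClientAsync(project="my-project") as client:
        async with client.get_table("my-instance", "my-table") as table:
            # Atomically add 1 to a 64-bit big-endian counter cell; the
            # returned Row contains only the cells the rules modified.
            row = await table.read_modify_write_row(
                b"user#1", IncrementRule("stats", "visits", 1)
            )
            return int.from_bytes(row[0].value, "big", signed=True)

# Run with: asyncio.run(increment_example())
```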
read_row
read_row(
row_key: str | bytes,
*,
row_filter: typing.Optional[
google.cloud.bigtable.data.row_filters.RowFilter
] = None,
operation_timeout: (
float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.READ_ROWS,
attempt_timeout: (
float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.READ_ROWS,
retryable_errors: typing.Union[
typing.Sequence[type[Exception]],
google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
] = TABLE_DEFAULT.READ_ROWS
) -> google.cloud.bigtable.data.row.Row | None
Read a single row from the table, based on the specified key.
Failed requests within operation_timeout will be retried based on the retryable_errors list until operation_timeout is reached.
Exceptions

| Type | Description |
|---|---|
| google.api_core.exceptions.DeadlineExceeded | raised after the operation timeout expires; chained with a RetryExceptionGroup containing GoogleAPIError exceptions from any retries that failed |
| google.api_core.exceptions.GoogleAPIError | raised if the request encounters an unrecoverable error |

Returns

| Type | Description |
|---|---|
| Row \| None | a Row object if the row exists, otherwise None |
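A sketch of a point read; the ids and row key are placeholders. Note the None check, since a missing row is not an error:

```python
import asyncio

async def read_one_example() -> None:
    # Deferred import; assumes the google-cloud-bigtable async data API.
    from google.cloud.bigtable.data import BigtableDataClientAsync

    async with BigtableDataClientAsync(project="my-project") as client:
        async with client.get_table("my-instance", "my-table") as table:
            row = await table.read_row(b"user#1")
            if row is None:
                print("row not found")
            else:
                # A Row is iterable over its cells.
                for cell in row:
                    print(cell.family, cell.qualifier, cell.value)

# Run with: asyncio.run(read_one_example())
```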
read_rows
read_rows(
query: google.cloud.bigtable.data.read_rows_query.ReadRowsQuery,
*,
operation_timeout: (
float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.READ_ROWS,
attempt_timeout: (
float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.READ_ROWS,
retryable_errors: typing.Union[
typing.Sequence[type[Exception]],
google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
] = TABLE_DEFAULT.READ_ROWS
) -> list[google.cloud.bigtable.data.row.Row]
Read a set of rows from the table, based on the specified query. Returns results as a list of Row objects when the request is complete. For streamed results, use read_rows_stream.
Failed requests within operation_timeout will be retried based on the retryable_errors list until operation_timeout is reached.
Exceptions

| Type | Description |
|---|---|
| google.api_core.exceptions.DeadlineExceeded | raised after the operation timeout expires; chained with a RetryExceptionGroup containing GoogleAPIError exceptions from any retries that failed |
| google.api_core.exceptions.GoogleAPIError | raised if the request encounters an unrecoverable error |

Returns

| Type | Description |
|---|---|
| list[Row] | a list of Rows returned by the query |
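A sketch of a bounded range scan; ids, key range, and the filter choice are placeholders, assuming `RowRange` and `CellsColumnLimitFilter` are exported as in the data client:

```python
import asyncio

async def read_many_example() -> None:
    # Deferred imports; assumes the google-cloud-bigtable async data API.
    from google.cloud.bigtable.data import (
        BigtableDataClientAsync,
        ReadRowsQuery,
        RowRange,
    )
    from google.cloud.bigtable.data.row_filters import CellsColumnLimitFilter

    async with BigtableDataClientAsync(project="my-project") as client:
        async with client.get_table("my-instance", "my-table") as table:
            # Fetch up to 10 rows in the key range, keeping only the most
            # recent cell per column.
            query = ReadRowsQuery(
                row_ranges=RowRange(start_key=b"user#", end_key=b"user$"),
                limit=10,
                row_filter=CellsColumnLimitFilter(1),
            )
            rows = await table.read_rows(query)
            print(len(rows))

# Run with: asyncio.run(read_many_example())
```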
read_rows_sharded
read_rows_sharded(
sharded_query: ShardedQuery,
*,
operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS,
attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS,
retryable_errors: (
Sequence[type[Exception]] | TABLE_DEFAULT
) = TABLE_DEFAULT.READ_ROWS
) -> list[Row]
Runs a sharded query in parallel, then returns the results in a single list. Results will be returned in the order of the input queries.
This function is intended to be run on the results of a query.shard() call. For example::
table_shard_keys = await table.sample_row_keys()
query = ReadRowsQuery(...)
shard_queries = query.shard(table_shard_keys)
results = await table.read_rows_sharded(shard_queries)
Exceptions

| Type | Description |
|---|---|
| ShardedReadRowsExceptionGroup | if any of the queries failed |
| ValueError | if sharded_query is empty |

Returns

| Type | Description |
|---|---|
| list[Row] | a list of Rows returned by the queries |
read_rows_stream
read_rows_stream(
query: google.cloud.bigtable.data.read_rows_query.ReadRowsQuery,
*,
operation_timeout: (
float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.READ_ROWS,
attempt_timeout: (
float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.READ_ROWS,
retryable_errors: typing.Union[
typing.Sequence[type[Exception]],
google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
] = TABLE_DEFAULT.READ_ROWS
) -> typing.AsyncIterable[google.cloud.bigtable.data.row.Row]
Read a set of rows from the table, based on the specified query. Returns an iterator to asynchronously stream back row data.
Failed requests within operation_timeout will be retried based on the retryable_errors list until operation_timeout is reached.
Exceptions

| Type | Description |
|---|---|
| google.api_core.exceptions.DeadlineExceeded | raised after the operation timeout expires; chained with a RetryExceptionGroup containing GoogleAPIError exceptions from any retries that failed |
| google.api_core.exceptions.GoogleAPIError | raised if the request encounters an unrecoverable error |

Returns

| Type | Description |
|---|---|
| AsyncIterable[Row] | an asynchronous iterator that yields rows returned by the query |
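A sketch of streamed reads; the ids and limit are placeholders. Rows are consumed as they arrive instead of buffering the whole result set:

```python
import asyncio

async def stream_example() -> None:
    # Deferred imports; assumes the google-cloud-bigtable async data API.
    from google.cloud.bigtable.data import BigtableDataClientAsync, ReadRowsQuery

    async with BigtableDataClientAsync(project="my-project") as client:
        async with client.get_table("my-instance", "my-table") as table:
            # read_rows_stream is awaited to start the operation, then
            # rows are yielded incrementally as they arrive.
            stream = await table.read_rows_stream(ReadRowsQuery(limit=100))
            async for row in stream:
                print(row.row_key)

# Run with: asyncio.run(stream_example())
```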
row_exists
row_exists(
row_key: str | bytes,
*,
operation_timeout: (
float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.READ_ROWS,
attempt_timeout: (
float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
) = TABLE_DEFAULT.READ_ROWS,
retryable_errors: typing.Union[
typing.Sequence[type[Exception]],
google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
] = TABLE_DEFAULT.READ_ROWS
) -> bool
Return a boolean indicating whether the specified row exists in the table. Uses the filter chain: limit cells per row = 1, strip value.
Exceptions

| Type | Description |
|---|---|
| google.api_core.exceptions.DeadlineExceeded | raised after the operation timeout expires; chained with a RetryExceptionGroup containing GoogleAPIError exceptions from any retries that failed |
| google.api_core.exceptions.GoogleAPIError | raised if the request encounters an unrecoverable error |

Returns

| Type | Description |
|---|---|
| bool | a bool indicating whether the row exists |
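A sketch of an existence check; ids and row key are placeholders. This is cheaper than read_row because cell values are stripped server-side:

```python
import asyncio

async def exists_example() -> bool:
    # Deferred import; assumes the google-cloud-bigtable async data API.
    from google.cloud.bigtable.data import BigtableDataClientAsync

    async with BigtableDataClientAsync(project="my-project") as client:
        async with client.get_table("my-instance", "my-table") as table:
            return await table.row_exists(b"user#1")

# Run with: asyncio.run(exists_example())
```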
sample_row_keys
sample_row_keys(
*,
operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT,
attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT,
retryable_errors: Sequence[type[Exception]] | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT
) -> RowKeySamples
Return a set of RowKeySamples that delimit contiguous sections of the table of approximately equal size.
RowKeySamples output can be used with ReadRowsQuery.shard() to create a sharded query that can be parallelized across multiple backend nodes. read_rows and read_rows_stream requests will call sample_row_keys internally for this purpose when sharding is enabled.
RowKeySamples is simply a type alias for list[tuple[bytes, int]]: a list of row keys, along with their offset positions in the table.
Exceptions

| Type | Description |
|---|---|
| google.api_core.exceptions.DeadlineExceeded | raised after the operation timeout expires; chained with a RetryExceptionGroup containing GoogleAPIError exceptions from any retries that failed |
| google.api_core.exceptions.GoogleAPIError | raised if the request encounters an unrecoverable error |

Returns

| Type | Description |
|---|---|
| RowKeySamples | a set of RowKeySamples that delimit contiguous sections of the table |
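A sketch of inspecting the samples; the ids are placeholders, and the unpacking relies on RowKeySamples being a list of (row_key, offset) tuples as described above:

```python
import asyncio

async def sample_example() -> None:
    # Deferred import; assumes the google-cloud-bigtable async data API.
    from google.cloud.bigtable.data import BigtableDataClientAsync

    async with BigtableDataClientAsync(project="my-project") as client:
        async with client.get_table("my-instance", "my-table") as table:
            samples = await table.sample_row_keys()
            # Each sample is a (row_key, offset_bytes) pair marking the end
            # of a roughly equal-sized section of the table.
            for key, offset in samples:
                print(key, offset)

# Run with: asyncio.run(sample_example())
```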