Reference documentation and code samples for the Google BigQuery Storage V1 Client class BigQueryWriteClient.
Service Description: BigQuery Write API.
The Write API can be used to write data to BigQuery.
For supplementary information about the Write API, see: https://cloud.google.com/bigquery/docs/write-api
This class provides the ability to make remote calls to the backing service through method calls that map to API methods.
Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parseName method to extract the individual identifiers contained within formatted names that are returned by the API.
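For example, writeStreamName() builds a write_stream resource name from its parts and parseName() reverses it. The helpers below are hypothetical plain-PHP stand-ins, shown only to illustrate the name shape the real methods produce and consume:

```php
// Hypothetical stand-ins for BigQueryWriteClient::writeStreamName() and
// parseName(): format a write_stream resource name, then recover its parts.
function formatWriteStreamName(string $project, string $dataset, string $table, string $stream): string
{
    return sprintf('projects/%s/datasets/%s/tables/%s/streams/%s', $project, $dataset, $table, $stream);
}

function parseWriteStreamName(string $name): ?array
{
    $pattern = '#^projects/(?<project>[^/]+)/datasets/(?<dataset>[^/]+)'
        . '/tables/(?<table>[^/]+)/streams/(?<stream>[^/]+)$#';
    if (preg_match($pattern, $name, $m) !== 1) {
        return null; // Not a write_stream name.
    }
    // Keep only the named components.
    return array_intersect_key($m, array_flip(['project', 'dataset', 'table', 'stream']));
}

$name = formatWriteStreamName('my-project', 'my_dataset', 'my_table', '_default');
$parts = parseWriteStreamName($name);
```

The real client methods additionally validate inputs and support multiple templates; see static::parseName below.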
Namespace
Google \ Cloud \ BigQuery \ Storage \ V1 \ Client

Methods
__construct
Constructor.
Parameters

| Name | Description |
|---|---|
| `options` | `array` Optional. Options for configuring the service API wrapper. |
| ↳ `apiEndpoint` | `string` The address of the API remote host. May optionally include the port, formatted as "address:port". |
| ↳ `credentials` | `string\|array\|FetchAuthTokenInterface\|CredentialsWrapper` The credentials to be used by the client to authorize API calls. This option accepts either a path to a credentials file, or a decoded credentials file as a PHP array. Advanced usage: this option can also accept a pre-constructed `Google\Auth\FetchAuthTokenInterface` object or `Google\ApiCore\CredentialsWrapper` object. Note that when one of these objects is provided, any settings in `$credentialsConfig` will be ignored. |
| ↳ `credentialsConfig` | `array` Options used to configure credentials, including auth token caching, for the client. For a full list of supported configuration options, see `Google\ApiCore\CredentialsWrapper::build()`. |
| ↳ `disableRetries` | `bool` Determines whether or not retries defined by the client configuration should be disabled. Defaults to `false`. |
| ↳ `clientConfig` | `string\|array` Client method configuration, including retry settings. This option can be either a path to a JSON file, or a PHP array containing the decoded JSON data. By default this setting points to the default client config file, which is provided in the resources folder. |
| ↳ `transport` | `string\|TransportInterface` The transport used for executing network requests. May be either the string `rest` or `grpc`. Advanced usage: it is also possible to pass in an already-instantiated `Google\ApiCore\Transport\TransportInterface` object; note that when such an object is provided, any settings in `$transportConfig`, and any `$apiEndpoint` setting, will be ignored. |
| ↳ `transportConfig` | `array` Configuration options that will be used to construct the transport. Options for each supported transport type should be passed in a key for that transport. For example: `$transportConfig = ['grpc' => [...], 'rest' => [...]];` See the `Google\ApiCore\Transport\GrpcTransport::build()` and `Google\ApiCore\Transport\RestTransport::build()` methods for the supported options. |
| ↳ `clientCertSource` | `callable` A callable which returns the client cert as a string. This can be used to provide a certificate and private key to the transport layer for mTLS. |
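As an illustration, a client pinned to an explicit endpoint and key file might be constructed as follows. The endpoint shown is the service's conventional default host and the key path is a placeholder; this is a configuration sketch, not required setup:

```php
use Google\Cloud\BigQuery\Storage\V1\Client\BigQueryWriteClient;

// Every option is optional; new BigQueryWriteClient() alone uses defaults
// (application default credentials, gRPC when available).
$bigQueryWriteClient = new BigQueryWriteClient([
    'apiEndpoint' => 'bigquerystorage.googleapis.com:443', // "address:port"
    'credentials' => '/path/to/service-account.json',      // or a decoded array
    'transport'   => 'grpc',                               // or 'rest'
]);
```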
appendRows
Appends data to the given stream.

If `offset` is specified, the `offset` is checked against the end of stream. The server returns `OUT_OF_RANGE` in `AppendRowsResponse` if an attempt is made to append to an offset beyond the current end of the stream, or `ALREADY_EXISTS` if the user provides an `offset` that has already been written to. The user can retry with an adjusted offset within the same RPC connection. If `offset` is not specified, the append happens at the end of the stream.
The response contains an optional offset at which the append happened. No offset information will be returned for appends to a default stream.
Responses are received in the same order in which requests are sent. There will be one response for each successfully inserted request. Responses may optionally embed error information if the originating AppendRequest was not successfully processed.
The specifics of when successfully appended data is made visible to the table are governed by the type of stream:

- For COMMITTED streams (which includes the default stream), data is visible immediately upon successful append.
- For BUFFERED streams, data is made visible via a subsequent `FlushRows` rpc, which advances a cursor to a newer offset in the stream.
- For PENDING streams, data is not made visible until the stream itself is finalized (via the `FinalizeWriteStream` rpc) and the stream is explicitly committed via the `BatchCommitWriteStreams` rpc.
Parameters

| Name | Description |
|---|---|
| `callOptions` | `array` Optional. |
| ↳ `timeoutMillis` | `int` Timeout to use for this call. |

Returns

| Type | Description |
|---|---|
| `Google\ApiCore\BidiStream` | |
use Google\ApiCore\ApiException;
use Google\ApiCore\BidiStream;
use Google\Cloud\BigQuery\Storage\V1\AppendRowsRequest;
use Google\Cloud\BigQuery\Storage\V1\AppendRowsResponse;
use Google\Cloud\BigQuery\Storage\V1\Client\BigQueryWriteClient;
/**
* @param string $formattedWriteStream The write_stream identifies the append operation. It must be
* provided in the following scenarios:
*
* * In the first request to an AppendRows connection.
*
* * In all subsequent requests to an AppendRows connection, if you use the
* same connection to write to multiple tables or change the input schema for
* default streams.
*
* For explicitly created write streams, the format is:
*
* * `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
*
* For the special default stream, the format is:
*
* * `projects/{project}/datasets/{dataset}/tables/{table}/streams/_default`.
*
* An example of a possible sequence of requests with write_stream fields
* within a single connection:
*
* * r1: {write_stream: stream_name_1}
*
 * * r2: {write_stream: (omitted)}
 *
 * * r3: {write_stream: (omitted)}
*
* * r4: {write_stream: stream_name_2}
*
* * r5: {write_stream: stream_name_2}
*
* The destination changed in request_4, so the write_stream field must be
* populated in all subsequent requests in this stream. Please see
* {@see BigQueryWriteClient::writeStreamName()} for help formatting this field.
*/
function append_rows_sample(string $formattedWriteStream): void
{
// Create a client.
$bigQueryWriteClient = new BigQueryWriteClient();
// Prepare the request message.
$request = (new AppendRowsRequest())
->setWriteStream($formattedWriteStream);
// Call the API and handle any network failures.
try {
/** @var BidiStream $stream */
$stream = $bigQueryWriteClient->appendRows();
$stream->writeAll([$request]);
/** @var AppendRowsResponse $element */
foreach ($stream->closeWriteAndReadAll() as $element) {
printf('Element data: %s' . PHP_EOL, $element->serializeToJsonString());
}
} catch (ApiException $ex) {
printf('Call failed with message: %s' . PHP_EOL, $ex->getMessage());
}
}
/**
* Helper to execute the sample.
*
* This sample has been automatically generated and should be regarded as a code
* template only. It will require modifications to work:
* - It may require correct/in-range values for request initialization.
* - It may require specifying regional endpoints when creating the service client,
* please see the apiEndpoint client configuration option for more details.
*/
function callSample(): void
{
$formattedWriteStream = BigQueryWriteClient::writeStreamName(
'[PROJECT]',
'[DATASET]',
'[TABLE]',
'[STREAM]'
);
append_rows_sample($formattedWriteStream);
}
batchCommitWriteStreams
Atomically commits a group of `PENDING` streams that belong to the same parent table.
Streams must be finalized before commit and cannot be committed multiple times. Once a stream is committed, data in the stream becomes available for read operations.
The async variant is BigQueryWriteClient::batchCommitWriteStreamsAsync() .
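Taken together with createWriteStream and finalizeWriteStream, the PENDING lifecycle follows a fixed order: create, append, finalize, then commit. A condensed, hedged sketch of that sequence (error handling and the appendRows calls omitted):

```php
use Google\Cloud\BigQuery\Storage\V1\BatchCommitWriteStreamsRequest;
use Google\Cloud\BigQuery\Storage\V1\Client\BigQueryWriteClient;
use Google\Cloud\BigQuery\Storage\V1\CreateWriteStreamRequest;
use Google\Cloud\BigQuery\Storage\V1\FinalizeWriteStreamRequest;
use Google\Cloud\BigQuery\Storage\V1\WriteStream;
use Google\Cloud\BigQuery\Storage\V1\WriteStream\Type;

$client = new BigQueryWriteClient();
$parent = BigQueryWriteClient::tableName('[PROJECT]', '[DATASET]', '[TABLE]');

// 1. Create a PENDING stream: appended data stays invisible for now.
$stream = $client->createWriteStream((new CreateWriteStreamRequest())
    ->setParent($parent)
    ->setWriteStream((new WriteStream())->setType(Type::PENDING)));

// 2. ... append rows to $stream->getName() via appendRows() ...

// 3. Finalize first: no further appends are accepted after this.
$client->finalizeWriteStream((new FinalizeWriteStreamRequest())
    ->setName($stream->getName()));

// 4. Then commit; only now does the data become readable.
$response = $client->batchCommitWriteStreams((new BatchCommitWriteStreamsRequest())
    ->setParent($parent)
    ->setWriteStreams([$stream->getName()]));
```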
Parameters

| Name | Description |
|---|---|
| `request` | `Google\Cloud\BigQuery\Storage\V1\BatchCommitWriteStreamsRequest` A request to house fields associated with the call. |
| `callOptions` | `array` Optional. |
| ↳ `retrySettings` | `RetrySettings\|array` Retry settings to use for this call. Can be a `Google\ApiCore\RetrySettings` object, or an associative array of retry settings parameters. See the documentation on `Google\ApiCore\RetrySettings` for example usage. |

Returns

| Type | Description |
|---|---|
| `Google\Cloud\BigQuery\Storage\V1\BatchCommitWriteStreamsResponse` | |
use Google\ApiCore\ApiException;
use Google\Cloud\BigQuery\Storage\V1\BatchCommitWriteStreamsRequest;
use Google\Cloud\BigQuery\Storage\V1\BatchCommitWriteStreamsResponse;
use Google\Cloud\BigQuery\Storage\V1\Client\BigQueryWriteClient;
/**
* @param string $formattedParent Parent table that all the streams should belong to, in the form
* of `projects/{project}/datasets/{dataset}/tables/{table}`. Please see
* {@see BigQueryWriteClient::tableName()} for help formatting this field.
* @param string $writeStreamsElement The group of streams that will be committed atomically.
*/
function batch_commit_write_streams_sample(
string $formattedParent,
string $writeStreamsElement
): void {
// Create a client.
$bigQueryWriteClient = new BigQueryWriteClient();
// Prepare the request message.
$writeStreams = [$writeStreamsElement];
$request = (new BatchCommitWriteStreamsRequest())
->setParent($formattedParent)
->setWriteStreams($writeStreams);
// Call the API and handle any network failures.
try {
/** @var BatchCommitWriteStreamsResponse $response */
$response = $bigQueryWriteClient->batchCommitWriteStreams($request);
printf('Response data: %s' . PHP_EOL, $response->serializeToJsonString());
} catch (ApiException $ex) {
printf('Call failed with message: %s' . PHP_EOL, $ex->getMessage());
}
}
/**
* Helper to execute the sample.
*
* This sample has been automatically generated and should be regarded as a code
* template only. It will require modifications to work:
* - It may require correct/in-range values for request initialization.
* - It may require specifying regional endpoints when creating the service client,
* please see the apiEndpoint client configuration option for more details.
*/
function callSample(): void
{
$formattedParent = BigQueryWriteClient::tableName('[PROJECT]', '[DATASET]', '[TABLE]');
$writeStreamsElement = '[WRITE_STREAMS]';
batch_commit_write_streams_sample($formattedParent, $writeStreamsElement);
}
createWriteStream
Creates a write stream to the given table.
Additionally, every table has a special stream named '_default' to which data can be written. This stream doesn't need to be created using CreateWriteStream. It is a stream that can be used simultaneously by any number of clients. Data written to this stream is considered committed as soon as an acknowledgement is received.
The async variant is BigQueryWriteClient::createWriteStreamAsync() .
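Since `_default` always exists, there is nothing to create; you only need its name, which follows the write-stream template with the fixed id `_default`. A plain-PHP illustration (calling `BigQueryWriteClient::writeStreamName(..., '_default')` would yield the same string):

```php
// Build the name of a table's always-available default stream.
function defaultStreamName(string $project, string $dataset, string $table): string
{
    return sprintf('projects/%s/datasets/%s/tables/%s/streams/_default', $project, $dataset, $table);
}

$name = defaultStreamName('my-project', 'my_dataset', 'my_table');
```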
Parameters

| Name | Description |
|---|---|
| `request` | `Google\Cloud\BigQuery\Storage\V1\CreateWriteStreamRequest` A request to house fields associated with the call. |
| `callOptions` | `array` Optional. |
| ↳ `retrySettings` | `RetrySettings\|array` Retry settings to use for this call. Can be a `Google\ApiCore\RetrySettings` object, or an associative array of retry settings parameters. See the documentation on `Google\ApiCore\RetrySettings` for example usage. |

Returns

| Type | Description |
|---|---|
| `Google\Cloud\BigQuery\Storage\V1\WriteStream` | |
use Google\ApiCore\ApiException;
use Google\Cloud\BigQuery\Storage\V1\Client\BigQueryWriteClient;
use Google\Cloud\BigQuery\Storage\V1\CreateWriteStreamRequest;
use Google\Cloud\BigQuery\Storage\V1\WriteStream;
/**
* @param string $formattedParent Reference to the table to which the stream belongs, in the format
* of `projects/{project}/datasets/{dataset}/tables/{table}`. Please see
* {@see BigQueryWriteClient::tableName()} for help formatting this field.
*/
function create_write_stream_sample(string $formattedParent): void
{
// Create a client.
$bigQueryWriteClient = new BigQueryWriteClient();
// Prepare the request message.
$writeStream = new WriteStream();
$request = (new CreateWriteStreamRequest())
->setParent($formattedParent)
->setWriteStream($writeStream);
// Call the API and handle any network failures.
try {
/** @var WriteStream $response */
$response = $bigQueryWriteClient->createWriteStream($request);
printf('Response data: %s' . PHP_EOL, $response->serializeToJsonString());
} catch (ApiException $ex) {
printf('Call failed with message: %s' . PHP_EOL, $ex->getMessage());
}
}
/**
* Helper to execute the sample.
*
* This sample has been automatically generated and should be regarded as a code
* template only. It will require modifications to work:
* - It may require correct/in-range values for request initialization.
* - It may require specifying regional endpoints when creating the service client,
* please see the apiEndpoint client configuration option for more details.
*/
function callSample(): void
{
$formattedParent = BigQueryWriteClient::tableName('[PROJECT]', '[DATASET]', '[TABLE]');
create_write_stream_sample($formattedParent);
}
finalizeWriteStream
Finalize a write stream so that no new data can be appended to the stream. Finalize is not supported on the '_default' stream.
The async variant is BigQueryWriteClient::finalizeWriteStreamAsync() .
Parameters

| Name | Description |
|---|---|
| `request` | `Google\Cloud\BigQuery\Storage\V1\FinalizeWriteStreamRequest` A request to house fields associated with the call. |
| `callOptions` | `array` Optional. |
| ↳ `retrySettings` | `RetrySettings\|array` Retry settings to use for this call. Can be a `Google\ApiCore\RetrySettings` object, or an associative array of retry settings parameters. See the documentation on `Google\ApiCore\RetrySettings` for example usage. |

Returns

| Type | Description |
|---|---|
| `Google\Cloud\BigQuery\Storage\V1\FinalizeWriteStreamResponse` | |
use Google\ApiCore\ApiException;
use Google\Cloud\BigQuery\Storage\V1\Client\BigQueryWriteClient;
use Google\Cloud\BigQuery\Storage\V1\FinalizeWriteStreamRequest;
use Google\Cloud\BigQuery\Storage\V1\FinalizeWriteStreamResponse;
/**
* @param string $formattedName Name of the stream to finalize, in the form of
* `projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}`. Please see
* {@see BigQueryWriteClient::writeStreamName()} for help formatting this field.
*/
function finalize_write_stream_sample(string $formattedName): void
{
// Create a client.
$bigQueryWriteClient = new BigQueryWriteClient();
// Prepare the request message.
$request = (new FinalizeWriteStreamRequest())
->setName($formattedName);
// Call the API and handle any network failures.
try {
/** @var FinalizeWriteStreamResponse $response */
$response = $bigQueryWriteClient->finalizeWriteStream($request);
printf('Response data: %s' . PHP_EOL, $response->serializeToJsonString());
} catch (ApiException $ex) {
printf('Call failed with message: %s' . PHP_EOL, $ex->getMessage());
}
}
/**
* Helper to execute the sample.
*
* This sample has been automatically generated and should be regarded as a code
* template only. It will require modifications to work:
* - It may require correct/in-range values for request initialization.
* - It may require specifying regional endpoints when creating the service client,
* please see the apiEndpoint client configuration option for more details.
*/
function callSample(): void
{
$formattedName = BigQueryWriteClient::writeStreamName(
'[PROJECT]',
'[DATASET]',
'[TABLE]',
'[STREAM]'
);
finalize_write_stream_sample($formattedName);
}
flushRows
Flushes rows to a BUFFERED stream.
If users are appending rows to a BUFFERED stream, a flush operation is required in order for the rows to become available for reading. A flush operation makes rows visible up to the offset specified in the request, starting from any previously flushed offset.
Flush is not supported on the _default stream, since it is not BUFFERED.
The async variant is BigQueryWriteClient::flushRowsAsync() .
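The sample below flushes without an explicit offset. To flush up to a particular offset, set the request's `offset` field; note that it is a `google.protobuf.Int64Value` wrapper, not a bare integer. A sketch, assuming a stream name built from the placeholder values used elsewhere on this page:

```php
use Google\Cloud\BigQuery\Storage\V1\Client\BigQueryWriteClient;
use Google\Cloud\BigQuery\Storage\V1\FlushRowsRequest;
use Google\Protobuf\Int64Value;

$formattedWriteStream = BigQueryWriteClient::writeStreamName(
    '[PROJECT]',
    '[DATASET]',
    '[TABLE]',
    '[STREAM]'
);

// Make rows up to the given offset visible for reading.
$request = (new FlushRowsRequest())
    ->setWriteStream($formattedWriteStream)
    ->setOffset((new Int64Value())->setValue(41));
```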
Parameters

| Name | Description |
|---|---|
| `request` | `Google\Cloud\BigQuery\Storage\V1\FlushRowsRequest` A request to house fields associated with the call. |
| `callOptions` | `array` Optional. |
| ↳ `retrySettings` | `RetrySettings\|array` Retry settings to use for this call. Can be a `Google\ApiCore\RetrySettings` object, or an associative array of retry settings parameters. See the documentation on `Google\ApiCore\RetrySettings` for example usage. |

Returns

| Type | Description |
|---|---|
| `Google\Cloud\BigQuery\Storage\V1\FlushRowsResponse` | |
use Google\ApiCore\ApiException;
use Google\Cloud\BigQuery\Storage\V1\Client\BigQueryWriteClient;
use Google\Cloud\BigQuery\Storage\V1\FlushRowsRequest;
use Google\Cloud\BigQuery\Storage\V1\FlushRowsResponse;
/**
* @param string $formattedWriteStream The stream that is the target of the flush operation. Please see
* {@see BigQueryWriteClient::writeStreamName()} for help formatting this field.
*/
function flush_rows_sample(string $formattedWriteStream): void
{
// Create a client.
$bigQueryWriteClient = new BigQueryWriteClient();
// Prepare the request message.
$request = (new FlushRowsRequest())
->setWriteStream($formattedWriteStream);
// Call the API and handle any network failures.
try {
/** @var FlushRowsResponse $response */
$response = $bigQueryWriteClient->flushRows($request);
printf('Response data: %s' . PHP_EOL, $response->serializeToJsonString());
} catch (ApiException $ex) {
printf('Call failed with message: %s' . PHP_EOL, $ex->getMessage());
}
}
/**
* Helper to execute the sample.
*
* This sample has been automatically generated and should be regarded as a code
* template only. It will require modifications to work:
* - It may require correct/in-range values for request initialization.
* - It may require specifying regional endpoints when creating the service client,
* please see the apiEndpoint client configuration option for more details.
*/
function callSample(): void
{
$formattedWriteStream = BigQueryWriteClient::writeStreamName(
'[PROJECT]',
'[DATASET]',
'[TABLE]',
'[STREAM]'
);
flush_rows_sample($formattedWriteStream);
}
getWriteStream
Gets information about a write stream.
The async variant is BigQueryWriteClient::getWriteStreamAsync() .
Parameters

| Name | Description |
|---|---|
| `request` | `Google\Cloud\BigQuery\Storage\V1\GetWriteStreamRequest` A request to house fields associated with the call. |
| `callOptions` | `array` Optional. |
| ↳ `retrySettings` | `RetrySettings\|array` Retry settings to use for this call. Can be a `Google\ApiCore\RetrySettings` object, or an associative array of retry settings parameters. See the documentation on `Google\ApiCore\RetrySettings` for example usage. |

Returns

| Type | Description |
|---|---|
| `Google\Cloud\BigQuery\Storage\V1\WriteStream` | |
use Google\ApiCore\ApiException;
use Google\Cloud\BigQuery\Storage\V1\Client\BigQueryWriteClient;
use Google\Cloud\BigQuery\Storage\V1\GetWriteStreamRequest;
use Google\Cloud\BigQuery\Storage\V1\WriteStream;
/**
* @param string $formattedName Name of the stream to get, in the form of
* `projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}`. Please see
* {@see BigQueryWriteClient::writeStreamName()} for help formatting this field.
*/
function get_write_stream_sample(string $formattedName): void
{
// Create a client.
$bigQueryWriteClient = new BigQueryWriteClient();
// Prepare the request message.
$request = (new GetWriteStreamRequest())
->setName($formattedName);
// Call the API and handle any network failures.
try {
/** @var WriteStream $response */
$response = $bigQueryWriteClient->getWriteStream($request);
printf('Response data: %s' . PHP_EOL, $response->serializeToJsonString());
} catch (ApiException $ex) {
printf('Call failed with message: %s' . PHP_EOL, $ex->getMessage());
}
}
/**
* Helper to execute the sample.
*
* This sample has been automatically generated and should be regarded as a code
* template only. It will require modifications to work:
* - It may require correct/in-range values for request initialization.
* - It may require specifying regional endpoints when creating the service client,
* please see the apiEndpoint client configuration option for more details.
*/
function callSample(): void
{
$formattedName = BigQueryWriteClient::writeStreamName(
'[PROJECT]',
'[DATASET]',
'[TABLE]',
'[STREAM]'
);
get_write_stream_sample($formattedName);
}
batchCommitWriteStreamsAsync
Parameters

| Name | Description |
|---|---|
| `request` | `Google\Cloud\BigQuery\Storage\V1\BatchCommitWriteStreamsRequest` |
| `optionalArgs` | `array` |

Returns

| Type | Description |
|---|---|
| `GuzzleHttp\Promise\PromiseInterface<Google\Cloud\BigQuery\Storage\V1\BatchCommitWriteStreamsResponse>` | |
createWriteStreamAsync
Parameters

| Name | Description |
|---|---|
| `request` | `Google\Cloud\BigQuery\Storage\V1\CreateWriteStreamRequest` |
| `optionalArgs` | `array` |

Returns

| Type | Description |
|---|---|
| `GuzzleHttp\Promise\PromiseInterface<Google\Cloud\BigQuery\Storage\V1\WriteStream>` | |
finalizeWriteStreamAsync
Parameters

| Name | Description |
|---|---|
| `request` | `Google\Cloud\BigQuery\Storage\V1\FinalizeWriteStreamRequest` |
| `optionalArgs` | `array` |

Returns

| Type | Description |
|---|---|
| `GuzzleHttp\Promise\PromiseInterface<Google\Cloud\BigQuery\Storage\V1\FinalizeWriteStreamResponse>` | |
flushRowsAsync
Parameters

| Name | Description |
|---|---|
| `request` | `Google\Cloud\BigQuery\Storage\V1\FlushRowsRequest` |
| `optionalArgs` | `array` |

Returns

| Type | Description |
|---|---|
| `GuzzleHttp\Promise\PromiseInterface<Google\Cloud\BigQuery\Storage\V1\FlushRowsResponse>` | |
getWriteStreamAsync
Parameters

| Name | Description |
|---|---|
| `request` | `Google\Cloud\BigQuery\Storage\V1\GetWriteStreamRequest` |
| `optionalArgs` | `array` |

Returns

| Type | Description |
|---|---|
| `GuzzleHttp\Promise\PromiseInterface<Google\Cloud\BigQuery\Storage\V1\WriteStream>` | |
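Each async variant accepts the same request message as its synchronous counterpart and returns a Guzzle promise immediately; call `wait()` to block for the result, or compose with `then()`. A sketch using getWriteStreamAsync:

```php
use Google\Cloud\BigQuery\Storage\V1\Client\BigQueryWriteClient;
use Google\Cloud\BigQuery\Storage\V1\GetWriteStreamRequest;

$client = new BigQueryWriteClient();
$request = (new GetWriteStreamRequest())
    ->setName(BigQueryWriteClient::writeStreamName('[PROJECT]', '[DATASET]', '[TABLE]', '[STREAM]'));

// Returns a GuzzleHttp\Promise\PromiseInterface immediately.
$promise = $client->getWriteStreamAsync($request);

// Block until the RPC completes (throws ApiException on failure).
$writeStream = $promise->wait();
```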
static::tableName
Formats a string containing the fully-qualified path to represent a table resource.
Parameters

| Name | Description |
|---|---|
| `project` | `string` |
| `dataset` | `string` |
| `table` | `string` |

Returns

| Type | Description |
|---|---|
| `string` | The formatted table resource. |
static::writeStreamName
Formats a string containing the fully-qualified path to represent a write_stream resource.
Parameters

| Name | Description |
|---|---|
| `project` | `string` |
| `dataset` | `string` |
| `table` | `string` |
| `stream` | `string` |

Returns

| Type | Description |
|---|---|
| `string` | The formatted write_stream resource. |
static::parseName
Parses a formatted name string and returns an associative array of the components in the name.
The following name formats are supported (template: pattern):

- table: `projects/{project}/datasets/{dataset}/tables/{table}`
- writeStream: `projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}`
The optional $template argument can be supplied to specify a particular pattern, and must match one of the templates listed above. If no $template argument is provided, or if the $template argument does not match one of the templates listed, then parseName will check each of the supported templates, and return the first match.
Parameters

| Name | Description |
|---|---|
| `formattedName` | `string` The formatted name string. |
| `template` | `string` Optional name of template to match. |

Returns

| Type | Description |
|---|---|
| `array` | An associative array from name component IDs to component values. |
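The first-match behavior described above can be illustrated with a hypothetical plain-PHP stand-in (the real implementation lives in ApiCore's path-template machinery; this sketch only mirrors the documented semantics):

```php
// Hypothetical stand-in for BigQueryWriteClient::parseName(): try each
// supported template in order and return the first match.
function parseNameSketch(string $formattedName, ?string $template = null): array
{
    $templates = [
        'table' => '#^projects/(?<project>[^/]+)/datasets/(?<dataset>[^/]+)/tables/(?<table>[^/]+)$#',
        'writeStream' => '#^projects/(?<project>[^/]+)/datasets/(?<dataset>[^/]+)'
            . '/tables/(?<table>[^/]+)/streams/(?<stream>[^/]+)$#',
    ];
    // Restrict to one template when a known name is given; otherwise try all.
    $candidates = isset($templates[$template]) ? [$templates[$template]] : $templates;
    foreach ($candidates as $pattern) {
        if (preg_match($pattern, $formattedName, $m) === 1) {
            // Keep only the named components.
            return array_filter($m, 'is_string', ARRAY_FILTER_USE_KEY);
        }
    }
    throw new InvalidArgumentException("Could not match $formattedName to a known template");
}

$parts = parseNameSketch('projects/p/datasets/d/tables/t');
```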