Class BaseBigQueryStorageClient (3.9.0)

GitHub Repository | Product Reference

Service Description: BigQuery storage API.

The BigQuery storage API can be used to read data stored in BigQuery.

The v1beta1 API is not yet officially deprecated, and will go through a full deprecation cycle (https://cloud.google.com/products#product-launch-stages) before the service is turned down. However, new code should use the v1 API going forward.

This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   TableReferenceProto.TableReference tableReference =
       TableReferenceProto.TableReference.newBuilder().build();
   ProjectName parent = ProjectName.of("[PROJECT]");
   int requestedStreams = 1017221410;
   Storage.ReadSession response =
       baseBigQueryStorageClient.createReadSession(tableReference, parent, requestedStreams);
 }
 

Note: close() needs to be called on the BaseBigQueryStorageClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().
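
When try-with-resources is not practical (for example, when the client's lifetime spans several methods), the same cleanup can be done explicitly. A minimal sketch, using the same template values as the sample above:


 // Sketch only: request values are placeholders, as in the generated samples.
 BaseBigQueryStorageClient client = BaseBigQueryStorageClient.create();
 try {
   TableReferenceProto.TableReference tableReference =
       TableReferenceProto.TableReference.newBuilder().build();
   Storage.ReadSession session =
       client.createReadSession(tableReference, ProjectName.of("[PROJECT]"), 1);
 } finally {
   client.close(); // releases threads and other resources even if the call fails
 }
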

Methods

CreateReadSession

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.

Request object method variants only take one parameter, a request object, which must be constructed before the call.

  • createReadSession(Storage.CreateReadSessionRequest request)

"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.

  • createReadSession(TableReferenceProto.TableReference tableReference, ProjectName parent, int requestedStreams)

  • createReadSession(TableReferenceProto.TableReference tableReference, String parent, int requestedStreams)

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • createReadSessionCallable()

ReadRows

Reads rows from the table in the format prescribed by the read session. Each response contains one or more table rows, up to a maximum of 10 MiB per response; read requests which attempt to read individual rows larger than this will fail.

Each request also returns a set of stream statistics reflecting the estimated total number of rows in the read stream. This number is computed based on the total table size and the number of active streams in the read session, and may change as other streams continue to read data.

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • readRowsCallable()

BatchCreateReadSessionStreams

Creates additional streams for a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task upwards by adding additional workers.

Request object method variants only take one parameter, a request object, which must be constructed before the call.

  • batchCreateReadSessionStreams(Storage.BatchCreateReadSessionStreamsRequest request)

"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.

  • batchCreateReadSessionStreams(Storage.ReadSession session, int requestedStreams)

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • batchCreateReadSessionStreamsCallable()

FinalizeStream

Causes a single stream in a ReadSession to gracefully stop. This API can be used to dynamically adjust the parallelism of a batch processing task downwards without losing data.

This API does not delete the stream -- it remains visible in the ReadSession, and any data processed by the stream is not released to other streams. However, no additional data will be assigned to the stream once this call completes. Callers must continue reading data on the stream until the end of the stream is reached so that data which has already been assigned to the stream will be processed.

This method will return an error if there are no other live streams in the Session, or if SplitReadStream() has been called on the given Stream.

Request object method variants only take one parameter, a request object, which must be constructed before the call.

  • finalizeStream(Storage.FinalizeStreamRequest request)

"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.

  • finalizeStream(Storage.Stream stream)

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • finalizeStreamCallable()

SplitReadStream

Splits a given read stream into two Streams. These streams are referred to as the primary and the residual of the split. The original stream can still be read from in the same manner as before. Both of the returned streams can also be read from, and the total rows returned by both child streams will be the same as the rows read from the original stream.

Moreover, the two child streams will be allocated back to back in the original Stream. Concretely, it is guaranteed that for streams Original, Primary, and Residual, Original[0-j] = Primary[0-j] and Original[j-n] = Residual[0-m] once the streams have been read to completion.

This method is guaranteed to be idempotent.

Request object method variants only take one parameter, a request object, which must be constructed before the call.

  • splitReadStream(Storage.SplitReadStreamRequest request)

"Flattened" method variants have converted the fields of the request object into function parameters to enable multiple ways to call the same method.

  • splitReadStream(Storage.Stream originalStream)

Callable method variants take no parameters and return an immutable API callable object, which can be used to initiate calls to the service.

  • splitReadStreamCallable()

See the individual methods for example code.

Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.
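
For example, the ProjectName helper used in the samples above can both build and parse project resource names. A sketch; the method names follow the standard generated resource-name classes:


 // Build a resource name string of the form "projects/{project}".
 String name = ProjectName.of("my-project").toString(); // "projects/my-project"
 // format() is an equivalent static shortcut.
 String same = ProjectName.format("my-project");
 // parse() recovers the individual identifiers from a formatted name,
 // e.g. one returned by the service.
 ProjectName parsed = ProjectName.parse("projects/my-project");
 String projectId = parsed.getProject(); // "my-project"
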

This class can be customized by passing in a custom instance of BaseBigQueryStorageSettings to create(). For example:

To customize credentials:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 BaseBigQueryStorageSettings baseBigQueryStorageSettings =
     BaseBigQueryStorageSettings.newBuilder()
         .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
         .build();
 BaseBigQueryStorageClient baseBigQueryStorageClient =
     BaseBigQueryStorageClient.create(baseBigQueryStorageSettings);
 

To customize the endpoint:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 BaseBigQueryStorageSettings baseBigQueryStorageSettings =
     BaseBigQueryStorageSettings.newBuilder().setEndpoint(myEndpoint).build();
 BaseBigQueryStorageClient baseBigQueryStorageClient =
     BaseBigQueryStorageClient.create(baseBigQueryStorageSettings);
 

Please refer to the GitHub repository's samples for more quickstart code snippets.

Inheritance

java.lang.Object > BaseBigQueryStorageClient

Static Methods

create()

public static final BaseBigQueryStorageClient create()

Constructs an instance of BaseBigQueryStorageClient with default settings.

Returns
Type Description
BaseBigQueryStorageClient
Exceptions
Type Description
IOException

create(BaseBigQueryStorageSettings settings)

public static final BaseBigQueryStorageClient create(BaseBigQueryStorageSettings settings)

Constructs an instance of BaseBigQueryStorageClient, using the given settings. The channels are created based on the settings passed in, or defaults for any settings that are not set.

Parameter
Name Description
settings BaseBigQueryStorageSettings
Returns
Type Description
BaseBigQueryStorageClient
Exceptions
Type Description
IOException

create(BigQueryStorageStub stub)

public static final BaseBigQueryStorageClient create(BigQueryStorageStub stub)

Constructs an instance of BaseBigQueryStorageClient, using the given stub for making calls. This is for advanced usage - prefer using create(BaseBigQueryStorageSettings).

Parameter
Name Description
stub BigQueryStorageStub
Returns
Type Description
BaseBigQueryStorageClient

Constructors

BaseBigQueryStorageClient(BaseBigQueryStorageSettings settings)

protected BaseBigQueryStorageClient(BaseBigQueryStorageSettings settings)

Constructs an instance of BaseBigQueryStorageClient, using the given settings. This is protected so that it is easy to make a subclass, but otherwise, the static factory methods should be preferred.

Parameter
Name Description
settings BaseBigQueryStorageSettings

BaseBigQueryStorageClient(BigQueryStorageStub stub)

protected BaseBigQueryStorageClient(BigQueryStorageStub stub)
Parameter
Name Description
stub BigQueryStorageStub

Methods

awaitTermination(long duration, TimeUnit unit)

public boolean awaitTermination(long duration, TimeUnit unit)
Parameters
Name Description
duration long
unit TimeUnit
Returns
Type Description
boolean
Exceptions
Type Description
InterruptedException
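
Together with shutdown() and shutdownNow() (below), this supports a graceful shutdown pattern. A sketch, assuming `client` is a BaseBigQueryStorageClient created as in the earlier samples:


 // Stop accepting new calls, then wait up to 30 seconds for in-flight
 // calls to finish; awaitTermination may throw InterruptedException.
 client.shutdown();
 if (!client.awaitTermination(30, TimeUnit.SECONDS)) {
   client.shutdownNow(); // force-cancel anything still running
 }
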

batchCreateReadSessionStreams(Storage.BatchCreateReadSessionStreamsRequest request)

public final Storage.BatchCreateReadSessionStreamsResponse batchCreateReadSessionStreams(Storage.BatchCreateReadSessionStreamsRequest request)

Creates additional streams for a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task upwards by adding additional workers.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.BatchCreateReadSessionStreamsRequest request =
       Storage.BatchCreateReadSessionStreamsRequest.newBuilder()
           .setSession(Storage.ReadSession.newBuilder().build())
           .setRequestedStreams(1017221410)
           .build();
   Storage.BatchCreateReadSessionStreamsResponse response =
       baseBigQueryStorageClient.batchCreateReadSessionStreams(request);
 }
 
Parameter
Name Description
request Storage.BatchCreateReadSessionStreamsRequest

The request object containing all of the parameters for the API call.

Returns
Type Description
Storage.BatchCreateReadSessionStreamsResponse

batchCreateReadSessionStreams(Storage.ReadSession session, int requestedStreams)

public final Storage.BatchCreateReadSessionStreamsResponse batchCreateReadSessionStreams(Storage.ReadSession session, int requestedStreams)

Creates additional streams for a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task upwards by adding additional workers.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.ReadSession session = Storage.ReadSession.newBuilder().build();
   int requestedStreams = 1017221410;
   Storage.BatchCreateReadSessionStreamsResponse response =
       baseBigQueryStorageClient.batchCreateReadSessionStreams(session, requestedStreams);
 }
 
Parameters
Name Description
session Storage.ReadSession

Required. Must be a non-expired session obtained from a call to CreateReadSession. Only the name field needs to be set.

requestedStreams int

Required. Number of new streams requested. Must be positive. The number of added streams may be less than this; see CreateReadSessionRequest for more information.

Returns
Type Description
Storage.BatchCreateReadSessionStreamsResponse

batchCreateReadSessionStreamsCallable()

public final UnaryCallable<Storage.BatchCreateReadSessionStreamsRequest,Storage.BatchCreateReadSessionStreamsResponse> batchCreateReadSessionStreamsCallable()

Creates additional streams for a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task upwards by adding additional workers.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.BatchCreateReadSessionStreamsRequest request =
       Storage.BatchCreateReadSessionStreamsRequest.newBuilder()
           .setSession(Storage.ReadSession.newBuilder().build())
           .setRequestedStreams(1017221410)
           .build();
   ApiFuture<Storage.BatchCreateReadSessionStreamsResponse> future =
       baseBigQueryStorageClient.batchCreateReadSessionStreamsCallable().futureCall(request);
   // Do something.
   Storage.BatchCreateReadSessionStreamsResponse response = future.get();
 }
 
Returns
Type Description
UnaryCallable<BatchCreateReadSessionStreamsRequest,BatchCreateReadSessionStreamsResponse>

close()

public final void close()

createReadSession(Storage.CreateReadSessionRequest request)

public final Storage.ReadSession createReadSession(Storage.CreateReadSessionRequest request)

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.CreateReadSessionRequest request =
       Storage.CreateReadSessionRequest.newBuilder()
           .setTableReference(TableReferenceProto.TableReference.newBuilder().build())
           .setParent(ProjectName.of("[PROJECT]").toString())
           .setTableModifiers(TableReferenceProto.TableModifiers.newBuilder().build())
           .setRequestedStreams(1017221410)
           .setReadOptions(ReadOptions.TableReadOptions.newBuilder().build())
           .setFormat(Storage.DataFormat.forNumber(0))
           .setShardingStrategy(Storage.ShardingStrategy.forNumber(0))
           .build();
   Storage.ReadSession response = baseBigQueryStorageClient.createReadSession(request);
 }
 
Parameter
Name Description
request Storage.CreateReadSessionRequest

The request object containing all of the parameters for the API call.

Returns
Type Description
Storage.ReadSession

createReadSession(TableReferenceProto.TableReference tableReference, ProjectName parent, int requestedStreams)

public final Storage.ReadSession createReadSession(TableReferenceProto.TableReference tableReference, ProjectName parent, int requestedStreams)

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   TableReferenceProto.TableReference tableReference =
       TableReferenceProto.TableReference.newBuilder().build();
   ProjectName parent = ProjectName.of("[PROJECT]");
   int requestedStreams = 1017221410;
   Storage.ReadSession response =
       baseBigQueryStorageClient.createReadSession(tableReference, parent, requestedStreams);
 }
 
Parameters
Name Description
tableReference TableReferenceProto.TableReference

Required. Reference to the table to read.

parent ProjectName

Required. String of the form projects/{project_id} indicating the project this ReadSession is associated with. This is the project that will be billed for usage.

requestedStreams int

Initial number of streams. If unset or 0, we will provide a value of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table and the maximum amount of parallelism allowed by the system.

Streams must be read starting from offset 0.

Returns
Type Description
Storage.ReadSession

createReadSession(TableReferenceProto.TableReference tableReference, String parent, int requestedStreams)

public final Storage.ReadSession createReadSession(TableReferenceProto.TableReference tableReference, String parent, int requestedStreams)

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   TableReferenceProto.TableReference tableReference =
       TableReferenceProto.TableReference.newBuilder().build();
   String parent = ProjectName.of("[PROJECT]").toString();
   int requestedStreams = 1017221410;
   Storage.ReadSession response =
       baseBigQueryStorageClient.createReadSession(tableReference, parent, requestedStreams);
 }
 
Parameters
Name Description
tableReference TableReferenceProto.TableReference

Required. Reference to the table to read.

parent String

Required. String of the form projects/{project_id} indicating the project this ReadSession is associated with. This is the project that will be billed for usage.

requestedStreams int

Initial number of streams. If unset or 0, we will provide a value of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table and the maximum amount of parallelism allowed by the system.

Streams must be read starting from offset 0.

Returns
Type Description
Storage.ReadSession

createReadSessionCallable()

public final UnaryCallable<Storage.CreateReadSessionRequest,Storage.ReadSession> createReadSessionCallable()

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.CreateReadSessionRequest request =
       Storage.CreateReadSessionRequest.newBuilder()
           .setTableReference(TableReferenceProto.TableReference.newBuilder().build())
           .setParent(ProjectName.of("[PROJECT]").toString())
           .setTableModifiers(TableReferenceProto.TableModifiers.newBuilder().build())
           .setRequestedStreams(1017221410)
           .setReadOptions(ReadOptions.TableReadOptions.newBuilder().build())
           .setFormat(Storage.DataFormat.forNumber(0))
           .setShardingStrategy(Storage.ShardingStrategy.forNumber(0))
           .build();
   ApiFuture<Storage.ReadSession> future =
       baseBigQueryStorageClient.createReadSessionCallable().futureCall(request);
   // Do something.
   Storage.ReadSession response = future.get();
 }
 
Returns
Type Description
UnaryCallable<CreateReadSessionRequest,ReadSession>

finalizeStream(Storage.FinalizeStreamRequest request)

public final void finalizeStream(Storage.FinalizeStreamRequest request)

Causes a single stream in a ReadSession to gracefully stop. This API can be used to dynamically adjust the parallelism of a batch processing task downwards without losing data.

This API does not delete the stream -- it remains visible in the ReadSession, and any data processed by the stream is not released to other streams. However, no additional data will be assigned to the stream once this call completes. Callers must continue reading data on the stream until the end of the stream is reached so that data which has already been assigned to the stream will be processed.

This method will return an error if there are no other live streams in the Session, or if SplitReadStream() has been called on the given Stream.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.FinalizeStreamRequest request =
       Storage.FinalizeStreamRequest.newBuilder()
           .setStream(Storage.Stream.newBuilder().build())
           .build();
   baseBigQueryStorageClient.finalizeStream(request);
 }
 
Parameter
Name Description
request Storage.FinalizeStreamRequest

The request object containing all of the parameters for the API call.

finalizeStream(Storage.Stream stream)

public final void finalizeStream(Storage.Stream stream)

Causes a single stream in a ReadSession to gracefully stop. This API can be used to dynamically adjust the parallelism of a batch processing task downwards without losing data.

This API does not delete the stream -- it remains visible in the ReadSession, and any data processed by the stream is not released to other streams. However, no additional data will be assigned to the stream once this call completes. Callers must continue reading data on the stream until the end of the stream is reached so that data which has already been assigned to the stream will be processed.

This method will return an error if there are no other live streams in the Session, or if SplitReadStream() has been called on the given Stream.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.Stream stream = Storage.Stream.newBuilder().build();
   baseBigQueryStorageClient.finalizeStream(stream);
 }
 
Parameter
Name Description
stream Storage.Stream

Required. Stream to finalize.

finalizeStreamCallable()

public final UnaryCallable<Storage.FinalizeStreamRequest,Empty> finalizeStreamCallable()

Causes a single stream in a ReadSession to gracefully stop. This API can be used to dynamically adjust the parallelism of a batch processing task downwards without losing data.

This API does not delete the stream -- it remains visible in the ReadSession, and any data processed by the stream is not released to other streams. However, no additional data will be assigned to the stream once this call completes. Callers must continue reading data on the stream until the end of the stream is reached so that data which has already been assigned to the stream will be processed.

This method will return an error if there are no other live streams in the Session, or if SplitReadStream() has been called on the given Stream.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.FinalizeStreamRequest request =
       Storage.FinalizeStreamRequest.newBuilder()
           .setStream(Storage.Stream.newBuilder().build())
           .build();
   ApiFuture<Empty> future =
       baseBigQueryStorageClient.finalizeStreamCallable().futureCall(request);
   // Do something.
   future.get();
 }
 
Returns
Type Description
UnaryCallable<FinalizeStreamRequest,Empty>

getSettings()

public final BaseBigQueryStorageSettings getSettings()
Returns
Type Description
BaseBigQueryStorageSettings

getStub()

public BigQueryStorageStub getStub()
Returns
Type Description
BigQueryStorageStub

isShutdown()

public boolean isShutdown()
Returns
Type Description
boolean

isTerminated()

public boolean isTerminated()
Returns
Type Description
boolean

readRowsCallable()

public final ServerStreamingCallable<Storage.ReadRowsRequest,Storage.ReadRowsResponse> readRowsCallable()

Reads rows from the table in the format prescribed by the read session. Each response contains one or more table rows, up to a maximum of 10 MiB per response; read requests which attempt to read individual rows larger than this will fail.

Each request also returns a set of stream statistics reflecting the estimated total number of rows in the read stream. This number is computed based on the total table size and the number of active streams in the read session, and may change as other streams continue to read data.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.ReadRowsRequest request =
       Storage.ReadRowsRequest.newBuilder()
           .setReadPosition(Storage.StreamPosition.newBuilder().build())
           .build();
   ServerStream<Storage.ReadRowsResponse> stream =
       baseBigQueryStorageClient.readRowsCallable().call(request);
   for (Storage.ReadRowsResponse response : stream) {
     // Do something when a response is received.
   }
 }
 
Returns
Type Description
ServerStreamingCallable<ReadRowsRequest,ReadRowsResponse>

shutdown()

public void shutdown()

shutdownNow()

public void shutdownNow()

splitReadStream(Storage.SplitReadStreamRequest request)

public final Storage.SplitReadStreamResponse splitReadStream(Storage.SplitReadStreamRequest request)

Splits a given read stream into two Streams. These streams are referred to as the primary and the residual of the split. The original stream can still be read from in the same manner as before. Both of the returned streams can also be read from, and the total rows returned by both child streams will be the same as the rows read from the original stream.

Moreover, the two child streams will be allocated back to back in the original Stream. Concretely, it is guaranteed that for streams Original, Primary, and Residual, Original[0-j] = Primary[0-j] and Original[j-n] = Residual[0-m] once the streams have been read to completion.

This method is guaranteed to be idempotent.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.SplitReadStreamRequest request =
       Storage.SplitReadStreamRequest.newBuilder()
           .setOriginalStream(Storage.Stream.newBuilder().build())
           .setFraction(-1653751294)
           .build();
   Storage.SplitReadStreamResponse response = baseBigQueryStorageClient.splitReadStream(request);
 }
 
Parameter
Name Description
request Storage.SplitReadStreamRequest

The request object containing all of the parameters for the API call.

Returns
Type Description
Storage.SplitReadStreamResponse
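
The contiguity guarantee above means that reading the primary stream followed by the residual yields the same rows the original stream would have returned. The following sketch (not a generated snippet) illustrates consuming both child streams; it assumes the v1beta1 response accessors getPrimaryStream() and getRemainderStream(), and uses an in-range fraction rather than the generated placeholder value.

```java
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.SplitReadStreamRequest request =
      Storage.SplitReadStreamRequest.newBuilder()
          .setOriginalStream(Storage.Stream.newBuilder().build())
          .setFraction(0.5f) // split roughly in half; must be in (0, 1)
          .build();
  Storage.SplitReadStreamResponse response = baseBigQueryStorageClient.splitReadStream(request);

  // Primary covers Original[0-j]; remainder (residual) covers Original[j-n].
  for (Storage.Stream stream :
      new Storage.Stream[] {response.getPrimaryStream(), response.getRemainderStream()}) {
    Storage.ReadRowsRequest readRequest =
        Storage.ReadRowsRequest.newBuilder()
            .setReadPosition(Storage.StreamPosition.newBuilder().setStream(stream).build())
            .build();
    for (Storage.ReadRowsResponse rows :
        baseBigQueryStorageClient.readRowsCallable().call(readRequest)) {
      // Do something when a response is received.
    }
  }
}
```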

splitReadStream(Storage.Stream originalStream)

public final Storage.SplitReadStreamResponse splitReadStream(Storage.Stream originalStream)

Splits a given read stream into two streams. These streams are referred to as the primary and the residual of the split. The original stream can still be read from in the same manner as before. Both of the returned streams can also be read from, and the total rows returned by both child streams will be the same as the rows read from the original stream.

Moreover, the two child streams will be allocated back to back in the original stream. Concretely, it is guaranteed that for streams Original, Primary, and Residual, Original[0-j] = Primary[0-j] and Original[j-n] = Residual[0-m] once the streams have been read to completion.

This method is guaranteed to be idempotent.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.Stream originalStream = Storage.Stream.newBuilder().build();
   Storage.SplitReadStreamResponse response =
       baseBigQueryStorageClient.splitReadStream(originalStream);
 }
 
Parameter
Name Description
originalStream Storage.Stream

Required. Stream to split.

Returns
Type Description
Storage.SplitReadStreamResponse

splitReadStreamCallable()

public final UnaryCallable<Storage.SplitReadStreamRequest,Storage.SplitReadStreamResponse> splitReadStreamCallable()

Splits a given read stream into two streams. These streams are referred to as the primary and the residual of the split. The original stream can still be read from in the same manner as before. Both of the returned streams can also be read from, and the total rows returned by both child streams will be the same as the rows read from the original stream.

Moreover, the two child streams will be allocated back to back in the original stream. Concretely, it is guaranteed that for streams Original, Primary, and Residual, Original[0-j] = Primary[0-j] and Original[j-n] = Residual[0-m] once the streams have been read to completion.

This method is guaranteed to be idempotent.

Sample code:


 // This snippet has been automatically generated and should be regarded as a code template only.
 // It will require modifications to work:
 // - It may require correct/in-range values for request initialization.
 // - It may require specifying regional endpoints when creating the service client as shown in
 // https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
 try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
   Storage.SplitReadStreamRequest request =
       Storage.SplitReadStreamRequest.newBuilder()
           .setOriginalStream(Storage.Stream.newBuilder().build())
           .setFraction(-1653751294)
           .build();
   ApiFuture<Storage.SplitReadStreamResponse> future =
       baseBigQueryStorageClient.splitReadStreamCallable().futureCall(request);
   // Do something.
   Storage.SplitReadStreamResponse response = future.get();
 }
 
Returns
Type Description
UnaryCallable<SplitReadStreamRequest,SplitReadStreamResponse>