Cloud Bigtable V2 API - Class Google::Cloud::Bigtable::V2::PartialResultSet (v1.6.0)

Reference documentation and code samples for the Cloud Bigtable V2 API class Google::Cloud::Bigtable::V2::PartialResultSet.

A partial result set from the streaming query API. Cloud Bigtable clients buffer partial results received in this message until a resume_token is received.

The pseudocode below describes how to buffer and parse a stream of PartialResultSet messages.

Having:

  • a queue of row results waiting to be returned (queue)
  • an extensible buffer of bytes (buffer)
  • a place to keep track of the most recent resume_token

  for each PartialResultSet p received {
    if p.reset {
      ensure queue is empty
      ensure buffer is empty
    }
    if p.estimated_batch_size != 0 {
      (optional) ensure buffer is sized to at least p.estimated_batch_size
    }
    if p.proto_rows_batch is set {
      append p.proto_rows_batch.bytes to buffer
    }
    if p.batch_checksum is set and buffer is not empty {
      validate the checksum matches the contents of buffer
      (see comments on batch_checksum)
      parse buffer as ProtoRows message, clearing buffer
      add parsed rows to end of queue
    }
    if p.resume_token is set {
      release results in queue
      save p.resume_token in resume_token
    }
  }
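
As a concrete, non-authoritative illustration of that loop, the Ruby sketch below buffers a stream of PartialResultSet messages. It assumes the third-party digest-crc gem for CRC32C, assumes ProtoRowsBatch#batch_data carries the serialized bytes, treats a non-zero batch_checksum as "set", and stops at decoded ProtoRows; none of the helper names are part of this API.

  require "digest/crc32c"            # third-party digest-crc gem (assumed available)
  require "google/cloud/bigtable/v2"

  # Illustrative sketch of the buffering loop; not part of the client library.
  def process_partial_result_sets stream
    queue        = []                # decoded ProtoRows waiting to be released
    buffer       = String.new        # binary buffer of concatenated batch bytes
    resume_token = nil               # most recent resume_token seen

    stream.each do |p|
      if p.reset
        queue.clear                  # discard everything buffered since the last token
        buffer = String.new
      end

      # (optional) p.estimated_batch_size could be used here to pre-size buffer.

      buffer << p.proto_rows_batch.batch_data if p.proto_rows_batch

      # Treat a non-zero checksum as "set" for the purposes of this sketch.
      if p.batch_checksum != 0 && !buffer.empty?
        actual = Digest::CRC32c.checksum buffer
        raise "CRC32C mismatch for batch" unless actual == p.batch_checksum
        queue << Google::Cloud::Bigtable::V2::ProtoRows.decode(buffer)
        buffer = String.new
      end

      unless p.resume_token.empty?
        resume_token = p.resume_token
        queue.each { |rows| yield rows, resume_token }   # now safe to release
        queue.clear
      end
    end
  end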

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#batch_checksum

def batch_checksum() -> ::Integer
Returns
  • (::Integer) — CRC32C checksum of concatenated partial_rows data for the current batch.

    When present, the buffered data from partial_rows forms a complete parseable message of the appropriate type.

    The client should mark the end of a parseable message and prepare to receive a new one starting from the next PartialResultSet message. Clients must verify the checksum of the serialized batch before yielding it to the caller.

    This does NOT mean the values can be yielded to the callers since a resume_token is required to safely do so.

    If resume_token is non-empty and any data has been received since the last one, this field is guaranteed to be non-empty. In other words, clients may assume that a batch will never cross a resume_token boundary.
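
For illustration only, a client-side check over the concatenated partial_rows bytes might look like the sketch below; it assumes the third-party digest-crc gem, and the helper name is hypothetical.

  require "digest/crc32c"   # third-party digest-crc gem (assumed available)

  # Hypothetical helper: verify a completed batch before parsing it as ProtoRows.
  # `buffer` is the concatenation of the batch's serialized bytes.
  def verify_batch_checksum! buffer, expected
    actual = Digest::CRC32c.checksum buffer
    raise "CRC32C mismatch: got #{actual}, expected #{expected}" unless actual == expected
  end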

#batch_checksum=

def batch_checksum=(value) -> ::Integer
Parameter
  • value (::Integer) — CRC32C checksum of concatenated partial_rows data for the current batch.

    When present, the buffered data from partial_rows forms a complete parseable message of the appropriate type.

    The client should mark the end of a parseable message and prepare to receive a new one starting from the next PartialResultSet message. Clients must verify the checksum of the serialized batch before yielding it to the caller.

    This does NOT mean the values can be yielded to the callers since a resume_token is required to safely do so.

    If resume_token is non-empty and any data has been received since the last one, this field is guaranteed to be non-empty. In other words, clients may assume that a batch will never cross a resume_token boundary.

Returns
  • (::Integer) — CRC32C checksum of concatenated partial_rows data for the current batch.

    When present, the buffered data from partial_rows forms a complete parseable message of the appropriate type.

    The client should mark the end of a parseable message and prepare to receive a new one starting from the next PartialResultSet message. Clients must verify the checksum of the serialized batch before yielding it to the caller.

    This does NOT mean the values can be yielded to the callers since a resume_token is required to safely do so.

    If resume_token is non-empty and any data has been received since the last one, this field is guaranteed to be non-empty. In other words, clients may assume that a batch will never cross a resume_token boundary.

#estimated_batch_size

def estimated_batch_size() -> ::Integer
Returns
  • (::Integer) — Estimated size of the buffer required to hold the next batch of results.

    This value will be sent with the first partial_rows of a batch. That is, on the first partial_rows received in a stream, on the first message after a batch_checksum message, and any time reset is true.

    The client can use this estimate to allocate a buffer for the next batch of results. This helps minimize the number of allocations required, though the buffer size may still need to be increased if the estimate is too low.
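
As a sketch using only core Ruby (the helper name is hypothetical), the estimate can be passed as a capacity hint when starting a new batch buffer; the string still grows automatically if the estimate is too low.

  # Hypothetical helper: start a fresh batch buffer, pre-sized from the estimate.
  def new_batch_buffer estimated_batch_size
    if estimated_batch_size > 0
      String.new(capacity: estimated_batch_size)  # empty binary string, pre-allocated
    else
      String.new                                  # no estimate available
    end
  end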

#estimated_batch_size=

def estimated_batch_size=(value) -> ::Integer
Parameter
  • value (::Integer) — Estimated size of the buffer required to hold the next batch of results.

    This value will be sent with the first partial_rows of a batch. That is, on the first partial_rows received in a stream, on the first message after a batch_checksum message, and any time reset is true.

    The client can use this estimate to allocate a buffer for the next batch of results. This helps minimize the number of allocations required, though the buffer size may still need to be increased if the estimate is too low.

Returns
  • (::Integer) — Estimated size of the buffer required to hold the next batch of results.

    This value will be sent with the first partial_rows of a batch. That is, on the first partial_rows received in a stream, on the first message after a batch_checksum message, and any time reset is true.

    The client can use this estimate to allocate a buffer for the next batch of results. This helps minimize the number of allocations required, though the buffer size may still need to be increased if the estimate is too low.

#proto_rows_batch

def proto_rows_batch() -> ::Google::Cloud::Bigtable::V2::ProtoRowsBatch
Returns
  • (::Google::Cloud::Bigtable::V2::ProtoRowsBatch) — Partial rows in serialized ProtoRows format.

#proto_rows_batch=

def proto_rows_batch=(value) -> ::Google::Cloud::Bigtable::V2::ProtoRowsBatch
Parameter
  • value (::Google::Cloud::Bigtable::V2::ProtoRowsBatch) — Partial rows in serialized ProtoRows format.
Returns
  • (::Google::Cloud::Bigtable::V2::ProtoRowsBatch) — Partial rows in serialized ProtoRows format.

#reset

def reset() -> ::Boolean
Returns
  • (::Boolean) — If true, any data buffered since the last non-empty resume_token must be discarded before the other parts of this message, if any, are handled.

#reset=

def reset=(value) -> ::Boolean
Parameter
  • value (::Boolean) — If true, any data buffered since the last non-empty resume_token must be discarded before the other parts of this message, if any, are handled.
Returns
  • (::Boolean) — If true, any data buffered since the last non-empty resume_token must be discarded before the other parts of this message, if any, are handled.

#resume_token

def resume_token() -> ::String
Returns
  • (::String) — An opaque token sent by the server to allow query resumption and signal that the buffered values constructed from received partial_rows can be yielded to the caller. Clients can provide this token in a subsequent request to resume the result stream from the current point.

    When resume_token is non-empty, the buffered values received from partial_rows since the last non-empty resume_token can be yielded to the callers, provided that the client keeps the value of resume_token and uses it on subsequent retries.

    A resume_token may be sent without information in partial_rows to checkpoint the progress of a sparse query. Any previous partial_rows data should still be yielded in this case, and the new resume_token should be saved for future retries as normal.

    A resume_token will only be sent on a boundary where there is either no ongoing result batch, or batch_checksum is also populated.

    The server will also send a sentinel resume_token when the last batch of partial_rows is sent. If the client retries the ExecuteQueryRequest with the sentinel resume_token, the server will emit it again without any data in partial_rows, then return OK.
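
As a hedged sketch of resumption (request construction and error handling are assumed to exist elsewhere, and handle_partial_result_set is a hypothetical helper), a retry passes the saved token back on the next ExecuteQueryRequest:

  # Sketch: resume an ExecuteQuery stream from the last saved token.
  # `client` is a Google::Cloud::Bigtable::V2::Bigtable::Client and
  # `base_request` an already-built ExecuteQueryRequest (construction omitted).
  retry_request = base_request.dup
  retry_request.resume_token = saved_resume_token

  client.execute_query(retry_request).each do |response|
    # Responses that carry data wrap a PartialResultSet in `results`;
    # buffer it exactly as in the pseudocode at the top of this page.
    handle_partial_result_set response.results if response.results
  end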

#resume_token=

def resume_token=(value) -> ::String
Parameter
  • value (::String) — An opaque token sent by the server to allow query resumption and signal that the buffered values constructed from received partial_rows can be yielded to the caller. Clients can provide this token in a subsequent request to resume the result stream from the current point.

    When resume_token is non-empty, the buffered values received from partial_rows since the last non-empty resume_token can be yielded to the callers, provided that the client keeps the value of resume_token and uses it on subsequent retries.

    A resume_token may be sent without information in partial_rows to checkpoint the progress of a sparse query. Any previous partial_rows data should still be yielded in this case, and the new resume_token should be saved for future retries as normal.

    A resume_token will only be sent on a boundary where there is either no ongoing result batch, or batch_checksum is also populated.

    The server will also send a sentinel resume_token when the last batch of partial_rows is sent. If the client retries the ExecuteQueryRequest with the sentinel resume_token, the server will emit it again without any data in partial_rows, then return OK.

Returns
  • (::String) — An opaque token sent by the server to allow query resumption and signal that the buffered values constructed from received partial_rows can be yielded to the caller. Clients can provide this token in a subsequent request to resume the result stream from the current point.

    When resume_token is non-empty, the buffered values received from partial_rows since the last non-empty resume_token can be yielded to the callers, provided that the client keeps the value of resume_token and uses it on subsequent retries.

    A resume_token may be sent without information in partial_rows to checkpoint the progress of a sparse query. Any previous partial_rows data should still be yielded in this case, and the new resume_token should be saved for future retries as normal.

    A resume_token will only be sent on a boundary where there is either no ongoing result batch, or batch_checksum is also populated.

    The server will also send a sentinel resume_token when the last batch of partial_rows is sent. If the client retries the ExecuteQueryRequest with the sentinel resume_token, the server will emit it again without any data in partial_rows, then return OK.