Reference documentation and code samples for the Cloud Bigtable V2 API class Google::Cloud::Bigtable::V2::PartialResultSet.
A partial result set from the streaming query API.
Cloud Bigtable clients buffer partial results received in this message until a resume_token is received.
The pseudocode below describes how to buffer and parse a stream of PartialResultSet messages.
Having:
- queue of row results waiting to be returned: queue
- extensible buffer of bytes: buffer
- a place to keep track of the most recent resume_token

for each PartialResultSet p received {
  if p.reset {
    ensure queue is empty
    ensure buffer is empty
  }
  if p.estimated_batch_size != 0 {
    (optional) ensure buffer is sized to at least p.estimated_batch_size
  }
  if p.proto_rows_batch is set {
    append p.proto_rows_batch.bytes to buffer
  }
  if p.batch_checksum is set and buffer is not empty {
    validate the checksum matches the contents of buffer
      (see comments on batch_checksum)
    parse buffer as ProtoRows message, clearing buffer
    add parsed rows to end of queue
  }
  if p.resume_token is set {
    release results in queue
    save p.resume_token in resume_token
  }
}
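The buffering loop above can be sketched in Ruby. This is a minimal illustration, not client library code: Partial is a hypothetical stand-in Struct for PartialResultSet, messages carry raw strings rather than serialized ProtoRows bytes, parse_proto_rows is a placeholder parser, and checksum validation is elided (see batch_checksum below).

```ruby
# Minimal sketch of the buffering algorithm. Partial is a hypothetical
# stand-in for Google::Cloud::Bigtable::V2::PartialResultSet, and
# parse_proto_rows is a placeholder for real ProtoRows deserialization.
Partial = Struct.new(:reset, :estimated_batch_size, :proto_rows_batch,
                     :batch_checksum, :resume_token, keyword_init: true)

# Placeholder parser: treats the assembled batch as comma-separated rows.
def parse_proto_rows(bytes)
  bytes.split(",")
end

def consume(stream)
  queue = []          # parsed rows waiting to be released
  buffer = +""        # bytes of the batch currently being assembled
  resume_token = nil
  released = []

  stream.each do |p|
    if p.reset
      queue.clear     # discard everything since the last resume_token
      buffer.clear
    end
    # (optional) pre-size buffer here using p.estimated_batch_size
    buffer << p.proto_rows_batch if p.proto_rows_batch
    if p.batch_checksum && !buffer.empty?
      # a real client must validate p.batch_checksum (CRC32C) against buffer
      queue.concat(parse_proto_rows(buffer))
      buffer.clear
    end
    if p.resume_token
      released.concat(queue)  # checkpoint reached: safe to yield rows
      queue.clear
      resume_token = p.resume_token
    end
  end
  [released, resume_token]
end
```

Note that rows are only released when a resume_token arrives; a batch completed by batch_checksum still waits in queue until then.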
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#batch_checksum
def batch_checksum() -> ::Integer
- (::Integer) — CRC32C checksum of concatenated partial_rows data for the current batch. When present, the buffered data from partial_rows forms a complete parseable message of the appropriate type. The client should mark the end of a parseable message and prepare to receive a new one starting from the next PartialResultSet message. Clients must verify the checksum of the serialized batch before yielding it to the caller. This does NOT mean the values can be yielded to the callers, since a resume_token is required to safely do so. If resume_token is non-empty and any data has been received since the last one, this field is guaranteed to be non-empty. In other words, clients may assume that a batch will never cross a resume_token boundary.
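The checksum is the CRC-32C (Castagnoli) of the concatenated batch bytes. Ruby's standard library only ships the plain CRC-32 (in Zlib), so the bitwise routine below is a slow reference sketch for illustration; a real client would use an optimized implementation such as the one in the digest-crc gem. validate_batch! is a hypothetical helper name.

```ruby
# Reference CRC-32C (Castagnoli, reflected polynomial 0x82F63B78).
# For illustration only: bit-at-a-time, far slower than table-driven
# or hardware-accelerated implementations used by real clients.
def crc32c(bytes)
  crc = 0xFFFFFFFF
  bytes.each_byte do |b|
    crc ^= b
    8.times do
      crc = (crc & 1).zero? ? (crc >> 1) : ((crc >> 1) ^ 0x82F63B78)
    end
  end
  crc ^ 0xFFFFFFFF
end

# Hypothetical helper: validate the assembled batch before parsing it.
def validate_batch!(buffer, expected)
  actual = crc32c(buffer)
  return if actual == expected
  raise "batch checksum mismatch: got #{actual}, want #{expected}"
end
```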
#batch_checksum=
def batch_checksum=(value) -> ::Integer
- value (::Integer) — CRC32C checksum of concatenated partial_rows data for the current batch. When present, the buffered data from partial_rows forms a complete parseable message of the appropriate type. The client should mark the end of a parseable message and prepare to receive a new one starting from the next PartialResultSet message. Clients must verify the checksum of the serialized batch before yielding it to the caller. This does NOT mean the values can be yielded to the callers, since a resume_token is required to safely do so. If resume_token is non-empty and any data has been received since the last one, this field is guaranteed to be non-empty. In other words, clients may assume that a batch will never cross a resume_token boundary.
- (::Integer) — CRC32C checksum of concatenated partial_rows data for the current batch. When present, the buffered data from partial_rows forms a complete parseable message of the appropriate type. The client should mark the end of a parseable message and prepare to receive a new one starting from the next PartialResultSet message. Clients must verify the checksum of the serialized batch before yielding it to the caller. This does NOT mean the values can be yielded to the callers, since a resume_token is required to safely do so. If resume_token is non-empty and any data has been received since the last one, this field is guaranteed to be non-empty. In other words, clients may assume that a batch will never cross a resume_token boundary.
#estimated_batch_size
def estimated_batch_size() -> ::Integer
- (::Integer) — Estimated size of the buffer required to hold the next batch of results. This value will be sent with the first partial_rows of a batch. That is, on the first partial_rows received in a stream, on the first message after a batch_checksum message, and any time reset is true. The client can use this estimate to allocate a buffer for the next batch of results. This helps minimize the number of allocations required, though the buffer size may still need to be increased if the estimate is too low.
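One way a Ruby client might act on the estimate is to pass it as a capacity hint when allocating the batch buffer. This is an illustrative sketch, not generated client code: Batch is a stand-in Struct, and buffer_for is a hypothetical helper. String.new(capacity:) is only an allocator hint; the string still grows automatically if the server's estimate was too low.

```ruby
# Illustrative sketch: pre-sizing the byte buffer from
# estimated_batch_size. Batch stands in for the real message type.
Batch = Struct.new(:estimated_batch_size, keyword_init: true)

def buffer_for(p, current = nil)
  if p.estimated_batch_size.to_i > 0
    # A new batch is starting: allocate once at the server's estimate.
    String.new(capacity: p.estimated_batch_size, encoding: Encoding::BINARY)
  else
    # Mid-batch message (estimate is 0): keep the existing buffer.
    current || String.new(encoding: Encoding::BINARY)
  end
end
```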
#estimated_batch_size=
def estimated_batch_size=(value) -> ::Integer
- value (::Integer) — Estimated size of the buffer required to hold the next batch of results. This value will be sent with the first partial_rows of a batch. That is, on the first partial_rows received in a stream, on the first message after a batch_checksum message, and any time reset is true. The client can use this estimate to allocate a buffer for the next batch of results. This helps minimize the number of allocations required, though the buffer size may still need to be increased if the estimate is too low.
- (::Integer) — Estimated size of the buffer required to hold the next batch of results. This value will be sent with the first partial_rows of a batch. That is, on the first partial_rows received in a stream, on the first message after a batch_checksum message, and any time reset is true. The client can use this estimate to allocate a buffer for the next batch of results. This helps minimize the number of allocations required, though the buffer size may still need to be increased if the estimate is too low.
#proto_rows_batch
def proto_rows_batch() -> ::Google::Cloud::Bigtable::V2::ProtoRowsBatch
- (::Google::Cloud::Bigtable::V2::ProtoRowsBatch) — Partial rows in serialized ProtoRows format.
#proto_rows_batch=
def proto_rows_batch=(value) -> ::Google::Cloud::Bigtable::V2::ProtoRowsBatch
- value (::Google::Cloud::Bigtable::V2::ProtoRowsBatch) — Partial rows in serialized ProtoRows format.
- (::Google::Cloud::Bigtable::V2::ProtoRowsBatch) — Partial rows in serialized ProtoRows format.
#reset
def reset() -> ::Boolean
- (::Boolean) — If true, any data buffered since the last non-empty resume_token must be discarded before the other parts of this message, if any, are handled.
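Because reset must be honored before any other field of the same message, a client would apply it first in its per-message loop. A small hypothetical helper (Flag is a stand-in Struct; apply_reset! is not a real client method):

```ruby
# Hypothetical sketch: discard all un-checkpointed client state when
# reset is true, before handling the rest of the message.
Flag = Struct.new(:reset, keyword_init: true)

def apply_reset!(p, queue, buffer)
  return unless p.reset
  queue.clear    # rows parsed but not yet released by a resume_token
  buffer.clear   # partially assembled batch bytes
end
```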
#reset=
def reset=(value) -> ::Boolean
- value (::Boolean) — If true, any data buffered since the last non-empty resume_token must be discarded before the other parts of this message, if any, are handled.
- (::Boolean) — If true, any data buffered since the last non-empty resume_token must be discarded before the other parts of this message, if any, are handled.
#resume_token
def resume_token() -> ::String
- (::String) — An opaque token sent by the server to allow query resumption and signal that the buffered values constructed from received partial_rows can be yielded to the caller. Clients can provide this token in a subsequent request to resume the result stream from the current point. When resume_token is non-empty, the buffered values received from partial_rows since the last non-empty resume_token can be yielded to the callers, provided that the client keeps the value of resume_token and uses it on subsequent retries. A resume_token may be sent without information in partial_rows to checkpoint the progress of a sparse query. Any previous partial_rows data should still be yielded in this case, and the new resume_token should be saved for future retries as normal. A resume_token will only be sent on a boundary where there is either no ongoing result batch, or batch_checksum is also populated. The server will also send a sentinel resume_token when the last batch of partial_rows is sent. If the client retries the ExecuteQueryRequest with the sentinel resume_token, the server will emit it again without any data in partial_rows, then return OK.
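To make the retry contract concrete, here is a hypothetical, self-contained simulation; none of these names (FakeStream, Msg, TransientError, run_with_resume) are part of the Bigtable API. A fake server replays checkpointed messages after a given token and fails once mid-stream; the client releases rows only at resume_token boundaries and retries from the last saved token, so no row is lost or duplicated.

```ruby
# Hypothetical simulation of resumption via resume_token.
TransientError = Class.new(StandardError)
Msg = Struct.new(:rows, :resume_token, keyword_init: true)

# Fake server: replays the stream starting after `token`, and raises
# once before the second checkpoint to simulate a broken connection.
class FakeStream
  CHECKPOINTS = [
    Msg.new(rows: ["a", "b"], resume_token: "t1"),
    Msg.new(rows: ["c"],      resume_token: "t2")
  ].freeze

  def initialize
    @failed = false
  end

  def read(token)
    start = token ? CHECKPOINTS.index { |m| m.resume_token == token } + 1 : 0
    CHECKPOINTS[start..].each_with_index do |m, i|
      if !@failed && start.zero? && i == 1
        @failed = true
        raise TransientError  # simulate a mid-stream failure, once
      end
      yield m
    end
  end
end

def run_with_resume(stream)
  released = []
  queue = []
  token = nil
  begin
    stream.read(token) do |m|
      queue.concat(m.rows)
      if m.resume_token            # checkpoint: now safe to release
        released.concat(queue)
        queue.clear
        token = m.resume_token     # saved for any future retry
      end
    end
  rescue TransientError
    queue.clear                    # un-checkpointed rows will be re-sent
    retry                          # resumes from the last saved token
  end
  [released, token]
end
```

The failure strikes after checkpoint t1, so the retry resumes from t1 and only row "c" is replayed; rows "a" and "b" are delivered exactly once.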
#resume_token=
def resume_token=(value) -> ::String
- value (::String) — An opaque token sent by the server to allow query resumption and signal that the buffered values constructed from received partial_rows can be yielded to the caller. Clients can provide this token in a subsequent request to resume the result stream from the current point. When resume_token is non-empty, the buffered values received from partial_rows since the last non-empty resume_token can be yielded to the callers, provided that the client keeps the value of resume_token and uses it on subsequent retries. A resume_token may be sent without information in partial_rows to checkpoint the progress of a sparse query. Any previous partial_rows data should still be yielded in this case, and the new resume_token should be saved for future retries as normal. A resume_token will only be sent on a boundary where there is either no ongoing result batch, or batch_checksum is also populated. The server will also send a sentinel resume_token when the last batch of partial_rows is sent. If the client retries the ExecuteQueryRequest with the sentinel resume_token, the server will emit it again without any data in partial_rows, then return OK.
- (::String) — An opaque token sent by the server to allow query resumption and signal that the buffered values constructed from received partial_rows can be yielded to the caller. Clients can provide this token in a subsequent request to resume the result stream from the current point. When resume_token is non-empty, the buffered values received from partial_rows since the last non-empty resume_token can be yielded to the callers, provided that the client keeps the value of resume_token and uses it on subsequent retries. A resume_token may be sent without information in partial_rows to checkpoint the progress of a sparse query. Any previous partial_rows data should still be yielded in this case, and the new resume_token should be saved for future retries as normal. A resume_token will only be sent on a boundary where there is either no ongoing result batch, or batch_checksum is also populated. The server will also send a sentinel resume_token when the last batch of partial_rows is sent. If the client retries the ExecuteQueryRequest with the sentinel resume_token, the server will emit it again without any data in partial_rows, then return OK.