Cloud Speech V2 Client - Class Recognizer (1.11.2)

Reference documentation and code samples for the Cloud Speech V2 Client class Recognizer.

A Recognizer message. Stores recognition configuration and metadata.

Generated from protobuf message google.cloud.speech.v2.Recognizer

Methods

__construct

Constructor.

Parameters
Name | Description
data array

Optional. Data for populating the Message object.

↳ name string

Output only. The resource name of the Recognizer. Format: projects/{project}/locations/{location}/recognizers/{recognizer}.

↳ uid string

Output only. System-assigned unique identifier for the Recognizer.

↳ display_name string

User-settable, human-readable name for the Recognizer. Must be 63 characters or less.

↳ model string

Required. Which model to use for recognition requests. Select the model best suited to your domain to get the best results. Supported models:

  • latest_long: Best for long-form content like media or conversation.
  • latest_short: Best for short-form content like commands or single-shot directed speech. When using this model, the service stops transcribing audio after the first utterance is detected and completed. SEPARATE_RECOGNITION_PER_CHANNEL is not supported; multi-channel audio is accepted, but only the first channel is processed and transcribed.
  • telephony: Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate).
  • medical_conversation: For conversations between a medical provider (for example, a doctor or nurse) and a patient. Use this model when both a provider and a patient are speaking. Words uttered by each speaker are automatically detected and labeled in the returned transcript. For supported features, see the medical models documentation.
  • medical_dictation: For dictated notes spoken by a single medical provider (for example, a doctor dictating notes about a patient's blood test results). For supported features, see the medical models documentation.
  • usm: The next generation of Speech-to-Text models from Google.

↳ language_codes array

Required. The language of the supplied audio as a BCP-47 language tag. Supported languages for each model are listed at https://cloud.google.com/speech-to-text/docs/languages. If additional languages are provided, the recognition result will be in the most likely language detected and will include the language tag of the detected language. When you create or update a Recognizer, these values are stored in normalized BCP-47 form; for example, "en-us" is stored as "en-US".

↳ default_recognition_config Google\Cloud\Speech\V2\RecognitionConfig

Default configuration to use for requests with this Recognizer. This can be overridden by the inline configuration in the RecognizeRequest.config field.

↳ annotations array|Google\Protobuf\Internal\MapField

Allows users to store small amounts of arbitrary data. Both the key and the value must each be 63 characters or less. At most 100 annotations are allowed.

↳ state int

Output only. The Recognizer lifecycle state.

↳ create_time Google\Protobuf\Timestamp

Output only. Creation time.

↳ update_time Google\Protobuf\Timestamp

Output only. The most recent time this Recognizer was modified.

↳ delete_time Google\Protobuf\Timestamp

Output only. The time at which this Recognizer was requested for deletion.

↳ expire_time Google\Protobuf\Timestamp

Output only. The time at which this Recognizer will be purged.

↳ etag string

Output only. This checksum is computed by the server based on the value of other fields. This may be sent on update, undelete, and delete requests to ensure the client has an up-to-date value before proceeding.

↳ reconciling bool

Output only. Whether this Recognizer is in the process of being updated.

↳ kms_key_name string

Output only. The KMS key name with which the Recognizer is encrypted. The expected format is projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}.

↳ kms_key_version_name string

Output only. The KMS key version name with which the Recognizer is encrypted. The expected format is projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}/cryptoKeyVersions/{crypto_key_version}.
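
For illustration, a minimal sketch of populating a Recognizer through the constructor's data array. The field values are placeholders, and the RecognitionConfig and AutoDetectDecodingConfig field names are assumed from their own reference pages in this library:

    use Google\Cloud\Speech\V2\AutoDetectDecodingConfig;
    use Google\Cloud\Speech\V2\RecognitionConfig;
    use Google\Cloud\Speech\V2\Recognizer;

    // Output-only fields (name, uid, state, timestamps, ...) are assigned by
    // the server and are omitted here.
    $recognizer = new Recognizer([
        'display_name' => 'media-transcriber',          // 63 characters or less
        'model' => 'latest_long',
        'language_codes' => ['en-US'],
        'default_recognition_config' => new RecognitionConfig([
            'auto_decoding_config' => new AutoDetectDecodingConfig(),
        ]),
        'annotations' => ['team' => 'media'],           // at most 100 entries
    ]);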

getName

Output only. The resource name of the Recognizer.

Format: projects/{project}/locations/{location}/recognizers/{recognizer}.

Returns
Type | Description
string

setName

Output only. The resource name of the Recognizer.

Format: projects/{project}/locations/{location}/recognizers/{recognizer}.

Parameter
Name | Description
var | string

Returns
Type | Description
$this

getUid

Output only. System-assigned unique identifier for the Recognizer.

Returns
Type | Description
string

setUid

Output only. System-assigned unique identifier for the Recognizer.

Parameter
Name | Description
var | string

Returns
Type | Description
$this

getDisplayName

User-settable, human-readable name for the Recognizer. Must be 63 characters or less.

Returns
Type | Description
string

setDisplayName

User-settable, human-readable name for the Recognizer. Must be 63 characters or less.

Parameter
Name | Description
var | string

Returns
Type | Description
$this

getModel

Required. Which model to use for recognition requests. Select the model best suited to your domain to get the best results.

Supported models:

  • latest_long: Best for long-form content like media or conversation.
  • latest_short: Best for short-form content like commands or single-shot directed speech. When using this model, the service stops transcribing audio after the first utterance is detected and completed. SEPARATE_RECOGNITION_PER_CHANNEL is not supported; multi-channel audio is accepted, but only the first channel is processed and transcribed.
  • telephony: Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate).
  • medical_conversation: For conversations between a medical provider (for example, a doctor or nurse) and a patient. Use this model when both a provider and a patient are speaking. Words uttered by each speaker are automatically detected and labeled in the returned transcript. For supported features, see the medical models documentation.
  • medical_dictation: For dictated notes spoken by a single medical provider (for example, a doctor dictating notes about a patient's blood test results). For supported features, see the medical models documentation.
  • usm: The next generation of Speech-to-Text models from Google.
Returns
Type | Description
string

setModel

Required. Which model to use for recognition requests. Select the model best suited to your domain to get the best results.

Supported models:

  • latest_long: Best for long-form content like media or conversation.
  • latest_short: Best for short-form content like commands or single-shot directed speech. When using this model, the service stops transcribing audio after the first utterance is detected and completed. SEPARATE_RECOGNITION_PER_CHANNEL is not supported; multi-channel audio is accepted, but only the first channel is processed and transcribed.
  • telephony: Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate).
  • medical_conversation: For conversations between a medical provider (for example, a doctor or nurse) and a patient. Use this model when both a provider and a patient are speaking. Words uttered by each speaker are automatically detected and labeled in the returned transcript. For supported features, see the medical models documentation.
  • medical_dictation: For dictated notes spoken by a single medical provider (for example, a doctor dictating notes about a patient's blood test results). For supported features, see the medical models documentation.
  • usm: The next generation of Speech-to-Text models from Google.
Parameter
Name | Description
var | string

Returns
Type | Description
$this
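
As a quick sketch, selecting a model is a plain string assignment, and because the setter returns $this the calls can be chained:

    use Google\Cloud\Speech\V2\Recognizer;

    $recognizer = new Recognizer();
    // Each setter returns $this, so configuration calls chain naturally.
    $recognizer->setModel('latest_short')
               ->setLanguageCodes(['en-US']);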

getLanguageCodes

Required. The language of the supplied audio as a BCP-47 language tag.

Supported languages for each model are listed at https://cloud.google.com/speech-to-text/docs/languages. If additional languages are provided, the recognition result will be in the most likely language detected and will include the language tag of the detected language. When you create or update a Recognizer, these values are stored in normalized BCP-47 form; for example, "en-us" is stored as "en-US".

Returns
Type | Description
Google\Protobuf\Internal\RepeatedField

setLanguageCodes

Required. The language of the supplied audio as a BCP-47 language tag.

Supported languages for each model are listed at https://cloud.google.com/speech-to-text/docs/languages. If additional languages are provided, the recognition result will be in the most likely language detected and will include the language tag of the detected language. When you create or update a Recognizer, these values are stored in normalized BCP-47 form; for example, "en-us" is stored as "en-US".

Parameter
Name | Description
var | string[]

Returns
Type | Description
$this
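
A short sketch of the round trip: setLanguageCodes accepts a plain PHP array, while getLanguageCodes returns a RepeatedField, which is iterable and countable like an array:

    $recognizer->setLanguageCodes(['en-US', 'es-US']);

    foreach ($recognizer->getLanguageCodes() as $code) {
        echo $code, PHP_EOL;    // "en-US", then "es-US"
    }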

getDefaultRecognitionConfig

Default configuration to use for requests with this Recognizer.

This can be overridden by the inline configuration in the RecognizeRequest.config field.

Returns
Type | Description
Google\Cloud\Speech\V2\RecognitionConfig|null

hasDefaultRecognitionConfig

clearDefaultRecognitionConfig

setDefaultRecognitionConfig

Default configuration to use for requests with this Recognizer.

This can be overridden by the inline configuration in the RecognizeRequest.config field.

Parameter
Name | Description
var | Google\Cloud\Speech\V2\RecognitionConfig

Returns
Type | Description
$this
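
A minimal sketch of attaching a default configuration, assuming the RecognitionConfig field names documented on its own reference page. Any inline RecognizeRequest.config overrides these defaults at request time:

    use Google\Cloud\Speech\V2\AutoDetectDecodingConfig;
    use Google\Cloud\Speech\V2\RecognitionConfig;

    $recognizer->setDefaultRecognitionConfig(new RecognitionConfig([
        'auto_decoding_config' => new AutoDetectDecodingConfig(),
    ]));

    // hasDefaultRecognitionConfig() and clearDefaultRecognitionConfig() manage
    // the presence of this optional submessage.
    if ($recognizer->hasDefaultRecognitionConfig()) {
        $config = $recognizer->getDefaultRecognitionConfig();
    }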

getAnnotations

Allows users to store small amounts of arbitrary data.

Both the key and the value must each be 63 characters or less. At most 100 annotations are allowed.

Returns
Type | Description
Google\Protobuf\Internal\MapField

setAnnotations

Allows users to store small amounts of arbitrary data.

Both the key and the value must each be 63 characters or less. At most 100 annotations are allowed.

Parameter
Name | Description
var | array|Google\Protobuf\Internal\MapField

Returns
Type | Description
$this
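
A sketch of the map round trip: setAnnotations accepts a plain associative array, and getAnnotations returns a MapField that supports array-style access:

    $recognizer->setAnnotations([
        'cost-center' => 'media-ops',   // keys and values: 63 characters or less
        'env' => 'prod',
    ]);

    $annotations = $recognizer->getAnnotations();   // MapField
    if (isset($annotations['env'])) {
        echo $annotations['env'], PHP_EOL;          // "prod"
    }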

getState

Output only. The Recognizer lifecycle state.

Returns
Type | Description
int

setState

Output only. The Recognizer lifecycle state.

Parameter
Name | Description
var | int

Returns
Type | Description
$this
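
The int is a value of the nested lifecycle enum. A sketch of reading it, assuming the generated Google\Cloud\Speech\V2\Recognizer\State class:

    use Google\Cloud\Speech\V2\Recognizer\State;

    if ($recognizer->getState() === State::ACTIVE) {
        // The Recognizer is ready to serve recognition requests.
    }

    // Generated protobuf enums can also map values back to names.
    echo State::name($recognizer->getState()), PHP_EOL;    // e.g. "ACTIVE"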

getCreateTime

Output only. Creation time.

Returns
Type | Description
Google\Protobuf\Timestamp|null

hasCreateTime

clearCreateTime

setCreateTime

Output only. Creation time.

Parameter
Name | Description
var | Google\Protobuf\Timestamp

Returns
Type | Description
$this
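
A sketch of converting the protobuf Timestamp to a native DateTime via the toDateTime() helper on Google\Protobuf\Timestamp:

    $createTime = $recognizer->getCreateTime();
    if ($createTime !== null) {
        // toDateTime() yields a \DateTime in UTC.
        echo $createTime->toDateTime()->format(\DateTime::ATOM), PHP_EOL;
    }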

getUpdateTime

Output only. The most recent time this Recognizer was modified.

Returns
Type | Description
Google\Protobuf\Timestamp|null

hasUpdateTime

clearUpdateTime

setUpdateTime

Output only. The most recent time this Recognizer was modified.

Parameter
Name | Description
var | Google\Protobuf\Timestamp

Returns
Type | Description
$this

getDeleteTime

Output only. The time at which this Recognizer was requested for deletion.

Returns
Type | Description
Google\Protobuf\Timestamp|null

hasDeleteTime

clearDeleteTime

setDeleteTime

Output only. The time at which this Recognizer was requested for deletion.

Parameter
Name | Description
var | Google\Protobuf\Timestamp

Returns
Type | Description
$this

getExpireTime

Output only. The time at which this Recognizer will be purged.

Returns
Type | Description
Google\Protobuf\Timestamp|null

hasExpireTime

clearExpireTime

setExpireTime

Output only. The time at which this Recognizer will be purged.

Parameter
Name | Description
var | Google\Protobuf\Timestamp

Returns
Type | Description
$this

getEtag

Output only. This checksum is computed by the server based on the value of other fields. This may be sent on update, undelete, and delete requests to ensure the client has an up-to-date value before proceeding.

Returns
Type | Description
string

setEtag

Output only. This checksum is computed by the server based on the value of other fields. This may be sent on update, undelete, and delete requests to ensure the client has an up-to-date value before proceeding.

Parameter
Name | Description
var | string

Returns
Type | Description
$this
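
As a hedged sketch of the optimistic-concurrency flow this enables, the etag read from the Recognizer can be echoed back on a delete request so that the operation fails if the resource changed in the meantime. This assumes the etag field on DeleteRecognizerRequest:

    use Google\Cloud\Speech\V2\DeleteRecognizerRequest;

    // The server rejects the request if the stored Recognizer no longer
    // matches this etag.
    $request = new DeleteRecognizerRequest([
        'name' => $recognizer->getName(),
        'etag' => $recognizer->getEtag(),
    ]);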

getReconciling

Output only. Whether this Recognizer is in the process of being updated.

Returns
Type | Description
bool

setReconciling

Output only. Whether this Recognizer is in the process of being updated.

Parameter
Name | Description
var | bool

Returns
Type | Description
$this
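
A sketch of waiting out reconciliation by re-fetching the resource. $fetchRecognizer is a hypothetical callable that performs a GetRecognizer call and returns a fresh Recognizer:

    do {
        $recognizer = $fetchRecognizer();   // hypothetical refresh helper
        if ($recognizer->getReconciling()) {
            sleep(5);                       // back off before polling again
        }
    } while ($recognizer->getReconciling());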

getKmsKeyName

Output only. The KMS key name with which the Recognizer is encrypted. The expected format is projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}.

Returns
Type | Description
string

setKmsKeyName

Output only. The KMS key name with which the Recognizer is encrypted. The expected format is projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}.

Parameter
Name | Description
var | string

Returns
Type | Description
$this

getKmsKeyVersionName

Output only. The KMS key version name with which the Recognizer is encrypted. The expected format is projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}/cryptoKeyVersions/{crypto_key_version}.

Returns
Type | Description
string

setKmsKeyVersionName

Output only. The KMS key version name with which the Recognizer is encrypted. The expected format is projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}/cryptoKeyVersions/{crypto_key_version}.

Parameter
Name | Description
var | string

Returns
Type | Description
$this