public sealed class SynthesizeSpeechResponse : IMessage<SynthesizeSpeechResponse>, IEquatable<SynthesizeSpeechResponse>, IDeepCloneable<SynthesizeSpeechResponse>, IBufferMessage, IMessage
Reference documentation and code samples for the Google Cloud Text-to-Speech v1beta1 API class SynthesizeSpeechResponse.
The message returned to the client by the SynthesizeSpeech method.
Implements
IMessage<SynthesizeSpeechResponse>, IEquatable<SynthesizeSpeechResponse>, IDeepCloneable<SynthesizeSpeechResponse>, IBufferMessage, IMessage

Namespace

Google.Cloud.TextToSpeech.V1Beta1

Assembly
Google.Cloud.TextToSpeech.V1Beta1.dll
Constructors
SynthesizeSpeechResponse()
public SynthesizeSpeechResponse()
SynthesizeSpeechResponse(SynthesizeSpeechResponse)
public SynthesizeSpeechResponse(SynthesizeSpeechResponse other)
Parameter

| Name | Description |
|---|---|
| other | SynthesizeSpeechResponse |
Properties
AudioConfig
public AudioConfig AudioConfig { get; set; }
The audio metadata of audio_content.
Property Value

| Type | Description |
|---|---|
| AudioConfig | |
AudioContent
public ByteString AudioContent { get; set; }
The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, the WAV header is included. Note: as with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.
Property Value

| Type | Description |
|---|---|
| ByteString | |
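Because LINEAR16 output already contains the WAV header, the response bytes form a complete playable file. A minimal sketch of saving the audio, assuming `response` is a `SynthesizeSpeechResponse` obtained from a prior `TextToSpeechClient.SynthesizeSpeech` call and the output file name is illustrative:

```csharp
using System.IO;

// `response` is assumed to come from a SynthesizeSpeech call made with
// AudioEncoding.Linear16, so AudioContent is a self-contained WAV file.
using (FileStream output = File.Create("output.wav"))
{
    // ByteString.WriteTo copies the raw bytes to the stream without
    // any re-encoding.
    response.AudioContent.WriteTo(output);
}
```

For JSON transports, remember the base64 note above: the same bytes arrive base64-encoded and must be decoded before being treated as audio.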
Timepoints
public RepeatedField<Timepoint> Timepoints { get; }
A link between a position in the original request input and a corresponding
time in the output audio. Timepoints are only supported via `<mark>` tags in SSML input.
Property Value

| Type | Description |
|---|---|
| RepeatedField<Timepoint> | |
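Timepoints are populated only when the request asks for them and the SSML input contains `<mark>` tags. A hedged sketch of such a request, assuming `client` is an already-constructed `TextToSpeechClient` and the SSML text and mark name are illustrative:

```csharp
using System;
using Google.Cloud.TextToSpeech.V1Beta1;

// Sketch: request SSML-mark timepoints (v1beta1 feature).
var request = new SynthesizeSpeechRequest
{
    Input = new SynthesisInput
    {
        // The <mark/> tag names the position we want a timepoint for.
        Ssml = "<speak>Hello <mark name=\"here\"/> world.</speak>"
    },
    Voice = new VoiceSelectionParams { LanguageCode = "en-US" },
    AudioConfig = new AudioConfig { AudioEncoding = AudioEncoding.Mp3 },
    // Opt in to timepoint reporting for SSML marks.
    EnableTimePointing =
    {
        SynthesizeSpeechRequest.Types.TimepointType.SsmlMark
    }
};

SynthesizeSpeechResponse response = client.SynthesizeSpeech(request);

// Each Timepoint pairs a mark name with its offset in the output audio.
foreach (Timepoint tp in response.Timepoints)
{
    Console.WriteLine($"{tp.MarkName}: {tp.TimeSeconds}s");
}
```

If `EnableTimePointing` is left empty, the `Timepoints` collection in the response is empty even when the SSML contains `<mark>` tags.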