Class InferenceParameter.Builder (4.55.0)

public static final class InferenceParameter.Builder extends GeneratedMessageV3.Builder<InferenceParameter.Builder> implements InferenceParameterOrBuilder

The parameters of inference.

Protobuf type google.cloud.dialogflow.v2.InferenceParameter
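
A minimal usage sketch (the parameter values are illustrative, not recommendations): obtain a builder with InferenceParameter.newBuilder(), chain the field setters, and call build() to get an immutable message.

import com.google.cloud.dialogflow.v2.InferenceParameter;

// Build an InferenceParameter; all four fields are optional.
InferenceParameter params =
    InferenceParameter.newBuilder()
        .setMaxOutputTokens(256)  // illustrative value
        .setTemperature(0.2)
        .setTopK(40)
        .setTopP(0.95)
        .build();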

Static Methods

getDescriptor()

public static final Descriptors.Descriptor getDescriptor()
Returns
  Descriptor

Methods

addRepeatedField(Descriptors.FieldDescriptor field, Object value)

public InferenceParameter.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
  field (FieldDescriptor)
  value (Object)
Returns
  InferenceParameter.Builder
Overrides

build()

public InferenceParameter build()
Returns
  InferenceParameter

buildPartial()

public InferenceParameter buildPartial()
Returns
  InferenceParameter
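
As with any protobuf builder, buildPartial() skips the required-field check that build() performs. InferenceParameter declares no required fields, so the two behave identically here; a sketch of the general pattern:

InferenceParameter.Builder builder =
    InferenceParameter.newBuilder().setTemperature(0.7);
InferenceParameter complete = builder.build();        // verifies required fields
InferenceParameter partial = builder.buildPartial();  // skips that verification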

clear()

public InferenceParameter.Builder clear()
Returns
  InferenceParameter.Builder
Overrides

clearField(Descriptors.FieldDescriptor field)

public InferenceParameter.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
  field (FieldDescriptor)
Returns
  InferenceParameter.Builder
Overrides

clearMaxOutputTokens()

public InferenceParameter.Builder clearMaxOutputTokens()

Optional. Maximum number of output tokens for the generator.

optional int32 max_output_tokens = 1 [(.google.api.field_behavior) = OPTIONAL];

Returns
  InferenceParameter.Builder: this builder for chaining.
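
Clearing an explicit-presence (optional) field returns it to the unset state, so the service-side default applies again. A short sketch:

InferenceParameter.Builder b =
    InferenceParameter.newBuilder().setMaxOutputTokens(64);
b.clearMaxOutputTokens();
boolean set = b.hasMaxOutputTokens();  // false: the field is unset again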

clearOneof(Descriptors.OneofDescriptor oneof)

public InferenceParameter.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
  oneof (OneofDescriptor)
Returns
  InferenceParameter.Builder
Overrides

clearTemperature()

public InferenceParameter.Builder clearTemperature()

Optional. Controls the randomness of LLM predictions. Low temperature = less random; high temperature = more random. If unset, defaults to 0.

optional double temperature = 2 [(.google.api.field_behavior) = OPTIONAL];

Returns
  InferenceParameter.Builder: this builder for chaining.

clearTopK()

public InferenceParameter.Builder clearTopK()

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from the 3 most probable tokens (using temperature). At each token-selection step, the top-k tokens with the highest probabilities are sampled, then further filtered by top-p, and the final token is chosen using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.

optional int32 top_k = 3 [(.google.api.field_behavior) = OPTIONAL];

Returns
  InferenceParameter.Builder: this builder for chaining.

clearTopP()

public InferenceParameter.Builder clearTopP()

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (within the top-k candidates; see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and does not consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.

optional double top_p = 4 [(.google.api.field_behavior) = OPTIONAL];

Returns
  InferenceParameter.Builder: this builder for chaining.
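
The cumulative cutoff in the description can be made concrete with a small illustration (this is not the service's sampler, just the arithmetic from the A/B/C example above):

// Probabilities sorted from most to least likely: A=0.3, B=0.2, C=0.1.
double[] probs = {0.3, 0.2, 0.1};
double topP = 0.5, cumulative = 0.0;
int kept = 0;
for (double p : probs) {
  cumulative += p;
  kept++;
  if (cumulative >= topP) break;  // nucleus reached the top-p mass
}
// kept == 2: the next token is drawn from {A, B}; C is never considered.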

clone()

public InferenceParameter.Builder clone()
Returns
  InferenceParameter.Builder
Overrides

getDefaultInstanceForType()

public InferenceParameter getDefaultInstanceForType()
Returns
  InferenceParameter

getDescriptorForType()

public Descriptors.Descriptor getDescriptorForType()
Returns
  Descriptor
Overrides

getMaxOutputTokens()

public int getMaxOutputTokens()

Optional. Maximum number of output tokens for the generator.

optional int32 max_output_tokens = 1 [(.google.api.field_behavior) = OPTIONAL];

Returns
  int: the maxOutputTokens.

getTemperature()

public double getTemperature()

Optional. Controls the randomness of LLM predictions. Low temperature = less random; high temperature = more random. If unset, defaults to 0.

optional double temperature = 2 [(.google.api.field_behavior) = OPTIONAL];

Returns
  double: the temperature.

getTopK()

public int getTopK()

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from the 3 most probable tokens (using temperature). At each token-selection step, the top-k tokens with the highest probabilities are sampled, then further filtered by top-p, and the final token is chosen using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.

optional int32 top_k = 3 [(.google.api.field_behavior) = OPTIONAL];

Returns
  int: the topK.

getTopP()

public double getTopP()

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (within the top-k candidates; see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and does not consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.

optional double top_p = 4 [(.google.api.field_behavior) = OPTIONAL];

Returns
  double: the topP.

hasMaxOutputTokens()

public boolean hasMaxOutputTokens()

Optional. Maximum number of output tokens for the generator.

optional int32 max_output_tokens = 1 [(.google.api.field_behavior) = OPTIONAL];

Returns
  boolean: whether the maxOutputTokens field is set.
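
Because max_output_tokens is declared optional (explicit presence), hasMaxOutputTokens() distinguishes "never set" from "explicitly set to 0". A sketch:

InferenceParameter p =
    InferenceParameter.newBuilder().setMaxOutputTokens(0).build();
boolean a = p.hasMaxOutputTokens();  // true: set, even though the value is 0
boolean b = InferenceParameter.getDefaultInstance().hasMaxOutputTokens();  // false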

hasTemperature()

public boolean hasTemperature()

Optional. Controls the randomness of LLM predictions. Low temperature = less random; high temperature = more random. If unset, defaults to 0.

optional double temperature = 2 [(.google.api.field_behavior) = OPTIONAL];

Returns
  boolean: whether the temperature field is set.

hasTopK()

public boolean hasTopK()

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from the 3 most probable tokens (using temperature). At each token-selection step, the top-k tokens with the highest probabilities are sampled, then further filtered by top-p, and the final token is chosen using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.

optional int32 top_k = 3 [(.google.api.field_behavior) = OPTIONAL];

Returns
  boolean: whether the topK field is set.

hasTopP()

public boolean hasTopP()

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (within the top-k candidates; see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and does not consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.

optional double top_p = 4 [(.google.api.field_behavior) = OPTIONAL];

Returns
  boolean: whether the topP field is set.

internalGetFieldAccessorTable()

protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
  FieldAccessorTable
Overrides

isInitialized()

public final boolean isInitialized()
Returns
  boolean
Overrides

mergeFrom(InferenceParameter other)

public InferenceParameter.Builder mergeFrom(InferenceParameter other)
Parameter
  other (InferenceParameter)
Returns
  InferenceParameter.Builder
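
Standard protobuf merge semantics apply: fields that are set in other overwrite the builder's values, while unset fields are left alone. That makes mergeFrom handy for layering overrides on top of a base configuration, for example:

InferenceParameter base = InferenceParameter.newBuilder()
    .setTemperature(0.2).setTopK(40).build();
InferenceParameter overrides = InferenceParameter.newBuilder()
    .setTemperature(0.9).build();  // only temperature is set
InferenceParameter merged = base.toBuilder().mergeFrom(overrides).build();
// merged: temperature = 0.9 (overwritten), topK = 40 (preserved)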

mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)

public InferenceParameter.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
  input (CodedInputStream)
  extensionRegistry (ExtensionRegistryLite)
Returns
  InferenceParameter.Builder
Overrides
Exceptions
  IOException
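
This overload parses wire-format bytes into the builder. A round-trip sketch (params is the message from the earlier example; the caller must handle the IOException):

import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;

byte[] wire = params.toByteArray();
InferenceParameter parsed = InferenceParameter.newBuilder()
    .mergeFrom(CodedInputStream.newInstance(wire),
        ExtensionRegistryLite.getEmptyRegistry())
    .build();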

mergeFrom(Message other)

public InferenceParameter.Builder mergeFrom(Message other)
Parameter
  other (Message)
Returns
  InferenceParameter.Builder
Overrides

mergeUnknownFields(UnknownFieldSet unknownFields)

public final InferenceParameter.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
  unknownFields (UnknownFieldSet)
Returns
  InferenceParameter.Builder
Overrides

setField(Descriptors.FieldDescriptor field, Object value)

public InferenceParameter.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
  field (FieldDescriptor)
  value (Object)
Returns
  InferenceParameter.Builder
Overrides
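
setField is the reflective counterpart of the typed setters; the field descriptor can be looked up by the proto field names shown above. A sketch (note the boxed Double for a double field):

import com.google.protobuf.Descriptors;

Descriptors.FieldDescriptor temperatureField =
    InferenceParameter.getDescriptor().findFieldByName("temperature");
InferenceParameter p = InferenceParameter.newBuilder()
    .setField(temperatureField, 0.7)  // autoboxed to Double
    .build();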

setMaxOutputTokens(int value)

public InferenceParameter.Builder setMaxOutputTokens(int value)

Optional. Maximum number of output tokens for the generator.

optional int32 max_output_tokens = 1 [(.google.api.field_behavior) = OPTIONAL];

Parameter
  value (int): the maxOutputTokens to set.
Returns
  InferenceParameter.Builder: this builder for chaining.

setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)

public InferenceParameter.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
  field (FieldDescriptor)
  index (int)
  value (Object)
Returns
  InferenceParameter.Builder
Overrides

setTemperature(double value)

public InferenceParameter.Builder setTemperature(double value)

Optional. Controls the randomness of LLM predictions. Low temperature = less random; high temperature = more random. If unset, defaults to 0.

optional double temperature = 2 [(.google.api.field_behavior) = OPTIONAL];

Parameter
  value (double): the temperature to set.
Returns
  InferenceParameter.Builder: this builder for chaining.

setTopK(int value)

public InferenceParameter.Builder setTopK(int value)

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from the 3 most probable tokens (using temperature). At each token-selection step, the top-k tokens with the highest probabilities are sampled, then further filtered by top-p, and the final token is chosen using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.

optional int32 top_k = 3 [(.google.api.field_behavior) = OPTIONAL];

Parameter
  value (int): the topK to set.
Returns
  InferenceParameter.Builder: this builder for chaining.

setTopP(double value)

public InferenceParameter.Builder setTopP(double value)

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (within the top-k candidates; see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and does not consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.

optional double top_p = 4 [(.google.api.field_behavior) = OPTIONAL];

Parameter
  value (double): the topP to set.
Returns
  InferenceParameter.Builder: this builder for chaining.
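
Since temperature, topK, and topP all push responses toward "less random" or "more random", they are typically tuned together. Two illustrative presets (assumptions, not documented recommendations):

InferenceParameter focused = InferenceParameter.newBuilder()
    .setTemperature(0.0).setTopK(1).setTopP(0.1).build();   // near-deterministic
InferenceParameter creative = InferenceParameter.newBuilder()
    .setTemperature(0.9).setTopK(40).setTopP(0.95).build(); // more varied output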

setUnknownFields(UnknownFieldSet unknownFields)

public final InferenceParameter.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
  unknownFields (UnknownFieldSet)
Returns
  InferenceParameter.Builder
Overrides