- Resource: Generator
- Methods
Resource: Generator
LLM generator.
JSON representation

```json
{
  "name": string,
  "description": string,
  "inferenceParameter": {
    object (InferenceParameter)
  },
  "triggerEvent": enum (TriggerEvent),
  "createTime": string,
  "updateTime": string,

  // Union field context can be only one of the following:
  "summarizationContext": {
    object (SummarizationContext)
  }
  // End of list of possible types for union field context.
}
```
| Field | Description |
|---|---|
| `name` | Output only. Identifier. The resource name of the generator. Format: `projects/<Project ID>/locations/<Location ID>/generators/<Generator ID>` |
| `description` | Optional. Human-readable description of the generator. |
| `inferenceParameter` | Optional. Inference parameters for this generator. |
| `triggerEvent` | Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation. |
| `createTime` | Output only. Creation time of this generator. A timestamp in RFC 3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: `"2014-10-02T15:01:23Z"` and `"2014-10-02T15:01:23.045123456Z"`. |
| `updateTime` | Output only. Update time of this generator. A timestamp in RFC 3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: `"2014-10-02T15:01:23Z"` and `"2014-10-02T15:01:23.045123456Z"`. |
| Union field `context`. Required. Input context of the generator. `context` can be only one of the following: | |
| `summarizationContext` | Input of the prebuilt Summarization feature. |
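For illustration, a complete generator resource configured for summarization might look like the following sketch. The project, location, and generator IDs, the description, and the chosen sections are hypothetical values, not defaults:

```json
{
  "name": "projects/my-project/locations/global/generators/my-generator",
  "description": "Summarizes support conversations.",
  "triggerEvent": "MANUAL_CALL",
  "inferenceParameter": {
    "maxOutputTokens": 1024,
    "temperature": 0.2
  },
  "summarizationContext": {
    "version": "1.0",
    "summarizationSections": [
      { "type": "SITUATION" },
      { "type": "ACTION" },
      { "type": "RESOLUTION" }
    ]
  }
}
```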
SummarizationContext
Summarization context that the customer can configure.
JSON representation

```json
{
  "summarizationSections": [
    {
      object (SummarizationSection)
    }
  ],
  "fewShotExamples": [
    {
      object (FewShotExample)
    }
  ],
  "version": string,
  "outputLanguageCode": string
}
```
| Field | Description |
|---|---|
| `summarizationSections[]` | Optional. List of sections. Note that it contains both predefined sections and customer-defined sections. |
| `fewShotExamples[]` | Optional. List of few-shot examples. |
| `version` | Optional. Version of the feature. If not set, defaults to the latest version. Current candidates are `["1.0"]`. |
| `outputLanguageCode` | Optional. The target language of the generated summary. The language code of the conversation is used if this field is empty. Supported in versions 2.0 and later. |
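As a sketch of how these fields combine, the context below pins the feature version and sets a target summary language (which, per the field description above, assumes a 2.0+ version); the section choices and language code are hypothetical:

```json
{
  "version": "2.0",
  "outputLanguageCode": "fr-CA",
  "summarizationSections": [
    { "type": "SITUATION" },
    { "type": "CUSTOMER_SATISFACTION" }
  ]
}
```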
SummarizationSection
Represents a section of the summarization.
JSON representation

```json
{
  "key": string,
  "definition": string,
  "type": enum (Type)
}
```
| Field | Description |
|---|---|
| `key` | Optional. Name of the section, for example, "situation". |
| `definition` | Optional. Definition of the section, for example, "what the customer needs help with or has questions about." |
| `type` | Optional. Type of the summarization section. |
Type
Type enum of the summarization sections.
| Enum | Description |
|---|---|
| `TYPE_UNSPECIFIED` | Undefined section type; does not return anything. |
| `SITUATION` | What the customer needs help with or has questions about. Section name: "situation". |
| `ACTION` | What the agent does to help the customer. Section name: "action". |
| `RESOLUTION` | Result of the customer service. A single word describing the result of the conversation. Section name: "resolution". |
| `REASON_FOR_CANCELLATION` | Reason for cancellation if the customer requests one; "N/A" otherwise. Section name: "reason_for_cancellation". |
| `CUSTOMER_SATISFACTION` | "Unsatisfied" or "Satisfied" depending on the customer's feelings at the end of the conversation. Section name: "customer_satisfaction". |
| `ENTITIES` | Key entities extracted from the conversation, such as ticket number, order number, dollar amount, etc. Section names are prefixed by "entities/". |
| `CUSTOMER_DEFINED` | Customer-defined sections. |
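Tying SummarizationSection and Type together: a customer-defined section pairs a `key` with a `definition` that tells the model what to extract, while predefined sections only need a `type`. The key and definition below are hypothetical:

```json
{
  "key": "follow_up_items",
  "definition": "A list of any follow-up actions the agent promised to take after the conversation.",
  "type": "CUSTOMER_DEFINED"
}
```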
FewShotExample
Providing examples in the generator (that is, building a few-shot generator) helps convey the desired format of the LLM response.
JSON representation

```json
{
  "conversationContext": {
    object (ConversationContext)
  },
  "extraInfo": {
    string: string,
    ...
  },
  "output": {
    object (GeneratorSuggestion)
  },

  // Union field instruction_list can be only one of the following:
  "summarizationSectionList": {
    object (SummarizationSectionList)
  }
  // End of list of possible types for union field instruction_list.
}
```
| Field | Description |
|---|---|
| `conversationContext` | Optional. Conversation transcripts. |
| `extraInfo` | Optional. Key is the placeholder field name in the input, value is the value of the placeholder. For example, the instruction contains "@price" and the ingested data has <"price", "10">. An object containing a list of `"key": value` pairs. Example: `{ "name": "wrench", "mass": "1.3kg", "count": "3" }`. |
| `output` | Required. Example output of the model. |
| Union field `instruction_list`. Instruction list of this few-shot example. `instruction_list` can be only one of the following: | |
| `summarizationSectionList` | Summarization sections. |
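A few-shot example ties a sample transcript to the output the model should produce for it. The sketch below assumes a summarization generator; the transcript text and summary wording are hypothetical:

```json
{
  "conversationContext": {
    "messageEntries": [
      { "role": "END_USER", "text": "I'd like to cancel my subscription." },
      { "role": "HUMAN_AGENT", "text": "I can help with that. It's been cancelled." }
    ]
  },
  "summarizationSectionList": {
    "summarizationSections": [
      { "type": "SITUATION" },
      { "type": "ACTION" }
    ]
  },
  "output": {
    "summarySuggestion": {
      "summarySections": [
        { "section": "situation", "summary": "Customer wants to cancel their subscription." },
        { "section": "action", "summary": "Agent cancelled the subscription." }
      ]
    }
  }
}
```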
ConversationContext
Context of the conversation, including transcripts.
JSON representation

```json
{
  "messageEntries": [
    {
      object (MessageEntry)
    }
  ]
}
```
| Field | Description |
|---|---|
| `messageEntries[]` | Optional. List of message transcripts in the conversation. |
MessageEntry
Represents a message entry of a conversation.
JSON representation

```json
{
  "role": enum (Role),
  "text": string,
  "languageCode": string,
  "createTime": string
}
```
| Field | Description |
|---|---|
| `role` | Optional. Participant role of the message. |
| `text` | Optional. Transcript content of the message. |
| `languageCode` | Optional. The language of the text. See Language Support for a list of the currently supported language codes. |
| `createTime` | Optional. Create time of the message entry. A timestamp in RFC 3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: `"2014-10-02T15:01:23Z"` and `"2014-10-02T15:01:23.045123456Z"`. |
Role
Enumeration of the roles a participant can play in a conversation.
| Enum | Description |
|---|---|
| `ROLE_UNSPECIFIED` | Participant role not set. |
| `HUMAN_AGENT` | Participant is a human agent. |
| `AUTOMATED_AGENT` | Participant is an automated agent, such as a Dialogflow agent. |
| `END_USER` | Participant is an end user who has called or chatted with Dialogflow services. |
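Put together, a conversation context is an ordered list of message entries; the roles, texts, and timestamps below are hypothetical:

```json
{
  "messageEntries": [
    {
      "role": "END_USER",
      "text": "My order #12345 hasn't arrived yet.",
      "languageCode": "en-US",
      "createTime": "2024-01-15T10:00:00Z"
    },
    {
      "role": "HUMAN_AGENT",
      "text": "Sorry to hear that. Let me check the shipping status.",
      "languageCode": "en-US",
      "createTime": "2024-01-15T10:00:30Z"
    }
  ]
}
```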
SummarizationSectionList
List of summarization sections.
JSON representation

```json
{
  "summarizationSections": [
    {
      object (SummarizationSection)
    }
  ]
}
```
| Field | Description |
|---|---|
| `summarizationSections[]` | Optional. Summarization sections. |
GeneratorSuggestion
Suggestion generated using a Generator.
JSON representation

```json
{

  // Union field suggestion can be only one of the following:
  "summarySuggestion": {
    object (SummarySuggestion)
  }
  // End of list of possible types for union field suggestion.
}
```
| Field | Description |
|---|---|
| Union field `suggestion`. The suggestion can be one of many types. `suggestion` can be only one of the following: | |
| `summarySuggestion` | Optional. Suggested summary. |
SummarySuggestion
Suggested summary of the conversation.
JSON representation

```json
{
  "summarySections": [
    {
      object (SummarySection)
    }
  ]
}
```
| Field | Description |
|---|---|
| `summarySections[]` | Required. All the parts of the generated summary. |
SummarySection
A component of the generated summary.
JSON representation

```json
{
  "section": string,
  "summary": string
}
```
| Field | Description |
|---|---|
| `section` | Required. Name of the section. |
| `summary` | Required. Summary text for the section. |
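A generated suggestion therefore arrives as one summary section per configured summarization section, keyed by the section names documented for each Type (for example "situation" and "action"). The summary values below are hypothetical:

```json
{
  "summarySuggestion": {
    "summarySections": [
      { "section": "situation", "summary": "Customer reports order #12345 has not arrived." },
      { "section": "action", "summary": "Agent checked the shipping status." }
    ]
  }
}
```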
InferenceParameter
The parameters of inference.
JSON representation

```json
{
  "maxOutputTokens": integer,
  "temperature": number,
  "topK": integer,
  "topP": number
}
```
| Field | Description |
|---|---|
| `maxOutputTokens` | Optional. Maximum number of output tokens for the generator. |
| `temperature` | Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0. |
| `topK` | Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). At each token-selection step, the top K tokens with the highest probabilities are sampled; the tokens are then further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. The acceptable range is [1, 40]; the default is 40. |
| `topP` | Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable (see the topK parameter) to the least probable until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model selects either A or B as the next token (using temperature) and doesn't consider C. Specify a lower value for less random responses and a higher value for more random responses. The acceptable range is [0.0, 1.0]; the default is 0.95. |
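For instance, a summarization use case usually favors deterministic output, so a low temperature and tighter sampling bounds are typical; the exact numbers below are illustrative choices, not recommended defaults:

```json
{
  "maxOutputTokens": 512,
  "temperature": 0.1,
  "topK": 20,
  "topP": 0.7
}
```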
TriggerEvent
The event that triggers the generator and LLM execution.
| Enum | Description |
|---|---|
| `TRIGGER_EVENT_UNSPECIFIED` | Default value for TriggerEvent. |
| `END_OF_UTTERANCE` | Triggers when each chat message or voice utterance ends. |
| `MANUAL_CALL` | Triggers on the conversation manually by API calls, such as Conversations.GenerateStatelessSuggestion and Conversations.GenerateSuggestions. |
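As a sketch of the `MANUAL_CALL` flow, a Conversations.GenerateStatelessSuggestion request might reference a generator by name and supply the conversation context directly; this assumes the request body shape documented for that method (`generatorName`, `conversationContext`, `triggerEvents`), and the resource names and text are hypothetical:

```json
{
  "generatorName": "projects/my-project/locations/global/generators/my-generator",
  "conversationContext": {
    "messageEntries": [
      { "role": "END_USER", "text": "Thanks, that solved it!" }
    ]
  },
  "triggerEvents": ["MANUAL_CALL"]
}
```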
Methods

| Method | Description |
|---|---|
| `create` | Creates a generator. |
| `list` | Lists generators. |