CloudAiLargeModelsVisionFilteredText
Details for filtered input text.Fields | |
---|---|
category |
Filtered category |
Enum type. Can be one of the following: | |
RAI_CATEGORY_UNSPECIFIED |
(No description provided) |
OBSCENE |
(No description provided) |
SEXUALLY_EXPLICIT |
Porn |
IDENTITY_ATTACK |
Hate |
VIOLENCE_ABUSE |
(No description provided) |
CSAI |
(No description provided) |
SPII |
(No description provided) |
CELEBRITY |
(No description provided) |
FACE_IMG |
(No description provided) |
WATERMARK_IMG |
(No description provided) |
MEMORIZATION_IMG |
(No description provided) |
CSAI_IMG |
(No description provided) |
PORN_IMG |
(No description provided) |
VIOLENCE_IMG |
(No description provided) |
CHILD_IMG |
(No description provided) |
TOXIC |
(No description provided) |
SENSITIVE_WORD |
(No description provided) |
PERSON_IMG |
(No description provided) |
ICA_IMG |
(No description provided) |
SEXUAL_IMG |
(No description provided) |
IU_IMG |
(No description provided) |
RACY_IMG |
(No description provided) |
PEDO_IMG |
(No description provided) |
DEATH_HARM_TRAGEDY |
SafetyAttributes returned but not filtered on |
HEALTH |
(No description provided) |
FIREARMS_WEAPONS |
(No description provided) |
RELIGIOUS_BELIEF |
(No description provided) |
ILLICIT_DRUGS |
(No description provided) |
WAR_CONFLICT |
(No description provided) |
POLITICS |
(No description provided) |
HATE_SYMBOL_IMG |
End of list |
CHILD_TEXT |
(No description provided) |
DANGEROUS_CONTENT |
Text category from SafetyCat v3 |
RECITATION_TEXT |
(No description provided) |
CELEBRITY_IMG |
(No description provided) |
WATERMARK_IMG_REMOVAL |
Error message when user attempts to remove watermark from editing image |
confidence |
Confidence level |
Enum type. Can be one of the following: | |
CONFIDENCE_UNSPECIFIED |
(No description provided) |
CONFIDENCE_LOW |
(No description provided) |
CONFIDENCE_MEDIUM |
(No description provided) |
CONFIDENCE_HIGH |
(No description provided) |
prompt |
Input prompt |
score |
Score for category |
CloudAiLargeModelsVisionGenerateVideoResponse
Generate video response.Fields | |
---|---|
generatedSamples[] |
The generated samples. |
raiErrorMessage |
Returns the RAI error message for filtered videos. |
raiMediaFilteredCount |
Returns the number of videos filtered due to RAI policies. |
raiMediaFilteredReasons[] |
Returns RAI failure reasons, if any. |
raiTextFilteredReason |
Returns filtered text RAI info. |
CloudAiLargeModelsVisionImage
Image.Fields | |
---|---|
encoding |
Image encoding, encoded as "image/png" or "image/jpg". |
image |
Raw bytes. |
imageRaiScores |
RAI scores for generated image. |
raiInfo |
RAI info for image. |
semanticFilterResponse |
Semantic filter info for image. |
text |
Text/Expanded text input for imagen. |
uri |
Path to another storage (typically Google Cloud Storage). |
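Putting the fields above together, a minimal sketch of assembling such an image message as a plain dict. The byte payload is illustrative; per the proto3 JSON mapping, bytes fields travel base64-encoded:

```python
import base64

def build_vision_image(raw_bytes: bytes, encoding: str = "image/png") -> dict:
    """Assemble a CloudAiLargeModelsVisionImage-shaped dict.

    Only encoding and image are set here; uri could be set instead
    when the image lives in Cloud Storage.
    """
    return {
        "encoding": encoding,
        # proto3 JSON mapping: bytes fields are base64-encoded strings.
        "image": base64.b64encode(raw_bytes).decode("ascii"),
    }

msg = build_vision_image(b"\x89PNG")
```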
CloudAiLargeModelsVisionImageRAIScores
RAI scores for generated image returned.Fields | |
---|---|
agileWatermarkDetectionScore |
Agile watermark score for image. |
CloudAiLargeModelsVisionMedia
Media.Fields | |
---|---|
image |
Image. |
video |
Video |
CloudAiLargeModelsVisionNamedBoundingBox
(No description provided)Fields | |
---|---|
classes[] |
(No description provided) |
entities[] |
(No description provided) |
scores[] |
(No description provided) |
x1 |
(No description provided) |
x2 |
(No description provided) |
y1 |
(No description provided) |
y2 |
(No description provided) |
CloudAiLargeModelsVisionRaiInfo
(No description provided)Fields | |
---|---|
raiCategories[] |
List of rai categories' information to return |
scores[] |
List of rai scores mapping to the rai categories. Rounded to 1 decimal place. |
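Because scores[] is positionally parallel to raiCategories[], pairing the two lists is a simple zip. A small sketch with made-up category names and scores:

```python
def pair_rai_scores(rai_info: dict) -> dict:
    """Map each RAI category to its score (scores are parallel to categories)."""
    return dict(zip(rai_info.get("raiCategories", []),
                    rai_info.get("scores", [])))

info = {"raiCategories": ["porn", "violence"], "scores": [0.1, 0.0]}
# pair_rai_scores(info) returns {"porn": 0.1, "violence": 0.0}
```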
CloudAiLargeModelsVisionSemanticFilterResponse
(No description provided)Fields | |
---|---|
namedBoundingBoxes[] |
Bounding boxes, with class labels, that failed the semantic filtering. |
passedSemanticFilter |
This response is added when semantic filter config is turned on in EditConfig. It reports whether this image passed the semantic filter. If passed_semantic_filter is false, the bounding box information will be populated so the user can check what caused the semantic filter to fail. |
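The passedSemanticFilter contract suggests a simple client-side check: only consult the bounding boxes when the filter failed. A hedged sketch over dict-shaped responses:

```python
def failed_boxes(resp: dict) -> list:
    """Return the bounding boxes only when the image failed the semantic filter.

    Per the field description, namedBoundingBoxes is populated when
    passed_semantic_filter is false; otherwise there is nothing to inspect.
    """
    if resp.get("passedSemanticFilter", True):
        return []
    return resp.get("namedBoundingBoxes", [])
```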
CloudAiLargeModelsVisionVideo
Video.Fields | |
---|---|
uri |
Path to another storage (typically Google Cloud Storage). |
video |
Raw bytes. |
GoogleApiHttpBody
Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page. This message can be used both in streaming and non-streaming API methods, in the request as well as the response. It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also wants access to the raw HTTP body.

Example:

    message GetResourceRequest {
      // A unique request id.
      string request_id = 1;
      // The raw HTTP body is bound to this field.
      google.api.HttpBody http_body = 2;
    }

    service ResourceService {
      rpc GetResource(GetResourceRequest) returns (google.api.HttpBody);
      rpc UpdateResource(google.api.HttpBody) returns (google.protobuf.Empty);
    }

Example with streaming methods:

    service CaldavService {
      rpc GetCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody);
      rpc UpdateCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody);
    }

Use of this type only changes how the request and response bodies are handled; all other features will continue to work unchanged.Fields | |
---|---|
contentType |
The HTTP Content-Type header value specifying the content type of the body. |
data |
The HTTP request/response body as raw binary. |
extensions[] |
Application specific response metadata. Must be set in the first response for streaming APIs. |
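As with any proto3 bytes field, data is carried base64-encoded in the JSON representation. A minimal sketch of assembling such a body for a binary payload (extensions omitted):

```python
import base64

def make_http_body(content_type: str, payload: bytes) -> dict:
    """Build a google.api.HttpBody-shaped dict for a raw binary payload."""
    return {
        "contentType": content_type,
        # bytes fields are base64-encoded in proto3 JSON.
        "data": base64.b64encode(payload).decode("ascii"),
    }

body = make_http_body("application/octet-stream", b"\x00\x01")
```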
GoogleCloudAiplatformV1ActiveLearningConfig
Parameters that configure the active learning pipeline. Active learning will label the data incrementally by several iterations. For every iteration, it will select a batch of data based on the sampling strategy.Fields | |
---|---|
maxDataItemCount |
Max number of human labeled DataItems. |
maxDataItemPercentage |
Max percent of total DataItems for human labeling. |
sampleConfig |
Active learning data sampling config. For every active learning labeling iteration, it will select a batch of data based on the sampling strategy. |
trainingConfig |
CMLE training config. For every active learning labeling iteration, system will train a machine learning model on CMLE. The trained model will be used by data sampling algorithm to select DataItems. |
GoogleCloudAiplatformV1AddContextArtifactsAndExecutionsRequest
Request message for MetadataService.AddContextArtifactsAndExecutions.Fields | |
---|---|
artifacts[] |
The resource names of the Artifacts to attribute to the Context. Format: |
executions[] |
The resource names of the Executions to associate with the Context. Format: |
GoogleCloudAiplatformV1AddContextChildrenRequest
Request message for MetadataService.AddContextChildren.Fields | |
---|---|
childContexts[] |
The resource names of the child Contexts. |
GoogleCloudAiplatformV1AddExecutionEventsRequest
Request message for MetadataService.AddExecutionEvents.Fields | |
---|---|
events[] |
The Events to create and add. |
GoogleCloudAiplatformV1AddTrialMeasurementRequest
Request message for VizierService.AddTrialMeasurement.Fields | |
---|---|
measurement |
Required. The measurement to be added to a Trial. |
GoogleCloudAiplatformV1Annotation
Used to assign specific AnnotationSpec to a particular area of a DataItem or the whole part of the DataItem.Fields | |
---|---|
annotationSource |
Output only. The source of the Annotation. |
createTime |
Output only. Timestamp when this Annotation was created. |
etag |
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
Optional. The labels with user-defined metadata to organize your Annotations. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Annotation (system labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. The following system labels exist for each Annotation: * "aiplatform.googleapis.com/annotation_set_name": optional, name of the UI's annotation set this Annotation belongs to. If not set, the Annotation is not visible in the UI. * "aiplatform.googleapis.com/payload_schema": output only, its value is the payload_schema's title. |
name |
Output only. Resource name of the Annotation. |
payload |
Required. The schema of the payload can be found in payload_schema. |
payloadSchemaUri |
Required. Google Cloud Storage URI points to a YAML file describing payload. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/, note that the chosen schema must be consistent with the parent Dataset's metadata. |
updateTime |
Output only. Timestamp when this Annotation was last updated. |
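The constraints described for labels (keys and values at most 64 Unicode codepoints, lowercase only, at most 64 user labels) can be pre-checked client-side before sending a request. A sketch; this is our reading of the stated rules, not an official validator, and it does not model every allowed character class:

```python
def validate_labels(labels: dict) -> list:
    """Return human-readable violations of the documented label rules."""
    errors = []
    if len(labels) > 64:
        errors.append("more than 64 user labels")
    for key, value in labels.items():
        for name, text in (("key", key), ("value", value)):
            if len(text) > 64:
                errors.append(f"{name} {text!r} exceeds 64 characters")
        # Keys may only contain lowercase letters, digits, underscores, dashes.
        if key != key.lower():
            errors.append(f"key {key!r} contains uppercase characters")
    return errors
```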
GoogleCloudAiplatformV1AnnotationSpec
Identifies a concept with which DataItems may be annotated.Fields | |
---|---|
createTime |
Output only. Timestamp when this AnnotationSpec was created. |
displayName |
Required. The user-defined name of the AnnotationSpec. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
etag |
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
name |
Output only. Resource name of the AnnotationSpec. |
updateTime |
Output only. Timestamp when AnnotationSpec was last updated. |
GoogleCloudAiplatformV1Artifact
Instance of a general artifact.Fields | |
---|---|
createTime |
Output only. Timestamp when this Artifact was created. |
description |
Description of the Artifact |
displayName |
User provided display name of the Artifact. May be up to 128 Unicode characters. |
etag |
An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
The labels with user-defined metadata to organize your Artifacts. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Artifact (System labels are excluded). |
metadata |
Properties of the Artifact. Top level metadata keys' heading and trailing spaces will be trimmed. The size of this field should not exceed 200KB. |
name |
Output only. The resource name of the Artifact. |
schemaTitle |
The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as a unique identifier to identify schemas within the local metadata store. |
schemaVersion |
The version of the schema in schema_name to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as a unique identifier to identify schemas within the local metadata store. |
state |
The state of this Artifact. This is a property of the Artifact, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines), and the system does not prescribe or check the validity of state transitions. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
Unspecified state for the Artifact. |
PENDING |
A state used by systems like Vertex AI Pipelines to indicate that the underlying data item represented by this Artifact is being created. |
LIVE |
A state indicating that the Artifact should exist, unless something external to the system deletes it. |
updateTime |
Output only. Timestamp when this Artifact was last updated. |
uri |
The uniform resource identifier of the artifact file. May be empty if there is no actual artifact file. |
GoogleCloudAiplatformV1AssignNotebookRuntimeOperationMetadata
Metadata information for NotebookService.AssignNotebookRuntime.Fields | |
---|---|
genericMetadata |
The operation generic information. |
progressMessage |
A human-readable message that shows the intermediate progress details of NotebookRuntime. |
GoogleCloudAiplatformV1AssignNotebookRuntimeRequest
Request message for NotebookService.AssignNotebookRuntime.Fields | |
---|---|
notebookRuntime |
Required. Provide runtime specific information (e.g. runtime owner, notebook id) used for NotebookRuntime assignment. |
notebookRuntimeId |
Optional. User specified ID for the notebook runtime. |
notebookRuntimeTemplate |
Required. The resource name of the NotebookRuntimeTemplate based on which a NotebookRuntime will be assigned (reuse or create a new one). |
GoogleCloudAiplatformV1Attribution
Attribution that explains a particular prediction output.Fields | |
---|---|
approximationError |
Output only. Error of feature_attributions caused by approximation used in the explanation method. Lower value means more precise attributions. * For Sampled Shapley attribution, increasing path_count might reduce the error. * For Integrated Gradients attribution, increasing step_count might reduce the error. * For XRAI attribution, increasing step_count might reduce the error. See this introduction for more information. |
baselineOutputValue |
Output only. Model predicted output if the input instance is constructed from the baselines of all the features defined in ExplanationMetadata.inputs. The field name of the output is determined by the key in ExplanationMetadata.outputs. If the Model's predicted output has multiple dimensions (rank > 1), this is the value in the output located by output_index. If there are multiple baselines, their output values are averaged. |
featureAttributions |
Output only. Attributions of each explained feature. Features are extracted from the prediction instances according to explanation metadata for inputs. The value is a struct, whose keys are the name of the feature. The values are how much the feature in the instance contributed to the predicted result. The format of the value is determined by the feature's input format: * If the feature is a scalar value, the attribution value is a floating number. * If the feature is an array of scalar values, the attribution value is an array. * If the feature is a struct, the attribution value is a struct. The keys in the attribution value struct are the same as the keys in the feature struct. The formats of the values in the attribution struct are determined by the formats of the values in the feature struct. The ExplanationMetadata.feature_attributions_schema_uri field, pointed to by the ExplanationSpec field of the Endpoint.deployed_models object, points to the schema file that describes the features and their attribution values (if it is populated). |
instanceOutputValue |
Output only. Model predicted output on the corresponding explanation instance. The field name of the output is determined by the key in ExplanationMetadata.outputs. If the Model predicted output has multiple dimensions, this is the value in the output located by output_index. |
outputDisplayName |
Output only. The display name of the output identified by output_index. For example, the predicted class name by a multi-classification Model. This field is populated if and only if the Model predicts display names as a separate field along with the explained output. The predicted display names must have the same shape as the explained output, and can be located using output_index. |
outputIndex[] |
Output only. The index that locates the explained prediction output. If the prediction output is a scalar value, output_index is not populated. If the prediction output has multiple dimensions, the length of the output_index list is the same as the number of dimensions of the output. The i-th element in output_index is the element index of the i-th dimension of the output vector. Indices start from 0. |
outputName |
Output only. Name of the explain output. Specified as the key in ExplanationMetadata.outputs. |
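The outputIndex[] contract above (the i-th entry indexes the i-th dimension of the output) amounts to a nested lookup. A small sketch over nested lists:

```python
def locate_output(output, output_index):
    """Walk a nested-list output using output_index, one dimension per entry.

    An empty output_index means the output is a scalar and is returned as-is.
    """
    value = output
    for i in output_index:
        value = value[i]
    return value

# Rank-2 output: locate_output([[0.1, 0.9], [0.7, 0.3]], [1, 0]) returns 0.7
```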
GoogleCloudAiplatformV1AutomaticResources
A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines.Fields | |
---|---|
maxReplicaCount |
Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica number. |
minReplicaCount |
Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error. |
GoogleCloudAiplatformV1AutoscalingMetricSpec
The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.Fields | |
---|---|
metricName |
Required. The resource metric name. Supported metrics: * For Online Prediction: * |
target |
The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided. |
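One common way to reason about such a utilization target is the proportional rule desired = ceil(current * usage / target). This is only an illustration of the concept; it is not Vertex AI's actual scaling algorithm:

```python
import math

def desired_replicas(current: int, usage_pct: float, target_pct: float = 60.0) -> int:
    """Proportional replica estimate for a utilization target (default 60%)."""
    return max(1, math.ceil(current * usage_pct / target_pct))

# 4 replicas at 90% usage against a 60% target: ceil(4 * 90 / 60) = 6
```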
GoogleCloudAiplatformV1AvroSource
The storage details for Avro input content.Fields | |
---|---|
gcsSource |
Required. Google Cloud Storage location. |
GoogleCloudAiplatformV1BatchCancelPipelineJobsRequest
Request message for PipelineService.BatchCancelPipelineJobs.Fields | |
---|---|
names[] |
Required. The names of the PipelineJobs to cancel. A maximum of 32 PipelineJobs can be cancelled in a batch. Format: |
GoogleCloudAiplatformV1BatchCreateFeaturesOperationMetadata
Details of operations that perform batch create Features.Fields | |
---|---|
genericMetadata |
Operation metadata for Feature. |
GoogleCloudAiplatformV1BatchCreateFeaturesRequest
Request message for FeaturestoreService.BatchCreateFeatures.Fields | |
---|---|
requests[] |
Required. The request message specifying the Features to create. All Features must be created under the same parent EntityType. The |
GoogleCloudAiplatformV1BatchCreateFeaturesResponse
Response message for FeaturestoreService.BatchCreateFeatures.Fields | |
---|---|
features[] |
The Features created. |
GoogleCloudAiplatformV1BatchCreateTensorboardRunsRequest
Request message for TensorboardService.BatchCreateTensorboardRuns.Fields | |
---|---|
requests[] |
Required. The request message specifying the TensorboardRuns to create. A maximum of 1000 TensorboardRuns can be created in a batch. |
GoogleCloudAiplatformV1BatchCreateTensorboardRunsResponse
Response message for TensorboardService.BatchCreateTensorboardRuns.Fields | |
---|---|
tensorboardRuns[] |
The created TensorboardRuns. |
GoogleCloudAiplatformV1BatchCreateTensorboardTimeSeriesRequest
Request message for TensorboardService.BatchCreateTensorboardTimeSeries.Fields | |
---|---|
requests[] |
Required. The request message specifying the TensorboardTimeSeries to create. A maximum of 1000 TensorboardTimeSeries can be created in a batch. |
GoogleCloudAiplatformV1BatchCreateTensorboardTimeSeriesResponse
Response message for TensorboardService.BatchCreateTensorboardTimeSeries.Fields | |
---|---|
tensorboardTimeSeries[] |
The created TensorboardTimeSeries. |
GoogleCloudAiplatformV1BatchDedicatedResources
A description of resources that are used for performing batch operations, are dedicated to a Model, and need manual configuration.Fields | |
---|---|
machineSpec |
Required. Immutable. The specification of a single machine. |
maxReplicaCount |
Immutable. The maximum number of machine replicas the batch operation may be scaled to. The default value is 10. |
startingReplicaCount |
Immutable. The number of machine replicas used at the start of the batch operation. If not set, Vertex AI decides the starting number, which will not exceed max_replica_count. |
GoogleCloudAiplatformV1BatchDeletePipelineJobsRequest
Request message for PipelineService.BatchDeletePipelineJobs.Fields | |
---|---|
names[] |
Required. The names of the PipelineJobs to delete. A maximum of 32 PipelineJobs can be deleted in a batch. Format: |
GoogleCloudAiplatformV1BatchImportEvaluatedAnnotationsRequest
Request message for ModelService.BatchImportEvaluatedAnnotations.Fields | |
---|---|
evaluatedAnnotations[] |
Required. Evaluated annotations resource to be imported. |
GoogleCloudAiplatformV1BatchImportEvaluatedAnnotationsResponse
Response message for ModelService.BatchImportEvaluatedAnnotations.Fields | |
---|---|
importedEvaluatedAnnotationsCount |
Output only. Number of EvaluatedAnnotations imported. |
GoogleCloudAiplatformV1BatchImportModelEvaluationSlicesRequest
Request message for ModelService.BatchImportModelEvaluationSlices.Fields | |
---|---|
modelEvaluationSlices[] |
Required. Model evaluation slice resource to be imported. |
GoogleCloudAiplatformV1BatchImportModelEvaluationSlicesResponse
Response message for ModelService.BatchImportModelEvaluationSlices.Fields | |
---|---|
importedModelEvaluationSlices[] |
Output only. List of imported ModelEvaluationSlice.name. |
GoogleCloudAiplatformV1BatchMigrateResourcesOperationMetadata
Runtime operation information for MigrationService.BatchMigrateResources.Fields | |
---|---|
genericMetadata |
The common part of the operation metadata. |
partialResults[] |
Partial results that reflect the latest migration operation progress. |
GoogleCloudAiplatformV1BatchMigrateResourcesOperationMetadataPartialResult
Represents a partial result in batch migration operation for one MigrateResourceRequest.Fields | |
---|---|
dataset |
Migrated dataset resource name. |
error |
The error result of the migration request in case of failure. |
model |
Migrated model resource name. |
request |
It's the same as the value in MigrateResourceRequest.migrate_resource_requests. |
GoogleCloudAiplatformV1BatchMigrateResourcesRequest
Request message for MigrationService.BatchMigrateResources.Fields | |
---|---|
migrateResourceRequests[] |
Required. The request messages specifying the resources to migrate. They must be in the same location as the destination. Up to 50 resources can be migrated in one batch. |
GoogleCloudAiplatformV1BatchMigrateResourcesResponse
Response message for MigrationService.BatchMigrateResources.Fields | |
---|---|
migrateResourceResponses[] |
Successfully migrated resources. |
GoogleCloudAiplatformV1BatchPredictionJob
A job that uses a Model to produce predictions on multiple input instances. If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.Fields | |
---|---|
completionStats |
Output only. Statistics on completed and failed prediction instances. |
createTime |
Output only. Time when the BatchPredictionJob was created. |
dedicatedResources |
The config of resources used by the Model during the batch prediction. If the Model supports DEDICATED_RESOURCES, this config may be provided (and the job will use these resources); if the Model doesn't support AUTOMATIC_RESOURCES, this config must be provided. |
disableContainerLogging |
For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send |
displayName |
Required. The user-defined name of this BatchPredictionJob. |
encryptionSpec |
Customer-managed encryption key options for a BatchPredictionJob. If this is set, then all resources created by the BatchPredictionJob will be encrypted with the provided encryption key. |
endTime |
Output only. Time when the BatchPredictionJob entered any of the following states: |
error |
Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED. |
explanationSpec |
Explanation configuration for this BatchPredictionJob. Can be specified only if generate_explanation is set to |
generateExplanation |
Generate explanation with the batch prediction results. When set to |
inputConfig |
Required. Input configuration of the instances on which predictions are performed. The schema of any single instance may be specified via the Model's PredictSchemata's instance_schema_uri. |
instanceConfig |
Configuration for how to convert batch prediction input instances to the prediction instances that are sent to the Model. |
labels |
The labels with user-defined metadata to organize BatchPredictionJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
manualBatchTuningParameters |
Immutable. Parameters configuring the batch behavior. Currently only applicable when dedicated_resources are used (in other cases Vertex AI does the tuning itself). |
model |
The name of the Model resource that produces the predictions via this job, must share the same ancestor Location. Starting this job has no impact on any existing deployments of the Model and their resources. Exactly one of model and unmanaged_container_model must be set. The model resource name may contain version id or version alias to specify the version. Example: |
modelParameters |
The parameters that govern the predictions. The schema of the parameters may be specified via the Model's PredictSchemata's parameters_schema_uri. |
modelVersionId |
Output only. The version ID of the Model that produces the predictions via this job. |
name |
Output only. Resource name of the BatchPredictionJob. |
outputConfig |
Required. The configuration specifying where output predictions should be written. The schema of any single prediction may be specified as a concatenation of Model's PredictSchemata's instance_schema_uri and prediction_schema_uri. |
outputInfo |
Output only. Information further describing the output of this job. |
partialFailures[] |
Output only. Partial failures encountered. For example, single files that can't be read. This field never exceeds 20 entries. Status details fields contain standard Google Cloud error details. |
resourcesConsumed |
Output only. Information about resources that have been consumed by this job. Provided in real time on a best-effort basis, as well as a final value once the job completes. Note: This field currently may not be populated for batch predictions that use AutoML Models. |
serviceAccount |
The service account that the DeployedModel's container runs as. If not specified, a system generated one will be used, which has minimal permissions and the custom container, if used, may not have enough permission to access other Google Cloud resources. Users deploying the Model must have the |
startTime |
Output only. Time when the BatchPredictionJob for the first time entered the |
state |
Output only. The detailed state of the job. |
Enum type. Can be one of the following: | |
JOB_STATE_UNSPECIFIED |
The job state is unspecified. |
JOB_STATE_QUEUED |
The job has just been created or resumed, and processing has not yet begun. |
JOB_STATE_PENDING |
The service is preparing to run the job. |
JOB_STATE_RUNNING |
The job is in progress. |
JOB_STATE_SUCCEEDED |
The job completed successfully. |
JOB_STATE_FAILED |
The job failed. |
JOB_STATE_CANCELLING |
The job is being cancelled. From this state, the job may only go to JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, or JOB_STATE_CANCELLED. |
JOB_STATE_CANCELLED |
The job has been cancelled. |
JOB_STATE_PAUSED |
The job has been stopped, and can be resumed. |
JOB_STATE_EXPIRED |
The job has expired. |
JOB_STATE_UPDATING |
The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state. |
JOB_STATE_PARTIALLY_SUCCEEDED |
The job partially succeeded; some results may be missing due to errors. |
unmanagedContainerModel |
Contains model information necessary to perform batch prediction without requiring uploading to model registry. Exactly one of model and unmanaged_container_model must be set. |
updateTime |
Output only. Time when the BatchPredictionJob was most recently updated. |
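When polling a job, the states listed above divide into in-flight and terminal. A sketch; treating JOB_STATE_EXPIRED and JOB_STATE_PARTIALLY_SUCCEEDED as terminal is our reading of the descriptions, not a documented guarantee:

```python
# States from which, per the enum descriptions, the job makes no further progress.
TERMINAL_STATES = {
    "JOB_STATE_SUCCEEDED",
    "JOB_STATE_FAILED",
    "JOB_STATE_CANCELLED",
    "JOB_STATE_EXPIRED",
    "JOB_STATE_PARTIALLY_SUCCEEDED",
}

def is_terminal(state: str) -> bool:
    """True when a BatchPredictionJob has reached a final state."""
    return state in TERMINAL_STATES
```

Note that JOB_STATE_PAUSED is deliberately excluded, since a paused job can be resumed.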
GoogleCloudAiplatformV1BatchPredictionJobInputConfig
Configures the input to BatchPredictionJob. See Model.supported_input_storage_formats for Model's supported input formats, and how instances should be expressed via any of them.Fields | |
---|---|
bigquerySource |
The BigQuery location of the input table. The schema of the table should be in the format described by the given context OpenAPI Schema, if one is provided. The table may contain additional columns that are not described by the schema, and they will be ignored. |
gcsSource |
The Cloud Storage location for the input instances. |
instancesFormat |
Required. The format in which instances are given, must be one of the Model's supported_input_storage_formats. |
GoogleCloudAiplatformV1BatchPredictionJobInstanceConfig
Configuration defining how to transform batch prediction input instances to the instances that the Model accepts.Fields | |
---|---|
excludedFields[] |
Fields that will be excluded in the prediction instance that is sent to the Model. The excluded fields will be attached to the batch prediction output if key_field is not specified. When excluded_fields is populated, included_fields must be empty. The input must be JSONL with objects at each line, BigQuery or TfRecord. |
includedFields[] |
Fields that will be included in the prediction instance that is sent to the Model. If instance_type is |
instanceType |
The format of the instance that the Model accepts. Vertex AI will convert compatible batch prediction input instance formats to the specified format. Supported values are: * |
keyField |
The name of the field that is considered the key. The values identified by the key field are not included in the transformed instances that are sent to the Model. This is similar to specifying the name of this field in excluded_fields. In addition, the batch prediction output will not include the instances. Instead the output will only include the value of the key field, in a field named |
GoogleCloudAiplatformV1BatchPredictionJobOutputConfig
Configures the output of BatchPredictionJob. See Model.supported_output_storage_formats for supported output formats, and how predictions are expressed via any of them.Fields | |
---|---|
bigqueryDestination |
The BigQuery project or dataset location where the output is to be written to. If project is provided, a new dataset is created with name |
gcsDestination |
The Cloud Storage location of the directory where the output is to be written to. In the given directory a new directory is created. Its name is |
predictionsFormat |
Required. The format in which Vertex AI gives the predictions, must be one of the Model's supported_output_storage_formats. |
GoogleCloudAiplatformV1BatchPredictionJobOutputInfo
Further describes this job's output. Supplements output_config.Fields | |
---|---|
bigqueryOutputDataset |
Output only. The path of the BigQuery dataset created, in |
bigqueryOutputTable |
Output only. The name of the BigQuery table created, in |
gcsOutputDirectory |
Output only. The full path of the Cloud Storage directory created, into which the prediction output is written. |
GoogleCloudAiplatformV1BatchReadFeatureValuesOperationMetadata
Details of operations that batch reads Feature values.Fields | |
---|---|
genericMetadata |
Operation metadata for Featurestore batch read Features values. |
GoogleCloudAiplatformV1BatchReadFeatureValuesRequest
Request message for FeaturestoreService.BatchReadFeatureValues.Fields | |
---|---|
bigqueryReadInstances |
Similar to csv_read_instances, but from BigQuery source. |
csvReadInstances |
Each read instance consists of exactly one read timestamp and one or more entity IDs identifying entities of the corresponding EntityTypes whose Features are requested. Each output instance contains Feature values of requested entities concatenated together as of the read time. An example read instance may be |
destination |
Required. Specifies output location and format. |
entityTypeSpecs[] |
Required. Specifies EntityType grouping Features to read values of and settings. |
passThroughFields[] |
When not empty, the specified fields in the *_read_instances source will be joined as-is in the output, in addition to those fields from the Featurestore Entity. For BigQuery source, the type of the pass-through values will be automatically inferred. For CSV source, the pass-through values will be passed as opaque bytes. |
startTime |
Optional. Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision. |
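The fields above can be combined into a request body along these lines. The bucket paths, entity type ID, and feature IDs are hypothetical, and the nested featureSelector/idMatcher shape is assumed from the related messages in this reference.

```python
# Hypothetical BatchReadFeatureValues request body (CSV read instances,
# CSV output, one EntityType, one pass-through column).
batch_read_request = {
    "csvReadInstances": {
        "gcsSource": {"uris": ["gs://my-bucket/read_instances.csv"]}
    },
    "destination": {
        "csvDestination": {
            "gcsDestination": {"outputUriPrefix": "gs://my-bucket/out/"}
        }
    },
    "entityTypeSpecs": [
        {
            "entityTypeId": "users",
            "featureSelector": {"idMatcher": {"ids": ["age", "country"]}},
        }
    ],
    "passThroughFields": [{"fieldName": "session_id"}],
}
```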
GoogleCloudAiplatformV1BatchReadFeatureValuesRequestEntityTypeSpec
Selects Features of an EntityType to read values of and specifies read settings.Fields | |
---|---|
entityTypeId |
Required. ID of the EntityType to select Features. The EntityType id is the entity_type_id specified during EntityType creation. |
featureSelector |
Required. Selectors choosing which Feature values to read from the EntityType. |
settings[] |
Per-Feature settings for the batch read. |
GoogleCloudAiplatformV1BatchReadFeatureValuesRequestPassThroughField
Describe pass-through fields in read_instance source.Fields | |
---|---|
fieldName |
Required. The name of the field in the CSV header or the name of the column in BigQuery table. The naming restriction is the same as Feature.name. |
GoogleCloudAiplatformV1BatchReadTensorboardTimeSeriesDataResponse
Response message for TensorboardService.BatchReadTensorboardTimeSeriesData.Fields | |
---|---|
timeSeriesData[] |
The returned time series data. |
GoogleCloudAiplatformV1BigQueryDestination
The BigQuery location for the output content.Fields | |
---|---|
outputUri |
Required. BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: |
GoogleCloudAiplatformV1BigQuerySource
The BigQuery location for the input content.Fields | |
---|---|
inputUri |
Required. BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: |
GoogleCloudAiplatformV1Blob
Content blob. Sending text directly is preferred over sending raw bytes.Fields | |
---|---|
data |
Required. Raw bytes. |
mimeType |
Required. The IANA standard MIME type of the source data. |
GoogleCloudAiplatformV1BlurBaselineConfig
Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383Fields | |
---|---|
maxBlurSigma |
The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline. |
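To illustrate the idea (this is a sketch of the BlurIG interpolation concept, not the service's implementation): the attribution method walks a linear path from the blurred baseline to the input image. With images as flat pixel lists, the path can be written as:

```python
# Linear interpolation path from a maximally blurred baseline toward the
# input image, as used conceptually by BlurIG-style attribution.
def interpolation_path(baseline, image, steps):
    """Yield steps + 1 images blending baseline -> image linearly."""
    for k in range(steps + 1):
        alpha = k / steps
        yield [(1 - alpha) * b + alpha * x for b, x in zip(baseline, image)]

blurred = [0.5, 0.5, 0.5]    # stand-in for a maximally blurred image
original = [0.0, 1.0, 0.25]  # stand-in for the input image
path = list(interpolation_path(blurred, original, steps=4))
# path[0] is the blurred baseline; path[-1] is the input image
```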
GoogleCloudAiplatformV1BoolArray
A list of boolean values.Fields | |
---|---|
values[] |
A list of bool values. |
GoogleCloudAiplatformV1Candidate
A response candidate generated from the model.Fields | |
---|---|
citationMetadata |
Output only. Source attribution of the generated content. |
content |
Output only. Content parts of the candidate. |
finishMessage |
Output only. Describes the reason the model stopped generating tokens in more detail. This is only filled when |
finishReason |
Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens. |
Enum type. Can be one of the following: | |
FINISH_REASON_UNSPECIFIED |
The finish reason is unspecified. |
STOP |
Natural stop point of the model or provided stop sequence. |
MAX_TOKENS |
The maximum number of tokens as specified in the request was reached. |
SAFETY |
The token generation was stopped as the response was flagged for safety reasons. NOTE: When streaming the Candidate.content will be empty if content filters blocked the output. |
RECITATION |
The token generation was stopped as the response was flagged for unauthorized citations. |
OTHER |
All other reasons that stopped the token generation. |
BLOCKLIST |
The token generation was stopped as the response was flagged for terms included in the terminology blocklist. |
PROHIBITED_CONTENT |
The token generation was stopped as the response was flagged for prohibited content. |
SPII |
The token generation was stopped as the response was flagged for Sensitive Personally Identifiable Information (SPII) contents. |
groundingMetadata |
Output only. Metadata specifying the sources used to ground generated content. |
index |
Output only. Index of the candidate. |
safetyRatings[] |
Output only. List of ratings for the safety of a response candidate. There is at most one rating per category. |
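One defensive way to consume a candidate is to treat the blocking finish reasons above as terminal. The sample payload and helper below are illustrative, assuming the JSON field names documented for this message.

```python
# Treat safety/recitation/blocklist-style finish reasons as blocked output.
TERMINAL_BLOCK_REASONS = {
    "SAFETY", "RECITATION", "BLOCKLIST", "PROHIBITED_CONTENT", "SPII",
}

def candidate_text(c):
    """Join text parts, or explain why generation stopped early."""
    if c.get("finishReason") in TERMINAL_BLOCK_REASONS:
        return f"[blocked: {c.get('finishMessage', c['finishReason'])}]"
    return "".join(p.get("text", "") for p in c.get("content", {}).get("parts", []))

candidate = {  # hypothetical blocked candidate
    "finishReason": "SAFETY",
    "finishMessage": "Response blocked by safety filters.",
    "content": {"role": "model", "parts": []},
}
print(candidate_text(candidate))  # -> [blocked: Response blocked by safety filters.]
```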
GoogleCloudAiplatformV1CheckTrialEarlyStoppingStateMetatdata
This message will be placed in the metadata field of a google.longrunning.Operation associated with a CheckTrialEarlyStoppingState request.Fields | |
---|---|
genericMetadata |
Operation metadata for suggesting Trials. |
study |
The name of the Study that the Trial belongs to. |
trial |
The Trial name. |
GoogleCloudAiplatformV1CheckTrialEarlyStoppingStateResponse
Response message for VizierService.CheckTrialEarlyStoppingState.Fields | |
---|---|
shouldStop |
True if the Trial should stop. |
GoogleCloudAiplatformV1Citation
Source attributions for content.Fields | |
---|---|
endIndex |
Output only. End index into the content. |
license |
Output only. License of the attribution. |
publicationDate |
Output only. Publication date of the attribution. |
startIndex |
Output only. Start index into the content. |
title |
Output only. Title of the attribution. |
uri |
Output only. Url reference of the attribution. |
GoogleCloudAiplatformV1CitationMetadata
A collection of source attributions for a piece of content.Fields | |
---|---|
citations[] |
Output only. List of citations. |
GoogleCloudAiplatformV1CompleteTrialRequest
Request message for VizierService.CompleteTrial.Fields | |
---|---|
finalMeasurement |
Optional. If provided, it will be used as the completed Trial's final_measurement; otherwise, the service will auto-select a previously reported measurement as the final_measurement. |
infeasibleReason |
Optional. A human readable reason why the trial was infeasible. This should only be provided if |
trialInfeasible |
Optional. True if the Trial cannot be run with the given Parameter, and final_measurement will be ignored. |
GoogleCloudAiplatformV1CompletionStats
Success and error statistics of processing multiple entities (for example, DataItems or structured data rows) in batch.Fields | |
---|---|
failedCount |
Output only. The number of entities for which any error was encountered. |
incompleteCount |
Output only. In cases when enough errors are encountered, a job, pipeline, or operation may fail as a whole. Below is the number of entities for which processing had not been finished (in either a successful or failed state). Set to -1 if the number is unknown (for example, the operation failed before the total entity number could be collected). |
successfulCount |
Output only. The number of entities that had been processed successfully. |
successfulForecastPointCount |
Output only. The number of the successful forecast points that are generated by the forecasting model. This is ONLY used by the forecasting batch prediction. |
GoogleCloudAiplatformV1ComputeTokensRequest
Request message for ComputeTokens RPC call.Fields | |
---|---|
instances[] |
Required. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models. |
GoogleCloudAiplatformV1ComputeTokensResponse
Response message for ComputeTokens RPC call.Fields | |
---|---|
tokensInfo[] |
Lists of token info from the input. A ComputeTokensRequest can contain multiple instances, each with a prompt, so a list of token info is returned for each instance. |
GoogleCloudAiplatformV1ContainerRegistryDestination
The Container Registry location for the container image.Fields | |
---|---|
outputUri |
Required. Container Registry URI of a container image. Only Google Container Registry and Artifact Registry are supported now. Accepted forms: * Google Container Registry path. For example: |
GoogleCloudAiplatformV1ContainerSpec
The spec of a Container.Fields | |
---|---|
args[] |
The arguments to be passed when starting the container. |
command[] |
The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided. |
env[] |
Environment variables to be passed to the container. Maximum limit is 100. |
imageUri |
Required. The URI of a container image in the Container Registry that is to be run on each worker replica. |
GoogleCloudAiplatformV1Content
The base structured datatype containing multi-part content of a message. A Content includes a role field designating the producer of the Content and a parts field containing multi-part data that contains the content of the message turn.Fields | |
---|---|
parts[] |
Required. Ordered |
role |
Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. |
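A multi-turn contents array built from this message type might look like the sketch below; the conversation text is purely illustrative.

```python
# A multi-turn conversation as a list of Content objects, alternating
# between the 'user' and 'model' roles described above.
contents = [
    {"role": "user", "parts": [{"text": "What is the capital of France?"}]},
    {"role": "model", "parts": [{"text": "Paris."}]},
    {"role": "user", "parts": [{"text": "And its population?"}]},
]
# Every role must be either 'user' or 'model'.
assert all(c["role"] in ("user", "model") for c in contents)
```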
GoogleCloudAiplatformV1Context
Instance of a general context.Fields | |
---|---|
createTime |
Output only. Timestamp when this Context was created. |
description |
Description of the Context |
displayName |
User provided display name of the Context. May be up to 128 Unicode characters. |
etag |
An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
The labels with user-defined metadata to organize your Contexts. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Context (System labels are excluded). |
metadata |
Properties of the Context. Top level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB. |
name |
Immutable. The resource name of the Context. |
parentContexts[] |
Output only. A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts. |
schemaTitle |
The title of the schema describing the metadata. Schema title and version is expected to be registered in earlier Create Schema calls. And both are used together as unique identifiers to identify schemas within the local metadata store. |
schemaVersion |
The version of the schema in schema_name to use. Schema title and version is expected to be registered in earlier Create Schema calls. And both are used together as unique identifiers to identify schemas within the local metadata store. |
updateTime |
Output only. Timestamp when this Context was last updated. |
GoogleCloudAiplatformV1CopyModelOperationMetadata
Details of ModelService.CopyModel operation.Fields | |
---|---|
genericMetadata |
The common part of the operation metadata. |
GoogleCloudAiplatformV1CopyModelRequest
Request message for ModelService.CopyModel.Fields | |
---|---|
encryptionSpec |
Customer-managed encryption key options. If this is set, then the Model copy will be encrypted with the provided encryption key. |
modelId |
Optional. Copy source_model into a new Model with this ID. The ID will become the final component of the model resource name. This value may be up to 63 characters, and valid characters are |
parentModel |
Optional. Specify this field to copy source_model into this existing Model as a new version. Format: |
sourceModel |
Required. The resource name of the Model to copy. That Model must be in the same Project. Format: |
GoogleCloudAiplatformV1CopyModelResponse
Response message of ModelService.CopyModel operation.Fields | |
---|---|
model |
The name of the copied Model resource. Format: |
modelVersionId |
Output only. The version ID of the model that is copied. |
GoogleCloudAiplatformV1CountTokensRequest
Request message for PredictionService.CountTokens.Fields | |
---|---|
contents[] |
Required. Input content. |
instances[] |
Required. The instances that are the input to token counting call. Schema is identical to the prediction schema of the underlying model. |
model |
Required. The name of the publisher model requested to serve the prediction. Format: |
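Putting these fields together, a CountTokens request body for a publisher model could be sketched as follows; the project, location, and model name are placeholders.

```python
# Hypothetical CountTokens request body using the contents field.
model = (
    "projects/my-project/locations/us-central1"
    "/publishers/google/models/gemini-1.0-pro"
)
count_tokens_request = {
    "model": model,
    "contents": [
        {"role": "user", "parts": [{"text": "Hello, how many tokens is this?"}]}
    ],
}
# The response would carry totalTokens and totalBillableCharacters.
```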
GoogleCloudAiplatformV1CountTokensResponse
Response message for PredictionService.CountTokens.Fields | |
---|---|
totalBillableCharacters |
The total number of billable characters counted across all instances from the request. |
totalTokens |
The total number of tokens counted across all instances from the request. |
GoogleCloudAiplatformV1CreateDatasetOperationMetadata
Runtime operation information for DatasetService.CreateDataset.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1CreateDatasetVersionOperationMetadata
Runtime operation information for DatasetService.CreateDatasetVersion.Fields | |
---|---|
genericMetadata |
The common part of the operation metadata. |
GoogleCloudAiplatformV1CreateDeploymentResourcePoolOperationMetadata
Runtime operation information for CreateDeploymentResourcePool method.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1CreateDeploymentResourcePoolRequest
Request message for CreateDeploymentResourcePool method.Fields | |
---|---|
deploymentResourcePool |
Required. The DeploymentResourcePool to create. |
deploymentResourcePoolId |
Required. The ID to use for the DeploymentResourcePool, which will become the final component of the DeploymentResourcePool's resource name. The maximum length is 63 characters, and valid characters are |
GoogleCloudAiplatformV1CreateEndpointOperationMetadata
Runtime operation information for EndpointService.CreateEndpoint.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1CreateEntityTypeOperationMetadata
Details of operations that perform create EntityType.Fields | |
---|---|
genericMetadata |
Operation metadata for EntityType. |
GoogleCloudAiplatformV1CreateFeatureGroupOperationMetadata
Details of operations that perform create FeatureGroup.Fields | |
---|---|
genericMetadata |
Operation metadata for FeatureGroup. |
GoogleCloudAiplatformV1CreateFeatureOnlineStoreOperationMetadata
Details of operations that perform create FeatureOnlineStore.Fields | |
---|---|
genericMetadata |
Operation metadata for FeatureOnlineStore. |
GoogleCloudAiplatformV1CreateFeatureOperationMetadata
Details of operations that perform create Feature.Fields | |
---|---|
genericMetadata |
Operation metadata for Feature. |
GoogleCloudAiplatformV1CreateFeatureRequest
Request message for FeaturestoreService.CreateFeature. Request message for FeatureRegistryService.CreateFeature.Fields | |
---|---|
feature |
Required. The Feature to create. |
featureId |
Required. The ID to use for the Feature, which will become the final component of the Feature's resource name. This value may be up to 128 characters, and valid characters are |
parent |
Required. The resource name of the EntityType or FeatureGroup to create a Feature. Format for entity_type as parent: |
GoogleCloudAiplatformV1CreateFeatureViewOperationMetadata
Details of operations that perform create FeatureView.Fields | |
---|---|
genericMetadata |
Operation metadata for FeatureView Create. |
GoogleCloudAiplatformV1CreateFeaturestoreOperationMetadata
Details of operations that perform create Featurestore.Fields | |
---|---|
genericMetadata |
Operation metadata for Featurestore. |
GoogleCloudAiplatformV1CreateIndexEndpointOperationMetadata
Runtime operation information for IndexEndpointService.CreateIndexEndpoint.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1CreateIndexOperationMetadata
Runtime operation information for IndexService.CreateIndex.Fields | |
---|---|
genericMetadata |
The operation generic information. |
nearestNeighborSearchOperationMetadata |
The operation metadata with regard to Matching Engine Index operation. |
GoogleCloudAiplatformV1CreateMetadataStoreOperationMetadata
Details of operations that perform MetadataService.CreateMetadataStore.Fields | |
---|---|
genericMetadata |
Operation metadata for creating a MetadataStore. |
GoogleCloudAiplatformV1CreateNotebookRuntimeTemplateOperationMetadata
Metadata information for NotebookService.CreateNotebookRuntimeTemplate.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1CreatePersistentResourceOperationMetadata
Details of operations that perform create PersistentResource.Fields | |
---|---|
genericMetadata |
Operation metadata for PersistentResource. |
progressMessage |
Progress Message for Create LRO |
GoogleCloudAiplatformV1CreatePipelineJobRequest
Request message for PipelineService.CreatePipelineJob.Fields | |
---|---|
parent |
Required. The resource name of the Location to create the PipelineJob in. Format: |
pipelineJob |
Required. The PipelineJob to create. |
pipelineJobId |
The ID to use for the PipelineJob, which will become the final component of the PipelineJob name. If not provided, an ID will be automatically generated. This value should be less than 128 characters, and valid characters are |
GoogleCloudAiplatformV1CreateRegistryFeatureOperationMetadata
Details of operations that perform create FeatureGroup.Fields | |
---|---|
genericMetadata |
Operation metadata for Feature. |
GoogleCloudAiplatformV1CreateSpecialistPoolOperationMetadata
Runtime operation information for SpecialistPoolService.CreateSpecialistPool.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1CreateTensorboardOperationMetadata
Details of operations that perform create Tensorboard.Fields | |
---|---|
genericMetadata |
Operation metadata for Tensorboard. |
GoogleCloudAiplatformV1CreateTensorboardRunRequest
Request message for TensorboardService.CreateTensorboardRun.Fields | |
---|---|
parent |
Required. The resource name of the TensorboardExperiment to create the TensorboardRun in. Format: |
tensorboardRun |
Required. The TensorboardRun to create. |
tensorboardRunId |
Required. The ID to use for the Tensorboard run, which becomes the final component of the Tensorboard run's resource name. This value should be 1-128 characters, and valid characters are |
GoogleCloudAiplatformV1CreateTensorboardTimeSeriesRequest
Request message for TensorboardService.CreateTensorboardTimeSeries.Fields | |
---|---|
parent |
Required. The resource name of the TensorboardRun to create the TensorboardTimeSeries in. Format: |
tensorboardTimeSeries |
Required. The TensorboardTimeSeries to create. |
tensorboardTimeSeriesId |
Optional. The user specified unique ID to use for the TensorboardTimeSeries, which becomes the final component of the TensorboardTimeSeries's resource name. This value should match "a-z0-9{0, 127}" |
GoogleCloudAiplatformV1CsvDestination
The storage details for CSV output content.Fields | |
---|---|
gcsDestination |
Required. Google Cloud Storage location. |
GoogleCloudAiplatformV1CsvSource
The storage details for CSV input content.Fields | |
---|---|
gcsSource |
Required. Google Cloud Storage location. |
GoogleCloudAiplatformV1CustomJob
Represents a job that runs custom workloads such as a Docker container or a Python package. A CustomJob can have multiple worker pools, and each worker pool can have its own machine and input spec. A CustomJob will be cleaned up once the job enters a terminal state (failed or succeeded).Fields | |
---|---|
createTime |
Output only. Time when the CustomJob was created. |
displayName |
Required. The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
encryptionSpec |
Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key. |
endTime |
Output only. Time when the CustomJob entered any of the following states: |
error |
Output only. Only populated when job's state is |
jobSpec |
Required. Job spec. |
labels |
The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
name |
Output only. Resource name of a CustomJob. |
startTime |
Output only. Time when the CustomJob for the first time entered the |
state |
Output only. The detailed state of the job. |
Enum type. Can be one of the following: | |
JOB_STATE_UNSPECIFIED |
The job state is unspecified. |
JOB_STATE_QUEUED |
The job has just been created or resumed, and processing has not yet begun. |
JOB_STATE_PENDING |
The service is preparing to run the job. |
JOB_STATE_RUNNING |
The job is in progress. |
JOB_STATE_SUCCEEDED |
The job completed successfully. |
JOB_STATE_FAILED |
The job failed. |
JOB_STATE_CANCELLING |
The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED , JOB_STATE_FAILED or JOB_STATE_CANCELLED . |
JOB_STATE_CANCELLED |
The job has been cancelled. |
JOB_STATE_PAUSED |
The job has been stopped, and can be resumed. |
JOB_STATE_EXPIRED |
The job has expired. |
JOB_STATE_UPDATING |
The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state. |
JOB_STATE_PARTIALLY_SUCCEEDED |
The job has partially succeeded; some results may be missing due to errors. |
updateTime |
Output only. Time when the CustomJob was most recently updated. |
webAccessUris |
Output only. URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is |
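When polling a CustomJob, the JOB_STATE_* enum above determines when to stop. The helper below is a sketch; it treats the expired and partially-succeeded states as terminal along with the ones the message description names (failed or succeeded).

```python
# Terminal-state check for a CustomJob, based on the JOB_STATE_* enum above.
TERMINAL_STATES = {
    "JOB_STATE_SUCCEEDED",
    "JOB_STATE_FAILED",
    "JOB_STATE_CANCELLED",
    "JOB_STATE_EXPIRED",
    "JOB_STATE_PARTIALLY_SUCCEEDED",  # assumed terminal here
}

def is_terminal(state: str) -> bool:
    """Return True when polling can stop for this job state."""
    return state in TERMINAL_STATES

assert is_terminal("JOB_STATE_SUCCEEDED")
assert not is_terminal("JOB_STATE_RUNNING")
```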
GoogleCloudAiplatformV1CustomJobSpec
Represents the spec of a CustomJob.Fields | |
---|---|
baseOutputDirectory |
The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = |
enableDashboardAccess |
Optional. Whether you want Vertex AI to enable access to the customized dashboard in training chief container. If set to |
enableWebAccess |
Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to |
experiment |
Optional. The Experiment associated with this job. Format: |
experimentRun |
Optional. The Experiment Run associated with this job. Format: |
models[] |
Optional. The name of the Model resources for which to generate a mapping to artifact URIs. Applicable only to some of the Google-provided custom jobs. Format: |
network |
Optional. The full name of the Compute Engine network to which the Job should be peered. For example, |
persistentResourceId |
Optional. The ID of the PersistentResource, in the same Project and Location, on which to run the job. If this is specified, the job will run on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected. |
protectedArtifactLocationId |
The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations |
reservedIpRanges[] |
Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. |
scheduling |
Scheduling options for a CustomJob. |
serviceAccount |
Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used. |
tensorboard |
Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: |
workerPoolSpecs[] |
Required. The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value. |
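A minimal CustomJobSpec sketch with a single containerized worker pool; the image URI, machine type, bucket, and service account below are placeholders, and the nested field names are assumed from the related WorkerPoolSpec and GcsDestination messages.

```python
# Hypothetical CustomJob spec: one worker pool running a training container.
job_spec = {
    "workerPoolSpecs": [
        {
            "machineSpec": {"machineType": "n1-standard-4"},
            "replicaCount": "1",
            "containerSpec": {
                "imageUri": "gcr.io/my-project/trainer:latest",
                "args": ["--epochs", "10"],
            },
        }
    ],
    "baseOutputDirectory": {"outputUriPrefix": "gs://my-bucket/custom-job-out/"},
    "serviceAccount": "trainer@my-project.iam.gserviceaccount.com",
}
```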
GoogleCloudAiplatformV1DataItem
A piece of data in a Dataset. Could be an image, a video, a document or plain text.Fields | |
---|---|
createTime |
Output only. Timestamp when this DataItem was created. |
etag |
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
Optional. The labels with user-defined metadata to organize your DataItems. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one DataItem(System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. |
name |
Output only. The resource name of the DataItem. |
payload |
Required. The data that the DataItem represents (for example, an image or a text snippet). The schema of the payload is stored in the parent Dataset's metadata schema's dataItemSchemaUri field. |
updateTime |
Output only. Timestamp when this DataItem was last updated. |
GoogleCloudAiplatformV1DataItemView
A container for a single DataItem and Annotations on it.Fields | |
---|---|
annotations[] |
The Annotations on the DataItem. If too many Annotations would be returned for the DataItem, this field is truncated per annotations_limit in the request; if it was, has_truncated_annotations will be set to true. |
dataItem |
The DataItem. |
hasTruncatedAnnotations |
True if and only if the Annotations field has been truncated. This happens if more Annotations for this DataItem met the request's annotation_filter than annotations_limit allows to be returned. Note that if the Annotations field is not being returned due to a field mask, this field will not be set to true no matter how many Annotations there are. |
GoogleCloudAiplatformV1DataLabelingJob
DataLabelingJob is used to trigger a human labeling job on unlabeled data from the following Dataset:Fields | |
---|---|
activeLearningConfig |
Parameters that configure the active learning pipeline. Active learning will label the data incrementally via several iterations. For every iteration, it will select a batch of data based on the sampling strategy. |
annotationLabels |
Labels to assign to annotations generated by this DataLabelingJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. |
createTime |
Output only. Timestamp when this DataLabelingJob was created. |
currentSpend |
Output only. Estimated cost (in US dollars) that the DataLabelingJob has incurred to date. |
datasets[] |
Required. Dataset resource names. Right now we only support labeling from a single Dataset. Format: |
displayName |
Required. The user-defined name of the DataLabelingJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
encryptionSpec |
Customer-managed encryption key spec for a DataLabelingJob. If set, this DataLabelingJob will be secured by this key. Note: Annotations created in the DataLabelingJob are associated with the EncryptionSpec of the Dataset they are exported to. |
error |
Output only. DataLabelingJob errors. It is only populated when job's state is |
inputs |
Required. Input config parameters for the DataLabelingJob. |
inputsSchemaUri |
Required. Points to a YAML file stored on Google Cloud Storage describing the config for a specific type of DataLabelingJob. The schema files that can be used here are found in the https://storage.googleapis.com/google-cloud-aiplatform bucket in the /schema/datalabelingjob/inputs/ folder. |
instructionUri |
Required. The Google Cloud Storage location of the instruction pdf. This pdf is shared with labelers, and provides detailed description on how to label DataItems in Datasets. |
labelerCount |
Required. Number of labelers to work on each DataItem. |
labelingProgress |
Output only. Current labeling job progress percentage scaled in interval [0, 100], indicating the percentage of DataItems that have been finished. |
labels |
The labels with user-defined metadata to organize your DataLabelingJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for each DataLabelingJob: * "aiplatform.googleapis.com/schema": output only, its value is the inputs_schema's title. |
name |
Output only. Resource name of the DataLabelingJob. |
specialistPools[] |
The SpecialistPools' resource names associated with this job. |
state |
Output only. The detailed state of the job. |
Enum type. Can be one of the following: | |
JOB_STATE_UNSPECIFIED |
The job state is unspecified. |
JOB_STATE_QUEUED |
The job has just been created or resumed, and processing has not yet begun. |
JOB_STATE_PENDING |
The service is preparing to run the job. |
JOB_STATE_RUNNING |
The job is in progress. |
JOB_STATE_SUCCEEDED |
The job completed successfully. |
JOB_STATE_FAILED |
The job failed. |
JOB_STATE_CANCELLING |
The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED , JOB_STATE_FAILED or JOB_STATE_CANCELLED . |
JOB_STATE_CANCELLED |
The job has been cancelled. |
JOB_STATE_PAUSED |
The job has been stopped, and can be resumed. |
JOB_STATE_EXPIRED |
The job has expired. |
JOB_STATE_UPDATING |
The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state. |
JOB_STATE_PARTIALLY_SUCCEEDED |
The job has partially succeeded; some results may be missing due to errors. |
updateTime |
Output only. Timestamp when this DataLabelingJob was updated most recently. |
GoogleCloudAiplatformV1Dataset
A collection of DataItems and Annotations on them.Fields | |
---|---|
createTime |
Output only. Timestamp when this Dataset was created. |
dataItemCount |
Output only. The number of DataItems in this Dataset. Only applies to non-structured Datasets. |
description |
The description of the Dataset. |
displayName |
Required. The user-defined name of the Dataset. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
encryptionSpec |
Customer-managed encryption key spec for a Dataset. If set, this Dataset and all sub-resources of this Dataset will be secured by this key. |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
The labels with user-defined metadata to organize your Datasets. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Dataset (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for each Dataset: * "aiplatform.googleapis.com/dataset_metadata_schema": output only, its value is the metadata_schema's title. |
metadata |
Required. Additional information about the Dataset. |
metadataArtifact |
Output only. The resource name of the Artifact that was created in MetadataStore when creating the Dataset. The Artifact resource name pattern is |
metadataSchemaUri |
Required. Points to a YAML file stored on Google Cloud Storage describing additional information about the Dataset. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/metadata/. |
modelReference |
Optional. Reference to the public base model last used by the dataset. Only set for prompt datasets. |
name |
Output only. The resource name of the Dataset. |
savedQueries[] |
All SavedQueries belonging to the Dataset will be returned in List/Get Dataset responses. The annotation_specs field will not be populated except for UI cases, which will only use annotation_spec_count. In a CreateDataset request, a SavedQuery is created together with the Dataset if this field is set; at most one SavedQuery can be set in CreateDatasetRequest. The SavedQuery should not contain any AnnotationSpec. |
updateTime |
Output only. Timestamp when this Dataset was last updated. |
GoogleCloudAiplatformV1DatasetVersion
Describes the dataset version.Fields | |
---|---|
bigQueryDatasetName |
Output only. Name of the associated BigQuery dataset. |
createTime |
Output only. Timestamp when this DatasetVersion was created. |
displayName |
The user-defined name of the DatasetVersion. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
metadata |
Required. Output only. Additional information about the DatasetVersion. |
modelReference |
Output only. Reference to the public base model last used by the dataset version. Only set for prompt dataset versions. |
name |
Output only. The resource name of the DatasetVersion. |
updateTime |
Output only. Timestamp when this DatasetVersion was last updated. |
GoogleCloudAiplatformV1DedicatedResources
A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.Fields | |
---|---|
autoscalingMetricSpecs[] |
Immutable. The metric specifications that override a resource utilization metric's target value (CPU utilization, accelerator duty cycle, and so on; the default target is 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics: it scales up when either metric exceeds its target value, and scales down only when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to |
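The scaling rule described above (scale up when either metric exceeds its target, scale down only when both are below) can be sketched as a small decision function. This is an illustrative model of the documented behavior, not the service's actual autoscaler:

```python
DEFAULT_TARGET = 60  # documented default target for both metrics


def autoscale_decision(cpu_utilization: float,
                       duty_cycle: float,
                       cpu_target: float = DEFAULT_TARGET,
                       duty_target: float = DEFAULT_TARGET) -> str:
    """Sketch of the documented rule when accelerator_count > 0:
    scale up if either metric exceeds its target; scale down only
    if both metrics are under their targets; otherwise hold."""
    if cpu_utilization > cpu_target or duty_cycle > duty_target:
        return "scale_up"
    if cpu_utilization < cpu_target and duty_cycle < duty_target:
        return "scale_down"
    return "hold"
```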
machineSpec |
Required. Immutable. The specification of a single machine used by the prediction. |
maxReplicaCount |
Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type). |
minReplicaCount |
Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. |
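The quota charge described under max_replica_count is simple arithmetic. A sketch, with hypothetical machine-shape numbers (core and GPU counts depend on the machine type you actually select):

```python
def quota_charge(max_replica_count: int,
                 cores_per_replica: int,
                 gpus_per_replica: int) -> tuple[int, int]:
    """CPU and GPU quota charged for a deployment, per the
    max_replica_count description: max replicas times the cores
    and GPUs of the selected machine type."""
    return (max_replica_count * cores_per_replica,
            max_replica_count * gpus_per_replica)


# e.g. up to 4 replicas of a hypothetical 8-core, 1-GPU machine type
cpu_quota, gpu_quota = quota_charge(4, 8, 1)
```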
GoogleCloudAiplatformV1DeleteFeatureValuesOperationMetadata
Details of operations that delete Feature values.Fields | |
---|---|
genericMetadata |
Operation metadata for Featurestore delete Features values. |
GoogleCloudAiplatformV1DeleteFeatureValuesRequest
Request message for FeaturestoreService.DeleteFeatureValues.Fields | |
---|---|
selectEntity |
Select feature values to be deleted by specifying entities. |
selectTimeRangeAndFeature |
Select feature values to be deleted by specifying time range and features. |
GoogleCloudAiplatformV1DeleteFeatureValuesRequestSelectEntity
Message to select entity. If an entity id is selected, all the feature values corresponding to the entity id will be deleted, including the entityId.Fields | |
---|---|
entityIdSelector |
Required. Selectors choosing feature values of which entity id to be deleted from the EntityType. |
GoogleCloudAiplatformV1DeleteFeatureValuesRequestSelectTimeRangeAndFeature
Message to select time range and feature. Values of the selected feature generated within an inclusive time range will be deleted. Using this option permanently deletes the feature values from the specified feature IDs within the specified time range. This might include data from the online storage. If you want to retain any deleted historical data in the online storage, you must re-ingest it.Fields | |
---|---|
featureSelector |
Required. Selectors choosing which feature values to be deleted from the EntityType. |
skipOnlineStorageDelete |
If set, data will not be deleted from online storage. When time range is older than the data in online storage, setting this to be true will make the deletion have no impact on online serving. |
timeRange |
Required. Select feature generated within a half-inclusive time range. The time range is lower inclusive and upper exclusive. |
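The half-inclusive semantics above (lower bound inclusive, upper bound exclusive) can be sketched as a membership check; the function name here is illustrative, not part of the API:

```python
from datetime import datetime


def in_deletion_range(value_time: datetime,
                      start: datetime,
                      end: datetime) -> bool:
    """Half-inclusive time range per the timeRange description:
    a value generated exactly at the lower bound is deleted,
    one generated exactly at the upper bound is not."""
    return start <= value_time < end
```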
GoogleCloudAiplatformV1DeleteFeatureValuesResponse
Response message for FeaturestoreService.DeleteFeatureValues.Fields | |
---|---|
selectEntity |
Response for request specifying the entities to delete |
selectTimeRangeAndFeature |
Response for request specifying time range and feature |
GoogleCloudAiplatformV1DeleteFeatureValuesResponseSelectEntity
Response message if the request uses the SelectEntity option.Fields | |
---|---|
offlineStorageDeletedEntityRowCount |
The count of deleted entity rows in the offline storage. Each row corresponds to the combination of an entity ID and a timestamp. One entity ID can have multiple rows in the offline storage. |
onlineStorageDeletedEntityCount |
The count of deleted entities in the online storage. Each entity ID corresponds to one entity. |
GoogleCloudAiplatformV1DeleteFeatureValuesResponseSelectTimeRangeAndFeature
Response message if the request uses the SelectTimeRangeAndFeature option.Fields | |
---|---|
impactedFeatureCount |
The count of the features or columns impacted. This is the same as the feature count in the request. |
offlineStorageModifiedEntityRowCount |
The count of modified entity rows in the offline storage. Each row corresponds to the combination of an entity ID and a timestamp. One entity ID can have multiple rows in the offline storage. Within each row, only the features specified in the request are deleted. |
onlineStorageModifiedEntityCount |
The count of modified entities in the online storage. Each entity ID corresponds to one entity. Within each entity, only the features specified in the request are deleted. |
GoogleCloudAiplatformV1DeleteMetadataStoreOperationMetadata
Details of operations that perform MetadataService.DeleteMetadataStore.Fields | |
---|---|
genericMetadata |
Operation metadata for deleting a MetadataStore. |
GoogleCloudAiplatformV1DeleteOperationMetadata
Details of operations that perform deletes of any entities.Fields | |
---|---|
genericMetadata |
The common part of the operation metadata. |
GoogleCloudAiplatformV1DeployIndexOperationMetadata
Runtime operation information for IndexEndpointService.DeployIndex.Fields | |
---|---|
deployedIndexId |
The unique index id specified by user |
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1DeployIndexRequest
Request message for IndexEndpointService.DeployIndex.Fields | |
---|---|
deployedIndex |
Required. The DeployedIndex to be created within the IndexEndpoint. |
GoogleCloudAiplatformV1DeployIndexResponse
Response message for IndexEndpointService.DeployIndex.Fields | |
---|---|
deployedIndex |
The DeployedIndex that has been deployed in the IndexEndpoint. |
GoogleCloudAiplatformV1DeployModelOperationMetadata
Runtime operation information for EndpointService.DeployModel.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1DeployModelRequest
Request message for EndpointService.DeployModel.Fields | |
---|---|
deployedModel |
Required. The DeployedModel to be created within the Endpoint. Note that Endpoint.traffic_split must be updated for the DeployedModel to start receiving traffic, either as part of this call, or via EndpointService.UpdateEndpoint. |
trafficSplit |
A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If this field is non-empty, then the Endpoint's traffic_split will be overwritten with it. To refer to the ID of the just being deployed Model, a "0" should be used, and the actual ID of the new DeployedModel will be filled in its place by this method. The traffic percentage values must add up to 100. If this field is empty, then the Endpoint's traffic_split is not updated. |
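The trafficSplit constraints above can be sketched as a client-side check. A minimal sketch; the existing DeployedModel ID below is hypothetical, and "0" is the documented placeholder for the model being deployed in the same request:

```python
def validate_traffic_split(traffic_split: dict[str, int]) -> bool:
    """Sketch of the documented constraint: if the map is non-empty,
    the traffic percentage values must add up to 100."""
    if not traffic_split:
        return True  # empty map: the Endpoint's traffic_split is not updated
    return sum(traffic_split.values()) == 100


# Send 80% of traffic to the new model ("0" is replaced by the service
# with the actual DeployedModel ID) and 20% to an existing deployment.
split = {"0": 80, "1234567890": 20}
```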
GoogleCloudAiplatformV1DeployModelResponse
Response message for EndpointService.DeployModel.Fields | |
---|---|
deployedModel |
The DeployedModel that has been deployed in the Endpoint. |
GoogleCloudAiplatformV1DeployedIndex
A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.Fields | |
---|---|
automaticResources |
Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI, and optionally allow only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. |
createTime |
Output only. Timestamp when the DeployedIndex was created. |
dedicatedResources |
Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency. |
deployedIndexAuthConfig |
Optional. If set, the authentication is enabled for the private endpoint. |
deploymentGroup |
Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, the 'default' deployment group is used. Creating |
displayName |
The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used. |
enableAccessLogging |
Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option. |
id |
Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in. |
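The ID format rule above can be sketched as a regular expression check (uniqueness within the project cannot be checked locally). A hypothetical helper; "letter" is read here as an ASCII letter of either case:

```python
import re

# Up to 128 chars: a leading letter, then letters, digits, or underscores.
ID_PATTERN = re.compile(r"^[A-Za-z][A-Za-z0-9_]{0,127}$")


def is_valid_deployed_index_id(deployed_index_id: str) -> bool:
    """Sketch of the documented DeployedIndex ID rule."""
    return ID_PATTERN.match(deployed_index_id) is not None
```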
index |
Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index. |
indexSyncTime |
Output only. The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes to the original Index are being made (e.g. when what the Index contains is being changed), the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal to or before this sync time are contained in this DeployedIndex. |
privateEndpoints |
Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured. |
reservedIpRanges[] |
Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges. |
GoogleCloudAiplatformV1DeployedIndexAuthConfig
Used to set up the auth on the DeployedIndex's private endpoint.Fields | |
---|---|
authProvider |
Defines the authentication provider that the DeployedIndex uses. |
GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProvider
Configuration for an authentication provider, including support for JSON Web Token (JWT).Fields | |
---|---|
allowedIssuers[] |
A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: |
audiences[] |
The list of JWT audiences that are allowed access. A JWT containing any of these audiences will be accepted. |
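The "any matching audience is accepted" rule above can be sketched as a simple membership check. An illustrative helper, not part of any client library (real JWT validation also verifies the signature and issuer):

```python
def audience_allowed(token_audiences: list[str],
                     allowed_audiences: list[str]) -> bool:
    """Per the audiences description: a JWT is accepted if it
    carries ANY of the configured audiences."""
    allowed = set(allowed_audiences)
    return any(aud in allowed for aud in token_audiences)
```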
GoogleCloudAiplatformV1DeployedIndexRef
Points to a DeployedIndex.Fields | |
---|---|
deployedIndexId |
Immutable. The ID of the DeployedIndex in the above IndexEndpoint. |
displayName |
Output only. The display name of the DeployedIndex. |
indexEndpoint |
Immutable. A resource name of the IndexEndpoint. |
GoogleCloudAiplatformV1DeployedModel
A deployment of a Model. Endpoints contain one or more DeployedModels.Fields | |
---|---|
automaticResources |
A description of resources that to a large degree are decided by Vertex AI, and require only a modest additional configuration. |
createTime |
Output only. Timestamp when the DeployedModel was created. |
dedicatedResources |
A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration. |
disableContainerLogging |
For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send |
disableExplanations |
If true, deploy the model without the explanation feature, regardless of the existence of Model.explanation_spec or explanation_spec. |
displayName |
The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used. |
enableAccessLogging |
If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option. |
explanationSpec |
Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration. |
id |
Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are |
model |
Required. The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain version id or version alias to specify the version. Example: |
modelVersionId |
Output only. The version ID of the model that is deployed. |
privateEndpoints |
Output only. Provides paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured. |
serviceAccount |
The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the |
sharedResources |
The resource name of the shared DeploymentResourcePool to deploy on. Format: |
GoogleCloudAiplatformV1DeployedModelRef
Points to a DeployedModel.Fields | |
---|---|
deployedModelId |
Immutable. An ID of a DeployedModel in the above Endpoint. |
endpoint |
Immutable. A resource name of an Endpoint. |
GoogleCloudAiplatformV1DeploymentResourcePool
A description of resources that can be shared by multiple DeployedModels, whose underlying specification consists of a DedicatedResources.Fields | |
---|---|
createTime |
Output only. Timestamp when this DeploymentResourcePool was created. |
dedicatedResources |
Required. The underlying DedicatedResources that the DeploymentResourcePool uses. |
name |
Immutable. The resource name of the DeploymentResourcePool. Format: |
GoogleCloudAiplatformV1DestinationFeatureSetting
(No description provided)Fields | |
---|---|
destinationField |
Specify the field name in the export destination. If not specified, Feature ID is used. |
featureId |
Required. The ID of the Feature to apply the setting to. |
GoogleCloudAiplatformV1DirectPredictRequest
Request message for PredictionService.DirectPredict.Fields | |
---|---|
inputs[] |
The prediction input. |
parameters |
The parameters that govern the prediction. |
GoogleCloudAiplatformV1DirectPredictResponse
Response message for PredictionService.DirectPredict.Fields | |
---|---|
outputs[] |
The prediction output. |
parameters |
The parameters that govern the prediction. |
GoogleCloudAiplatformV1DirectRawPredictRequest
Request message for PredictionService.DirectRawPredict.Fields | |
---|---|
input |
The prediction input. |
methodName |
Fully qualified name of the API method being invoked to perform predictions. Format: |
GoogleCloudAiplatformV1DirectRawPredictResponse
Response message for PredictionService.DirectRawPredict.Fields | |
---|---|
output |
The prediction output. |
GoogleCloudAiplatformV1DiskSpec
Represents the spec of disk options.Fields | |
---|---|
bootDiskSizeGb |
Size in GB of the boot disk (default is 100GB). |
bootDiskType |
Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive). |
GoogleCloudAiplatformV1DoubleArray
A list of double values.Fields | |
---|---|
values[] |
A list of double values. |
GoogleCloudAiplatformV1EncryptionSpec
Represents a customer-managed encryption key spec that can be applied to a top-level resource.Fields | |
---|---|
kmsKeyName |
Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: |
GoogleCloudAiplatformV1Endpoint
Models are deployed into it, and afterwards Endpoint is called to obtain predictions and explanations.Fields | |
---|---|
createTime |
Output only. Timestamp when this Endpoint was created. |
deployedModels[] |
Output only. The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively. |
description |
The description of the Endpoint. |
displayName |
Required. The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
enablePrivateServiceConnect |
Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set. |
encryptionSpec |
Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key. |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
modelDeploymentMonitoringJob |
Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: |
name |
Output only. The resource name of the Endpoint. |
network |
Optional. The full name of the Google Compute Engine network to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. Format: |
predictRequestResponseLoggingConfig |
Configures the request-response logging for online prediction. |
privateServiceConnectConfig |
Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive. |
trafficSplit |
A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or the map must be empty if the Endpoint is not to accept any traffic at the moment. |
updateTime |
Output only. Timestamp when this Endpoint was last updated. |
GoogleCloudAiplatformV1EntityIdSelector
Selector for entityId. Getting ids from the given source.Fields | |
---|---|
csvSource |
Source of CSV. |
entityIdField |
Source column that holds entity IDs. If not provided, entity IDs are extracted from the column named entity_id. |
GoogleCloudAiplatformV1EntityType
An entity type is a type of object in a system that needs to be modeled and to have information stored about it. For example, driver is an entity type, and driver0 is an instance of the entity type driver.Fields | |
---|---|
createTime |
Output only. Timestamp when this EntityType was created. |
description |
Optional. Description of the EntityType. |
etag |
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
Optional. The labels with user-defined metadata to organize your EntityTypes. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one EntityType (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. |
monitoringConfig |
Optional. The default monitoring configuration for all Features with value type (Feature.ValueType) BOOL, STRING, DOUBLE or INT64 under this EntityType. If this is populated with [FeaturestoreMonitoringConfig.monitoring_interval] specified, snapshot analysis monitoring is enabled. Otherwise, snapshot analysis monitoring is disabled. |
name |
Immutable. Name of the EntityType. Format: |
offlineStorageTtlDays |
Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than |
updateTime |
Output only. Timestamp when this EntityType was most recently updated. |
GoogleCloudAiplatformV1EnvVar
Represents an environment variable present in a Container or Python Module.Fields | |
---|---|
name |
Required. Name of the environment variable. Must be a valid C identifier. |
value |
Required. Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not. |
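The expansion rule above can be sketched in a few lines. A minimal sketch, assuming the already-resolved variables are supplied as a dict (the real container runtime resolves them in definition order):

```python
import re


def expand_env_value(value: str, env: dict[str, str]) -> str:
    """Sketch of the documented rule: $(VAR_NAME) is replaced using
    previously defined variables; unresolvable references are left
    unchanged; $$(VAR_NAME) is an escaped reference, never expanded
    (one '$' is dropped)."""
    def repl(match: re.Match) -> str:
        if match.group(0).startswith("$$"):
            return match.group(0)[1:]  # escaped: emit $(VAR_NAME) literally
        name = match.group(1)
        return env.get(name, match.group(0))  # unresolved stays as-is
    return re.sub(r"\$?\$\((\w+)\)", repl, value)
```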
GoogleCloudAiplatformV1ErrorAnalysisAnnotation
Model error analysis for each annotation.Fields | |
---|---|
attributedItems[] |
Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type. |
outlierScore |
The outlier score of this annotated item. Usually defined as the min of all distances from attributed items. |
outlierThreshold |
The threshold used to determine if this annotation is an outlier or not. |
queryType |
The query type used for finding the attributed items. |
Enum type. Can be one of the following: | |
QUERY_TYPE_UNSPECIFIED |
Unspecified query type for model error analysis. |
ALL_SIMILAR |
Query similar samples across all classes in the dataset. |
SAME_CLASS_SIMILAR |
Query similar samples from the same class of the input sample. |
SAME_CLASS_DISSIMILAR |
Query dissimilar samples from the same class of the input sample. |
GoogleCloudAiplatformV1ErrorAnalysisAnnotationAttributedItem
Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type.Fields | |
---|---|
annotationResourceName |
The unique ID for each annotation. Used by FE to allocate the annotation in DB. |
distance |
The distance of this item to the annotation. |
GoogleCloudAiplatformV1EvaluatedAnnotation
True positive, false positive, or false negative. EvaluatedAnnotation is only available under a ModelEvaluationSlice with a slice of the annotationSpec dimension.
Fields | |
---|---|
dataItemPayload |
Output only. The data item payload that the Model predicted this EvaluatedAnnotation on. |
errorAnalysisAnnotations[] |
Annotations of model error analysis results. |
evaluatedDataItemViewId |
Output only. ID of the EvaluatedDataItemView under the same ancestor ModelEvaluation. The EvaluatedDataItemView consists of all ground truths and predictions on data_item_payload. |
explanations[] |
Explanations of predictions. Each element of the explanations indicates the explanation for one explanation Method. The attributions list in the EvaluatedAnnotationExplanation.explanation object corresponds to the predictions list. For example, the second element in the attributions list explains the second element in the predictions list. |
groundTruths[] |
Output only. The ground truth Annotations, i.e. the Annotations that exist in the test data the Model is evaluated on. For true positive, there is one and only one ground truth annotation, which matches the only prediction in predictions. For false positive, there are zero or more ground truth annotations that are similar to the only prediction in predictions, but not enough for a match. For false negative, there is one and only one ground truth annotation, which doesn't match any predictions created by the model. The schema of the ground truth is stored in ModelEvaluation.annotation_schema_uri |
predictions[] |
Output only. The model predicted annotations. For true positive, there is one and only one prediction, which matches the only one ground truth annotation in ground_truths. For false positive, there is one and only one prediction, which doesn't match any ground truth annotation of the corresponding data_item_view_id. For false negative, there are zero or more predictions which are similar to the only ground truth annotation in ground_truths but not enough for a match. The schema of the prediction is stored in ModelEvaluation.annotation_schema_uri |
type |
Output only. Type of the EvaluatedAnnotation. |
Enum type. Can be one of the following: | |
EVALUATED_ANNOTATION_TYPE_UNSPECIFIED |
Invalid value. |
TRUE_POSITIVE |
The EvaluatedAnnotation is a true positive. It has a prediction created by the Model and a ground truth Annotation which the prediction matches. |
FALSE_POSITIVE |
The EvaluatedAnnotation is false positive. It has a prediction created by the Model which does not match any ground truth annotation. |
FALSE_NEGATIVE |
The EvaluatedAnnotation is false negative. It has a ground truth annotation which is not matched by any of the model created predictions. |
GoogleCloudAiplatformV1EvaluatedAnnotationExplanation
Explanation result of the prediction produced by the Model.Fields | |
---|---|
explanation |
Explanation attribution response details. |
explanationType |
Explanation type. For AutoML Image Classification models, possible values are: * |
GoogleCloudAiplatformV1Event
An edge describing the relationship between an Artifact and an Execution in a lineage graph.Fields | |
---|---|
artifact |
Required. The relative resource name of the Artifact in the Event. |
eventTime |
Output only. Time the Event occurred. |
execution |
Output only. The relative resource name of the Execution in the Event. |
labels |
The labels with user-defined metadata to annotate Events. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Event (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. |
type |
Required. The type of the Event. |
Enum type. Can be one of the following: | |
TYPE_UNSPECIFIED |
Unspecified whether input or output of the Execution. |
INPUT |
An input of the Execution. |
OUTPUT |
An output of the Execution. |
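The label constraints above can be checked client-side before writing an Event. A minimal sketch, assuming hypothetical resource names; `validate_event_labels` is an illustrative helper, not part of the API:

```python
def validate_event_labels(labels: dict) -> None:
    """Sketch of the Event label constraints described above."""
    if len(labels) > 64:
        raise ValueError("no more than 64 user labels per Event")
    for key, value in labels.items():
        # Keys and values are limited to 64 Unicode codepoints.
        if len(key) > 64 or len(value) > 64:
            raise ValueError("label keys and values are limited to 64 characters")
        # System-reserved keys are prefixed and immutable.
        if key.startswith("aiplatform.googleapis.com/"):
            raise ValueError("system-reserved label keys cannot be set by users")

# A hypothetical Event resource body linking an Artifact as an input of an Execution.
event = {
    "artifact": "projects/p/locations/us-central1/metadataStores/default/artifacts/a1",
    "type": "INPUT",
    "labels": {"team": "vision"},
}
validate_event_labels(event["labels"])
```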
GoogleCloudAiplatformV1Examples
Example-based explainability that returns the nearest neighbors from the provided dataset.Fields | |
---|---|
exampleGcsSource |
The Cloud Storage input instances. |
nearestNeighborSearchConfig |
The full configuration for the generated index, the semantics are the same as metadata and should match NearestNeighborSearchConfig. |
neighborCount |
The number of neighbors to return when querying for examples. |
presets |
Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality. |
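Putting the fields above together, an Examples configuration might look like the following sketch. Field names follow this reference; the bucket path and preset values are hypothetical:

```python
# Sketch of a GoogleCloudAiplatformV1Examples configuration for
# example-based explanations (values are illustrative only).
examples_config = {
    "exampleGcsSource": {
        "dataFormat": "JSONL",  # currently the only supported format
        "gcsSource": {"uris": ["gs://my-bucket/examples/*.jsonl"]},  # hypothetical
    },
    "neighborCount": 10,  # nearest neighbors returned per query
    "presets": {"query": "FAST", "modality": "IMAGE"},  # assumed preset values
}
```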
GoogleCloudAiplatformV1ExamplesExampleGcsSource
The Cloud Storage input instances.Fields | |
---|---|
dataFormat |
The format in which instances are given, if not specified, assume it's JSONL format. Currently only JSONL format is supported. |
Enum type. Can be one of the following: | |
DATA_FORMAT_UNSPECIFIED |
Format unspecified, used when unset. |
JSONL |
Examples are stored in JSONL files. |
gcsSource |
The Cloud Storage location for the input instances. |
GoogleCloudAiplatformV1ExamplesOverride
Overrides for example-based explanations.Fields | |
---|---|
crowdingCount |
The number of neighbors to return that have the same crowding tag. |
dataFormat |
The format of the data being provided with each call. |
Enum type. Can be one of the following: | |
DATA_FORMAT_UNSPECIFIED |
Unspecified format. Must not be used. |
INSTANCES |
Provided data is a set of model inputs. |
EMBEDDINGS |
Provided data is a set of embeddings. |
neighborCount |
The number of neighbors to return. |
restrictions[] |
Restrict the resulting nearest neighbors to respect these constraints. |
returnEmbeddings |
If true, return the embeddings instead of neighbors. |
GoogleCloudAiplatformV1ExamplesRestrictionsNamespace
Restrictions namespace for example-based explanations overrides.Fields | |
---|---|
allow[] |
The list of allowed tags. |
deny[] |
The list of deny tags. |
namespaceName |
The namespace name. |
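A restriction namespace combines the three fields above. A minimal sketch with hypothetical tag names:

```python
# One entry of the restrictions[] list in ExamplesOverride.
# The namespace and tag values below are hypothetical.
restriction = {
    "namespaceName": "color",
    "allow": ["red", "blue"],  # neighbors must carry one of these tags
    "deny": ["magenta"],       # neighbors carrying these tags are excluded
}
```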
GoogleCloudAiplatformV1Execution
Instance of a general execution.Fields | |
---|---|
createTime |
Output only. Timestamp when this Execution was created. |
description |
Description of the Execution |
displayName |
User provided display name of the Execution. May be up to 128 Unicode characters. |
etag |
An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
The labels with user-defined metadata to organize your Executions. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Execution (System labels are excluded). |
metadata |
Properties of the Execution. Top level metadata keys' heading and trailing spaces will be trimmed. The size of this field should not exceed 200KB. |
name |
Output only. The resource name of the Execution. |
schemaTitle |
The title of the schema describing the metadata. Schema title and version is expected to be registered in earlier Create Schema calls. And both are used together as unique identifiers to identify schemas within the local metadata store. |
schemaVersion |
The version of the schema in |
state |
The state of this Execution. This is a property of the Execution, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines) and the system does not prescribe or check the validity of state transitions. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
Unspecified Execution state |
NEW |
The Execution is new |
RUNNING |
The Execution is running |
COMPLETE |
The Execution has finished running |
FAILED |
The Execution has failed |
CACHED |
The Execution completed through Cache hit. |
CANCELLED |
The Execution was cancelled. |
updateTime |
Output only. Timestamp when this Execution was last updated. |
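Since the state field is client-managed and the service does not validate transitions, a pipeline client is responsible for recording states itself. A sketch of one plausible lifecycle; `record_state` is an illustrative helper, not part of the API:

```python
# The Execution state values from the enum above.
EXECUTION_STATES = frozenset({
    "STATE_UNSPECIFIED", "NEW", "RUNNING", "COMPLETE",
    "FAILED", "CACHED", "CANCELLED",
})

def record_state(execution: dict, state: str) -> dict:
    """Client-side helper: the service itself does not check transitions."""
    if state not in EXECUTION_STATES:
        raise ValueError(f"unknown Execution state: {state}")
    execution["state"] = state
    return execution

run = {"displayName": "train-step", "state": "NEW"}  # hypothetical Execution
record_state(run, "RUNNING")
record_state(run, "COMPLETE")
```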
GoogleCloudAiplatformV1ExplainRequest
Request message for PredictionService.Explain.Fields | |
---|---|
deployedModelId |
If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split. |
explanationSpecOverride |
If specified, overrides the explanation_spec of the DeployedModel. Can be used for explaining prediction results with different configurations, such as: - Explaining top-5 predictions results as opposed to top-1; - Increasing path count or step count of the attribution methods to reduce approximate errors; - Using different baselines for explaining the prediction results. |
instances[] |
Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request, and when it is exceeded the explanation call errors in case of AutoML Models, or, in case of customer created Models, the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri. |
parameters |
The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri. |
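An ExplainRequest body combining these fields might look like the sketch below. The instance feature names, deployed model ID, and override values are hypothetical:

```python
# Sketch of a PredictionService.Explain request body.
explain_request = {
    # Instances must follow the Model's instance_schema_uri; these
    # feature names are hypothetical.
    "instances": [{"sepal_length": 5.1, "sepal_width": 3.5}],
    "parameters": {},
    "deployedModelId": "1234567890",  # hypothetical; overrides traffic_split
    "explanationSpecOverride": {
        # e.g. explain top-5 predictions instead of top-1
        "parameters": {"topK": 5},
    },
}
```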
GoogleCloudAiplatformV1ExplainResponse
Response message for PredictionService.Explain.Fields | |
---|---|
deployedModelId |
ID of the Endpoint's DeployedModel that served this explanation. |
explanations[] |
The explanations of the Model's PredictResponse.predictions. It has the same number of elements as instances to be explained. |
predictions[] |
The predictions that are the output of the predictions call. Same as PredictResponse.predictions. |
GoogleCloudAiplatformV1Explanation
Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.Fields | |
---|---|
attributions[] |
Output only. Feature attributions grouped by predicted outputs. For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of |
neighbors[] |
Output only. List of the nearest neighbors for example-based explanations. For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated. |
GoogleCloudAiplatformV1ExplanationMetadata
Metadata describing the Model's input and output for explanation.Fields | |
---|---|
featureAttributionsSchemaUri |
Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access. |
inputs |
Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match with the key in instance. |
latentSpaceSource |
Name of the source to generate embeddings for example based explanations. |
outputs |
Required. Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed. |
GoogleCloudAiplatformV1ExplanationMetadataInputMetadata
Metadata of the input of a feature. Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.Fields | |
---|---|
denseShapeTensorName |
Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor. |
encodedBaselines[] |
A list of baselines for the encoded tensor. The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor. |
encodedTensorName |
Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution or XRAI attribution and the input tensor is not differentiable. An encoded tensor is generated if the input tensor is encoded by a lookup table. |
encoding |
Defines how the feature is encoded into the input tensor. Defaults to IDENTITY. |
Enum type. Can be one of the following: | |
ENCODING_UNSPECIFIED |
Default value. This is the same as IDENTITY. |
IDENTITY |
The tensor represents one feature. |
BAG_OF_FEATURES |
The tensor represents a bag of features where each index maps to a feature. InputMetadata.index_feature_mapping must be provided for this encoding. For example: input = [27, 6.0, 150] index_feature_mapping = ["age", "height", "weight"] |
BAG_OF_FEATURES_SPARSE |
The tensor represents a bag of features where each index maps to a feature. Zero values in the tensor indicates feature being non-existent. InputMetadata.index_feature_mapping must be provided for this encoding. For example: input = [2, 0, 5, 0, 1] index_feature_mapping = ["a", "b", "c", "d", "e"] |
INDICATOR |
The tensor is a list of binaries representing whether a feature exists or not (1 indicates existence). InputMetadata.index_feature_mapping must be provided for this encoding. For example: input = [1, 0, 1, 0, 1] index_feature_mapping = ["a", "b", "c", "d", "e"] |
COMBINED_EMBEDDING |
The tensor is encoded into a 1-dimensional array represented by an encoded tensor. InputMetadata.encoded_tensor_name must be provided for this encoding. For example: input = ["This", "is", "a", "test", "."] encoded = [0.1, 0.2, 0.3, 0.4, 0.5] |
CONCAT_EMBEDDING |
Select this encoding when the input tensor is encoded into a 2-dimensional array represented by an encoded tensor. InputMetadata.encoded_tensor_name must be provided for this encoding. The first dimension of the encoded tensor's shape is the same as the input tensor's shape. For example: input = ["This", "is", "a", "test", "."] encoded = [[0.1, 0.2, 0.3, 0.4, 0.5], [0.2, 0.1, 0.4, 0.3, 0.5], [0.5, 0.1, 0.3, 0.5, 0.4], [0.5, 0.3, 0.1, 0.2, 0.4], [0.4, 0.3, 0.2, 0.5, 0.1]] |
featureValueDomain |
The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized. |
groupName |
Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions, keyed by the group name. |
indexFeatureMapping[] |
A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR. |
indicesTensorName |
Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor. |
inputBaselines[] |
Baseline inputs for this feature. If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions. For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor. For custom images, the element of the baselines must be in the same format as the feature's input in the instance[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri. |
inputTensorName |
Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow. |
modality |
Modality of the feature. Valid values are: numeric, image. Defaults to numeric. |
visualization |
Visualization configurations for image explanation. |
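With the BAG_OF_FEATURES encodings described above, each index of the input tensor is one feature, so per-index attributions can be relabeled using index_feature_mapping. A small sketch with hypothetical attribution values:

```python
# BAG_OF_FEATURES example from the enum description above:
index_feature_mapping = ["age", "height", "weight"]
input_tensor = [27, 6.0, 150]

# Hypothetical per-index attribution values returned for this input.
attributions = [0.12, -0.03, 0.40]

# Relabel per-index attributions by feature name.
named_attributions = dict(zip(index_feature_mapping, attributions))
# named_attributions == {"age": 0.12, "height": -0.03, "weight": 0.40}
```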
GoogleCloudAiplatformV1ExplanationMetadataInputMetadataFeatureValueDomain
Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained.Fields | |
---|---|
maxValue |
The maximum permissible value for this feature. |
minValue |
The minimum permissible value for this feature. |
originalMean |
If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization. |
originalStddev |
If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization. |
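The original_mean and original_stddev fields let a client map a z-scored feature value back to its original domain. A minimal sketch, assuming hypothetical domain values:

```python
def denormalize(value: float, original_mean: float, original_stddev: float) -> float:
    """Recover the original-domain value from a z-scored feature
    (normalized to mean 0, stddev 1)."""
    return value * original_stddev + original_mean

# Hypothetical image-pixel feature with originalMean=128 and originalStddev=64.
assert denormalize(0.0, 128.0, 64.0) == 128.0
assert denormalize(1.5, 128.0, 64.0) == 224.0
```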
GoogleCloudAiplatformV1ExplanationMetadataInputMetadataVisualization
Visualization configurations for image explanation.Fields | |
---|---|
clipPercentLowerbound |
Excludes attributions below the specified percentile from the highlighted areas. Defaults to 62. |
clipPercentUpperbound |
Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9. |
colorMap |
The color scheme used for the highlighted areas. Defaults to PINK_GREEN for Integrated Gradients attribution, which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for XRAI attribution, which highlights the most influential regions in yellow and the least influential in blue. |
Enum type. Can be one of the following: | |
COLOR_MAP_UNSPECIFIED |
Should not be used. |
PINK_GREEN |
Positive: green. Negative: pink. |
VIRIDIS |
Viridis color map: A perceptually uniform color mapping which is easier to see by those with colorblindness and progresses from yellow to green to blue. Positive: yellow. Negative: blue. |
RED |
Positive: red. Negative: red. |
GREEN |
Positive: green. Negative: green. |
RED_GREEN |
Positive: green. Negative: red. |
PINK_WHITE_GREEN |
PiYG palette. |
overlayType |
How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE. |
Enum type. Can be one of the following: | |
OVERLAY_TYPE_UNSPECIFIED |
Default value. This is the same as NONE. |
NONE |
No overlay. |
ORIGINAL |
The attributions are shown on top of the original image. |
GRAYSCALE |
The attributions are shown on top of a grayscale version of the original image. |
MASK_BLACK |
The attributions are used as a mask to reveal predictive parts of the image and hide the un-predictive parts. |
polarity |
Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE. |
Enum type. Can be one of the following: | |
POLARITY_UNSPECIFIED |
Default value. This is the same as POSITIVE. |
POSITIVE |
Highlights the pixels/outlines that were most influential to the model's prediction. |
NEGATIVE |
Setting polarity to negative highlights areas that do not lead to the model's current prediction. |
BOTH |
Shows both positive and negative attributions. |
type |
Type of the image visualization. Only applicable to Integrated Gradients attribution. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES. |
Enum type. Can be one of the following: | |
TYPE_UNSPECIFIED |
Should not be used. |
PIXELS |
Shows which pixel contributed to the image prediction. |
OUTLINES |
Shows which region contributed to the image prediction by outlining the region. |
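A Visualization message combining the enums above might look like the sketch below; all values are illustrative choices, not defaults:

```python
# Sketch of an ExplanationMetadataInputMetadataVisualization config.
visualization = {
    "type": "OUTLINES",           # regions of attribution (Integrated Gradients only)
    "polarity": "POSITIVE",       # highlight only positive contributions
    "colorMap": "PINK_GREEN",     # positive: green, negative: pink
    "clipPercentLowerbound": 70,  # hide attributions below the 70th percentile
    "clipPercentUpperbound": 99.9,
    "overlayType": "GRAYSCALE",   # draw attributions over a grayscale original
}
```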
GoogleCloudAiplatformV1ExplanationMetadataOutputMetadata
Metadata of the prediction output to be explained.Fields | |
---|---|
displayNameMappingKey |
Specify a field name in the prediction to look for the display name. Use this if the prediction contains the display names for the outputs. The display names in the prediction must have the same shape of the outputs, so that it can be located by Attribution.output_index for a specific output. |
indexDisplayNameMapping |
Static mapping between the index and display name. Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sort the outputs by their values. The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name is populated by locating in the mapping with Attribution.output_index. |
outputTensorName |
Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow. |
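With a static indexDisplayNameMapping, Attribution.output_display_name is found by indexing the mapping with Attribution.output_index. A sketch with hypothetical class names:

```python
# Hypothetical pre-defined class order for a multi-classification Model.
index_display_name_mapping = ["daisy", "rose", "tulip"]

def display_name(output_index: int) -> str:
    """Locate the display name for one output via its index."""
    return index_display_name_mapping[output_index]

assert display_name(1) == "rose"
```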
GoogleCloudAiplatformV1ExplanationMetadataOverride
The ExplanationMetadata entries that can be overridden at online explanation time.Fields | |
---|---|
inputs |
Required. Overrides the input metadata of the features. The key is the name of the feature to be overridden. The keys specified here must exist in the input metadata to be overridden. If a feature is not specified here, the corresponding feature's input metadata is not overridden. |
GoogleCloudAiplatformV1ExplanationMetadataOverrideInputMetadataOverride
The input metadata entries to be overridden.Fields | |
---|---|
inputBaselines[] |
Baseline inputs for this feature. This overrides the |
GoogleCloudAiplatformV1ExplanationParameters
Parameters to configure explaining for Model's predictions.Fields | |
---|---|
examples |
Example-based explanations that returns the nearest neighbors from the provided dataset. |
integratedGradientsAttribution |
An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365 |
outputIndices[] |
If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes). |
sampledShapleyAttribution |
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265. |
topK |
If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs. |
xraiAttribution |
An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead. |
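Exactly one attribution method is chosen per request. A sketch of an ExplanationParameters body selecting sampled Shapley; the pathCount value is illustrative:

```python
# Sketch of GoogleCloudAiplatformV1ExplanationParameters using
# sampled Shapley attribution (pathCount value is hypothetical).
explanation_parameters = {
    "sampledShapleyAttribution": {"pathCount": 10},
    "topK": 3,  # return attributions for the top 3 outputs
}
```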
GoogleCloudAiplatformV1ExplanationSpec
Specification of Model explanation.Fields | |
---|---|
metadata |
Optional. Metadata describing the Model's input and output for explanation. |
parameters |
Required. Parameters that configure explaining of the Model's predictions. |
GoogleCloudAiplatformV1ExplanationSpecOverride
The ExplanationSpec entries that can be overridden at online explanation time.Fields | |
---|---|
examplesOverride |
The example-based explanations parameter overrides. |
metadata |
The metadata to be overridden. If not specified, no metadata is overridden. |
parameters |
The parameters to be overridden. Note that the attribution method cannot be changed. If not specified, no parameter is overridden. |
GoogleCloudAiplatformV1ExportDataConfig
Describes what part of the Dataset is to be exported, the destination of the export and how to export.Fields | |
---|---|
annotationSchemaUri |
The Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/. Note that the chosen schema must be consistent with metadata of the Dataset specified by dataset_id. Only used for custom training data export use cases. Only applicable to Datasets that have DataItems and Annotations. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri. |
annotationsFilter |
An expression for filtering what part of the Dataset is to be exported. Only Annotations that match this filter will be exported. The filter syntax is the same as in ListAnnotations. |
exportUse |
Indicates the usage of the exported files. |
Enum type. Can be one of the following: | |
EXPORT_USE_UNSPECIFIED |
Regular user export. |
CUSTOM_CODE_TRAINING |
Export for custom code training. |
filterSplit |
Split based on the provided filters for each set. |
fractionSplit |
Split based on fractions defining the size of each set. |
gcsDestination |
The Google Cloud Storage location where the output is to be written to. In the given directory a new directory will be created with name: |
savedQueryId |
The ID of a SavedQuery (annotation set) under the Dataset specified by dataset_id used for filtering Annotations for training. Only used for custom training data export use cases. Only applicable to Datasets that have SavedQueries. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both saved_query_id and annotations_filter. Only one of saved_query_id and annotation_schema_uri should be specified as both of them represent the same thing: problem type. |
GoogleCloudAiplatformV1ExportDataOperationMetadata
Runtime operation information for DatasetService.ExportData.Fields | |
---|---|
gcsOutputDirectory |
A Google Cloud Storage directory which path ends with '/'. The exported data is stored in the directory. |
genericMetadata |
The common part of the operation metadata. |
GoogleCloudAiplatformV1ExportDataRequest
Request message for DatasetService.ExportData.Fields | |
---|---|
exportConfig |
Required. The desired output location. |
GoogleCloudAiplatformV1ExportDataResponse
Response message for DatasetService.ExportData.Fields | |
---|---|
dataStats |
Only present for custom code training export use case. Records data stats, i.e., train/validation/test item/annotation counts calculated during the export operation. |
exportedFiles[] |
All of the files that are exported in this export operation. For custom code training export, only three (training, validation and test) Cloud Storage paths in wildcard format are populated (for example, gs://.../training-*). |
GoogleCloudAiplatformV1ExportFeatureValuesOperationMetadata
Details of operations that export Feature values.Fields | |
---|---|
genericMetadata |
Operation metadata for Featurestore export Feature values. |
GoogleCloudAiplatformV1ExportFeatureValuesRequest
Request message for FeaturestoreService.ExportFeatureValues.Fields | |
---|---|
destination |
Required. Specifies destination location and format. |
featureSelector |
Required. Selects Features to export values of. |
fullExport |
Exports all historical values of all entities of the EntityType within a time range. |
settings[] |
Per-Feature export settings. |
snapshotExport |
Exports the latest Feature values of all entities of the EntityType within a time range. |
GoogleCloudAiplatformV1ExportFeatureValuesRequestFullExport
Describes exporting all historical Feature values of all entities of the EntityType between [start_time, end_time].Fields | |
---|---|
endTime |
Exports Feature values as of this timestamp. If not set, retrieve values as of now. Timestamp, if present, must not have higher than millisecond precision. |
startTime |
Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision. |
GoogleCloudAiplatformV1ExportFeatureValuesRequestSnapshotExport
Describes exporting the latest Feature values of all entities of the EntityType between [start_time, snapshot_time].Fields | |
---|---|
snapshotTime |
Exports Feature values as of this timestamp. If not set, retrieve values as of now. Timestamp, if present, must not have higher than millisecond precision. |
startTime |
Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision. |
GoogleCloudAiplatformV1ExportFilterSplit
Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any of the filters in this message is meant to match nothing, it can be set to '-' (the minus sign). Supported only for unstructured Datasets.Fields | |
---|---|
testFilter |
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to test the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. |
trainingFilter |
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to train the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. |
validationFilter |
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to validate the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. |
GoogleCloudAiplatformV1ExportFractionSplit
Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction, and test_fraction may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of data is used for training, 10% for validation, and 10% for test.
Fields | |
---|---|
testFraction |
The fraction of the input data that is to be used to evaluate the Model. |
trainingFraction |
The fraction of the input data that is to be used to train the Model. |
validationFraction |
The fraction of the input data that is to be used to validate the Model. |
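The sum-to-at-most-1 rule above can be checked client-side before submitting an export. A sketch; `validate_fraction_split` is an illustrative helper, not part of the API:

```python
def validate_fraction_split(split: dict) -> float:
    """Validate an ExportFractionSplit and return the unassigned remainder,
    which Vertex AI distributes as it decides (sketch of the rule above)."""
    total = sum(split.get(key, 0.0) for key in
                ("trainingFraction", "validationFraction", "testFraction"))
    if total > 1.0 + 1e-9:  # small tolerance for float rounding
        raise ValueError("fractions must sum to at most 1")
    return 1.0 - total

# The documented default split: roughly 80/10/10.
split = {"trainingFraction": 0.8, "validationFraction": 0.1, "testFraction": 0.1}
```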
GoogleCloudAiplatformV1ExportModelOperationMetadata
Details of ModelService.ExportModel operation.Fields | |
---|---|
genericMetadata |
The common part of the operation metadata. |
outputInfo |
Output only. Information further describing the output of this Model export. |
GoogleCloudAiplatformV1ExportModelOperationMetadataOutputInfo
Further describes the output of the ExportModel. Supplements ExportModelRequest.OutputConfig.Fields | |
---|---|
artifactOutputUri |
Output only. If the Model artifact is being exported to Google Cloud Storage, this is the full path of the directory created, into which the Model files are written. |
imageOutputUri |
Output only. If the Model image is being exported to Google Container Registry or Artifact Registry this is the full path of the image created. |
GoogleCloudAiplatformV1ExportModelRequest
Request message for ModelService.ExportModel.Fields | |
---|---|
outputConfig |
Required. The desired output location and configuration. |
GoogleCloudAiplatformV1ExportModelRequestOutputConfig
Output configuration for the Model export.Fields | |
---|---|
artifactDestination |
The Cloud Storage location where the Model artifact is to be written to. Under the directory given as the destination a new one with name " |
exportFormatId |
The ID of the format in which the Model must be exported. Each Model lists the export formats it supports. If no value is provided here, then the first from the list of the Model's supported formats is used by default. |
imageDestination |
The Google Container Registry or Artifact Registry uri where the Model container image will be copied to. This field should only be set when the |
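A hypothetical ExportModelRequest body assembled from the fields above; the format ID and destination URIs are placeholders, and the inner shapes of the destination messages (a GcsDestination with outputUriPrefix, a ContainerRegistryDestination with outputUri) are assumptions about the wider API.

```python
# Illustrative ExportModelRequest payload; all values are placeholders.
export_request = {
    "outputConfig": {
        # Must be one of the export formats the Model lists as supported;
        # if omitted, the first supported format is used by default.
        "exportFormatId": "tf-saved-model",
        "artifactDestination": {"outputUriPrefix": "gs://my-bucket/exports/"},
        "imageDestination": {"outputUri": "us-docker.pkg.dev/my-project/my-repo/my-model"},
    }
}
```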
GoogleCloudAiplatformV1ExportTensorboardTimeSeriesDataRequest
Request message for TensorboardService.ExportTensorboardTimeSeriesData.Fields | |
---|---|
filter |
Exports the TensorboardTimeSeries' data that match the filter expression. |
orderBy |
Field to use to sort the TensorboardTimeSeries' data. By default, TensorboardTimeSeries' data is returned in a pseudo random order. |
pageSize |
The maximum number of data points to return per page. The default page_size is 1000. Values must be between 1 and 10000. Values above 10000 are coerced to 10000. |
pageToken |
A page token, received from a previous ExportTensorboardTimeSeriesData call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to ExportTensorboardTimeSeriesData must match the call that provided the page token. |
GoogleCloudAiplatformV1ExportTensorboardTimeSeriesDataResponse
Response message for TensorboardService.ExportTensorboardTimeSeriesData.Fields | |
---|---|
nextPageToken |
A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
timeSeriesDataPoints[] |
The returned time series data points. |
GoogleCloudAiplatformV1Feature
Feature Metadata information. For example, color is a feature that describes an apple.Fields | |
---|---|
createTime |
Output only. Only applicable for Vertex AI Feature Store (Legacy). Timestamp when this EntityType was created. |
description |
Description of the Feature. |
disableMonitoring |
Optional. Only applicable for Vertex AI Feature Store (Legacy). If not set, use the monitoring_config defined for the EntityType this Feature belongs to. Only Features with type (Feature.ValueType) BOOL, STRING, DOUBLE or INT64 can enable monitoring. If set to true, all types of data monitoring are disabled despite the config on EntityType. |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
Optional. The labels with user-defined metadata to organize your Features. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one Feature (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. |
monitoringStatsAnomalies[] |
Output only. Only applicable for Vertex AI Feature Store (Legacy). The list of historical stats and anomalies with specified objectives. |
name |
Immutable. Name of the Feature. Format: |
pointOfContact |
Entity responsible for maintaining this feature. Can be comma separated list of email addresses or URIs. |
updateTime |
Output only. Only applicable for Vertex AI Feature Store (Legacy). Timestamp when this EntityType was most recently updated. |
valueType |
Immutable. Only applicable for Vertex AI Feature Store (Legacy). Type of Feature value. |
Enum type. Can be one of the following: | |
VALUE_TYPE_UNSPECIFIED |
The value type is unspecified. |
BOOL |
Used for Feature that is a boolean. |
BOOL_ARRAY |
Used for Feature that is a list of boolean. |
DOUBLE |
Used for Feature that is double. |
DOUBLE_ARRAY |
Used for Feature that is a list of double. |
INT64 |
Used for Feature that is INT64. |
INT64_ARRAY |
Used for Feature that is a list of INT64. |
STRING |
Used for Feature that is string. |
STRING_ARRAY |
Used for Feature that is a list of String. |
BYTES |
Used for Feature that is bytes. |
versionColumnName |
Only applicable for Vertex AI Feature Store. The name of the BigQuery Table/View column hosting data for this version. If no value is provided, feature_id is used. |
GoogleCloudAiplatformV1FeatureGroup
Vertex AI Feature Group.Fields | |
---|---|
bigQuery |
Indicates that features for this group come from BigQuery Table/View. By default treats the source as a sparse time series source. The BigQuery source table or view must have at least one entity ID column and a column named |
createTime |
Output only. Timestamp when this FeatureGroup was created. |
description |
Optional. Description of the FeatureGroup. |
etag |
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
Optional. The labels with user-defined metadata to organize your FeatureGroup. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureGroup (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. |
name |
Identifier. Name of the FeatureGroup. Format: |
updateTime |
Output only. Timestamp when this FeatureGroup was last updated. |
GoogleCloudAiplatformV1FeatureGroupBigQuery
Input source type for BigQuery Tables and Views.Fields | |
---|---|
bigQuerySource |
Required. Immutable. The BigQuery source URI that points to either a BigQuery Table or View. |
entityIdColumns[] |
Optional. Columns to construct entity_id / row keys. If not provided defaults to |
GoogleCloudAiplatformV1FeatureMonitoringStatsAnomaly
A list of historical SnapshotAnalysis or ImportFeaturesAnalysis stats requested by user, sorted by FeatureStatsAnomaly.start_time descending.Fields | |
---|---|
featureStatsAnomaly |
Output only. The stats and anomalies generated at specific timestamp. |
objective |
Output only. The objective for each stats. |
Enum type. Can be one of the following: | |
OBJECTIVE_UNSPECIFIED |
If it's OBJECTIVE_UNSPECIFIED, monitoring_stats will be empty. |
IMPORT_FEATURE_ANALYSIS |
Stats are generated by Import Feature Analysis. |
SNAPSHOT_ANALYSIS |
Stats are generated by Snapshot Analysis. |
GoogleCloudAiplatformV1FeatureNoiseSigma
Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients.Fields | |
---|---|
noiseSigma[] |
Noise sigma per feature. No noise is added to features that are not set. |
GoogleCloudAiplatformV1FeatureNoiseSigmaNoiseSigmaForFeature
Noise sigma for a single feature.Fields | |
---|---|
name |
The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs. |
sigma |
This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1. |
GoogleCloudAiplatformV1FeatureOnlineStore
Vertex AI Feature Online Store provides a centralized repository for serving ML features and embedding indexes at low latency. The Feature Online Store is a top-level container.Fields | |
---|---|
bigtable |
Contains settings for the Cloud Bigtable instance that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. |
createTime |
Output only. Timestamp when this FeatureOnlineStore was created. |
dedicatedServingEndpoint |
Optional. The dedicated serving endpoint for this FeatureOnlineStore, which is different from common Vertex service endpoint. |
etag |
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
Optional. The labels with user-defined metadata to organize your FeatureOnlineStore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. |
name |
Identifier. Name of the FeatureOnlineStore. Format: |
optimized |
Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choosing the Optimized storage type, set PrivateServiceConnectConfig.enable_private_service_connect to use a private endpoint; otherwise a public endpoint is used by default. |
state |
Output only. State of the featureOnlineStore. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
Default value. This value is unused. |
STABLE |
State when the featureOnlineStore configuration is not being updated and the fields reflect the current configuration of the featureOnlineStore. The featureOnlineStore is usable in this state. |
UPDATING |
The state of the featureOnlineStore configuration when it is being updated. During an update, the fields reflect either the original configuration or the updated configuration of the featureOnlineStore. The featureOnlineStore is still usable in this state. |
updateTime |
Output only. Timestamp when this FeatureOnlineStore was last updated. |
GoogleCloudAiplatformV1FeatureOnlineStoreBigtable
(No description provided)Fields | |
---|---|
autoScaling |
Required. Autoscaling config applied to Bigtable Instance. |
GoogleCloudAiplatformV1FeatureOnlineStoreBigtableAutoScaling
(No description provided)Fields | |
---|---|
cpuUtilizationTarget |
Optional. A percentage of the cluster's CPU capacity. Can be from 10% to 80%. When a cluster's CPU utilization exceeds the target that you have set, Bigtable immediately adds nodes to the cluster. When CPU utilization is substantially lower than the target, Bigtable removes nodes. If not set, defaults to 50%. |
maxNodeCount |
Required. The maximum number of nodes to scale up to. Must be greater than or equal to min_node_count, and less than or equal to 10 times min_node_count. |
minNodeCount |
Required. The minimum number of nodes to scale down to. Must be greater than or equal to 1. |
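The node-count and CPU-target constraints above can be checked client-side before sending a request; this validator is an illustrative sketch, not part of the API.

```python
def validate_bigtable_autoscaling(min_node_count: int, max_node_count: int,
                                  cpu_utilization_target: int = 50) -> None:
    """Raise ValueError if the autoscaling config violates the documented limits.

    Illustrative client-side check of the constraints stated in the
    FeatureOnlineStoreBigtableAutoScaling reference above.
    """
    if min_node_count < 1:
        raise ValueError("min_node_count must be >= 1")
    if not (min_node_count <= max_node_count <= 10 * min_node_count):
        raise ValueError("max_node_count must be between min_node_count "
                         "and 10 * min_node_count")
    if not (10 <= cpu_utilization_target <= 80):
        raise ValueError("cpu_utilization_target must be between 10 and 80")
```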
GoogleCloudAiplatformV1FeatureOnlineStoreDedicatedServingEndpoint
The dedicated serving endpoint for this FeatureOnlineStore. Only needs to be set when you choose the Optimized storage type. A public endpoint is provisioned by default.Fields | |
---|---|
publicEndpointDomainName |
Output only. This field will be populated with the domain name to use for this FeatureOnlineStore. |
GoogleCloudAiplatformV1FeatureSelector
Selector for Features of an EntityType.Fields | |
---|---|
idMatcher |
Required. Matches Features based on ID. |
GoogleCloudAiplatformV1FeatureStatsAnomaly
Stats and Anomaly generated at a specific timestamp for a specific Feature. The start_time and end_time define the time range of the dataset that the current stats belong to, e.g. prediction traffic is bucketed into prediction datasets by time window. If the Dataset is not defined by time window, start_time = end_time. The timestamp of the stats and anomalies always refers to end_time. Raw stats and anomalies are stored in stats_uri or anomaly_uri in the tensorflow-defined protos. The field data_stats contains almost identical information to the raw stats, in a Vertex AI-defined proto, for the UI to display.Fields | |
---|---|
anomalyDetectionThreshold |
This is the threshold used when detecting anomalies. The threshold can be changed by user, so this one might be different from ThresholdConfig.value. |
anomalyUri |
Path of the anomaly file for the current feature values in a Cloud Storage bucket. Format: gs:////anomalies. Example: gs://monitoring_bucket/feature_name/anomalies. Anomalies are stored in binary format with Protobuf message tensorflow.metadata.v0.AnomalyInfo. |
distributionDeviation |
Deviation from the current stats to the baseline stats. 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. |
endTime |
The end timestamp of the window where stats were generated. For objectives where a time window doesn't make sense (e.g. Featurestore Snapshot Monitoring), end_time indicates the timestamp of the data used to generate the stats (e.g. the timestamp when we take snapshots for feature values). |
score |
Feature importance score, only populated when cross-feature monitoring is enabled. For now only used to represent feature attribution score within range [0, 1] for ModelDeploymentMonitoringObjectiveType.FEATURE_ATTRIBUTION_SKEW and ModelDeploymentMonitoringObjectiveType.FEATURE_ATTRIBUTION_DRIFT. |
startTime |
The start timestamp of the window where stats were generated. For objectives where a time window doesn't make sense (e.g. Featurestore Snapshot Monitoring), start_time is only used to indicate the monitoring intervals, so it always equals (end_time - monitoring_interval). |
statsUri |
Path of the stats file for current feature values in Cloud Storage bucket. Format: gs:////stats. Example: gs://monitoring_bucket/feature_name/stats. Stats are stored as binary format with Protobuf message tensorflow.metadata.v0.FeatureNameStatistics. |
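For categorical features, the distribution_deviation above is the L-infinity distance between the current and baseline category distributions. A minimal sketch of that measure (the exact binning and smoothing used by the service are not documented here):

```python
def linf_distance(p: dict, q: dict) -> float:
    """L-infinity distance between two categorical distributions,
    given as {category: probability} dicts. Categories missing from
    one distribution are treated as probability 0."""
    categories = set(p) | set(q)
    return max(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in categories)
```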
GoogleCloudAiplatformV1FeatureValue
Value for a feature.Fields | |
---|---|
boolArrayValue |
A list of bool type feature value. |
boolValue |
Bool type feature value. |
bytesValue |
Bytes feature value. |
doubleArrayValue |
A list of double type feature value. |
doubleValue |
Double type feature value. |
int64ArrayValue |
A list of int64 type feature value. |
int64Value |
Int64 feature value. |
metadata |
Metadata of feature value. |
stringArrayValue |
A list of string type feature value. |
stringValue |
String feature value. |
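FeatureValue is effectively a oneof: exactly one of the *Value fields should be set per value. The fragments below are illustrative, and the `{"values": [...]}` shape of the array wrappers is an assumption about the wider API.

```python
# One FeatureValue per shape; literal values are placeholders.
double_feature = {"doubleValue": 0.87}
bool_feature = {"boolValue": True}
string_list_feature = {"stringArrayValue": {"values": ["red", "green"]}}
```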
GoogleCloudAiplatformV1FeatureValueDestination
A destination location for Feature values and format.Fields | |
---|---|
bigqueryDestination |
Output in BigQuery format. BigQueryDestination.output_uri in FeatureValueDestination.bigquery_destination must refer to a table. |
csvDestination |
Output in CSV format. Array Feature value types are not allowed in CSV format. |
tfrecordDestination |
Output in TFRecord format. Mapping from Feature value type in Featurestore to value type in TFRecord: DOUBLE, DOUBLE_ARRAY -> FLOAT_LIST; INT64, INT64_ARRAY -> INT64_LIST; STRING, STRING_ARRAY, BYTES -> BYTES_LIST; BOOL, BOOL_ARRAY -> BYTES_LIST (true -> byte_string("true"), false -> byte_string("false")). |
GoogleCloudAiplatformV1FeatureValueList
Container for list of values.Fields | |
---|---|
values[] |
A list of feature values. All of them should be the same data type. |
GoogleCloudAiplatformV1FeatureValueMetadata
Metadata of feature value.Fields | |
---|---|
generateTime |
Feature generation timestamp. Typically, it is provided by the user at feature ingestion time. If not provided, the feature store will use the system timestamp when the data is ingested into the feature store. For streaming ingestion, the time, aligned by days, must be no older than five years (1825 days) and no later than one year (366 days) in the future. |
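The streaming-ingestion window stated above can be checked client-side; this sketch ignores the day-alignment detail and is not part of the API.

```python
from datetime import datetime, timedelta, timezone

def valid_generate_time(ts: datetime, now: datetime) -> bool:
    """True if ts is within the documented streaming-ingestion window:
    no older than 1825 days and no more than 366 days in the future."""
    return now - timedelta(days=1825) <= ts <= now + timedelta(days=366)
```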
GoogleCloudAiplatformV1FeatureView
FeatureView is representation of values that the FeatureOnlineStore will serve based on its syncConfig.Fields | |
---|---|
bigQuerySource |
Optional. Configures how data is supposed to be extracted from a BigQuery source to be loaded onto the FeatureOnlineStore. |
createTime |
Output only. Timestamp when this FeatureView was created. |
etag |
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
featureRegistrySource |
Optional. Configures the features from a Feature Registry source that need to be loaded onto the FeatureOnlineStore. |
indexConfig |
Optional. Configuration for index preparation for vector search. It contains the required configurations to create an index from source data, so that approximate nearest neighbor (a.k.a ANN) algorithms search can be performed during online serving. |
labels |
Optional. The labels with user-defined metadata to organize your FeatureViews. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. |
name |
Identifier. Name of the FeatureView. Format: |
syncConfig |
Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving. |
updateTime |
Output only. Timestamp when this FeatureView was last updated. |
GoogleCloudAiplatformV1FeatureViewBigQuerySource
(No description provided)Fields | |
---|---|
entityIdColumns[] |
Required. Columns to construct entity_id / row keys. |
uri |
Required. The BigQuery view URI that will be materialized on each sync trigger based on FeatureView.SyncConfig. |
GoogleCloudAiplatformV1FeatureViewDataKey
Lookup key for a feature view.Fields | |
---|---|
compositeKey |
The actual Entity ID will be composed from this struct. This should match the way the ID is defined in the FeatureView spec. |
key |
String key to use for lookup. |
GoogleCloudAiplatformV1FeatureViewDataKeyCompositeKey
ID that is comprised from several parts (columns).Fields | |
---|---|
parts[] |
Parts to construct Entity ID. Should match with the same ID columns as defined in FeatureView in the same order. |
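The two key shapes above, a plain string key and a composite key, can be sketched as request fragments; the entity values and the two-column ordering are hypothetical.

```python
# A FeatureViewDataKey is either a plain string key...
simple_key = {"key": "user_123"}

# ...or a composite key whose parts must follow the same order as the
# ID columns defined in the FeatureView (here: an assumed user ID
# column followed by an assumed date column).
composite_key = {"compositeKey": {"parts": ["user_123", "2024-01-01"]}}
```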
GoogleCloudAiplatformV1FeatureViewFeatureRegistrySource
A Feature Registry source for features that need to be synced to Online Store.Fields | |
---|---|
featureGroups[] |
Required. List of features that need to be synced to Online Store. |
projectNumber |
Optional. The project number of the parent project of the Feature Groups. |
GoogleCloudAiplatformV1FeatureViewFeatureRegistrySourceFeatureGroup
Features belonging to a single feature group that will be synced to Online Store.Fields | |
---|---|
featureGroupId |
Required. Identifier of the feature group. |
featureIds[] |
Required. Identifiers of features under the feature group. |
GoogleCloudAiplatformV1FeatureViewIndexConfig
Configuration for vector indexing.Fields | |
---|---|
bruteForceConfig |
Optional. Configuration options for using brute force search, which simply implements the standard linear search in the database for each query. It is primarily meant for benchmarking and to generate the ground truth for approximate search. |
crowdingColumn |
Optional. Column of crowding. This column contains crowding attribute which is a constraint on a neighbor list produced by FeatureOnlineStoreService.SearchNearestEntities to diversify search results. If NearestNeighborQuery.per_crowding_attribute_neighbor_count is set to K in SearchNearestEntitiesRequest, it's guaranteed that no more than K entities of the same crowding attribute are returned in the response. |
distanceMeasureType |
Optional. The distance measure used in nearest neighbor search. |
Enum type. Can be one of the following: | |
DISTANCE_MEASURE_TYPE_UNSPECIFIED |
Should not be set. |
SQUARED_L2_DISTANCE |
Euclidean (L_2) Distance. |
COSINE_DISTANCE |
Cosine Distance. Defined as 1 - cosine similarity. We strongly suggest using DOT_PRODUCT_DISTANCE + UNIT_L2_NORM instead of COSINE distance. Our algorithms have been more optimized for DOT_PRODUCT distance which, when combined with UNIT_L2_NORM, is mathematically equivalent to COSINE distance and results in the same ranking. |
DOT_PRODUCT_DISTANCE |
Dot Product Distance. Defined as a negative of the dot product. |
embeddingColumn |
Optional. Column of embedding. This column contains the source data to create index for vector search. embedding_column must be set when using vector search. |
embeddingDimension |
Optional. The number of dimensions of the input embedding. |
filterColumns[] |
Optional. Columns of features that are used to filter vector search results. |
treeAhConfig |
Optional. Configuration options for the tree-AH algorithm (Shallow tree + Asymmetric Hashing). Please refer to this paper for more details: https://arxiv.org/abs/1908.10396 |
GoogleCloudAiplatformV1FeatureViewIndexConfigTreeAHConfig
Configuration options for the tree-AH algorithm.Fields | |
---|---|
leafNodeEmbeddingCount |
Optional. Number of embeddings on each leaf node. The default value is 1000 if not set. |
GoogleCloudAiplatformV1FeatureViewSync
FeatureViewSync is a representation of sync operation which copies data from data source to Feature View in Online Store.Fields | |
---|---|
createTime |
Output only. Time when this FeatureViewSync is created. Creation of a FeatureViewSync means that the job is pending / waiting for sufficient resources but may not have started the actual data transfer yet. |
finalStatus |
Output only. Final status of the FeatureViewSync. |
name |
Identifier. Name of the FeatureViewSync. Format: |
runTime |
Output only. Time when this FeatureViewSync is finished. |
syncSummary |
Output only. Summary of the sync job. |
GoogleCloudAiplatformV1FeatureViewSyncConfig
Configuration for Sync. Only one option is set.Fields | |
---|---|
cron |
Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone for the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *", or "TZ=America/New_York 1 * * * *". |
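Assembling the prefixed cron string described above can be sketched as a small helper; the helper name is ours, and neither the cron expression nor the time zone name is validated here.

```python
from typing import Optional

def cron_with_tz(schedule: str, iana_tz: Optional[str] = None) -> str:
    """Prefix a cron expression with CRON_TZ as described above.

    iana_tz must be a valid IANA time zone name; without it the
    schedule's default timezone applies.
    """
    return f"CRON_TZ={iana_tz} {schedule}" if iana_tz else schedule
```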
GoogleCloudAiplatformV1FeatureViewSyncSyncSummary
Summary from the Sync job. For continuous syncs, the summary is updated periodically. For batch syncs, it gets updated on completion of the sync.Fields | |
---|---|
rowSynced |
Output only. Total number of rows synced. |
totalSlot |
Output only. BigQuery slot milliseconds consumed for the sync job. |
GoogleCloudAiplatformV1Featurestore
Vertex AI Feature Store provides a centralized repository for organizing, storing, and serving ML features. The Featurestore is a top-level container for your features and their values.Fields | |
---|---|
createTime |
Output only. Timestamp when this Featurestore was created. |
encryptionSpec |
Optional. Customer-managed encryption key spec for data storage. If set, both of the online and offline data storage will be secured by this key. |
etag |
Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
Optional. The labels with user-defined metadata to organize your Featurestore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one Featurestore(System labels are excluded)." System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. |
name |
Output only. Name of the Featurestore. Format: |
onlineServingConfig |
Optional. Config for online storage resources. The field should not co-exist with the field of |
onlineStorageTtlDays |
Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than |
state |
Output only. State of the featurestore. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
Default value. This value is unused. |
STABLE |
State when the featurestore configuration is not being updated and the fields reflect the current configuration of the featurestore. The featurestore is usable in this state. |
UPDATING |
The state of the featurestore configuration when it is being updated. During an update, the fields reflect either the original configuration or the updated configuration of the featurestore. For example, online_serving_config.fixed_node_count can take minutes to update. While the update is in progress, the featurestore is in the UPDATING state, and the value of fixed_node_count can be the original value or the updated value, depending on the progress of the operation. Until the update completes, the actual number of nodes can still be the original value of fixed_node_count . The featurestore is still usable in this state. |
updateTime |
Output only. Timestamp when this Featurestore was last updated. |
GoogleCloudAiplatformV1FeaturestoreMonitoringConfig
Configuration of how features in Featurestore are monitored.Fields | |
---|---|
categoricalThresholdConfig |
Threshold for categorical features of anomaly detection. This is shared by all types of Featurestore Monitoring for categorical features (i.e. Features with type (Feature.ValueType) BOOL or STRING). |
importFeaturesAnalysis |
The config for ImportFeatures Analysis Based Feature Monitoring. |
numericalThresholdConfig |
Threshold for numerical features of anomaly detection. This is shared by all objectives of Featurestore Monitoring for numerical features (i.e. Features with type (Feature.ValueType) DOUBLE or INT64). |
snapshotAnalysis |
The config for Snapshot Analysis Based Feature Monitoring. |
GoogleCloudAiplatformV1FeaturestoreMonitoringConfigImportFeaturesAnalysis
Configuration of the Featurestore's ImportFeature Analysis Based Monitoring. This type of analysis generates statistics for values of each Feature imported by every ImportFeatureValues operation.Fields | |
---|---|
anomalyDetectionBaseline |
The baseline used to do anomaly detection for the statistics generated by import features analysis. |
Enum type. Can be one of the following: | |
BASELINE_UNSPECIFIED |
Should not be used. |
LATEST_STATS |
Choose the statistics generated later, by either the most recent snapshot analysis or the previous import features analysis. If neither exists, skip anomaly detection and only generate statistics. |
MOST_RECENT_SNAPSHOT_STATS |
Use the statistics generated by the most recent snapshot analysis, if it exists. |
PREVIOUS_IMPORT_FEATURES_STATS |
Use the statistics generated by the previous import features analysis, if it exists. |
state |
Whether to enable / disable / inherit default behavior for import features analysis. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
Should not be used. |
DEFAULT |
The default behavior of whether to enable the monitoring. EntityType-level config: disabled. Feature-level config: inherited from the configuration of EntityType this Feature belongs to. |
ENABLED |
Explicitly enables import features analysis. EntityType-level config: by default enables import features analysis for all Features under it. Feature-level config: enables import features analysis regardless of the EntityType-level config. |
DISABLED |
Explicitly disables import features analysis. EntityType-level config: by default disables import features analysis for all Features under it. Feature-level config: disables import features analysis regardless of the EntityType-level config. |
GoogleCloudAiplatformV1FeaturestoreMonitoringConfigSnapshotAnalysis
Configuration of the Featurestore's Snapshot Analysis Based Monitoring. This type of analysis generates statistics for each Feature based on a snapshot of the latest feature value of each entity every monitoring_interval.Fields | |
---|---|
disabled |
Explicitly disables the snapshot analysis based monitoring. For EntityType-level config: unset / disabled = true indicates disabled by default for Features under it; otherwise snapshot analysis monitoring is enabled by default with monitoring_interval for Features under it. For Feature-level config: disabled = true indicates disabled regardless of the EntityType-level config; an unset monitoring_interval indicates going with the EntityType-level config; otherwise snapshot analysis monitoring runs with monitoring_interval regardless of the EntityType-level config. |
monitoringIntervalDays |
Configuration of the snapshot analysis based monitoring pipeline running interval. The value indicates number of days. |
stalenessDays |
Customized export features time window for snapshot analysis. Unit is one day. Default value is 3 weeks. Minimum value is 1 day. Maximum value is 4000 days. |
GoogleCloudAiplatformV1FeaturestoreMonitoringConfigThresholdConfig
The config for Featurestore Monitoring threshold.Fields | |
---|---|
value |
Specify a threshold value that can trigger the alert. 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert is triggered for that feature. |
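For numerical features the distance compared against this threshold is the Jensen–Shannon divergence. A minimal sketch of that measure over already-binned distributions follows; how the service bins numerical values is not documented here.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete
    distributions given as aligned probability lists. Ranges from 0
    (identical) to 1 (disjoint support)."""
    def kl(a, b):
        # Kullback-Leibler divergence; terms with a_i == 0 contribute 0.
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```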
GoogleCloudAiplatformV1FeaturestoreOnlineServingConfig
OnlineServingConfig specifies the details for provisioning online serving resources.Fields | |
---|---|
fixedNodeCount |
The number of nodes for the online store. The number of nodes doesn't scale automatically, but you can manually update the number of nodes. If set to 0, the featurestore will not have an online store and cannot be used for online serving. |
scaling |
Online serving scaling configuration. Only one of |
GoogleCloudAiplatformV1FeaturestoreOnlineServingConfigScaling
Online serving scaling configuration. If min_node_count and max_node_count are set to the same value, the cluster will be configured with a fixed number of nodes (no auto-scaling).Fields | |
---|---|
cpuUtilizationTarget |
Optional. The CPU utilization that the Autoscaler should try to achieve. This number is on a scale from 0 (no utilization) to 100 (total utilization), and is limited to between 10 and 80. When a cluster's CPU utilization exceeds the target that you have set, Bigtable immediately adds nodes to the cluster. When CPU utilization is substantially lower than the target, Bigtable removes nodes. If not set or set to 0, defaults to 50. |
maxNodeCount |
The maximum number of nodes to scale up to. Must be greater than min_node_count, and less than or equal to 10 times of 'min_node_count'. |
minNodeCount |
Required. The minimum number of nodes to scale down to. Must be greater than or equal to 1. |
GoogleCloudAiplatformV1FetchFeatureValuesRequest
Request message for FeatureOnlineStoreService.FetchFeatureValues. All the features under the requested feature view will be returned.Fields | |
---|---|
dataFormat |
Optional. Response data format. If not set, FeatureViewDataFormat.KEY_VALUE will be used. |
Enum type. Can be one of the following: | |
FEATURE_VIEW_DATA_FORMAT_UNSPECIFIED |
Not set. Will be treated as the KeyValue format. |
KEY_VALUE |
Return response data in key-value format. |
PROTO_STRUCT |
Return response data in proto Struct format. |
dataKey |
Optional. The request key to fetch feature values for. |
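A hypothetical FetchFeatureValues request body built from the two fields above; the entity key is a placeholder.

```python
# Illustrative FetchFeatureValuesRequest payload.
fetch_request = {
    "dataFormat": "KEY_VALUE",        # the default when unset
    "dataKey": {"key": "user_123"},   # placeholder entity ID
}
```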
GoogleCloudAiplatformV1FetchFeatureValuesResponse
Response message for FeatureOnlineStoreService.FetchFeatureValuesFields | |
---|---|
dataKey |
The data key associated with this response. Will only be populated for FeatureOnlineStoreService.StreamingFetchFeatureValues RPCs. |
keyValues |
Feature values in KeyValue format. |
protoStruct |
Feature values in proto Struct format. |
GoogleCloudAiplatformV1FetchFeatureValuesResponseFeatureNameValuePairList
Response structure in the format of key (feature name) and (feature) value pair.Fields | |
---|---|
features[] |
List of feature names and values. |
GoogleCloudAiplatformV1FetchFeatureValuesResponseFeatureNameValuePairListFeatureNameValuePair
Feature name & value pair.Fields | |
---|---|
name |
Feature short name. |
value |
Feature value. |
GoogleCloudAiplatformV1FileData
URI based data.Fields | |
---|---|
fileUri |
Required. URI. |
mimeType |
Required. The IANA standard MIME type of the source data. |
GoogleCloudAiplatformV1FilterSplit
Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any filter in this message is meant to match nothing, it can be set to '-' (the minus sign). Supported only for unstructured Datasets.Fields | |
---|---|
testFilter |
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to test the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. |
trainingFilter |
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to train the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. |
validationFilter |
Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to validate the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. |
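The tie-breaking rule stated in each filter description (first matching set in training, validation, test order; unmatched items ignored) can be sketched as:

```python
def assign_split(item_matches):
    """Assign a DataItem to a split per the FilterSplit semantics above.

    `item_matches` maps split name -> whether the item matched that split's
    filter. An item matched by more than one filter goes to the first
    matching set in training, validation, test order; an item matched by no
    filter is ignored (returns None). A generic sketch of the documented
    rule, not the service implementation.
    """
    for split in ("training", "validation", "test"):
        if item_matches.get(split):
            return split
    return None
```

For example, an item matching both the training and test filters is assigned to training.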
GoogleCloudAiplatformV1FindNeighborsRequest
The request message for MatchService.FindNeighbors.Fields | |
---|---|
deployedIndexId |
The ID of the DeployedIndex that will serve the request. This request is sent to a specific IndexEndpoint, as per the IndexEndpoint.network. That IndexEndpoint also has IndexEndpoint.deployed_indexes, and each such index has a DeployedIndex.id field. The value of the field below must equal one of the DeployedIndex.id fields of the IndexEndpoint that is being called for this request. |
queries[] |
The list of queries. |
returnFullDatapoint |
If set to true, the full datapoints (including all vector values and restricts) of the nearest neighbors are returned. Note that returning full datapoint will significantly increase the latency and cost of the query. |
GoogleCloudAiplatformV1FindNeighborsRequestQuery
A query to find a number of the nearest neighbors (most similar vectors) of a vector.Fields | |
---|---|
approximateNeighborCount |
The number of neighbors to find via approximate search before exact reordering is performed. If not set, the default value from scam config is used; if set, this value must be > 0. |
datapoint |
Required. The datapoint/vector whose nearest neighbors should be searched for. |
fractionLeafNodesToSearchOverride |
The fraction of leaf nodes to search, set at query time, allows the user to tune search performance. Increasing this value increases both search accuracy and latency. The value should be between 0.0 and 1.0. If not set or set to 0.0, the query uses the default value specified in NearestNeighborSearchConfig.TreeAHConfig.fraction_leaf_nodes_to_search. |
neighborCount |
The number of nearest neighbors to be retrieved from database for each query. If not set, will use the default from the service configuration (https://cloud.google.com/vertex-ai/docs/matching-engine/configuring-indexes#nearest-neighbor-search-config). |
perCrowdingAttributeNeighborCount |
Crowding is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute. It's used for improving result diversity. This field is the maximum number of matches with the same crowding tag. |
rrf |
Optional. Represents RRF algorithm that combines search results. |
GoogleCloudAiplatformV1FindNeighborsRequestQueryRRF
Parameters for RRF algorithm that combines search results.Fields | |
---|---|
alpha |
Required. Users can provide an alpha value to weight dense versus sparse results. For example, an alpha of 0 returns only sparse results, and an alpha of 1 returns only dense results. |
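A common formulation of reciprocal rank fusion with the alpha weighting described above is sketched below. The `k = 60` smoothing constant is the usual RRF default and an assumption here; the service's exact formula is not documented in this reference.

```python
def rrf_score(dense_rank, sparse_rank, alpha, k=60):
    """Reciprocal-rank-fusion score for one candidate.

    alpha=1 weighs only the dense ranking and alpha=0 only the sparse one,
    matching the field description above. Ranks are 1-based; pass None when
    the candidate is absent from a ranking. Illustrative only.
    """
    dense = 1.0 / (k + dense_rank) if dense_rank is not None else 0.0
    sparse = 1.0 / (k + sparse_rank) if sparse_rank is not None else 0.0
    return alpha * dense + (1.0 - alpha) * sparse
```

Candidates appearing near the top of both rankings accumulate the highest fused score.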
GoogleCloudAiplatformV1FindNeighborsResponse
The response message for MatchService.FindNeighbors.Fields | |
---|---|
nearestNeighbors[] |
The nearest neighbors of the query datapoints. |
GoogleCloudAiplatformV1FindNeighborsResponseNearestNeighbors
Nearest neighbors for one query.Fields | |
---|---|
id |
The ID of the query datapoint. |
neighbors[] |
All its neighbors. |
GoogleCloudAiplatformV1FindNeighborsResponseNeighbor
A neighbor of the query vector.Fields | |
---|---|
datapoint |
The datapoint of the neighbor. Note that full datapoints are returned only when "return_full_datapoint" is set to true. Otherwise, only the "datapoint_id" and "crowding_tag" fields are populated. |
distance |
The distance between the neighbor and the dense embedding query. |
sparseDistance |
The distance between the neighbor and the query sparse_embedding. |
GoogleCloudAiplatformV1FractionSplit
Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction, and test_fraction may optionally be provided; they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of data is used for training, 10% for validation, and 10% for test.
Fields | |
---|---|
testFraction |
The fraction of the input data that is to be used to evaluate the Model. |
trainingFraction |
The fraction of the input data that is to be used to train the Model. |
validationFraction |
The fraction of the input data that is to be used to validate the Model. |
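The defaulting and validation rules above can be sketched as follows. How Vertex AI distributes a remainder below 1 is service-defined, so this sketch only validates provided values rather than guessing.

```python
def resolve_fractions(training=None, validation=None, test=None):
    """Resolve FractionSplit fractions per the documented rules.

    When no fraction is set, roughly 80/10/10 is used by default. When any
    are provided, they must sum to at most 1; values left as None are
    returned unchanged since their assignment is decided by Vertex AI.
    """
    provided = [f for f in (training, validation, test) if f is not None]
    if not provided:
        return 0.8, 0.1, 0.1
    if sum(provided) > 1.0 + 1e-9:  # small tolerance for float arithmetic
        raise ValueError("fractions must sum to at most 1")
    return training, validation, test
```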
GoogleCloudAiplatformV1FunctionCall
A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values.Fields | |
---|---|
args |
Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. |
name |
Required. The name of the function to call. Matches [FunctionDeclaration.name]. |
GoogleCloudAiplatformV1FunctionDeclaration
Structured representation of a function declaration as defined by the OpenAPI 3.0 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a Tool
by the model and executed by the client.
Fields | |
---|---|
description |
Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. |
name |
Required. The name of the function to call. Must start with a letter or an underscore, and contain only characters a-z, A-Z, 0-9, underscores, dots, or dashes, with a maximum length of 64. |
parameters |
Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the OpenAPI 3.0 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For a function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain characters a-z, A-Z, 0-9, or underscores, with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 |
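The flattened YAML example in the parameters description corresponds to the following JSON form, sketched here with a hypothetical function name and description:

```python
# JSON form of the parameters example above: one required parameter
# (param1) and one optional parameter (param2). The name and description
# are hypothetical; the schema keys follow the field description.
get_weather = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "OBJECT",
        "properties": {
            "param1": {"type": "STRING"},
            "param2": {"type": "INTEGER"},
        },
        "required": ["param1"],
    },
}
```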
GoogleCloudAiplatformV1FunctionResponse
The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; it is used as context for the model. This should contain the result of a [FunctionCall] made based on model prediction.Fields | |
---|---|
name |
Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. |
response |
Required. The function response in JSON object format. |
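Putting FunctionCall and FunctionResponse together, the client-side round trip looks roughly like this. The `get_weather` function and its output are hypothetical; only the `name`/`args`/`response` field names come from the messages above.

```python
# A FunctionCall as the model might emit it (hypothetical function/args).
function_call = {"name": "get_weather", "args": {"param1": "Zurich"}}

def make_function_response(call, result):
    """Build the FunctionResponse for a FunctionCall.

    Per the field descriptions above, `name` must match the FunctionCall's
    name, and `response` carries the function output as a JSON object.
    """
    return {"name": call["name"], "response": result}

# The client executes the function and returns its (hypothetical) result.
reply = make_function_response(function_call, {"temperature_c": 21})
```

The resulting `reply` is what the client sends back as context for the model's next turn.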
GoogleCloudAiplatformV1GcsDestination
The Google Cloud Storage location where the output is to be written to.Fields | |
---|---|
outputUriPrefix |
Required. Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist. |
GoogleCloudAiplatformV1GcsSource
The Google Cloud Storage location for the input content.Fields | |
---|---|
uris[] |
Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames. |
GoogleCloudAiplatformV1GenerateContentRequest
Request message for [PredictionService.GenerateContent].Fields | |
---|---|
contents[] |
Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request. |
generationConfig |
Optional. Generation config. |
safetySettings[] |
Optional. Per request settings for blocking unsafe content. Enforced on GenerateContentResponse.candidates. |
systemInstruction |
Optional. The user-provided system instructions for the model. Note: only text should be used in parts, and the content in each part will be in a separate paragraph. |
tools[] |
Optional. A list of |
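An illustrative request body assembled from the fields above. The `role`/`parts`/`text` structure inside each content entry follows the public Vertex AI REST shape and should be treated as an assumption of this sketch:

```python
# Illustrative GenerateContentRequest body (single-turn). The Content/Part
# structure ("role", "parts", "text") is assumed from the public REST shape.
request = {
    "contents": [
        {"role": "user", "parts": [{"text": "Summarize this paragraph."}]},
    ],
    "systemInstruction": {"parts": [{"text": "Answer in one sentence."}]},
    "generationConfig": {"temperature": 0.2, "maxOutputTokens": 256},
    "safetySettings": [],  # optional per-request blocking rules
}
```

For a multi-turn conversation, `contents` would carry the alternating history plus the latest user message.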
GoogleCloudAiplatformV1GenerateContentResponse
Response message for [PredictionService.GenerateContent].Fields | |
---|---|
candidates[] |
Output only. Generated candidates. |
promptFeedback |
Output only. Content filter results for a prompt sent in the request. Note: Sent only in the first stream chunk. Only happens when no candidates were generated due to content violations. |
usageMetadata |
Usage metadata about the response(s). |
GoogleCloudAiplatformV1GenerateContentResponsePromptFeedback
Content filter results for a prompt sent in the request.Fields | |
---|---|
blockReason |
Output only. Blocked reason. |
Enum type. Can be one of the following: | |
BLOCKED_REASON_UNSPECIFIED |
Unspecified blocked reason. |
SAFETY |
Candidates blocked due to safety. |
OTHER |
Candidates blocked due to other reason. |
BLOCKLIST |
Candidates blocked due to the terms which are included from the terminology blocklist. |
PROHIBITED_CONTENT |
Candidates blocked due to prohibited content. |
blockReasonMessage |
Output only. A readable block reason message. |
safetyRatings[] |
Output only. Safety ratings. |
GoogleCloudAiplatformV1GenerateContentResponseUsageMetadata
Usage metadata about response(s).Fields | |
---|---|
candidatesTokenCount |
Number of tokens in the response(s). |
promptTokenCount |
Number of tokens in the request. |
totalTokenCount |
(No description provided) |
GoogleCloudAiplatformV1GenerationConfig
Generation config.Fields | |
---|---|
candidateCount |
Optional. Number of candidates to generate. |
frequencyPenalty |
Optional. Frequency penalties. |
maxOutputTokens |
Optional. The maximum number of output tokens to generate per message. |
presencePenalty |
Optional. Presence penalties. |
responseMimeType |
Optional. Output response mimetype of the generated candidate text. Supported mimetype: - |
responseStyle |
Optional. Controls three levels of creativity in the model output. Default: RESPONSE_STYLE_BALANCED |
Enum type. Can be one of the following: | |
RESPONSE_STYLE_UNSPECIFIED |
Response style unspecified. |
RESPONSE_STYLE_PRECISE |
Precise response. |
RESPONSE_STYLE_BALANCED |
Default response style. |
RESPONSE_STYLE_CREATIVE |
Creative response style. |
stopSequences[] |
Optional. Stop sequences. |
temperature |
Optional. Controls the randomness of predictions. |
topK |
Optional. If specified, top-k sampling will be used. |
topP |
Optional. If specified, nucleus sampling will be used. |
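The `topK` and `topP` fields refer to the standard top-k and nucleus-sampling truncations of the candidate token set. The sketch below shows the generic technique; it is not Vertex AI's internal implementation, and the interaction order (top-k before top-p) is an assumption.

```python
def filter_candidates(probs, top_k=None, top_p=None):
    """Restrict a candidate set by top-k and/or nucleus (top-p) truncation.

    `probs` is a list of (token, probability) pairs. Top-k keeps the k most
    probable tokens; top-p keeps the smallest prefix of the ranked list whose
    cumulative probability reaches top_p. Generic illustration only.
    """
    ranked = sorted(probs, key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]
    if top_p is not None:
        kept, total = [], 0.0
        for tok, p in ranked:
            kept.append((tok, p))
            total += p
            if total >= top_p:  # stop once cumulative mass reaches top_p
                break
        ranked = kept
    return [tok for tok, _ in ranked]
```

Lower `topK`/`topP` values make sampling more deterministic, complementing a low `temperature`.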
GoogleCloudAiplatformV1GenericOperationMetadata
Generic Metadata shared by all operations.Fields | |
---|---|
createTime |
Output only. Time when the operation was created. |
partialFailures[] |
Output only. Partial failures encountered. E.g. single files that couldn't be read. This field should never exceed 20 entries. Status details field will contain standard Google Cloud error details. |
updateTime |
Output only. Time when the operation was updated for the last time. If the operation has finished (successfully or not), this is the finish time. |
GoogleCloudAiplatformV1GenieSource
Contains information about the source of the models generated from Generative AI Studio.Fields | |
---|---|
baseModelUri |
Required. The public base model URI. |
GoogleCloudAiplatformV1GroundingMetadata
Metadata returned to client when grounding is enabled.Fields | |
---|---|
searchEntryPoint |
Optional. Google search entry for the following-up web searches. |
webSearchQueries[] |
Optional. Web search queries for the following-up web search. |
GoogleCloudAiplatformV1HyperparameterTuningJob
Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with identical CustomJob specification.Fields | |
---|---|
createTime |
Output only. Time when the HyperparameterTuningJob was created. |
displayName |
Required. The display name of the HyperparameterTuningJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
encryptionSpec |
Customer-managed encryption key options for a HyperparameterTuningJob. If this is set, then all resources created by the HyperparameterTuningJob will be encrypted with the provided encryption key. |
endTime |
Output only. Time when the HyperparameterTuningJob entered any of the following states: |
error |
Output only. Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED. |
labels |
The labels with user-defined metadata to organize HyperparameterTuningJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
maxFailedTrialCount |
The number of failed Trials that need to be seen before failing the HyperparameterTuningJob. If set to 0, Vertex AI decides how many Trials must fail before the whole job fails. |
maxTrialCount |
Required. The desired total number of Trials. |
name |
Output only. Resource name of the HyperparameterTuningJob. |
parallelTrialCount |
Required. The desired number of Trials to run in parallel. |
startTime |
Output only. Time when the HyperparameterTuningJob for the first time entered the |
state |
Output only. The detailed state of the job. |
Enum type. Can be one of the following: | |
JOB_STATE_UNSPECIFIED |
The job state is unspecified. |
JOB_STATE_QUEUED |
The job has been just created or resumed and processing has not yet begun. |
JOB_STATE_PENDING |
The service is preparing to run the job. |
JOB_STATE_RUNNING |
The job is in progress. |
JOB_STATE_SUCCEEDED |
The job completed successfully. |
JOB_STATE_FAILED |
The job failed. |
JOB_STATE_CANCELLING |
The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED , JOB_STATE_FAILED or JOB_STATE_CANCELLED . |
JOB_STATE_CANCELLED |
The job has been cancelled. |
JOB_STATE_PAUSED |
The job has been stopped, and can be resumed. |
JOB_STATE_EXPIRED |
The job has expired. |
JOB_STATE_UPDATING |
The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state. |
JOB_STATE_PARTIALLY_SUCCEEDED |
The job is partially succeeded, some results may be missing due to errors. |
studySpec |
Required. Study configuration of the HyperparameterTuningJob. |
trialJobSpec |
Required. The spec of a trial job. The same spec applies to the CustomJobs created in all the trials. |
trials[] |
Output only. Trials of the HyperparameterTuningJob. |
updateTime |
Output only. Time when the HyperparameterTuningJob was most recently updated. |
GoogleCloudAiplatformV1IdMatcher
Matcher for Features of an EntityType by Feature ID.Fields | |
---|---|
ids[] |
Required. The following are accepted as |
GoogleCloudAiplatformV1ImportDataConfig
Describes the location from where we import data into a Dataset, together with the labels that will be applied to the DataItems and the Annotations.Fields | |
---|---|
annotationLabels |
Labels that will be applied to newly imported Annotations. If two Annotations are identical, one of them will be deduped. Two Annotations are considered identical if their payload, payload_schema_uri and all of their labels are the same. These labels will be overridden by Annotation labels specified inside index file referenced by import_schema_uri, e.g. jsonl file. |
dataItemLabels |
Labels that will be applied to newly imported DataItems. If a DataItem identical to one being imported already exists in the Dataset, then these labels will be appended to those of the already existing one, and if a label with an identical key was imported before, the old label value will be overwritten. If two DataItems are identical in the same import data operation, the labels will be combined and, if a key collision happens in this case, one of the values will be picked randomly. Two DataItems are considered identical if their content bytes are identical (e.g. image bytes or pdf bytes). These labels will be overridden by Annotation labels specified inside the index file referenced by import_schema_uri, e.g. a jsonl file. |
gcsSource |
The Google Cloud Storage location for the input content. |
importSchemaUri |
Required. Points to a YAML file stored on Google Cloud Storage describing the import format. Validation will be done against the schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. |
GoogleCloudAiplatformV1ImportDataOperationMetadata
Runtime operation information for DatasetService.ImportData.Fields | |
---|---|
genericMetadata |
The common part of the operation metadata. |
GoogleCloudAiplatformV1ImportDataRequest
Request message for DatasetService.ImportData.Fields | |
---|---|
importConfigs[] |
Required. The desired input locations. The contents of all input locations will be imported in one batch. |
GoogleCloudAiplatformV1ImportFeatureValuesOperationMetadata
Details of operations that perform import Feature values.Fields | |
---|---|
blockingOperationIds[] |
List of ImportFeatureValues operations running under a single EntityType that are blocking this operation. |
genericMetadata |
Operation metadata for Featurestore import Feature values. |
importedEntityCount |
Number of entities that have been imported by the operation. |
importedFeatureValueCount |
Number of Feature values that have been imported by the operation. |
invalidRowCount |
The number of rows in the input source that weren't imported due to either: * Not having any featureValues. * Having a null entityId. * Having a null timestamp. * Not being parsable (applicable for CSV sources). |
sourceUris[] |
The source URI from where Feature values are imported. |
timestampOutsideRetentionRowsCount |
The number of rows that weren't ingested due to having timestamps outside the retention boundary. |
GoogleCloudAiplatformV1ImportFeatureValuesRequest
Request message for FeaturestoreService.ImportFeatureValues.Fields | |
---|---|
avroSource |
(No description provided) |
bigquerySource |
(No description provided) |
csvSource |
(No description provided) |
disableIngestionAnalysis |
If true, API doesn't start ingestion analysis pipeline. |
disableOnlineServing |
If set, data will not be imported for online serving. This is typically used for backfilling, where Feature generation timestamps are not in the timestamp range needed for online serving. |
entityIdField |
Source column that holds entity IDs. If not provided, entity IDs are extracted from the column named entity_id. |
featureSpecs[] |
Required. Specifications defining which Feature values to import from the entity. The request fails if no feature_specs are provided, and having multiple feature_specs for one Feature is not allowed. |
featureTime |
Single Feature timestamp for all entities being imported. The timestamp must not have higher than millisecond precision. |
featureTimeField |
Source column that holds the Feature timestamp for all Feature values in each entity. |
workerCount |
Specifies the number of workers that are used to write data to the Featurestore. Consider the online serving capacity that you require to achieve the desired import throughput without interfering with online serving. The value must be positive, and less than or equal to 100. If not set, defaults to using 1 worker. The low count ensures minimal impact on online serving performance. |
GoogleCloudAiplatformV1ImportFeatureValuesRequestFeatureSpec
Defines the Feature value(s) to import.Fields | |
---|---|
id |
Required. ID of the Feature to import values of. This Feature must exist in the target EntityType, or the request will fail. |
sourceField |
Source column to get the Feature values from. If not set, uses the column with the same name as the Feature ID. |
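An illustrative ImportFeatureValues request body combining the request fields and the nested FeatureSpec message above. The CSV URI, column names, and feature IDs are hypothetical:

```python
# Illustrative ImportFeatureValues request body; all concrete values
# (bucket, columns, feature IDs) are hypothetical.
import_request = {
    "csvSource": {"gcsSource": {"uris": ["gs://my-bucket/features.csv"]}},
    "entityIdField": "customer_id",      # defaults to column "entity_id"
    "featureTimeField": "event_time",    # per-row feature timestamps
    "featureSpecs": [
        # One spec per Feature; sourceField defaults to the Feature ID.
        {"id": "lifetime_value", "sourceField": "ltv_usd"},
        {"id": "churn_score"},
    ],
    "workerCount": 1,  # must be <= 100; a low count protects online serving
}
```

Note that `featureTime` (a single timestamp for all rows) and `featureTimeField` (a per-row column, used here) are alternatives.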
GoogleCloudAiplatformV1ImportFeatureValuesResponse
Response message for FeaturestoreService.ImportFeatureValues.Fields | |
---|---|
importedEntityCount |
Number of entities that have been imported by the operation. |
importedFeatureValueCount |
Number of Feature values that have been imported by the operation. |
invalidRowCount |
The number of rows in the input source that weren't imported due to either: * Not having any featureValues. * Having a null entityId. * Having a null timestamp. * Not being parsable (applicable for CSV sources). |
timestampOutsideRetentionRowsCount |
The number of rows that weren't ingested due to having feature timestamps outside the retention boundary. |
GoogleCloudAiplatformV1ImportModelEvaluationRequest
Request message for ModelService.ImportModelEvaluationFields | |
---|---|
modelEvaluation |
Required. Model evaluation resource to be imported. |
GoogleCloudAiplatformV1Index
A representation of a collection of database items organized in a way that allows for approximate nearest neighbor (a.k.a. ANN) search algorithms.Fields | |
---|---|
createTime |
Output only. Timestamp when this Index was created. |
deployedIndexes[] |
Output only. The pointers to DeployedIndexes created from this Index. An Index can only be deleted after all its DeployedIndexes have been undeployed. |
description |
The description of the Index. |
displayName |
Required. The display name of the Index. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
encryptionSpec |
Immutable. Customer-managed encryption key spec for an Index. If set, this Index and all sub-resources of this Index will be secured by this key. |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
indexStats |
Output only. Stats of the index resource. |
indexUpdateMethod |
Immutable. The update method to use with this Index. If not set, BATCH_UPDATE will be used by default. |
Enum type. Can be one of the following: | |
INDEX_UPDATE_METHOD_UNSPECIFIED |
Should not be used. |
BATCH_UPDATE |
BatchUpdate: user can call UpdateIndex with files on Cloud Storage of Datapoints to update. |
STREAM_UPDATE |
StreamUpdate: user can call UpsertDatapoints/DeleteDatapoints to update the Index and the updates will be applied in corresponding DeployedIndexes in nearly real-time. |
labels |
The labels with user-defined metadata to organize your Indexes. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
metadata |
Additional information about the Index; the schema of the metadata can be found in metadata_schema. |
metadataSchemaUri |
Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access. |
name |
Output only. The resource name of the Index. |
updateTime |
Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. Result of any successfully completed Operation on the Index is reflected in it. |
GoogleCloudAiplatformV1IndexDatapoint
A datapoint of Index.Fields | |
---|---|
crowdingTag |
Optional. CrowdingTag of the datapoint, the number of neighbors to return in each crowding can be configured during query. |
datapointId |
Required. Unique identifier of the datapoint. |
featureVector[] |
Required. Feature embedding vector for dense index. An array of numbers with the length of [NearestNeighborSearchConfig.dimensions]. |
numericRestricts[] |
Optional. List of Restricts of the datapoint, used to perform "restricted searches" where boolean rules are used to filter the subset of the database eligible for matching. This uses numeric comparisons. |
restricts[] |
Optional. List of Restricts of the datapoint, used to perform "restricted searches" where boolean rules are used to filter the subset of the database eligible for matching. This uses categorical tokens. See: https://cloud.google.com/vertex-ai/docs/matching-engine/filtering |
sparseEmbedding |
Optional. Feature embedding vector for sparse index. |
GoogleCloudAiplatformV1IndexDatapointCrowdingTag
Crowding tag is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute.Fields | |
---|---|
crowdingAttribute |
The attribute value used for crowding. The maximum number of neighbors to return per crowding attribute value (per_crowding_attribute_num_neighbors) is configured per-query. This field is ignored if per_crowding_attribute_num_neighbors is larger than the total number of neighbors to return for a given query. |
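The crowding constraint (at most k' of the k returned neighbors share one crowding attribute value) can be sketched as a rank-preserving filter. This is a generic illustration of the documented rule, not the service implementation:

```python
def apply_crowding(neighbors, per_attribute_limit):
    """Keep at most `per_attribute_limit` neighbors per crowding attribute,
    preserving rank order.

    `neighbors` is a ranked list of (id, crowding_attribute) pairs; returns
    the surviving neighbor ids. Sketch of the crowding rule described above.
    """
    counts = {}
    kept = []
    for nid, attr in neighbors:
        if counts.get(attr, 0) < per_attribute_limit:
            kept.append(nid)
            counts[attr] = counts.get(attr, 0) + 1
    return kept
```

This is how crowding improves diversity: lower-ranked results with fresh attribute values survive while near-duplicates are trimmed.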
GoogleCloudAiplatformV1IndexDatapointNumericRestriction
This field allows restricts to be based on numeric comparisons rather than categorical tokens.Fields | |
---|---|
namespace |
The namespace of this restriction. e.g.: cost. |
op |
This MUST be specified for queries and must NOT be specified for datapoints. |
Enum type. Can be one of the following: | |
OPERATOR_UNSPECIFIED |
Default value of the enum. |
LESS |
Datapoints are eligible iff their value is < the query's. |
LESS_EQUAL |
Datapoints are eligible iff their value is <= the query's. |
EQUAL |
Datapoints are eligible iff their value is == the query's. |
GREATER_EQUAL |
Datapoints are eligible iff their value is >= the query's. |
GREATER |
Datapoints are eligible iff their value is > the query's. |
NOT_EQUAL |
Datapoints are eligible iff their value is != the query's. |
valueDouble |
Represents 64 bit float. |
valueFloat |
Represents 32 bit float. |
valueInt |
Represents 64 bit integer. |
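The Operator enum above maps directly onto ordinary comparisons, applied as `datapoint value <op> query value` per the "eligible iff their value is <op> the query's" wording. A minimal sketch:

```python
import operator

# Operator enum name -> Python comparison, applied as
# datapoint_value <op> query_value (see the enum descriptions above).
_OPS = {
    "LESS": operator.lt,
    "LESS_EQUAL": operator.le,
    "EQUAL": operator.eq,
    "GREATER_EQUAL": operator.ge,
    "GREATER": operator.gt,
    "NOT_EQUAL": operator.ne,
}

def numeric_restrict_eligible(datapoint_value, op, query_value):
    """True iff the datapoint passes the query's numeric restriction."""
    return _OPS[op](datapoint_value, query_value)
```

For example, with a query restriction of `LESS` on value 5, a datapoint whose value in that namespace is 3 remains eligible.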
GoogleCloudAiplatformV1IndexDatapointRestriction
Restriction of a datapoint, which describes its attributes (tokens) from each of several attribute categories (namespaces).Fields | |
---|---|
allowList[] |
The attributes to allow in this namespace. e.g.: 'red' |
denyList[] |
The attributes to deny in this namespace. e.g.: 'blue' |
namespace |
The namespace of this restriction. e.g.: color. |
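A sketch of categorical matching for one namespace: a datapoint stays eligible if it carries at least one allowed token (when an allow list is given) and none of the denied tokens. The exact service semantics are documented at the filtering link above; treat this as an assumed simplification.

```python
def token_restrict_eligible(datapoint_tokens, allow_list, deny_list):
    """Evaluate a categorical restriction for one namespace.

    Deny takes precedence here (an assumption of this sketch): any denied
    token disqualifies the datapoint; otherwise a non-empty allow list
    requires at least one matching token.
    """
    tokens = set(datapoint_tokens)
    if deny_list and tokens & set(deny_list):
        return False
    if allow_list and not tokens & set(allow_list):
        return False
    return True
```

With `namespace: color`, allow list `['red']`, and deny list `['blue']`, a red datapoint matches and a blue one does not.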
GoogleCloudAiplatformV1IndexDatapointSparseEmbedding
Feature embedding vector for sparse index. An array of numbers whose values are located in the specified dimensions.Fields | |
---|---|
dimensions[] |
Required. The list of indexes for the embedding values of the sparse vector. |
values[] |
Required. The list of embedding values of the sparse vector. |
GoogleCloudAiplatformV1IndexEndpoint
Indexes are deployed into it. An IndexEndpoint can have multiple DeployedIndexes.Fields | |
---|---|
createTime |
Output only. Timestamp when this IndexEndpoint was created. |
deployedIndexes[] |
Output only. The indexes deployed in this endpoint. |
description |
The description of the IndexEndpoint. |
displayName |
Required. The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
enablePrivateServiceConnect |
Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set. |
encryptionSpec |
Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key. |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
name |
Output only. The resource name of the IndexEndpoint. |
network |
Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: |
privateServiceConnectConfig |
Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive. |
publicEndpointDomainName |
Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint. |
publicEndpointEnabled |
Optional. If true, the deployed index will be accessible through public endpoint. |
updateTime |
Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of. |
GoogleCloudAiplatformV1IndexPrivateEndpoints
IndexPrivateEndpoints proto is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send a request via private service access, use match_grpc_address. To send a request via private service connect, use service_attachment.Fields | |
---|---|
matchGrpcAddress |
Output only. The IP address used to send match gRPC requests. |
pscAutomatedEndpoints[] |
Output only. PscAutomatedEndpoints is populated if private service connect is enabled and PscAutomatedConfig is set. |
serviceAttachment |
Output only. The name of the service attachment resource. Populated if private service connect is enabled. |
GoogleCloudAiplatformV1IndexStats
Stats of the Index.Fields | |
---|---|
shardsCount |
Output only. The number of shards in the Index. |
sparseVectorsCount |
Output only. The number of sparse vectors in the Index. |
vectorsCount |
Output only. The number of dense vectors in the Index. |
GoogleCloudAiplatformV1InputDataConfig
Specifies Vertex AI owned input data to be used for training, and possibly evaluating, the Model.Fields | |
---|---|
annotationSchemaUri |
Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/; note that the chosen schema must be consistent with the metadata of the Dataset specified by dataset_id. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri. |
annotationsFilter |
Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the respective training, validation, or test role, depending on the role of the DataItem they are on (for the auto-assigned, that role is decided by Vertex AI). A filter with the same syntax as the one used in ListAnnotations may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem. |
bigqueryDestination |
Only applicable to custom training with tabular Dataset with BigQuery source. The BigQuery project location where the training data is to be written to. In the given project a new dataset is created with name |
datasetId |
Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained; what is compatible should be described in the used TrainingPipeline's training_task_definition. For tabular Datasets, all their data is exported to training, to pick and choose from. |
filterSplit |
Split based on the provided filters for each set. |
fractionSplit |
Split based on fractions defining the size of each set. |
gcsDestination |
The Cloud Storage location where the training data is to be written to. In the given directory a new directory is created with name: |
persistMlUseAssignment |
Whether to persist the ML use assignment to data item system labels. |
predefinedSplit |
Supported only for tabular Datasets. Split based on a predefined key. |
savedQueryId |
Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by dataset_id used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both saved_query_id and annotations_filter. Only one of saved_query_id and annotation_schema_uri should be specified, as both of them represent the same thing: problem type. |
stratifiedSplit |
Supported only for tabular Datasets. Split based on the distribution of the specified column. |
timestampSplit |
Supported only for tabular Datasets. Split based on the timestamp of the input data pieces. |
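The split fields above (filterSplit, fractionSplit, predefinedSplit, stratifiedSplit, timestampSplit) are mutually exclusive ways of dividing the Dataset into training, validation, and test sets. As a sketch of what fractionSplit amounts to, the contiguous partitioning below is illustrative only, not Vertex AI's actual assignment algorithm:

```python
def fraction_split(items, training=0.8, validation=0.1, test=0.1):
    """Partition items into train/validation/test sets by the given fractions."""
    assert abs(training + validation + test - 1.0) < 1e-9, "fractions must sum to 1"
    n = len(items)
    n_train = int(n * training)
    n_val = int(n * validation)
    return {
        "training": items[:n_train],
        "validation": items[n_train:n_train + n_val],
        "test": items[n_train + n_val:],       # remainder goes to test
    }

splits = fraction_split(list(range(100)))
print({k: len(v) for k, v in splits.items()})  # {'training': 80, 'validation': 10, 'test': 10}
```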
GoogleCloudAiplatformV1Int64Array
A list of int64 values.Fields | |
---|---|
values[] |
A list of int64 values. |
GoogleCloudAiplatformV1IntegratedGradientsAttribution
An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365Fields | |
---|---|
blurBaselineConfig |
Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 |
smoothGradConfig |
Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf |
stepCount |
Required. The number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is within the desired error range. The valid range is [1, 100], inclusive. |
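stepCount controls how finely the path integral is approximated by a Riemann sum of gradients along the straight path from baseline to input. A toy sketch for a scalar function f(x) = x² with baseline 0 (both chosen here purely for illustration), showing the sum-to-diff property: the attribution should approach f(input) − f(baseline):

```python
def integrated_gradient(grad_f, baseline, x, step_count=50):
    """Approximate the Aumann-Shapley attribution along the straight path
    from baseline to x using a midpoint Riemann sum of the gradient."""
    total = 0.0
    for k in range(step_count):
        alpha = (k + 0.5) / step_count            # midpoint of each step
        point = baseline + alpha * (x - baseline)
        total += grad_f(point)
    return (x - baseline) * total / step_count

grad_f = lambda x: 2 * x                          # gradient of f(x) = x**2
attr = integrated_gradient(grad_f, baseline=0.0, x=3.0, step_count=50)
# Sum-to-diff property: attribution ~ f(x) - f(baseline) = 9 - 0
print(attr)
```

With a real model, the gradient is a vector over input features and the same sum-to-diff check tells you whether step_count is high enough.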
GoogleCloudAiplatformV1LargeModelReference
Contains information about the Large Model.Fields | |
---|---|
name |
Required. The unique name of the large Foundation or pre-built model, like "chat-bison" or "text-bison", or a model name with version ID, like "chat-bison@001" or "text-bison@005". |
GoogleCloudAiplatformV1LineageSubgraph
A subgraph of the overall lineage graph. Event edges connect Artifact and Execution nodes.Fields | |
---|---|
artifacts[] |
The Artifact nodes in the subgraph. |
events[] |
The Event edges between Artifacts and Executions in the subgraph. |
executions[] |
The Execution nodes in the subgraph. |
GoogleCloudAiplatformV1ListAnnotationsResponse
Response message for DatasetService.ListAnnotations.Fields | |
---|---|
annotations[] |
A list of Annotations that matches the specified filter in the request. |
nextPageToken |
The standard List next-page token. |
GoogleCloudAiplatformV1ListArtifactsResponse
Response message for MetadataService.ListArtifacts.Fields | |
---|---|
artifacts[] |
The Artifacts retrieved from the MetadataStore. |
nextPageToken |
A token, which can be sent as ListArtifactsRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages. |
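All of the List* responses in this section follow the same pagination contract: feed nextPageToken back as the request's page_token until the token comes back empty or absent. A sketch of that loop against a stand-in list function (fake_list is hypothetical; a real call would be e.g. MetadataService.ListArtifacts over HTTP or gRPC):

```python
def fake_list(page_token=""):
    """Stand-in for a List* RPC: serves three pages of Artifacts."""
    pages = {
        "":   (["a1", "a2"], "t1"),
        "t1": (["a3", "a4"], "t2"),
        "t2": (["a5"], ""),          # empty token marks the last page
    }
    items, next_token = pages[page_token]
    return {"artifacts": items, "nextPageToken": next_token}

def list_all(list_fn):
    """Drain every page by feeding nextPageToken back as page_token."""
    results, token = [], ""
    while True:
        resp = list_fn(page_token=token)
        results.extend(resp["artifacts"])
        token = resp.get("nextPageToken", "")
        if not token:                # empty/absent token => no further pages
            return results

print(list_all(fake_list))  # ['a1', 'a2', 'a3', 'a4', 'a5']
```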
GoogleCloudAiplatformV1ListBatchPredictionJobsResponse
Response message for JobService.ListBatchPredictionJobsFields | |
---|---|
batchPredictionJobs[] |
List of BatchPredictionJobs in the requested page. |
nextPageToken |
A token to retrieve the next page of results. Pass to ListBatchPredictionJobsRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListContextsResponse
Response message for MetadataService.ListContexts.Fields | |
---|---|
contexts[] |
The Contexts retrieved from the MetadataStore. |
nextPageToken |
A token, which can be sent as ListContextsRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages. |
GoogleCloudAiplatformV1ListCustomJobsResponse
Response message for JobService.ListCustomJobsFields | |
---|---|
customJobs[] |
List of CustomJobs in the requested page. |
nextPageToken |
A token to retrieve the next page of results. Pass to ListCustomJobsRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListDataItemsResponse
Response message for DatasetService.ListDataItems.Fields | |
---|---|
dataItems[] |
A list of DataItems that matches the specified filter in the request. |
nextPageToken |
The standard List next-page token. |
GoogleCloudAiplatformV1ListDataLabelingJobsResponse
Response message for JobService.ListDataLabelingJobs.Fields | |
---|---|
dataLabelingJobs[] |
A list of DataLabelingJobs that matches the specified filter in the request. |
nextPageToken |
The standard List next-page token. |
GoogleCloudAiplatformV1ListDatasetVersionsResponse
Response message for DatasetService.ListDatasetVersions.Fields | |
---|---|
datasetVersions[] |
A list of DatasetVersions that matches the specified filter in the request. |
nextPageToken |
The standard List next-page token. |
GoogleCloudAiplatformV1ListDatasetsResponse
Response message for DatasetService.ListDatasets.Fields | |
---|---|
datasets[] |
A list of Datasets that matches the specified filter in the request. |
nextPageToken |
The standard List next-page token. |
GoogleCloudAiplatformV1ListDeploymentResourcePoolsResponse
Response message for ListDeploymentResourcePools method.Fields | |
---|---|
deploymentResourcePools[] |
The DeploymentResourcePools from the specified location. |
nextPageToken |
A token, which can be sent as |
GoogleCloudAiplatformV1ListEndpointsResponse
Response message for EndpointService.ListEndpoints.Fields | |
---|---|
endpoints[] |
List of Endpoints in the requested page. |
nextPageToken |
A token to retrieve the next page of results. Pass to ListEndpointsRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListEntityTypesResponse
Response message for FeaturestoreService.ListEntityTypes.Fields | |
---|---|
entityTypes[] |
The EntityTypes matching the request. |
nextPageToken |
A token, which can be sent as ListEntityTypesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
GoogleCloudAiplatformV1ListExecutionsResponse
Response message for MetadataService.ListExecutions.Fields | |
---|---|
executions[] |
The Executions retrieved from the MetadataStore. |
nextPageToken |
A token, which can be sent as ListExecutionsRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages. |
GoogleCloudAiplatformV1ListFeatureGroupsResponse
Response message for FeatureRegistryService.ListFeatureGroups.Fields | |
---|---|
featureGroups[] |
The FeatureGroups matching the request. |
nextPageToken |
A token, which can be sent as ListFeatureGroupsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
GoogleCloudAiplatformV1ListFeatureOnlineStoresResponse
Response message for FeatureOnlineStoreAdminService.ListFeatureOnlineStores.Fields | |
---|---|
featureOnlineStores[] |
The FeatureOnlineStores matching the request. |
nextPageToken |
A token, which can be sent as ListFeatureOnlineStoresRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
GoogleCloudAiplatformV1ListFeatureViewSyncsResponse
Response message for FeatureOnlineStoreAdminService.ListFeatureViewSyncs.Fields | |
---|---|
featureViewSyncs[] |
The FeatureViewSyncs matching the request. |
nextPageToken |
A token, which can be sent as ListFeatureViewSyncsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
GoogleCloudAiplatformV1ListFeatureViewsResponse
Response message for FeatureOnlineStoreAdminService.ListFeatureViews.Fields | |
---|---|
featureViews[] |
The FeatureViews matching the request. |
nextPageToken |
A token, which can be sent as ListFeatureViewsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
GoogleCloudAiplatformV1ListFeaturesResponse
Response message for FeaturestoreService.ListFeatures. Response message for FeatureRegistryService.ListFeatures.Fields | |
---|---|
features[] |
The Features matching the request. |
nextPageToken |
A token, which can be sent as ListFeaturesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
GoogleCloudAiplatformV1ListFeaturestoresResponse
Response message for FeaturestoreService.ListFeaturestores.Fields | |
---|---|
featurestores[] |
The Featurestores matching the request. |
nextPageToken |
A token, which can be sent as ListFeaturestoresRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
GoogleCloudAiplatformV1ListHyperparameterTuningJobsResponse
Response message for JobService.ListHyperparameterTuningJobsFields | |
---|---|
hyperparameterTuningJobs[] |
List of HyperparameterTuningJobs in the requested page. HyperparameterTuningJob.trials of the jobs will not be returned. |
nextPageToken |
A token to retrieve the next page of results. Pass to ListHyperparameterTuningJobsRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListIndexEndpointsResponse
Response message for IndexEndpointService.ListIndexEndpoints.Fields | |
---|---|
indexEndpoints[] |
List of IndexEndpoints in the requested page. |
nextPageToken |
A token to retrieve next page of results. Pass to ListIndexEndpointsRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListIndexesResponse
Response message for IndexService.ListIndexes.Fields | |
---|---|
indexes[] |
List of indexes in the requested page. |
nextPageToken |
A token to retrieve next page of results. Pass to ListIndexesRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListMetadataSchemasResponse
Response message for MetadataService.ListMetadataSchemas.Fields | |
---|---|
metadataSchemas[] |
The MetadataSchemas found for the MetadataStore. |
nextPageToken |
A token, which can be sent as ListMetadataSchemasRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages. |
GoogleCloudAiplatformV1ListMetadataStoresResponse
Response message for MetadataService.ListMetadataStores.Fields | |
---|---|
metadataStores[] |
The MetadataStores found for the Location. |
nextPageToken |
A token, which can be sent as ListMetadataStoresRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages. |
GoogleCloudAiplatformV1ListModelDeploymentMonitoringJobsResponse
Response message for JobService.ListModelDeploymentMonitoringJobs.Fields | |
---|---|
modelDeploymentMonitoringJobs[] |
A list of ModelDeploymentMonitoringJobs that matches the specified filter in the request. |
nextPageToken |
The standard List next-page token. |
GoogleCloudAiplatformV1ListModelEvaluationSlicesResponse
Response message for ModelService.ListModelEvaluationSlices.Fields | |
---|---|
modelEvaluationSlices[] |
List of ModelEvaluations in the requested page. |
nextPageToken |
A token to retrieve next page of results. Pass to ListModelEvaluationSlicesRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListModelEvaluationsResponse
Response message for ModelService.ListModelEvaluations.Fields | |
---|---|
modelEvaluations[] |
List of ModelEvaluations in the requested page. |
nextPageToken |
A token to retrieve next page of results. Pass to ListModelEvaluationsRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListModelVersionsResponse
Response message for ModelService.ListModelVersionsFields | |
---|---|
models[] |
List of Model versions in the requested page. In the returned Model name field, the version ID (rather than the version alias) will be included. |
nextPageToken |
A token to retrieve the next page of results. Pass to ListModelVersionsRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListModelsResponse
Response message for ModelService.ListModelsFields | |
---|---|
models[] |
List of Models in the requested page. |
nextPageToken |
A token to retrieve next page of results. Pass to ListModelsRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListNasJobsResponse
Response message for JobService.ListNasJobsFields | |
---|---|
nasJobs[] |
List of NasJobs in the requested page. NasJob.nas_job_output of the jobs will not be returned. |
nextPageToken |
A token to retrieve the next page of results. Pass to ListNasJobsRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListNasTrialDetailsResponse
Response message for JobService.ListNasTrialDetailsFields | |
---|---|
nasTrialDetails[] |
List of top NasTrials in the requested page. |
nextPageToken |
A token to retrieve the next page of results. Pass to ListNasTrialDetailsRequest.page_token to obtain that page. |
GoogleCloudAiplatformV1ListNotebookRuntimeTemplatesResponse
Response message for NotebookService.ListNotebookRuntimeTemplates.Fields | |
---|---|
nextPageToken |
A token to retrieve next page of results. Pass to ListNotebookRuntimeTemplatesRequest.page_token to obtain that page. |
notebookRuntimeTemplates[] |
List of NotebookRuntimeTemplates in the requested page. |
GoogleCloudAiplatformV1ListNotebookRuntimesResponse
Response message for NotebookService.ListNotebookRuntimes.Fields | |
---|---|
nextPageToken |
A token to retrieve next page of results. Pass to ListNotebookRuntimesRequest.page_token to obtain that page. |
notebookRuntimes[] |
List of NotebookRuntimes in the requested page. |
GoogleCloudAiplatformV1ListOptimalTrialsResponse
Response message for VizierService.ListOptimalTrials.Fields | |
---|---|
optimalTrials[] |
The Pareto-optimal Trials for a multi-objective Study, or the optimal Trial for a single-objective Study. For the definition of Pareto optimality, see https://en.wikipedia.org/wiki/Pareto_efficiency |
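For a multi-objective Study, a Trial is Pareto-optimal when no other Trial is at least as good on every objective and strictly better on at least one. A minimal sketch of that filter (objectives maximized; the trial tuples are hypothetical metric values, not real API payloads):

```python
def dominates(a, b):
    """True if a is >= b on every objective and > b on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(trials):
    """Keep trials not dominated by any other trial (objectives maximized)."""
    return [t for t in trials if not any(dominates(o, t) for o in trials if o != t)]

trials = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.4, 0.4)]
print(pareto_front(trials))  # [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]
```

Here (0.4, 0.4) is dropped because (0.5, 0.5) dominates it, while the remaining three each trade one objective off against the other.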
GoogleCloudAiplatformV1ListPersistentResourcesResponse
Response message for PersistentResourceService.ListPersistentResourcesFields | |
---|---|
nextPageToken |
A token to retrieve next page of results. Pass to ListPersistentResourcesRequest.page_token to obtain that page. |
persistentResources[] |
(No description provided) |
GoogleCloudAiplatformV1ListPipelineJobsResponse
Response message for PipelineService.ListPipelineJobsFields | |
---|---|
nextPageToken |
A token to retrieve the next page of results. Pass to ListPipelineJobsRequest.page_token to obtain that page. |
pipelineJobs[] |
List of PipelineJobs in the requested page. |
GoogleCloudAiplatformV1ListSavedQueriesResponse
Response message for DatasetService.ListSavedQueries.Fields | |
---|---|
nextPageToken |
The standard List next-page token. |
savedQueries[] |
A list of SavedQueries that match the specified filter in the request. |
GoogleCloudAiplatformV1ListSchedulesResponse
Response message for ScheduleService.ListSchedulesFields | |
---|---|
nextPageToken |
A token to retrieve the next page of results. Pass to ListSchedulesRequest.page_token to obtain that page. |
schedules[] |
List of Schedules in the requested page. |
GoogleCloudAiplatformV1ListSpecialistPoolsResponse
Response message for SpecialistPoolService.ListSpecialistPools.Fields | |
---|---|
nextPageToken |
The standard List next-page token. |
specialistPools[] |
A list of SpecialistPools that matches the specified filter in the request. |
GoogleCloudAiplatformV1ListStudiesResponse
Response message for VizierService.ListStudies.Fields | |
---|---|
nextPageToken |
Passes this token as the |
studies[] |
The studies associated with the project. |
GoogleCloudAiplatformV1ListTensorboardExperimentsResponse
Response message for TensorboardService.ListTensorboardExperiments.Fields | |
---|---|
nextPageToken |
A token, which can be sent as ListTensorboardExperimentsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
tensorboardExperiments[] |
The TensorboardExperiments matching the request. |
GoogleCloudAiplatformV1ListTensorboardRunsResponse
Response message for TensorboardService.ListTensorboardRuns.Fields | |
---|---|
nextPageToken |
A token, which can be sent as ListTensorboardRunsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
tensorboardRuns[] |
The TensorboardRuns matching the request. |
GoogleCloudAiplatformV1ListTensorboardTimeSeriesResponse
Response message for TensorboardService.ListTensorboardTimeSeries.Fields | |
---|---|
nextPageToken |
A token, which can be sent as ListTensorboardTimeSeriesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
tensorboardTimeSeries[] |
The TensorboardTimeSeries matching the request. |
GoogleCloudAiplatformV1ListTensorboardsResponse
Response message for TensorboardService.ListTensorboards.Fields | |
---|---|
nextPageToken |
A token, which can be sent as ListTensorboardsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
tensorboards[] |
The Tensorboards matching the request. |
GoogleCloudAiplatformV1ListTrainingPipelinesResponse
Response message for PipelineService.ListTrainingPipelinesFields | |
---|---|
nextPageToken |
A token to retrieve the next page of results. Pass to ListTrainingPipelinesRequest.page_token to obtain that page. |
trainingPipelines[] |
List of TrainingPipelines in the requested page. |
GoogleCloudAiplatformV1ListTrialsResponse
Response message for VizierService.ListTrials.Fields | |
---|---|
nextPageToken |
Pass this token as the |
trials[] |
The Trials associated with the Study. |
GoogleCloudAiplatformV1ListTuningJobsResponse
Response message for GenAiTuningService.ListTuningJobsFields | |
---|---|
nextPageToken |
A token to retrieve the next page of results. Pass to ListTuningJobsRequest.page_token to obtain that page. |
tuningJobs[] |
List of TuningJobs in the requested page. |
GoogleCloudAiplatformV1LookupStudyRequest
Request message for VizierService.LookupStudy.Fields | |
---|---|
displayName |
Required. The user-defined display name of the Study |
GoogleCloudAiplatformV1MachineSpec
Specification of a single machine.Fields | |
---|---|
acceleratorCount |
The number of accelerators to attach to the machine. |
acceleratorType |
Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count. |
Enum type. Can be one of the following: | |
ACCELERATOR_TYPE_UNSPECIFIED |
Unspecified accelerator type, which means no accelerator. |
NVIDIA_TESLA_K80 |
Nvidia Tesla K80 GPU. |
NVIDIA_TESLA_P100 |
Nvidia Tesla P100 GPU. |
NVIDIA_TESLA_V100 |
Nvidia Tesla V100 GPU. |
NVIDIA_TESLA_P4 |
Nvidia Tesla P4 GPU. |
NVIDIA_TESLA_T4 |
Nvidia Tesla T4 GPU. |
NVIDIA_TESLA_A100 |
Nvidia Tesla A100 GPU. |
NVIDIA_A100_80GB |
Nvidia A100 80GB GPU. |
NVIDIA_L4 |
Nvidia L4 GPU. |
NVIDIA_H100_80GB |
Nvidia H100 80GB GPU. |
TPU_V2 |
TPU v2. |
TPU_V3 |
TPU v3. |
TPU_V4_POD |
TPU v4. |
TPU_V5_LITEPOD |
TPU v5. |
machineType |
Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is |
tpuTopology |
Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1"). |
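In a request body, a MachineSpec is plain nested JSON. A sketch that builds one and sanity-checks acceleratorType against the enum values listed above (the helper function and its defaults are illustrative, not part of any client library):

```python
# Accelerator enum values as listed in the MachineSpec reference above.
ACCELERATOR_TYPES = {
    "ACCELERATOR_TYPE_UNSPECIFIED", "NVIDIA_TESLA_K80", "NVIDIA_TESLA_P100",
    "NVIDIA_TESLA_V100", "NVIDIA_TESLA_P4", "NVIDIA_TESLA_T4",
    "NVIDIA_TESLA_A100", "NVIDIA_A100_80GB", "NVIDIA_L4",
    "NVIDIA_H100_80GB", "TPU_V2", "TPU_V3", "TPU_V4_POD", "TPU_V5_LITEPOD",
}

def machine_spec(machine_type, accelerator_type=None, accelerator_count=0):
    """Build a machineSpec JSON object, validating the accelerator enum."""
    spec = {"machineType": machine_type}
    if accelerator_type:
        if accelerator_type not in ACCELERATOR_TYPES:
            raise ValueError(f"unknown acceleratorType: {accelerator_type}")
        spec["acceleratorType"] = accelerator_type
        spec["acceleratorCount"] = accelerator_count
    return spec

print(machine_spec("n1-standard-4", "NVIDIA_TESLA_T4", 1))
```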
GoogleCloudAiplatformV1ManualBatchTuningParameters
Manual batch tuning parameters.Fields | |
---|---|
batchSize |
Immutable. The number of records (e.g. instances) given to a machine replica in each batch of the operation. Machine type and the size of a single record should be considered when setting this parameter: a higher value speeds up the batch operation's execution, but too high a value can make a whole batch not fit in a machine's memory, causing the whole operation to fail. The default value is 64. |
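batchSize thus trades throughput against per-replica memory: each replica receives batch_size records per call, with the final batch carrying the remainder. A sketch of the chunking this implies (default of 64 per the description; the record values are arbitrary):

```python
def batches(records, batch_size=64):
    """Yield successive chunks of batch_size records (last may be smaller)."""
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

# 200 records at the default batch size of 64 -> three full batches plus a tail.
sizes = [len(b) for b in batches(list(range(200)), batch_size=64)]
print(sizes)  # [64, 64, 64, 8]
```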
GoogleCloudAiplatformV1Measurement
A message representing a Measurement of a Trial. A Measurement contains the Metrics obtained by executing a Trial using suggested hyperparameter values.Fields | |
---|---|
elapsedDuration |
Output only. Time that the Trial has been running at the point of this Measurement. |
metrics[] |
Output only. A list of metrics obtained by evaluating the objective functions using suggested Parameter values. |
stepCount |
Output only. The number of steps the machine learning model has been trained for. Must be non-negative. |
GoogleCloudAiplatformV1MeasurementMetric
A message representing a metric in the measurement.Fields | |
---|---|
metricId |
Output only. The ID of the Metric. The Metric should be defined in StudySpec's Metrics. |
value |
Output only. The value for this metric. |
GoogleCloudAiplatformV1MergeVersionAliasesRequest
Request message for ModelService.MergeVersionAliases.Fields | |
---|---|
versionAliases[] |
Required. The set of version aliases to merge. The alias should be at most 128 characters, and match |
GoogleCloudAiplatformV1MetadataSchema
Instance of a general MetadataSchema.Fields | |
---|---|
createTime |
Output only. Timestamp when this MetadataSchema was created. |
description |
Description of the Metadata Schema |
name |
Output only. The resource name of the MetadataSchema. |
schema |
Required. The raw YAML string representation of the MetadataSchema. The combination of [MetadataSchema.version] and the schema name given by |
schemaType |
The type of the MetadataSchema. This is a property that identifies which metadata types will use the MetadataSchema. |
Enum type. Can be one of the following: | |
METADATA_SCHEMA_TYPE_UNSPECIFIED |
Unspecified type for the MetadataSchema. |
ARTIFACT_TYPE |
A type indicating that the MetadataSchema will be used by Artifacts. |
EXECUTION_TYPE |
A type indicating that the MetadataSchema will be used by Executions. |
CONTEXT_TYPE |
A type indicating that the MetadataSchema will be used by Contexts. |
schemaVersion |
The version of the MetadataSchema. The version's format must match the following regular expression: |
GoogleCloudAiplatformV1MetadataStore
Instance of a metadata store. Contains a set of metadata that can be queried.Fields | |
---|---|
createTime |
Output only. Timestamp when this MetadataStore was created. |
description |
Description of the MetadataStore. |
encryptionSpec |
Customer-managed encryption key spec for a Metadata Store. If set, this Metadata Store and all sub-resources of this Metadata Store are secured using this key. |
name |
Output only. The resource name of the MetadataStore instance. |
state |
Output only. State information of the MetadataStore. |
updateTime |
Output only. Timestamp when this MetadataStore was last updated. |
GoogleCloudAiplatformV1MetadataStoreMetadataStoreState
Represents state information for a MetadataStore.Fields | |
---|---|
diskUtilizationBytes |
The disk utilization of the MetadataStore in bytes. |
GoogleCloudAiplatformV1MigratableResource
Represents one resource that exists in automl.googleapis.com, datalabeling.googleapis.com or ml.googleapis.com.Fields | |
---|---|
automlDataset |
Output only. Represents one Dataset in automl.googleapis.com. |
automlModel |
Output only. Represents one Model in automl.googleapis.com. |
dataLabelingDataset |
Output only. Represents one Dataset in datalabeling.googleapis.com. |
lastMigrateTime |
Output only. Timestamp when the last migration attempt on this MigratableResource started. Will not be set if there's no migration attempt on this MigratableResource. |
lastUpdateTime |
Output only. Timestamp when this MigratableResource was last updated. |
mlEngineModelVersion |
Output only. Represents one Version in ml.googleapis.com. |
GoogleCloudAiplatformV1MigratableResourceAutomlDataset
Represents one Dataset in automl.googleapis.com.Fields | |
---|---|
dataset |
Full resource name of automl Dataset. Format: |
datasetDisplayName |
The Dataset's display name in automl.googleapis.com. |
GoogleCloudAiplatformV1MigratableResourceAutomlModel
Represents one Model in automl.googleapis.com.Fields | |
---|---|
model |
Full resource name of automl Model. Format: |
modelDisplayName |
The Model's display name in automl.googleapis.com. |
GoogleCloudAiplatformV1MigratableResourceDataLabelingDataset
Represents one Dataset in datalabeling.googleapis.com.Fields | |
---|---|
dataLabelingAnnotatedDatasets[] |
The migratable AnnotatedDatasets in datalabeling.googleapis.com that belong to the data labeling Dataset. |
dataset |
Full resource name of data labeling Dataset. Format: |
datasetDisplayName |
The Dataset's display name in datalabeling.googleapis.com. |
GoogleCloudAiplatformV1MigratableResourceDataLabelingDatasetDataLabelingAnnotatedDataset
Represents one AnnotatedDataset in datalabeling.googleapis.com.Fields | |
---|---|
annotatedDataset |
Full resource name of data labeling AnnotatedDataset. Format: |
annotatedDatasetDisplayName |
The AnnotatedDataset's display name in datalabeling.googleapis.com. |
GoogleCloudAiplatformV1MigratableResourceMlEngineModelVersion
Represents one model Version in ml.googleapis.com.Fields | |
---|---|
endpoint |
The ml.googleapis.com endpoint that this model Version currently lives in. Example values: * ml.googleapis.com * us-central1-ml.googleapis.com * europe-west4-ml.googleapis.com * asia-east1-ml.googleapis.com |
version |
Full resource name of ml engine model Version. Format: |
GoogleCloudAiplatformV1MigrateResourceRequest
Config of migrating one resource from automl.googleapis.com, datalabeling.googleapis.com and ml.googleapis.com to Vertex AI.Fields | |
---|---|
migrateAutomlDatasetConfig |
Config for migrating Dataset in automl.googleapis.com to Vertex AI's Dataset. |
migrateAutomlModelConfig |
Config for migrating Model in automl.googleapis.com to Vertex AI's Model. |
migrateDataLabelingDatasetConfig |
Config for migrating Dataset in datalabeling.googleapis.com to Vertex AI's Dataset. |
migrateMlEngineModelVersionConfig |
Config for migrating Version in ml.googleapis.com to Vertex AI's Model. |
GoogleCloudAiplatformV1MigrateResourceRequestMigrateAutomlDatasetConfig
Config for migrating Dataset in automl.googleapis.com to Vertex AI's Dataset.Fields | |
---|---|
dataset |
Required. Full resource name of automl Dataset. Format: |
datasetDisplayName |
Required. Display name of the Dataset in Vertex AI. System will pick a display name if unspecified. |
GoogleCloudAiplatformV1MigrateResourceRequestMigrateAutomlModelConfig
Config for migrating Model in automl.googleapis.com to Vertex AI's Model.Fields | |
---|---|
model |
Required. Full resource name of automl Model. Format: |
modelDisplayName |
Optional. Display name of the model in Vertex AI. System will pick a display name if unspecified. |
GoogleCloudAiplatformV1MigrateResourceRequestMigrateDataLabelingDatasetConfig
Config for migrating Dataset in datalabeling.googleapis.com to Vertex AI's Dataset.Fields | |
---|---|
dataset |
Required. Full resource name of data labeling Dataset. Format: |
datasetDisplayName |
Optional. Display name of the Dataset in Vertex AI. System will pick a display name if unspecified. |
migrateDataLabelingAnnotatedDatasetConfigs[] |
Optional. Configs for migrating AnnotatedDataset in datalabeling.googleapis.com to Vertex AI's SavedQuery. The specified AnnotatedDatasets have to belong to the datalabeling Dataset. |
GoogleCloudAiplatformV1MigrateResourceRequestMigrateDataLabelingDatasetConfigMigrateDataLabelingAnnotatedDatasetConfig
Config for migrating AnnotatedDataset in datalabeling.googleapis.com to Vertex AI's SavedQuery.Fields | |
---|---|
annotatedDataset |
Required. Full resource name of data labeling AnnotatedDataset. Format: |
GoogleCloudAiplatformV1MigrateResourceRequestMigrateMlEngineModelVersionConfig
Config for migrating version in ml.googleapis.com to Vertex AI's Model.Fields | |
---|---|
endpoint |
Required. The ml.googleapis.com endpoint that this model version should be migrated from. Example values: * ml.googleapis.com * us-central1-ml.googleapis.com * europe-west4-ml.googleapis.com * asia-east1-ml.googleapis.com |
modelDisplayName |
Required. Display name of the model in Vertex AI. System will pick a display name if unspecified. |
modelVersion |
Required. Full resource name of ml engine model version. Format: |
GoogleCloudAiplatformV1MigrateResourceResponse
Describes a successfully migrated resource.Fields | |
---|---|
dataset |
Migrated Dataset's resource name. |
migratableResource |
Before migration, the identifier in ml.googleapis.com, automl.googleapis.com or datalabeling.googleapis.com. |
model |
Migrated Model's resource name. |
GoogleCloudAiplatformV1Model
A trained machine learning Model.Fields | |
---|---|
artifactUri |
Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not required for AutoML Models. |
baseModelSource |
Optional. User input field to specify the base model source. Currently it only supports specifying the Model Garden models and Genie models. |
containerSpec |
Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI. Not required for AutoML Models. |
createTime |
Output only. Timestamp when this Model was uploaded into Vertex AI. |
dataStats |
Stats of data used for training or evaluating the Model. Only populated when the Model is trained by a TrainingPipeline with data_input_config. |
deployedModels[] |
Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations. |
description |
The description of the Model. |
displayName |
Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
encryptionSpec |
Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key. |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
explanationSpec |
The default explanation specification for this Model. The Model can be used for requesting explanation after being deployed if it is populated. The Model can be used for batch explanation if it is populated. All fields of the explanation_spec can be overridden by explanation_spec of DeployModelRequest.deployed_model, or explanation_spec of BatchPredictionJob. If the default explanation specification is not set for this Model, this Model can still be used for requesting explanation by setting explanation_spec of DeployModelRequest.deployed_model and for batch explanation by setting explanation_spec of BatchPredictionJob. |
labels |
The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
metadata |
Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema. Unset if the Model does not have any additional information. |
metadataArtifact |
Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is |
metadataSchemaUri |
Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access. |
modelSourceInfo |
Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or a model saved and tuned from Genie or Model Garden. |
name |
The resource name of the Model. |
originalModelInfo |
Output only. If this Model is a copy of another Model, this contains info about the original. |
pipelineJob |
Optional. This field is populated if the model is produced by a pipeline job. |
predictSchemata |
The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain. |
supportedDeploymentResourcesTypes[] |
Output only. When this Model is deployed, its prediction resources are described by the |
supportedExportFormats[] |
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export. |
supportedInputStorageFormats[] |
Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are: * |
supportedOutputStorageFormats[] |
Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are: * |
trainingPipeline |
Output only. The resource name of the TrainingPipeline that uploaded this Model, if any. |
updateTime |
Output only. Timestamp when this Model was most recently updated. |
versionAliases[] |
User provided version aliases so that a model version can be referenced via alias (i.e. |
versionCreateTime |
Output only. Timestamp when this version was created. |
versionDescription |
The description of this version. |
versionId |
Output only. Immutable. The version ID of the model. A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation. |
versionUpdateTime |
Output only. Timestamp when this version was most recently updated. |
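The label constraints described above can be checked client-side before calling ModelService.UploadModel. A minimal sketch, assuming a hypothetical helper (`validate_labels` is not part of the API) and simplifying to ASCII, even though the API also allows international characters:

```python
import re

# Hypothetical client-side check for Model.labels, based on the documented
# constraints: keys and values can be no longer than 64 characters and may
# contain lowercase letters, digits, underscores and dashes. (The API also
# allows international characters; this ASCII-only pattern is a
# simplification. Rejecting empty keys is a conservative choice, not a rule
# stated in this reference.)
_LABEL_PART = re.compile(r"^[a-z0-9_-]*$")

def validate_labels(labels):
    for key, value in labels.items():
        if not key or len(key) > 64 or len(value) > 64:
            return False
        if not _LABEL_PART.match(key) or not _LABEL_PART.match(value):
            return False
    return True
```

A pre-flight check like this surfaces label problems locally instead of as an INVALID_ARGUMENT error from the service.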
GoogleCloudAiplatformV1ModelBaseModelSource
User input field to specify the base model source. Currently it only supports specifying the Model Garden models and Genie models.Fields | |
---|---|
genieSource |
Information about the base model of Genie models. |
modelGardenSource |
Source information of Model Garden models. |
GoogleCloudAiplatformV1ModelContainerSpec
Specification of a container for serving predictions. Some fields in this message correspond to fields in the Kubernetes Container v1 core specification.Fields | |
---|---|
args[] |
Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's |
command[] |
Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker |
deploymentTimeout |
Immutable. Deployment timeout. Limit for deployment timeout is 2 hours. |
env[] |
Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable |
grpcPorts[] |
Immutable. List of ports to expose from the container. Vertex AI sends gRPC prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port. If you do not specify this field, gRPC requests to the container will be disabled. Vertex AI does not use ports other than the first one listed. This field corresponds to the |
healthProbe |
Immutable. Specification for Kubernetes readiness probe. |
healthRoute |
Immutable. HTTP path on the container to send health checks to. Vertex AI intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to |
imageUri |
Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the container publishing requirements, including permissions requirements for the Vertex AI Service Agent. The container image is ingested upon ModelService.UploadModel, stored internally, and this original path is afterwards not used. To learn about the requirements for the Docker image itself, see Custom container requirements. You can use the URI to one of Vertex AI's pre-built container images for prediction in this field. |
ports[] |
Immutable. List of ports to expose from the container. Vertex AI sends any prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port. If you do not specify this field, it defaults to following value: |
predictRoute |
Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to |
sharedMemorySizeMb |
Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. |
startupProbe |
Immutable. Specification for Kubernetes startup probe. |
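Putting the fields above together, a containerSpec payload might look like the following sketch. The image URI, routes, port, and environment variable are illustrative example values, not defaults mandated by the API:

```python
# Illustrative ModelContainerSpec payload for ModelService.UploadModel.
# All concrete values below (image URI, routes, port number, env var) are
# assumptions for the example, not values required by the API.
container_spec = {
    "imageUri": "us-docker.pkg.dev/example-project/serving/my-model:latest",
    "command": ["python", "server.py"],   # overrides the image's ENTRYPOINT
    "args": ["--model-dir", "/models"],   # overrides the image's CMD
    "env": [{"name": "MODEL_NAME", "value": "demo"}],
    "ports": [{"containerPort": 8080}],   # first port receives predictions and health checks
    "predictRoute": "/predict",           # HTTP path for prediction requests
    "healthRoute": "/health",             # HTTP path for health checks
    "deploymentTimeout": "1800s",         # must not exceed the 2 hour limit
}
```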
GoogleCloudAiplatformV1ModelDataStats
Stats of data used to train or evaluate the Model.Fields | |
---|---|
testAnnotationsCount |
Number of Annotations that are used for evaluating this Model. If the Model is evaluated multiple times, this will be the number of test Annotations used by the first evaluation. If the Model is not evaluated, the number is 0. |
testDataItemsCount |
Number of DataItems that were used for evaluating this Model. If the Model is evaluated multiple times, this will be the number of test DataItems used by the first evaluation. If the Model is not evaluated, the number is 0. |
trainingAnnotationsCount |
Number of Annotations that are used for training this Model. |
trainingDataItemsCount |
Number of DataItems that were used for training this Model. |
validationAnnotationsCount |
Number of Annotations that are used for validating this Model during training. |
validationDataItemsCount |
Number of DataItems that were used for validating this Model during training. |
GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTable
ModelDeploymentMonitoringBigQueryTable specifies the BigQuery table name as well as some information of the logs stored in this table.Fields | |
---|---|
bigqueryTablePath |
The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: |
logSource |
The source of log. |
Enum type. Can be one of the following: | |
LOG_SOURCE_UNSPECIFIED |
Unspecified source. |
TRAINING |
Logs coming from Training dataset. |
SERVING |
Logs coming from Serving traffic. |
logType |
The type of log. |
Enum type. Can be one of the following: | |
LOG_TYPE_UNSPECIFIED |
Unspecified type. |
PREDICT |
Predict logs. |
EXPLAIN |
Explain logs. |
requestResponseLoggingSchemaVersion |
Output only. The schema version of the request/response logging BigQuery table. Default to v1 if unset. |
GoogleCloudAiplatformV1ModelDeploymentMonitoringJob
Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors.Fields | |
---|---|
analysisInstanceSchemaUri |
YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all fields of the predict instance formatted as strings. |
bigqueryTables[] |
Output only. The BigQuery tables created for the job under the customer project. Customers can run their own queries and analysis. There can be at most four log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response 3. Training data logging explain request/response 4. Serving data logging explain request/response |
createTime |
Output only. Timestamp when this ModelDeploymentMonitoringJob was created. |
displayName |
Required. The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
enableMonitoringPipelineLogs |
If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Note that these logs incur costs, which are subject to Cloud Logging pricing. |
encryptionSpec |
Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key. |
endpoint |
Required. Endpoint resource name. Format: |
error |
Output only. Only populated when the job's state is |
labels |
The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
latestMonitoringPipelineMetadata |
Output only. Latest triggered monitoring pipeline metadata. |
logTtl |
The TTL of the BigQuery tables in user projects which store logs. The basic unit of the TTL is a day, and the effective TTL is the ceiling of TTL/86400 (one day). For example, { seconds: 3600 } indicates a TTL of 1 day. |
loggingSamplingStrategy |
Required. Sample Strategy for logging. |
modelDeploymentMonitoringObjectiveConfigs[] |
Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately. |
modelDeploymentMonitoringScheduleConfig |
Required. Schedule config for running the monitoring job. |
modelMonitoringAlertConfig |
Alert config for model monitoring. |
name |
Output only. Resource name of a ModelDeploymentMonitoringJob. |
nextScheduleTime |
Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round. |
predictInstanceSchemaUri |
YAML schema file uri describing the format of a single instance, which is used to format this Endpoint's prediction (and explanation) requests. If not set, the predict schema will be generated from collected predict requests. |
samplePredictInstance |
Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests. |
scheduleState |
Output only. Schedule state when the monitoring job is in Running state. |
Enum type. Can be one of the following: | |
MONITORING_SCHEDULE_STATE_UNSPECIFIED |
Unspecified state. |
PENDING |
The pipeline has been picked up and is waiting to run. |
OFFLINE |
The pipeline is offline and will be scheduled for next run. |
RUNNING |
The pipeline is running. |
state |
Output only. The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state is 'RUNNING'. If the job is paused, the state becomes 'PAUSED'; when it is resumed, the state returns to 'RUNNING'. |
Enum type. Can be one of the following: | |
JOB_STATE_UNSPECIFIED |
The job state is unspecified. |
JOB_STATE_QUEUED |
The job has been just created or resumed and processing has not yet begun. |
JOB_STATE_PENDING |
The service is preparing to run the job. |
JOB_STATE_RUNNING |
The job is in progress. |
JOB_STATE_SUCCEEDED |
The job completed successfully. |
JOB_STATE_FAILED |
The job failed. |
JOB_STATE_CANCELLING |
The job is being cancelled. From this state the job may only go to JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, or JOB_STATE_CANCELLED. |
JOB_STATE_CANCELLED |
The job has been cancelled. |
JOB_STATE_PAUSED |
The job has been stopped, and can be resumed. |
JOB_STATE_EXPIRED |
The job has expired. |
JOB_STATE_UPDATING |
The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state. |
JOB_STATE_PARTIALLY_SUCCEEDED |
The job partially succeeded; some results may be missing due to errors. |
statsAnomaliesBaseDirectory |
Stats anomalies base folder path. |
updateTime |
Output only. Timestamp when this ModelDeploymentMonitoringJob was updated most recently. |
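The logTtl rounding described above (the ceiling of TTL/86400) can be sketched as a small helper; the function name is hypothetical:

```python
import math

SECONDS_PER_DAY = 86400

def log_ttl_days(ttl_seconds: int) -> int:
    """Effective BigQuery log table TTL in whole days, per the documented
    ceil(TTL / 86400) rule; e.g. { seconds: 3600 } yields a TTL of 1 day."""
    return math.ceil(ttl_seconds / SECONDS_PER_DAY)
```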
GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadata
All metadata of most recent monitoring pipelines.Fields | |
---|---|
runTime |
The run time of the most recent monitoring pipeline related to this run. |
status |
The status of the most recent monitoring pipeline. |
GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfig
ModelDeploymentMonitoringObjectiveConfig contains the pair of deployed_model_id to ModelMonitoringObjectiveConfig.Fields | |
---|---|
deployedModelId |
The DeployedModel ID of the objective config. |
objectiveConfig |
The objective config for the model monitoring job of this deployed model. |
GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfig
The config for scheduling monitoring job.Fields | |
---|---|
monitorInterval |
Required. The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered. |
monitorWindow |
The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics. |
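The monitor_window example above, including the documented fallback to monitor_interval when the window is unset, can be reproduced with a short sketch (the helper name is hypothetical):

```python
from datetime import datetime, timedelta

def monitoring_window(cutoff, monitor_interval_s, monitor_window_s=None):
    """Return the (start, end) of the prediction data aggregated for one
    monitoring run. If monitor_window is not set, monitor_interval is used
    instead, as documented for ModelDeploymentMonitoringScheduleConfig."""
    window = monitor_window_s if monitor_window_s is not None else monitor_interval_s
    return cutoff - timedelta(seconds=window), cutoff
```

With a cutoff of 2022-01-08 14:30:00 and a 3600-second window, this yields the 13:30:00 to 14:30:00 range from the example above.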
GoogleCloudAiplatformV1ModelEvaluation
A collection of metrics calculated by comparing Model's predictions on all of the test data against annotations from the test data.Fields | |
---|---|
annotationSchemaUri |
Points to a YAML file stored on Google Cloud Storage describing EvaluatedDataItemView.predictions, EvaluatedDataItemView.ground_truths, EvaluatedAnnotation.predictions, and EvaluatedAnnotation.ground_truths. The schema is defined as an OpenAPI 3.0.2 Schema Object. This field is not populated if there are neither EvaluatedDataItemViews nor EvaluatedAnnotations under this ModelEvaluation. |
createTime |
Output only. Timestamp when this ModelEvaluation was created. |
dataItemSchemaUri |
Points to a YAML file stored on Google Cloud Storage describing EvaluatedDataItemView.data_item_payload and EvaluatedAnnotation.data_item_payload. The schema is defined as an OpenAPI 3.0.2 Schema Object. This field is not populated if there are neither EvaluatedDataItemViews nor EvaluatedAnnotations under this ModelEvaluation. |
displayName |
The display name of the ModelEvaluation. |
explanationSpecs[] |
Describes the values of ExplanationSpec that are used for explaining the predicted values on the evaluated data. |
metadata |
The metadata of the ModelEvaluation. For the ModelEvaluation uploaded from Managed Pipeline, metadata contains a structured value with keys of "pipeline_job_id", "evaluation_dataset_type", "evaluation_dataset_path", "row_based_metrics_path". |
metrics |
Evaluation metrics of the Model. The schema of the metrics is stored in metrics_schema_uri |
metricsSchemaUri |
Points to a YAML file stored on Google Cloud Storage describing the metrics of this ModelEvaluation. The schema is defined as an OpenAPI 3.0.2 Schema Object. |
modelExplanation |
Aggregated explanation metrics for the Model's prediction output over the data this ModelEvaluation uses. This field is populated only if the Model is evaluated with explanations, and only for AutoML tabular Models. |
name |
Output only. The resource name of the ModelEvaluation. |
sliceDimensions[] |
All possible dimensions of ModelEvaluationSlices. The dimensions can be used as the filter of the ModelService.ListModelEvaluationSlices request, in the form of |
GoogleCloudAiplatformV1ModelEvaluationModelEvaluationExplanationSpec
(No description provided)Fields | |
---|---|
explanationSpec |
Explanation spec details. |
explanationType |
Explanation type. For AutoML Image Classification models, possible values are: * |
GoogleCloudAiplatformV1ModelEvaluationSlice
A collection of metrics calculated by comparing Model's predictions on a slice of the test data against ground truth annotations.Fields | |
---|---|
createTime |
Output only. Timestamp when this ModelEvaluationSlice was created. |
metrics |
Output only. Sliced evaluation metrics of the Model. The schema of the metrics is stored in metrics_schema_uri |
metricsSchemaUri |
Output only. Points to a YAML file stored on Google Cloud Storage describing the metrics of this ModelEvaluationSlice. The schema is defined as an OpenAPI 3.0.2 Schema Object. |
modelExplanation |
Output only. Aggregated explanation metrics for the Model's prediction output over the data this ModelEvaluation uses. This field is populated only if the Model is evaluated with explanations, and only for tabular Models. |
name |
Output only. The resource name of the ModelEvaluationSlice. |
slice |
Output only. The slice of the test data that is used to evaluate the Model. |
GoogleCloudAiplatformV1ModelEvaluationSliceSlice
Definition of a slice.Fields | |
---|---|
dimension |
Output only. The dimension of the slice. Well-known dimensions are: * |
sliceSpec |
Output only. Specification for how the data was sliced. |
value |
Output only. The value of the dimension in this slice. |
GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpec
Specification for how the data should be sliced.Fields | |
---|---|
configs |
Mapping configuration for this SliceSpec. The key is the name of the feature. By default, the key will be prefixed by "instance" as a dictionary prefix for Vertex Batch Predictions output format. |
GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpecRange
A range of values for slice(s). low is inclusive, high is exclusive.
Fields | |
---|---|
high |
Exclusive high value for the range. |
low |
Inclusive low value for the range. |
GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpecSliceConfig
Specification message containing the config for this SliceSpec. When kind is selected as value and/or range, only a single slice will be computed. When all_values is present, a separate slice will be computed for each possible label/value for the corresponding key in config. Examples, with feature zip_code with values 12345, 23334, 88888 and feature country with values "US", "Canada", "Mexico" in the dataset: Example 1: { "zip_code": { "value": { "float_value": 12345.0 } } } A single slice for any data with zip_code 12345 in the dataset. Example 2: { "zip_code": { "range": { "low": 12345, "high": 20000 } } } A single slice containing data where the zip_code is between 12345 and 20000; data with a zip_code of 12345 is in this slice. Example 3: { "zip_code": { "range": { "low": 10000, "high": 20000 } }, "country": { "value": { "string_value": "US" } } } A single slice containing data where the zip_code is between 10000 and 20000 and the country is "US"; data with a zip_code of 12345 and country "US" is in this slice. Example 4: { "country": { "all_values": { "value": true } } } Three slices are computed, one for each unique country in the dataset. Example 5: { "country": { "all_values": { "value": true } }, "zip_code": { "value": { "float_value": 12345.0 } } } Three slices are computed, one for each unique country in the dataset where the zip_code is also 12345: data with zip_code 12345 and country "US" goes in one slice, zip_code 12345 and country "Canada" in another, and zip_code 12345 and country "Mexico" in a third.
Fields | |
---|---|
allValues |
If all_values is set to true, then all possible labels of the keyed feature will have another slice computed. Example: |
range |
A range of values for a numerical feature. Example: |
value |
A unique specific value for a given feature. Example: |
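The value/range semantics above (low inclusive, high exclusive) can be sketched as a membership check. This is a hypothetical helper operating on plain dicts shaped like the documented examples, not part of any Vertex AI SDK:

```python
def in_slice(feature_value, slice_config):
    """Check whether one feature value matches a single SliceConfig entry,
    following the documented semantics: `value` is an exact match, `range`
    includes `low` and excludes `high`, and `all_values` matches everything
    (each distinct value simply lands in its own slice)."""
    if "value" in slice_config:
        v = slice_config["value"]
        return feature_value == v.get("float_value", v.get("string_value"))
    if "range" in slice_config:
        r = slice_config["range"]
        return r["low"] <= feature_value < r["high"]
    if "all_values" in slice_config:
        return bool(slice_config["all_values"]["value"])
    return False
```

With Example 2 from the SliceConfig description, a zip_code of 12345 falls inside the range [12345, 20000), while 20000 itself does not.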
GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpecValue
Single value that supports strings and floats.Fields | |
---|---|
floatValue |
Float type. |
stringValue |
String type. |
GoogleCloudAiplatformV1ModelExplanation
Aggregated explanation metrics for a Model over a set of instances.Fields | |
---|---|
meanAttributions[] |
Output only. Aggregated attributions explaining the Model's prediction outputs over the set of instances. The attributions are grouped by outputs. For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. The baselineOutputValue, instanceOutputValue and featureAttributions fields are averaged over the test data. NOTE: Currently AutoML tabular classification Models produce only one attribution, which averages attributions over all the classes it predicts. Attribution.approximation_error is not populated. |
GoogleCloudAiplatformV1ModelExportFormat
Represents export format supported by the Model. All formats export to Google Cloud Storage.Fields | |
---|---|
exportableContents[] |
Output only. The content of this Model that may be exported. |
id |
Output only. The ID of the export format. The possible format IDs are: * |
GoogleCloudAiplatformV1ModelGardenSource
Contains information about the source of the models generated from Model Garden.Fields | |
---|---|
publicModelName |
Required. The model garden source model resource name. |
GoogleCloudAiplatformV1ModelMonitoringAlertConfig
The alert config for model monitoring.Fields | |
---|---|
emailAlertConfig |
Email alert config. |
enableLogging |
Dump the anomalies to Cloud Logging. The anomalies will be written as a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed, via a Cloud Logging sink, to Pub/Sub or any other service supported by Cloud Logging. |
notificationChannels[] |
Resource names of the NotificationChannels to send alert. Must be of the format |
GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfig
The config for email alert.Fields | |
---|---|
userEmails[] |
The email addresses to send the alert. |
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfig
The objective configuration for model monitoring, including the information needed to detect anomalies for one particular model.Fields | |
---|---|
explanationConfig |
The config for integrating with Vertex Explainable AI. |
predictionDriftDetectionConfig |
The config for drift of prediction data. |
trainingDataset |
Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified. |
trainingPredictionSkewDetectionConfig |
The config for skew between training data and prediction data. |
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfig
The config for integrating with Vertex Explainable AI. Only applicable if the Model has explanation_spec populated.Fields | |
---|---|
enableFeatureAttributes |
Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and perform skew/drift detection on them. |
explanationBaseline |
Predictions generated by the BatchPredictionJob using the baseline dataset. |
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaseline
Output from BatchPredictionJob for Model Monitoring baseline dataset, which can be used to generate baseline attribution scores.Fields | |
---|---|
bigquery |
BigQuery location for BatchExplain output. |
gcs |
Cloud Storage location for BatchExplain output. |
predictionFormat |
The storage format of the predictions generated by the BatchPrediction job. |
Enum type. Can be one of the following: | |
PREDICTION_FORMAT_UNSPECIFIED |
Should not be set. |
JSONL |
Predictions are in JSONL files. |
BIGQUERY |
Predictions are in BigQuery. |
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfig
The config for Prediction data drift detection.Fields | |
---|---|
attributionScoreDriftThresholds |
Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows. |
defaultDriftThreshold |
Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features. |
driftThresholds |
Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows. |
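The per-feature/default threshold lookup described above can be sketched as a small helper. The function name is hypothetical, and `distance` stands for the feature distribution distance between time windows that the service computes:

```python
def is_drift_anomaly(feature, distance, drift_thresholds, default_threshold=None):
    """Flag a feature as drifting when its distribution distance exceeds
    its threshold. Per-feature thresholds take precedence over the default
    threshold; when neither applies, no anomaly is reported."""
    threshold = drift_thresholds.get(feature, default_threshold)
    if threshold is None:
        return False
    return distance > threshold
```

The same lookup pattern applies to the skew thresholds in TrainingPredictionSkewDetectionConfig, with the distance measured between training and prediction features instead of between time windows.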
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDataset
Training Dataset information.Fields | |
---|---|
bigquerySource |
The BigQuery table of the unmanaged Dataset used to train this Model. |
dataFormat |
Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file. |
dataset |
The resource name of the Dataset used to train this Model. |
gcsSource |
The Google Cloud Storage uri of the unmanaged Dataset used to train this Model. |
loggingSamplingStrategy |
Strategy to sample data from Training Dataset. If not set, we process the whole dataset. |
targetField |
The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data. |
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfig
The config for Training & Prediction data skew detection. It specifies the training dataset sources and the skew detection parameters.Fields | |
---|---|
attributionScoreSkewThresholds |
Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature. |
defaultSkewThreshold |
Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features. |
skewThresholds |
Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature. |
GoogleCloudAiplatformV1ModelMonitoringStatsAnomalies
Statistics and anomalies generated by Model Monitoring.Fields | |
---|---|
anomalyCount |
Number of anomalies within all stats. |
deployedModelId |
Deployed Model ID. |
featureStats[] |
A list of historical Stats and Anomalies generated for all Features. |
objective |
The Model Monitoring Objective that these stats and anomalies belong to. |
Enum type. Can be one of the following: | |
MODEL_DEPLOYMENT_MONITORING_OBJECTIVE_TYPE_UNSPECIFIED |
Default value, should not be set. |
RAW_FEATURE_SKEW |
Raw feature values' stats to detect skew between Training-Prediction datasets. |
RAW_FEATURE_DRIFT |
Raw feature values' stats to detect drift between Serving-Prediction datasets. |
FEATURE_ATTRIBUTION_SKEW |
Feature attribution scores to detect skew between Training-Prediction datasets. |
FEATURE_ATTRIBUTION_DRIFT |
Feature attribution scores to detect drift between Prediction datasets collected within different time windows. |
GoogleCloudAiplatformV1ModelMonitoringStatsAnomaliesFeatureHistoricStatsAnomalies
Historical Stats (and Anomalies) for a specific Feature.Fields | |
---|---|
featureDisplayName |
Display Name of the Feature. |
predictionStats[] |
A list of historical stats generated by different time window's Prediction Dataset. |
threshold |
Threshold for anomaly detection. |
trainingStats |
Stats calculated for the Training Dataset. |
GoogleCloudAiplatformV1ModelOriginalModelInfo
Contains information about the original Model if this Model is a copy.Fields | |
---|---|
model |
Output only. The resource name of the Model this Model is a copy of, including the revision. Format: |
GoogleCloudAiplatformV1ModelSourceInfo
Detail description of the source information of the model.Fields | |
---|---|
copy |
If this Model is a copy of another Model. If true, then source_type pertains to the original Model. |
sourceType |
Type of the model source. |
Enum type. Can be one of the following: | |
MODEL_SOURCE_TYPE_UNSPECIFIED |
Should not be used. |
AUTOML |
The Model is uploaded by the AutoML training pipeline. |
CUSTOM |
The Model is uploaded by the user or by a custom training pipeline. |
BQML |
The Model is registered and synced from BigQuery ML. |
MODEL_GARDEN |
The Model is saved or tuned from Model Garden. |
GENIE |
The Model is saved or tuned from Genie. |
CUSTOM_TEXT_EMBEDDING |
The Model is uploaded by text embedding finetuning pipeline. |
MARKETPLACE |
The Model is saved or tuned from Marketplace. |
GoogleCloudAiplatformV1MutateDeployedIndexOperationMetadata
Runtime operation information for IndexEndpointService.MutateDeployedIndex.Fields | |
---|---|
deployedIndexId |
The unique index id specified by the user. |
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1MutateDeployedIndexResponse
Response message for IndexEndpointService.MutateDeployedIndex.Fields | |
---|---|
deployedIndex |
The DeployedIndex that had been updated in the IndexEndpoint. |
GoogleCloudAiplatformV1MutateDeployedModelOperationMetadata
Runtime operation information for EndpointService.MutateDeployedModel.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1MutateDeployedModelRequest
Request message for EndpointService.MutateDeployedModel.Fields | |
---|---|
deployedModel |
Required. The DeployedModel to be mutated within the Endpoint. Only the following fields can be mutated: * |
updateMask |
Required. The update mask applies to the resource. See google.protobuf.FieldMask. |
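As a sketch, a MutateDeployedModelRequest body pairs the mutated DeployedModel with a FieldMask naming exactly the fields being changed; the deployed model id, replica counts, and mask paths below are hypothetical illustrations:

```python
# Hypothetical MutateDeployedModelRequest body. updateMask uses
# google.protobuf.FieldMask syntax: comma-separated snake_case paths
# naming the fields being mutated.
request = {
    "deployedModel": {
        "id": "1234567890",
        "dedicatedResources": {
            "minReplicaCount": 1,
            "maxReplicaCount": 4,
        },
    },
    "updateMask": (
        "dedicated_resources.min_replica_count,"
        "dedicated_resources.max_replica_count"
    ),
}
```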
GoogleCloudAiplatformV1MutateDeployedModelResponse
Response message for EndpointService.MutateDeployedModel.Fields | |
---|---|
deployedModel |
The DeployedModel that's being mutated. |
GoogleCloudAiplatformV1NasJob
Represents a Neural Architecture Search (NAS) job.Fields | |
---|---|
createTime |
Output only. Time when the NasJob was created. |
displayName |
Required. The display name of the NasJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
enableRestrictedImageTraining |
Optional. Enable a separation of Custom model training and restricted image training for tenant project. |
encryptionSpec |
Customer-managed encryption key options for a NasJob. If this is set, then all resources created by the NasJob will be encrypted with the provided encryption key. |
endTime |
Output only. Time when the NasJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED. |
error |
Output only. Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED. |
labels |
The labels with user-defined metadata to organize NasJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
name |
Output only. Resource name of the NasJob. |
nasJobOutput |
Output only. Output of the NasJob. |
nasJobSpec |
Required. The specification of a NasJob. |
startTime |
Output only. Time when the NasJob for the first time entered the JOB_STATE_RUNNING state. |
state |
Output only. The detailed state of the job. |
Enum type. Can be one of the following: | |
JOB_STATE_UNSPECIFIED |
The job state is unspecified. |
JOB_STATE_QUEUED |
The job has been just created or resumed and processing has not yet begun. |
JOB_STATE_PENDING |
The service is preparing to run the job. |
JOB_STATE_RUNNING |
The job is in progress. |
JOB_STATE_SUCCEEDED |
The job completed successfully. |
JOB_STATE_FAILED |
The job failed. |
JOB_STATE_CANCELLING |
The job is being cancelled. From this state, the job may only go to either JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, or JOB_STATE_CANCELLED. |
JOB_STATE_CANCELLED |
The job has been cancelled. |
JOB_STATE_PAUSED |
The job has been stopped, and can be resumed. |
JOB_STATE_EXPIRED |
The job has expired. |
JOB_STATE_UPDATING |
The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state. |
JOB_STATE_PARTIALLY_SUCCEEDED |
The job has partially succeeded; some results may be missing due to errors. |
updateTime |
Output only. Time when the NasJob was most recently updated. |
GoogleCloudAiplatformV1NasJobOutput
Represents a uCAIP NasJob output.Fields | |
---|---|
multiTrialJobOutput |
Output only. The output of this multi-trial Neural Architecture Search (NAS) job. |
GoogleCloudAiplatformV1NasJobOutputMultiTrialJobOutput
The output of a multi-trial Neural Architecture Search (NAS) job.Fields | |
---|---|
searchTrials[] |
Output only. List of NasTrials that were started as part of search stage. |
trainTrials[] |
Output only. List of NasTrials that were started as part of train stage. |
GoogleCloudAiplatformV1NasJobSpec
Represents the spec of a NasJob.Fields | |
---|---|
multiTrialAlgorithmSpec |
The spec of multi-trial algorithms. |
resumeNasJobId |
The ID of the existing NasJob in the same Project and Location which will be used to resume search. search_space_spec and nas_algorithm_spec are obtained from the previous NasJob, so they should not be provided again for this NasJob. |
searchSpaceSpec |
It defines the search space for Neural Architecture Search (NAS). |
GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpec
The spec of multi-trial Neural Architecture Search (NAS).Fields | |
---|---|
metric |
Metric specs for the NAS job. Validation for this field is done at |
multiTrialAlgorithm |
The multi-trial Neural Architecture Search (NAS) algorithm type. Defaults to REINFORCEMENT_LEARNING. |
Enum type. Can be one of the following: | |
MULTI_TRIAL_ALGORITHM_UNSPECIFIED |
Defaults to REINFORCEMENT_LEARNING . |
REINFORCEMENT_LEARNING |
The Reinforcement Learning Algorithm for Multi-trial Neural Architecture Search (NAS). |
GRID_SEARCH |
The Grid Search Algorithm for Multi-trial Neural Architecture Search (NAS). |
searchTrialSpec |
Required. Spec for search trials. |
trainTrialSpec |
Spec for train trials. Top N [TrainTrialSpec.max_parallel_trial_count] search trials will be trained for every M [TrainTrialSpec.frequency] trials searched. |
GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpecMetricSpec
Represents a metric to optimize.Fields | |
---|---|
goal |
Required. The optimization goal of the metric. |
Enum type. Can be one of the following: | |
GOAL_TYPE_UNSPECIFIED |
Goal Type will default to maximize. |
MAXIMIZE |
Maximize the goal metric. |
MINIMIZE |
Minimize the goal metric. |
metricId |
Required. The ID of the metric. Must not contain whitespaces. |
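Per the constraints above, a MetricSpec sketch needs only a whitespace-free metric id and a goal; the metric id below is a hypothetical example:

```python
# Minimal MetricSpec sketch: a single optimization target for the NAS job.
# "top_1_accuracy" is a hypothetical metric id; ids must not contain
# whitespace, and goal is one of MAXIMIZE or MINIMIZE.
metric_spec = {
    "metricId": "top_1_accuracy",
    "goal": "MAXIMIZE",
}
```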
GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpec
Represent spec for search trials.Fields | |
---|---|
maxFailedTrialCount |
The number of failed trials that need to be seen before failing the NasJob. If set to 0, Vertex AI decides how many trials must fail before the whole job fails. |
maxParallelTrialCount |
Required. The maximum number of trials to run in parallel. |
maxTrialCount |
Required. The maximum number of Neural Architecture Search (NAS) trials to run. |
searchTrialJobSpec |
Required. The spec of a search trial job. The same spec applies to all search trials. |
GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpec
Represent spec for train trials.Fields | |
---|---|
frequency |
Required. Frequency of search trials to start train stage. Top N [TrainTrialSpec.max_parallel_trial_count] search trials will be trained for every M [TrainTrialSpec.frequency] trials searched. |
maxParallelTrialCount |
Required. The maximum number of trials to run in parallel. |
trainTrialJobSpec |
Required. The spec of a train trial job. The same spec applies to all train trials. |
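The frequency semantics above (the top N search trials are trained for every M trials searched) can be illustrated with a small helper; the counts used in the comment are hypothetical:

```python
def train_stage_starts(total_searched: int, frequency: int) -> int:
    """How many times the train stage is triggered: once per `frequency`
    completed search trials, each time training the current top
    max_parallel_trial_count search trials."""
    return total_searched // frequency

# e.g. with 100 trials searched and frequency=20, the train stage
# runs 5 times over the course of the job.
```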
GoogleCloudAiplatformV1NasTrial
Represents a uCAIP NasJob trial.Fields | |
---|---|
endTime |
Output only. Time when the NasTrial's status changed to SUCCEEDED or INFEASIBLE. |
finalMeasurement |
Output only. The final measurement containing the objective value. |
id |
Output only. The identifier of the NasTrial assigned by the service. |
startTime |
Output only. Time when the NasTrial was started. |
state |
Output only. The detailed state of the NasTrial. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
The NasTrial state is unspecified. |
REQUESTED |
Indicates that a specific NasTrial has been requested, but it has not yet been suggested by the service. |
ACTIVE |
Indicates that the NasTrial has been suggested. |
STOPPING |
Indicates that the NasTrial should stop according to the service. |
SUCCEEDED |
Indicates that the NasTrial is completed successfully. |
INFEASIBLE |
Indicates that the NasTrial should not be attempted again. The service will set a NasTrial to INFEASIBLE when it's done but missing the final_measurement. |
GoogleCloudAiplatformV1NasTrialDetail
Represents NasTrial details along with its parameters. If there is a corresponding train NasTrial, the train NasTrial is also returned.Fields | |
---|---|
name |
Output only. Resource name of the NasTrialDetail. |
parameters |
The parameters for the NasJob NasTrial. |
searchTrial |
The requested search NasTrial. |
trainTrial |
The train NasTrial corresponding to search_trial. Only populated if search_trial is used for training. |
GoogleCloudAiplatformV1NearestNeighborQuery
A query to find a number of similar entities.Fields | |
---|---|
embedding |
Optional. The embedding vector to be used for similarity search. |
entityId |
Optional. The entity id whose similar entities should be searched for. If embedding is set, search will use embedding instead of entity_id. |
neighborCount |
Optional. The number of similar entities to be retrieved from feature view for each query. |
parameters |
Optional. Parameters that can be set to tune query on the fly. |
perCrowdingAttributeNeighborCount |
Optional. Crowding is a constraint on a neighbor list produced by nearest neighbor search, requiring that no more than per_crowding_attribute_neighbor_count of the k neighbors returned have the same value of crowding_attribute. It is used for improving result diversity. |
stringFilters[] |
Optional. The list of string filters. |
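As a sketch, the fields above combine into one query body; the entity id, counts, and filter values below are hypothetical, and the parameter constraints follow the descriptions given for NearestNeighborQueryParameters:

```python
# Hypothetical NearestNeighborQuery body. Either entityId or embedding
# identifies the query point; if embedding is set, it takes precedence.
query = {
    "entityId": "user_42",
    "neighborCount": 10,
    "perCrowdingAttributeNeighborCount": 2,
    "parameters": {
        "approximateNeighborCandidates": 100,  # must be > neighborCount
        "leafNodesSearchFraction": 0.05,       # between 0.0 and 1.0
    },
    "stringFilters": [
        {"name": "color", "allowTokens": ["red", "blue"]},
    ],
}
```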
GoogleCloudAiplatformV1NearestNeighborQueryEmbedding
The embedding vector.Fields | |
---|---|
value[] |
Optional. Individual value in the embedding. |
GoogleCloudAiplatformV1NearestNeighborQueryParameters
Parameters that can be overridden in each query to tune query latency and recall.Fields | |
---|---|
approximateNeighborCandidates |
Optional. The number of neighbors to find via approximate search before exact reordering is performed; if set, this value must be > neighbor_count. |
leafNodesSearchFraction |
Optional. The fraction of leaves to search, set at query time to let the user tune search performance. Increasing this value increases both search accuracy and latency. The value should be between 0.0 and 1.0. |
GoogleCloudAiplatformV1NearestNeighborQueryStringFilter
String filter is used to search a subset of the entities by using boolean rules on string columns. For example: if a query specifies a string filter with 'name = color, allow_tokens = {red, blue}, deny_tokens = {purple}', then that query will match entities that are red or blue; but if those points are also purple, they will be excluded even if they are red/blue. Only string filters are supported for now; numeric filters will be supported in the near future.Fields | |
---|---|
allowTokens[] |
Optional. The allowed tokens. |
denyTokens[] |
Optional. The denied tokens. |
name |
Required. Column names in BigQuery that are used as filters. |
GoogleCloudAiplatformV1NearestNeighborSearchOperationMetadata
Runtime operation metadata with regard to Matching Engine Index.Fields | |
---|---|
contentValidationStats[] |
The validation stats of the content (per file) to be inserted or updated on the Matching Engine Index resource. Populated if contentsDeltaUri is provided as part of Index.metadata. Note that stats are currently not available for files that are broken or in an unsupported format. |
dataBytesCount |
The ingested data size in bytes. |
GoogleCloudAiplatformV1NearestNeighborSearchOperationMetadataContentValidationStats
(No description provided)Fields | |
---|---|
invalidRecordCount |
Number of records in this file that we skipped due to validation errors. |
invalidSparseRecordCount |
Number of sparse records in this file that we skipped due to validation errors. |
partialErrors[] |
Detailed information on the partial failures encountered for those invalid records that couldn't be parsed. Up to 50 partial errors will be reported. |
sourceGcsUri |
Cloud Storage URI pointing to the original file in user's bucket. |
validRecordCount |
Number of records in this file that were successfully processed. |
validSparseRecordCount |
Number of sparse records in this file that were successfully processed. |
GoogleCloudAiplatformV1NearestNeighborSearchOperationMetadataRecordError
(No description provided)Fields | |
---|---|
embeddingId |
Empty if the embedding id failed to parse. |
errorMessage |
A human-readable message that is shown to the user to help them fix the error. Note that this message may change from time to time, your code should check against error_type as the source of truth. |
errorType |
The error type of this record. |
Enum type. Can be one of the following: | |
ERROR_TYPE_UNSPECIFIED |
Default, shall not be used. |
EMPTY_LINE |
The record is empty. |
INVALID_JSON_SYNTAX |
Invalid JSON format. |
INVALID_CSV_SYNTAX |
Invalid CSV format. |
INVALID_AVRO_SYNTAX |
Invalid Avro format. |
INVALID_EMBEDDING_ID |
The embedding id is not valid. |
EMBEDDING_SIZE_MISMATCH |
The size of the dense embedding vectors does not match with the specified dimension. |
NAMESPACE_MISSING |
The namespace field is missing. |
PARSING_ERROR |
Generic catch-all error. Only used for validation failure where the root cause cannot be easily retrieved programmatically. |
DUPLICATE_NAMESPACE |
There are multiple restricts with the same namespace value. |
OP_IN_DATAPOINT |
Numeric restrict has operator specified in datapoint. |
MULTIPLE_VALUES |
Numeric restrict has multiple values specified. |
INVALID_NUMERIC_VALUE |
Numeric restrict has invalid numeric value specified. |
INVALID_ENCODING |
File is not in UTF-8 format. |
INVALID_SPARSE_DIMENSIONS |
Error parsing sparse dimensions field. |
INVALID_TOKEN_VALUE |
Token restrict value is invalid. |
INVALID_SPARSE_EMBEDDING |
Invalid sparse embedding. |
rawRecord |
The original content of this record. |
sourceGcsUri |
Cloud Storage URI pointing to the original file in user's bucket. |
GoogleCloudAiplatformV1NearestNeighbors
Nearest neighbors for one query.Fields | |
---|---|
neighbors[] |
All its neighbors. |
GoogleCloudAiplatformV1NearestNeighborsNeighbor
A neighbor of the query vector.Fields | |
---|---|
distance |
The distance between the neighbor and the query vector. |
entityId |
The id of the similar entity. |
entityKeyValues |
The attributes of the neighbor, e.g. filters, crowding, and metadata. Note that full entities are returned only when "return_full_entity" is set to true. Otherwise, only the "entity_id" and "distance" fields are populated. |
GoogleCloudAiplatformV1Neighbor
Neighbors for example-based explanations.Fields | |
---|---|
neighborDistance |
Output only. The neighbor distance. |
neighborId |
Output only. The neighbor id. |
GoogleCloudAiplatformV1NetworkSpec
Network spec.Fields | |
---|---|
enableInternetAccess |
Whether to enable public internet access. Default false. |
network |
The full name of the Google Compute Engine network |
subnetwork |
The name of the subnet that this instance is in. Format: |
GoogleCloudAiplatformV1NfsMount
Represents a mount configuration for Network File System (NFS) to mount.Fields | |
---|---|
mountPoint |
Required. Destination mount path. The NFS will be mounted for the user under /mnt/nfs/ |
path |
Required. Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of |
server |
Required. IP address of the NFS server. |
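Putting the three required fields together, the effective source and in-VM target paths can be derived as below; the server address and export path are hypothetical:

```python
# Hypothetical NfsMount entry and the paths it implies. The source mount
# is server:path, and the destination lives under /mnt/nfs/ per the
# mountPoint description above.
nfs_mount = {
    "server": "10.0.0.2",
    "path": "/exports/data",  # must start with '/'
    "mountPoint": "data",
}
source = nfs_mount["server"] + ":" + nfs_mount["path"]
target = "/mnt/nfs/" + nfs_mount["mountPoint"]
```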
GoogleCloudAiplatformV1NotebookEucConfig
The euc configuration of NotebookRuntimeTemplate.Fields | |
---|---|
bypassActasCheck |
Output only. Whether the ActAs check is bypassed for the service account attached to the VM. If false, we need the ActAs check for the default Compute Engine service account: when a Runtime is created, a VM is allocated using the default Compute Engine service account, and any user requesting to use this Runtime requires the Service Account User (ActAs) permission over this SA. If true, the Runtime owner is using EUC and does not require the above permission, as the VM no longer uses the default Compute Engine SA but a P4SA. |
eucDisabled |
Input only. Whether EUC is disabled in this NotebookRuntimeTemplate. In proto3, the default value of a boolean is false, so by default EUC will be enabled for the NotebookRuntimeTemplate. |
GoogleCloudAiplatformV1NotebookIdleShutdownConfig
The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field.Fields | |
---|---|
idleShutdownDisabled |
Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate. |
idleTimeout |
Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to the minute, so the range of idle_timeout (in seconds) is: 10 * 60 ~ 1440 * 60. |
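The documented range (10 minutes to 1440 minutes, expressed in seconds) can be sketched as a client-side check before submitting the template:

```python
# Bounds from the idle_timeout description: 10 * 60 to 1440 * 60 seconds.
IDLE_TIMEOUT_MIN_S = 10 * 60     # 600
IDLE_TIMEOUT_MAX_S = 1440 * 60   # 86400

def idle_timeout_valid(seconds: int) -> bool:
    """True if the value falls in the documented idle_timeout range."""
    return IDLE_TIMEOUT_MIN_S <= seconds <= IDLE_TIMEOUT_MAX_S
```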
GoogleCloudAiplatformV1NotebookReservationAffinity
Notebook Reservation Affinity for consuming Zonal reservation.Fields | |
---|---|
consumeReservationType |
Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. |
Enum type. Can be one of the following: | |
RESERVATION_AFFINITY_TYPE_UNSPECIFIED |
Default type. |
RESERVATION_NONE |
Do not consume from any allocated capacity. |
RESERVATION_ANY |
Consume any reservation available. |
RESERVATION_SPECIFIC |
Must consume from a specific reservation. Must specify key value fields for specifying the reservations. |
key |
Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. |
values[] |
Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. |
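Targeting a specific reservation per the key/values descriptions above might look like this; the project, zone, and reservation name are hypothetical:

```python
# Hypothetical NotebookReservationAffinity for RESERVATION_SPECIFIC, which
# requires both key and values: the key is the documented reservation-name
# label key, and values carry the full Reservation path.
affinity = {
    "consumeReservationType": "RESERVATION_SPECIFIC",
    "key": "compute.googleapis.com/reservation-name",
    "values": [
        "projects/my-project/zones/us-central1-a/reservations/my-reservation"
    ],
}
```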
GoogleCloudAiplatformV1NotebookRuntime
A runtime is a virtual machine allocated to a particular user for a particular Notebook file on a temporary basis, with a lifetime limited to 24 hours.Fields | |
---|---|
createTime |
Output only. Timestamp when this NotebookRuntime was created. |
description |
The description of the NotebookRuntime. |
displayName |
Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
expirationTime |
Output only. Timestamp when this NotebookRuntime will expire: 1. System predefined NotebookRuntime: 24 hours after creation. After expiration, the system predefined runtime will be deleted. 2. User created NotebookRuntime: 6 months after the last upgrade. After expiration, the user created runtime will be stopped and allowed for upgrade. |
healthState |
Output only. The health state of the NotebookRuntime. |
Enum type. Can be one of the following: | |
HEALTH_STATE_UNSPECIFIED |
Unspecified health state. |
HEALTHY |
NotebookRuntime is in healthy state. Applies to ACTIVE state. |
UNHEALTHY |
NotebookRuntime is in unhealthy state. Applies to ACTIVE state. |
isUpgradable |
Output only. Whether NotebookRuntime is upgradable. |
labels |
The labels with user-defined metadata to organize your NotebookRuntime. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one NotebookRuntime (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for NotebookRuntime: * "aiplatform.googleapis.com/notebook_runtime_gce_instance_id": output only, its value is the Compute Engine instance id. * "aiplatform.googleapis.com/colab_enterprise_entry_service": its value is either "bigquery" or "vertex"; if absent, it should be "vertex". This is to describe the entry service, either BigQuery or Vertex. |
name |
Output only. The resource name of the NotebookRuntime. |
networkTags[] |
Optional. The Compute Engine tags to add to runtime (see Tagging instances). |
notebookRuntimeTemplateRef |
Output only. The pointer to NotebookRuntimeTemplate this NotebookRuntime is created from. |
notebookRuntimeType |
Output only. The type of the notebook runtime. |
Enum type. Can be one of the following: | |
NOTEBOOK_RUNTIME_TYPE_UNSPECIFIED |
Unspecified notebook runtime type, NotebookRuntimeType will default to USER_DEFINED. |
USER_DEFINED |
Runtime or template with customized configurations from the user. |
ONE_CLICK |
Runtime or template with system-defined configurations. |
proxyUri |
Output only. The proxy endpoint used to access the NotebookRuntime. |
reservationAffinity |
Output only. Reservation Affinity of the notebook runtime. |
runtimeState |
Output only. The runtime (instance) state of the NotebookRuntime. |
Enum type. Can be one of the following: | |
RUNTIME_STATE_UNSPECIFIED |
Unspecified runtime state. |
RUNNING |
NotebookRuntime is in running state. |
BEING_STARTED |
NotebookRuntime is in starting state. |
BEING_STOPPED |
NotebookRuntime is in stopping state. |
STOPPED |
NotebookRuntime is in stopped state. |
BEING_UPGRADED |
NotebookRuntime is in upgrading state. It is in the middle of upgrading process. |
ERROR |
NotebookRuntime was unable to start/stop properly. |
INVALID |
NotebookRuntime is in invalid state. Cannot be recovered. |
runtimeUser |
Required. The user email of the NotebookRuntime. |
satisfiesPzi |
Output only. Reserved for future use. |
satisfiesPzs |
Output only. Reserved for future use. |
serviceAccount |
Output only. The service account that the NotebookRuntime workload runs as. |
updateTime |
Output only. Timestamp when this NotebookRuntime was most recently updated. |
version |
Output only. The VM os image version of NotebookRuntime. |
GoogleCloudAiplatformV1NotebookRuntimeTemplate
A template that specifies runtime configurations such as machine type, runtime version, network configurations, etc. Multiple runtimes can be created from a runtime template.Fields | |
---|---|
createTime |
Output only. Timestamp when this NotebookRuntimeTemplate was created. |
dataPersistentDiskSpec |
Optional. The specification of persistent disk attached to the runtime as data disk storage. |
description |
The description of the NotebookRuntimeTemplate. |
displayName |
Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
eucConfig |
EUC configuration of the NotebookRuntimeTemplate. |
idleShutdownConfig |
The idle shutdown configuration of NotebookRuntimeTemplate. This config will only be set when idle shutdown is enabled. |
isDefault |
Output only. Whether this is the default template, to be used when no template is specified. |
labels |
The labels with user-defined metadata to organize the NotebookRuntimeTemplates. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
machineSpec |
Optional. Immutable. The specification of a single machine for the template. |
name |
The resource name of the NotebookRuntimeTemplate. |
networkSpec |
Optional. Network spec. |
networkTags[] |
Optional. The Compute Engine tags to add to runtime (see Tagging instances). |
notebookRuntimeType |
Optional. Immutable. The type of the notebook runtime template. |
Enum type. Can be one of the following: | |
NOTEBOOK_RUNTIME_TYPE_UNSPECIFIED |
Unspecified notebook runtime type, NotebookRuntimeType will default to USER_DEFINED. |
USER_DEFINED |
Runtime or template with customized configurations from the user. |
ONE_CLICK |
Runtime or template with system-defined configurations. |
reservationAffinity |
Optional. Reservation Affinity of the notebook runtime template. |
serviceAccount |
The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the Compute Engine default service account is used. |
shieldedVmConfig |
Optional. Immutable. Runtime Shielded VM spec. |
updateTime |
Output only. Timestamp when this NotebookRuntimeTemplate was most recently updated. |
GoogleCloudAiplatformV1NotebookRuntimeTemplateRef
Points to a NotebookRuntimeTemplateRef.Fields | |
---|---|
notebookRuntimeTemplate |
Immutable. A resource name of the NotebookRuntimeTemplate. |
GoogleCloudAiplatformV1Part
A datatype containing media that is part of a multi-part Content message. A Part consists of data which has an associated datatype. A Part can only contain one of the accepted types in Part.data. A Part must have a fixed IANA MIME type identifying the type and subtype of the media if the inline_data or file_data field is filled with raw bytes.
Fields | |
---|---|
fileData |
Optional. URI based data. |
functionCall |
Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. |
functionResponse |
Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. |
inlineData |
Optional. Inlined bytes data. |
text |
Optional. Text part (can be code). |
videoMetadata |
Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. |
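The one-of constraint stated above (a Part contains exactly one of the accepted data types) can be sketched as a validity check over the field names listed in this schema:

```python
# The data fields a Part may carry, per the schema above; exactly one of
# them must be present for the Part to be well-formed.
DATA_FIELDS = {"text", "inlineData", "fileData", "functionCall", "functionResponse"}

def part_is_valid(part: dict) -> bool:
    """True if the Part carries exactly one of the accepted data fields."""
    return len(DATA_FIELDS & part.keys()) == 1

text_part = {"text": "print('hello')"}  # hypothetical text Part
```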
GoogleCloudAiplatformV1PersistentDiskSpec
Represents the spec of persistent disk options.Fields | |
---|---|
diskSizeGb |
Size in GB of the disk (default is 100GB). |
diskType |
Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk) |
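The documented defaults and valid disk types can be sketched as a helper that normalizes a partial spec; this is an illustrative client-side check, not service behavior:

```python
# Valid diskType values and defaults per the PersistentDiskSpec fields above.
VALID_DISK_TYPES = {"pd-ssd", "pd-standard", "pd-balanced", "pd-extreme"}

def with_defaults(spec: dict) -> dict:
    """Fill in the documented defaults (100 GB, pd-standard) and check
    that the resulting diskType is one of the valid values."""
    out = {"diskSizeGb": 100, "diskType": "pd-standard"}
    out.update(spec)
    if out["diskType"] not in VALID_DISK_TYPES:
        raise ValueError(f"invalid diskType: {out['diskType']}")
    return out
```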
GoogleCloudAiplatformV1PersistentResource
Represents long-lasting resources that are dedicated to users to run custom workloads. A PersistentResource can have multiple node pools, and each node pool can have its own machine spec.Fields | |
---|---|
createTime |
Output only. Time when the PersistentResource was created. |
displayName |
Optional. The display name of the PersistentResource. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
encryptionSpec |
Optional. Customer-managed encryption key spec for a PersistentResource. If set, this PersistentResource and all sub-resources of this PersistentResource will be secured by this key. |
error |
Output only. Only populated when persistent resource's state is |
labels |
Optional. The labels with user-defined metadata to organize PersistentResource. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
name |
Immutable. Resource name of a PersistentResource. |
network |
Optional. The full name of the Compute Engine network to be peered with Vertex AI to host the persistent resources. For example, |
reservedIpRanges[] |
Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this persistent resource. If set, we will deploy the persistent resource within the provided IP ranges. Otherwise, the persistent resource is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. |
resourcePools[] |
Required. The spec of the pools of different resources. |
resourceRuntime |
Output only. Runtime information of the Persistent Resource. |
resourceRuntimeSpec |
Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration. |
startTime |
Output only. Time when the PersistentResource for the first time entered the RUNNING state. |
state |
Output only. The detailed state of a PersistentResource. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
Not set. |
PROVISIONING |
The PROVISIONING state indicates the persistent resources is being created. |
RUNNING |
The RUNNING state indicates the persistent resource is healthy and fully usable. |
STOPPING |
The STOPPING state indicates the persistent resource is being deleted. |
ERROR |
The ERROR state indicates the persistent resource may be unusable. Details can be found in the error field. |
REBOOTING |
The REBOOTING state indicates the persistent resource is being rebooted (PR is not available right now but is expected to be ready again later). |
UPDATING |
The UPDATING state indicates the persistent resource is being updated. |
updateTime |
Output only. Time when the PersistentResource was most recently updated. |
GoogleCloudAiplatformV1PipelineJob
An instance of a machine learning PipelineJob.Fields | |
---|---|
createTime |
Output only. Pipeline creation time. |
displayName |
The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
encryptionSpec |
Customer-managed encryption key spec for a pipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key. |
endTime |
Output only. Pipeline end time. |
error |
Output only. The error that occurred during pipeline execution. Only populated when the pipeline's state is FAILED or CANCELLED. |
jobDetail |
Output only. The details of pipeline run. Not available in the list view. |
labels |
The labels with user-defined metadata to organize PipelineJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note there is some reserved label key for Vertex AI Pipelines. - |
name |
Output only. The resource name of the PipelineJob. |
network |
The full name of the Compute Engine network to which the Pipeline Job's workload should be peered. For example, |
pipelineSpec |
The spec of the pipeline. |
reservedIpRanges[] |
A list of names for the reserved IP ranges under the VPC network that can be used for this Pipeline Job's workload. If set, the Pipeline Job's workload will be deployed within the provided IP ranges. Otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. |
runtimeConfig |
Runtime config of the pipeline. |
scheduleName |
Output only. The schedule resource name. Only returned if the Pipeline is created by Schedule API. |
serviceAccount |
The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project will be used. See https://cloud.google.com/compute/docs/access/service-accounts#default_service_account Users starting the pipeline must have the |
startTime |
Output only. Pipeline start time. |
state |
Output only. The detailed state of the job. |
Enum type. Can be one of the following: | |
PIPELINE_STATE_UNSPECIFIED |
The pipeline state is unspecified. |
PIPELINE_STATE_QUEUED |
The pipeline has been created or resumed, and processing has not yet begun. |
PIPELINE_STATE_PENDING |
The service is preparing to run the pipeline. |
PIPELINE_STATE_RUNNING |
The pipeline is in progress. |
PIPELINE_STATE_SUCCEEDED |
The pipeline completed successfully. |
PIPELINE_STATE_FAILED |
The pipeline failed. |
PIPELINE_STATE_CANCELLING |
The pipeline is being cancelled. From this state, the pipeline may only go to either PIPELINE_STATE_SUCCEEDED, PIPELINE_STATE_FAILED or PIPELINE_STATE_CANCELLED. |
PIPELINE_STATE_CANCELLED |
The pipeline has been cancelled. |
PIPELINE_STATE_PAUSED |
The pipeline has been stopped, and can be resumed. |
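The enum above implies a small state machine; in particular, PIPELINE_STATE_CANCELLING may only move to one of the three terminal states. A minimal sketch of the terminal set and that rule (helper names are illustrative, not part of the API):

```python
# Final states of a PipelineJob; per the enum above, a pipeline in
# PIPELINE_STATE_CANCELLING may only move to one of these.
TERMINAL_STATES = {
    "PIPELINE_STATE_SUCCEEDED",
    "PIPELINE_STATE_FAILED",
    "PIPELINE_STATE_CANCELLED",
}

def is_done(state: str) -> bool:
    """True once the job has reached a final state (safe to stop polling)."""
    return state in TERMINAL_STATES

def valid_after_cancelling(next_state: str) -> bool:
    """Check the documented transition rule out of PIPELINE_STATE_CANCELLING."""
    return next_state in TERMINAL_STATES
```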
templateMetadata |
Output only. Pipeline template metadata. Fields are populated if PipelineJob.template_uri is from a supported template registry. |
templateUri |
The URI of a template from which PipelineJob.pipeline_spec is downloaded when pipeline_spec is empty. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template. |
updateTime |
Output only. Timestamp when this PipelineJob was most recently updated. |
GoogleCloudAiplatformV1PipelineJobDetail
The runtime detail of PipelineJob.Fields | |
---|---|
pipelineContext |
Output only. The context of the pipeline. |
pipelineRunContext |
Output only. The context of the current pipeline run. |
taskDetails[] |
Output only. The runtime details of the tasks under the pipeline. |
GoogleCloudAiplatformV1PipelineJobRuntimeConfig
The runtime config of a PipelineJob.Fields | |
---|---|
failurePolicy |
Represents the failure policy of a pipeline. Currently, the default of a pipeline is that the pipeline will continue to run until no more tasks can be executed, also known as PIPELINE_FAILURE_POLICY_FAIL_SLOW. However, if a pipeline is set to PIPELINE_FAILURE_POLICY_FAIL_FAST, it will stop scheduling any new tasks when a task has failed. Any scheduled tasks will continue to completion. |
Enum type. Can be one of the following: | |
PIPELINE_FAILURE_POLICY_UNSPECIFIED |
Default value, and follows fail slow behavior. |
PIPELINE_FAILURE_POLICY_FAIL_SLOW |
Indicates that the pipeline should continue to run until all possible tasks have been scheduled and completed. |
PIPELINE_FAILURE_POLICY_FAIL_FAST |
Indicates that the pipeline should stop scheduling new tasks after a task has failed. |
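The difference between the two policies can be sketched with a toy scheduler (the function and its task-list shape are illustrative, not part of the API): under fail-slow every schedulable task runs, while under fail-fast no new task is scheduled once a failure is observed, though already-scheduled tasks run to completion:

```python
def schedule(tasks, policy):
    """Simulate which independent tasks get scheduled under a failure policy.

    `tasks` is an ordered list of (name, succeeds) pairs. Under FAIL_FAST,
    no new task is scheduled once a failure has been observed.
    """
    scheduled, failed_seen = [], False
    for name, succeeds in tasks:
        if policy == "PIPELINE_FAILURE_POLICY_FAIL_FAST" and failed_seen:
            break  # stop scheduling new tasks after a failure
        scheduled.append(name)
        if not succeeds:
            failed_seen = True
    return scheduled
```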
gcsOutputDirectory |
Required. A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern |
inputArtifacts |
The runtime artifacts of the PipelineJob. Each key is an input artifact name, and each value is an InputArtifact. |
parameterValues |
The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using |
parameters |
Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using |
GoogleCloudAiplatformV1PipelineJobRuntimeConfigInputArtifact
The type of an input artifact.Fields | |
---|---|
artifactId |
Artifact resource ID from MLMD, which is the last portion of an artifact resource name: |
GoogleCloudAiplatformV1PipelineTaskDetail
The runtime detail of a task execution.Fields | |
---|---|
createTime |
Output only. Task create time. |
endTime |
Output only. Task end time. |
error |
Output only. The error that occurred during task execution. Only populated when the task's state is FAILED or CANCELLED. |
execution |
Output only. The execution metadata of the task. |
executorDetail |
Output only. The detailed execution info. |
inputs |
Output only. The runtime input artifacts of the task. |
outputs |
Output only. The runtime output artifacts of the task. |
parentTaskId |
Output only. The id of the parent task if the task is within a component scope. Empty if the task is at the root level. |
pipelineTaskStatus[] |
Output only. A list of task statuses. This field keeps a record of how the task status evolves over time. |
startTime |
Output only. Task start time. |
state |
Output only. State of the task. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
Unspecified. |
PENDING |
Specifies pending state for the task. |
RUNNING |
Specifies task is being executed. |
SUCCEEDED |
Specifies task completed successfully. |
CANCEL_PENDING |
Specifies Task cancel is in pending state. |
CANCELLING |
Specifies task is being cancelled. |
CANCELLED |
Specifies task was cancelled. |
FAILED |
Specifies task failed. |
SKIPPED |
Specifies task was skipped due to cache hit. |
NOT_TRIGGERED |
Specifies that the task was not triggered because the task's trigger policy is not satisfied. The trigger policy is specified in the condition field of PipelineJob.pipeline_spec. |
taskId |
Output only. The system generated ID of the task. |
taskName |
Output only. The user specified name of the task that is defined in pipeline_spec. |
GoogleCloudAiplatformV1PipelineTaskDetailArtifactList
A list of artifact metadata.Fields | |
---|---|
artifacts[] |
Output only. A list of artifact metadata. |
GoogleCloudAiplatformV1PipelineTaskDetailPipelineTaskStatus
A single record of the task status.Fields | |
---|---|
error |
Output only. The error that occurred during the state. May be set when the state is any of the non-final states (PENDING/RUNNING/CANCELLING) or the FAILED state. If the state is FAILED, the error here is final and will not be retried. If the state is a non-final state, the error indicates a system error that is being retried. |
state |
Output only. The state of the task. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
Unspecified. |
PENDING |
Specifies pending state for the task. |
RUNNING |
Specifies task is being executed. |
SUCCEEDED |
Specifies task completed successfully. |
CANCEL_PENDING |
Specifies Task cancel is in pending state. |
CANCELLING |
Specifies task is being cancelled. |
CANCELLED |
Specifies task was cancelled. |
FAILED |
Specifies task failed. |
SKIPPED |
Specifies task was skipped due to cache hit. |
NOT_TRIGGERED |
Specifies that the task was not triggered because the task's trigger policy is not satisfied. The trigger policy is specified in the condition field of PipelineJob.pipeline_spec. |
updateTime |
Output only. Update time of this status. |
GoogleCloudAiplatformV1PipelineTaskExecutorDetail
The runtime detail of a pipeline executor.Fields | |
---|---|
containerDetail |
Output only. The detailed info for a container executor. |
customJobDetail |
Output only. The detailed info for a custom job executor. |
GoogleCloudAiplatformV1PipelineTaskExecutorDetailContainerDetail
The detail of a container execution. It contains the job names of the lifecycle of a container execution.Fields | |
---|---|
failedMainJobs[] |
Output only. The names of the previously failed CustomJob for the main container executions. The list includes all attempts, in chronological order. |
failedPreCachingCheckJobs[] |
Output only. The names of the previously failed CustomJob for the pre-caching-check container executions. This job will be available if the PipelineJob.pipeline_spec specifies the |
mainJob |
Output only. The name of the CustomJob for the main container execution. |
preCachingCheckJob |
Output only. The name of the CustomJob for the pre-caching-check container execution. This job will be available if the PipelineJob.pipeline_spec specifies the |
GoogleCloudAiplatformV1PipelineTaskExecutorDetailCustomJobDetail
The detailed info for a custom job executor.Fields | |
---|---|
failedJobs[] |
Output only. The names of the previously failed CustomJob. The list includes all attempts, in chronological order. |
job |
Output only. The name of the CustomJob. |
GoogleCloudAiplatformV1PipelineTemplateMetadata
Pipeline template metadata if PipelineJob.template_uri is from supported template registry. Currently, the only supported registry is Artifact Registry.Fields | |
---|---|
version |
The version_name in Artifact Registry. It is always present in the output if the PipelineJob.template_uri is from a supported template registry. The format is "sha256:abcdef123456...". |
GoogleCloudAiplatformV1Port
Represents a network port in a container.Fields | |
---|---|
containerPort |
The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive. |
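A minimal client-side check of the documented port range (the helper name is illustrative, not part of the API):

```python
def is_valid_container_port(port: int) -> bool:
    """containerPort must be between 1 and 65535, inclusive."""
    return 1 <= port <= 65535
```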
GoogleCloudAiplatformV1PredefinedSplit
Assigns input data to training, validation, and test sets based on the value of a provided key. Supported only for tabular Datasets.Fields | |
---|---|
key |
Required. The key is a name of one of the Dataset's data columns. The value of the key (either the label's value or the value in the column) must be one of { |
GoogleCloudAiplatformV1PredictRequest
Request message for PredictionService.Predict.Fields | |
---|---|
instances[] |
Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request; when it is exceeded, the prediction call fails for AutoML Models, while for custom-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri. |
parameters |
The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri. |
GoogleCloudAiplatformV1PredictRequestResponseLoggingConfig
Configuration for logging request-response to a BigQuery table.Fields | |
---|---|
bigqueryDestination |
BigQuery table for logging. If only given a project, a new dataset will be created with name |
enabled |
If logging is enabled or not. |
samplingRate |
Percentage of requests to be logged, expressed as a fraction in range(0,1]. |
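A sampling rate in (0, 1] can be applied per request with a simple random draw. This sketch (helper name illustrative) validates the range and accepts an injectable random source so the decision is testable:

```python
import random

def should_log(sampling_rate: float, rng=random.random) -> bool:
    """Decide whether to log one request, given a rate in (0, 1]."""
    if not 0.0 < sampling_rate <= 1.0:
        raise ValueError("samplingRate must be in the range (0, 1]")
    # rng() is uniform on [0, 1), so a rate of 1.0 logs every request.
    return rng() < sampling_rate
```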
GoogleCloudAiplatformV1PredictResponse
Response message for PredictionService.Predict.Fields | |
---|---|
deployedModelId |
ID of the Endpoint's DeployedModel that served this prediction. |
metadata |
Output only. Request-level metadata returned by the model. The metadata type will be dependent upon the model implementation. |
model |
Output only. The resource name of the Model which is deployed as the DeployedModel that this prediction hits. |
modelDisplayName |
Output only. The display name of the Model which is deployed as the DeployedModel that this prediction hits. |
modelVersionId |
Output only. The version ID of the Model which is deployed as the DeployedModel that this prediction hits. |
predictions[] |
The predictions that are the output of the predictions call. The schema of any single prediction may be specified via Endpoint's DeployedModels' Model's PredictSchemata's prediction_schema_uri. |
GoogleCloudAiplatformV1PredictSchemata
Contains the schemata used in Model's predictions and explanations via PredictionService.Predict, PredictionService.Explain and BatchPredictionJob.Fields | |
---|---|
instanceSchemaUri |
Immutable. Points to a YAML file stored on Google Cloud Storage describing the format of a single instance, which is used in PredictRequest.instances, ExplainRequest.instances and BatchPredictionJob.input_config. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI. Note: the URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access. |
parametersSchemaUri |
Immutable. Points to a YAML file stored on Google Cloud Storage describing the parameters of prediction and explanation via PredictRequest.parameters, ExplainRequest.parameters and BatchPredictionJob.model_parameters. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no parameters are supported, it is set to an empty string. Note: the URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access. |
predictionSchemaUri |
Immutable. Points to a YAML file stored on Google Cloud Storage describing the format of a single prediction produced by this Model, which is returned via PredictResponse.predictions, ExplainResponse.explanations, and BatchPredictionJob.output_config. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI. Note: the URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access. |
GoogleCloudAiplatformV1Presets
Preset configuration for example-based explanationsFields | |
---|---|
modality |
The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type. |
Enum type. Can be one of the following: | |
MODALITY_UNSPECIFIED |
Should not be set. Added as a recommended best practice for enums |
IMAGE |
IMAGE modality |
TEXT |
TEXT modality |
TABULAR |
TABULAR modality |
query |
Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to |
Enum type. Can be one of the following: | |
PRECISE |
More precise neighbors as a trade-off against slower response. |
FAST |
Faster response as a trade-off against less precise neighbors. |
GoogleCloudAiplatformV1PrivateEndpoints
PrivateEndpoints proto is used to provide paths for users to send requests privately. To send request via private service access, use predict_http_uri, explain_http_uri or health_http_uri. To send request via private service connect, use service_attachment.Fields | |
---|---|
explainHttpUri |
Output only. Http(s) path to send explain requests. |
healthHttpUri |
Output only. Http(s) path to send health check requests. |
predictHttpUri |
Output only. Http(s) path to send prediction requests. |
serviceAttachment |
Output only. The name of the service attachment resource. Populated if private service connect is enabled. |
GoogleCloudAiplatformV1PrivateServiceConnectConfig
Represents configuration for private service connect.Fields | |
---|---|
enablePrivateServiceConnect |
Required. If true, expose the IndexEndpoint via private service connect. |
projectAllowlist[] |
A list of Projects from which the forwarding rule will target the service attachment. |
GoogleCloudAiplatformV1Probe
Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.Fields | |
---|---|
exec |
Exec specifies the action to take. |
periodSeconds |
How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'. |
timeoutSeconds |
Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater than or equal to period_seconds. Maps to Kubernetes probe argument 'timeoutSeconds'. |
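The documented constraints can be checked client-side. The sketch below (helper name illustrative) enforces the minimums and the timeout-at-least-period ordering as written above; note that the stated defaults (periodSeconds 10, timeoutSeconds 1) do not themselves satisfy that ordering, so treat this as a sketch of the written rules rather than of server-side behavior:

```python
def validate_probe(period_seconds: int = 10, timeout_seconds: int = 1) -> list:
    """Collect violations of the documented Probe timing constraints.

    Returns a list of human-readable problems; an empty list means valid.
    """
    problems = []
    if period_seconds < 1:
        problems.append("periodSeconds must be >= 1")
    if timeout_seconds < 1:
        problems.append("timeoutSeconds must be >= 1")
    if timeout_seconds < period_seconds:
        problems.append("timeoutSeconds must be >= periodSeconds")
    return problems
```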
GoogleCloudAiplatformV1ProbeExecAction
ExecAction specifies a command to execute.Fields | |
---|---|
command[] |
Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. |
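The exec-versus-shell distinction can be demonstrated locally with subprocess (assuming a POSIX /bin/sh is available): passing '|' as a plain argument does not create a pipe, while invoking a shell explicitly does:

```python
import subprocess

# The probe command is exec'd directly, so shell syntax like '|' is passed
# through as a literal argument. To use a pipe, call out to a shell.
direct = ["echo", "ok", "|", "tr", "a-z", "A-Z"]     # '|' is just an argument
shelled = ["/bin/sh", "-c", "echo ok | tr a-z A-Z"]  # the shell interprets the pipe

out_direct = subprocess.run(direct, capture_output=True, text=True).stdout.strip()
out_shelled = subprocess.run(shelled, capture_output=True, text=True).stdout.strip()
# out_direct is the literal "ok | tr a-z A-Z"; out_shelled is "OK"
```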
GoogleCloudAiplatformV1PscAutomatedEndpoints
PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.Fields | |
---|---|
matchAddress |
IP address created by the automated forwarding rule. |
network |
Corresponding network in pscAutomationConfigs. |
projectId |
Corresponding project_id in pscAutomationConfigs. |
GoogleCloudAiplatformV1PublisherModel
A Model Garden Publisher Model.Fields | |
---|---|
frameworks[] |
Optional. Additional information about the model's Frameworks. |
launchStage |
Optional. Indicates the launch stage of the model. |
Enum type. Can be one of the following: | |
LAUNCH_STAGE_UNSPECIFIED |
The model launch stage is unspecified. |
EXPERIMENTAL |
Used to indicate the PublisherModel is at Experimental launch stage, available to a small set of customers. |
PRIVATE_PREVIEW |
Used to indicate the PublisherModel is at Private Preview launch stage, only available to a small set of customers, although a larger set of customers than an Experimental launch. Previews are the first launch stage used to get feedback from customers. |
PUBLIC_PREVIEW |
Used to indicate the PublisherModel is at Public Preview launch stage, available to all customers, although not supported for production workloads. |
GA |
Used to indicate the PublisherModel is at GA launch stage, available to all customers and ready for production workload. |
name |
Output only. The resource name of the PublisherModel. |
openSourceCategory |
Required. Indicates the open source category of the publisher model. |
Enum type. Can be one of the following: | |
OPEN_SOURCE_CATEGORY_UNSPECIFIED |
The open source category is unspecified, which should not be used. |
PROPRIETARY |
Used to indicate the PublisherModel is not open sourced. |
GOOGLE_OWNED_OSS_WITH_GOOGLE_CHECKPOINT |
Used to indicate the PublisherModel is a Google-owned open source model w/ Google checkpoint. |
THIRD_PARTY_OWNED_OSS_WITH_GOOGLE_CHECKPOINT |
Used to indicate the PublisherModel is a 3p-owned open source model w/ Google checkpoint. |
GOOGLE_OWNED_OSS |
Used to indicate the PublisherModel is a Google-owned pure open source model. |
THIRD_PARTY_OWNED_OSS |
Used to indicate the PublisherModel is a 3p-owned pure open source model. |
predictSchemata |
Optional. The schemata that describes formats of the PublisherModel's predictions and explanations as given and returned via PredictionService.Predict. |
publisherModelTemplate |
Optional. Output only. Immutable. Used to indicate this model has a publisher model and provide the template of the publisher model resource name. |
supportedActions |
Optional. Supported call-to-action options. |
versionId |
Output only. Immutable. The version ID of the PublisherModel. A new version is committed when a new model version is uploaded under an existing model id. It is an auto-incrementing decimal number in string representation. |
versionState |
Optional. Indicates the state of the model version. |
Enum type. Can be one of the following: | |
VERSION_STATE_UNSPECIFIED |
The version state is unspecified. |
VERSION_STATE_STABLE |
Used to indicate the version is stable. |
VERSION_STATE_UNSTABLE |
Used to indicate the version is unstable. |
GoogleCloudAiplatformV1PublisherModelCallToAction
Actions that can be taken on this Publisher Model.Fields | |
---|---|
createApplication |
Optional. Create application using the PublisherModel. |
deploy |
Optional. Deploy the PublisherModel to Vertex Endpoint. |
deployGke |
Optional. Deploy PublisherModel to Google Kubernetes Engine. |
fineTune |
Optional. Fine tune the PublisherModel with the third-party model tuning UI. |
openEvaluationPipeline |
Optional. Open evaluation pipeline of the PublisherModel. |
openFineTuningPipeline |
Optional. Open fine-tuning pipeline of the PublisherModel. |
openFineTuningPipelines |
Optional. Open fine-tuning pipelines of the PublisherModel. |
openGenerationAiStudio |
Optional. Open in Generative AI Studio. |
openGenie |
Optional. Open Genie / Playground. |
openNotebook |
Optional. Open notebook of the PublisherModel. |
openNotebooks |
Optional. Open notebooks of the PublisherModel. |
openPromptTuningPipeline |
Optional. Open prompt-tuning pipeline of the PublisherModel. |
requestAccess |
Optional. Request for access. |
viewRestApi |
Optional. To view Rest API docs. |
GoogleCloudAiplatformV1PublisherModelCallToActionDeploy
Model metadata that is needed for UploadModel or DeployModel/CreateEndpoint requests.Fields | |
---|---|
artifactUri |
Optional. The path to the directory containing the Model artifact and any of its supporting files. |
automaticResources |
A description of resources that are to a large degree decided by Vertex AI and require only a modest additional configuration. |
containerSpec |
Optional. The specification of the container that is to be used when deploying this Model in Vertex AI. Not present for Large Models. |
dedicatedResources |
A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration. |
deployTaskName |
Optional. The name of the deploy task (e.g., "text to image generation"). |
largeModelReference |
Optional. Large model reference. When this is set, model_artifact_spec is not needed. |
modelDisplayName |
Optional. Default model display name. |
publicArtifactUri |
Optional. The signed URI for ephemeral Cloud Storage access to model artifact. |
sharedResources |
The resource name of the shared DeploymentResourcePool to deploy on. Format: |
title |
Required. The title of the regional resource reference. |
GoogleCloudAiplatformV1PublisherModelCallToActionDeployGke
Configurations for PublisherModel GKE deploymentFields | |
---|---|
gkeYamlConfigs[] |
Optional. GKE deployment configuration in yaml format. |
GoogleCloudAiplatformV1PublisherModelCallToActionOpenFineTuningPipelines
Open fine tuning pipelines.Fields | |
---|---|
fineTuningPipelines[] |
Required. Regional resource references to fine tuning pipelines. |
GoogleCloudAiplatformV1PublisherModelCallToActionOpenNotebooks
Open notebooks.Fields | |
---|---|
notebooks[] |
Required. Regional resource references to notebooks. |
GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences
The regional resource name or the URI. Key is region, e.g., us-central1, europe-west2, global, etc.Fields | |
---|---|
references |
Required. |
resourceDescription |
Optional. Description of the resource. |
resourceTitle |
Optional. Title of the resource. |
resourceUseCase |
Optional. Use case (CUJ) of the resource. |
title |
Required. |
GoogleCloudAiplatformV1PublisherModelCallToActionViewRestApi
Rest API docs.Fields | |
---|---|
documentations[] |
Required. |
title |
Required. The title of the view rest API. |
GoogleCloudAiplatformV1PublisherModelDocumentation
A named piece of documentation.Fields | |
---|---|
content |
Required. Content of this piece of document (in Markdown format). |
title |
Required. E.g., OVERVIEW, USE CASES, DOCUMENTATION, SDK & SAMPLES, JAVA, NODE.JS, etc. |
GoogleCloudAiplatformV1PublisherModelResourceReference
Reference to a resource.Fields | |
---|---|
description |
Description of the resource. |
resourceName |
The resource name of the Google Cloud resource. |
uri |
The URI of the resource. |
useCase |
Use case (CUJ) of the resource. |
GoogleCloudAiplatformV1PurgeArtifactsMetadata
Details of operations that perform MetadataService.PurgeArtifacts.Fields | |
---|---|
genericMetadata |
Operation metadata for purging Artifacts. |
GoogleCloudAiplatformV1PurgeArtifactsRequest
Request message for MetadataService.PurgeArtifacts.Fields | |
---|---|
filter |
Required. A required filter matching the Artifacts to be purged. E.g., |
force |
Optional. Flag to indicate to actually perform the purge. If |
GoogleCloudAiplatformV1PurgeArtifactsResponse
Response message for MetadataService.PurgeArtifacts.Fields | |
---|---|
purgeCount |
The number of Artifacts that this request deleted (or, if |
purgeSample[] |
A sample of the Artifact names that will be deleted. Only populated if |
GoogleCloudAiplatformV1PurgeContextsMetadata
Details of operations that perform MetadataService.PurgeContexts.Fields | |
---|---|
genericMetadata |
Operation metadata for purging Contexts. |
GoogleCloudAiplatformV1PurgeContextsRequest
Request message for MetadataService.PurgeContexts.Fields | |
---|---|
filter |
Required. A required filter matching the Contexts to be purged. E.g., |
force |
Optional. Flag to indicate to actually perform the purge. If |
GoogleCloudAiplatformV1PurgeContextsResponse
Response message for MetadataService.PurgeContexts.Fields | |
---|---|
purgeCount |
The number of Contexts that this request deleted (or, if |
purgeSample[] |
A sample of the Context names that will be deleted. Only populated if |
GoogleCloudAiplatformV1PurgeExecutionsMetadata
Details of operations that perform MetadataService.PurgeExecutions.Fields | |
---|---|
genericMetadata |
Operation metadata for purging Executions. |
GoogleCloudAiplatformV1PurgeExecutionsRequest
Request message for MetadataService.PurgeExecutions.Fields | |
---|---|
filter |
Required. A required filter matching the Executions to be purged. E.g., |
force |
Optional. Flag to indicate to actually perform the purge. If |
GoogleCloudAiplatformV1PurgeExecutionsResponse
Response message for MetadataService.PurgeExecutions.Fields | |
---|---|
purgeCount |
The number of Executions that this request deleted (or, if |
purgeSample[] |
A sample of the Execution names that will be deleted. Only populated if |
GoogleCloudAiplatformV1PythonPackageSpec
The spec of a Python packaged code.Fields | |
---|---|
args[] |
Command line arguments to be passed to the Python task. |
env[] |
Environment variables to be passed to the python module. Maximum limit is 100. |
executorImageUri |
Required. The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list. |
packageUris[] |
Required. The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100. |
pythonModule |
Required. The Python module name to run after installing the packages. |
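The documented limits on this message can be pre-checked before submitting a job. This sketch (helper name illustrative) validates a plain dict shaped like a PythonPackageSpec:

```python
def validate_python_package_spec(spec: dict) -> list:
    """Check the documented limits on a PythonPackageSpec-shaped dict."""
    problems = []
    if not spec.get("executorImageUri"):
        problems.append("executorImageUri is required")
    uris = spec.get("packageUris", [])
    if not uris:
        problems.append("packageUris is required")
    elif len(uris) > 100:
        problems.append("at most 100 package URIs are allowed")
    if len(spec.get("env", [])) > 100:
        problems.append("at most 100 environment variables are allowed")
    if not spec.get("pythonModule"):
        problems.append("pythonModule is required")
    return problems
```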
GoogleCloudAiplatformV1QueryDeployedModelsResponse
Response message for QueryDeployedModels method.Fields | |
---|---|
deployedModelRefs[] |
References to the DeployedModels that share the specified deploymentResourcePool. |
deployedModels[] |
DEPRECATED Use deployed_model_refs instead. |
nextPageToken |
A token, which can be sent as |
totalDeployedModelCount |
The total number of DeployedModels on this DeploymentResourcePool. |
totalEndpointCount |
The total number of Endpoints that have DeployedModels on this DeploymentResourcePool. |
GoogleCloudAiplatformV1RawPredictRequest
Request message for PredictionService.RawPredict.Fields | |
---|---|
httpBody |
The prediction input. Supports HTTP headers and arbitrary data payload. A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model. You can specify the schema for each instance in the predict_schemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the |
GoogleCloudAiplatformV1ReadFeatureValuesRequest
Request message for FeaturestoreOnlineServingService.ReadFeatureValues.Fields | |
---|---|
entityId |
Required. ID for a specific entity. For example, for a machine learning model predicting user clicks on a website, an entity ID could be |
featureSelector |
Required. Selector choosing Features of the target EntityType. |
GoogleCloudAiplatformV1ReadFeatureValuesResponse
Response message for FeaturestoreOnlineServingService.ReadFeatureValues.Fields | |
---|---|
entityView |
Entity view with Feature values. This may be the entity in the Featurestore if values for all Features were requested, or a projection of the entity in the Featurestore if values for only some Features were requested. |
header |
Response header. |
GoogleCloudAiplatformV1ReadFeatureValuesResponseEntityView
Entity view with Feature values.Fields | |
---|---|
data[] |
Each piece of data holds the k requested values for one requested Feature. If no values for the requested Feature exist, the corresponding cell will be empty. This has the same size and is in the same order as the features from the header ReadFeatureValuesResponse.header. |
entityId |
ID of the requested entity. |
GoogleCloudAiplatformV1ReadFeatureValuesResponseEntityViewData
Container to hold value(s), successive in time, for one Feature from the request.Fields | |
---|---|
value |
Feature value if a single value is requested. |
values |
Feature values list if values, successive in time, are requested. If the requested number of values is greater than the number of existing Feature values, nonexistent values are omitted instead of being returned as empty. |
GoogleCloudAiplatformV1ReadFeatureValuesResponseFeatureDescriptor
Metadata for requested Features.Fields | |
---|---|
id |
Feature ID. |
GoogleCloudAiplatformV1ReadFeatureValuesResponseHeader
Response header with metadata for the requested ReadFeatureValuesRequest.entity_type and Features.Fields | |
---|---|
entityType |
The resource name of the EntityType from the ReadFeatureValuesRequest. Value format: |
featureDescriptors[] |
List of Feature metadata corresponding to each piece of ReadFeatureValuesResponse.EntityView.data. |
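Because data[] is documented to have the same size and order as header.featureDescriptors[], a positional zip recovers a feature-ID-to-value mapping from a response. A sketch over plain dicts shaped like the JSON response (helper name illustrative):

```python
def feature_values_by_id(header: dict, entity_view: dict) -> dict:
    """Pair each feature descriptor with its positional data cell.

    The response contract above says data[] has the same size and order as
    header.featureDescriptors[], so a positional zip recovers an
    id -> value mapping; empty cells (no existing value) map to None.
    """
    descriptors = header.get("featureDescriptors", [])
    data = entity_view.get("data", [])
    assert len(descriptors) == len(data), "response contract violated"
    return {
        d["id"]: cell.get("value")  # None when the cell is empty
        for d, cell in zip(descriptors, data)
    }
```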
GoogleCloudAiplatformV1ReadIndexDatapointsRequest
The request message for MatchService.ReadIndexDatapoints.Fields | |
---|---|
deployedIndexId |
The ID of the DeployedIndex that will serve the request. |
ids[] |
IDs of the datapoints to be searched for. |
GoogleCloudAiplatformV1ReadIndexDatapointsResponse
The response message for MatchService.ReadIndexDatapoints.Fields | |
---|---|
datapoints[] |
The result list of datapoints. |
GoogleCloudAiplatformV1ReadTensorboardBlobDataResponse
Response message for TensorboardService.ReadTensorboardBlobData.Fields | |
---|---|
blobs[] |
Blob messages containing blob bytes. |
GoogleCloudAiplatformV1ReadTensorboardSizeResponse
Response message for TensorboardService.ReadTensorboardSize.Fields | |
---|---|
storageSizeByte |
Payload storage size for the TensorBoard. |
GoogleCloudAiplatformV1ReadTensorboardTimeSeriesDataResponse
Response message for TensorboardService.ReadTensorboardTimeSeriesData.Fields | |
---|---|
timeSeriesData |
The returned time series data. |
GoogleCloudAiplatformV1ReadTensorboardUsageResponse
Response message for TensorboardService.ReadTensorboardUsage.Fields | |
---|---|
monthlyUsageData |
Maps year-month (YYYYMM) string to per month usage data. |
GoogleCloudAiplatformV1ReadTensorboardUsageResponsePerMonthUsageData
Per month usage dataFields | |
---|---|
userUsageData[] |
Usage data for each user in the given month. |
GoogleCloudAiplatformV1ReadTensorboardUsageResponsePerUserUsageData
Per user usage data.Fields | |
---|---|
username |
User's username |
viewCount |
Number of times the user has read data within the Tensorboard. |
GoogleCloudAiplatformV1RebootPersistentResourceOperationMetadata
Details of operations that reboot a PersistentResource.Fields | |
---|---|
genericMetadata |
Operation metadata for PersistentResource. |
progressMessage |
Progress message for the reboot LRO. |
GoogleCloudAiplatformV1RemoveContextChildrenRequest
Request message for MetadataService.RemoveContextChildren.Fields | |
---|---|
childContexts[] |
The resource names of the child Contexts. |
GoogleCloudAiplatformV1RemoveDatapointsRequest
Request message for IndexService.RemoveDatapointsFields | |
---|---|
datapointIds[] |
A list of datapoint ids to be deleted. |
GoogleCloudAiplatformV1ResourcePool
Represents the spec of a group of resources of the same type, for example machine type, disk, and accelerators, in a PersistentResource.Fields | |
---|---|
autoscalingSpec |
Optional. Spec to configure GKE autoscaling. |
diskSpec |
Optional. Disk spec for the machine in this node pool. |
id |
Immutable. The unique ID in a PersistentResource for referring to this resource pool. User can specify it if necessary. Otherwise, it's generated automatically. |
machineSpec |
Required. Immutable. The specification of a single machine. |
replicaCount |
Optional. The total number of machines to use for this resource pool. |
usedReplicaCount |
Output only. The number of machines currently in use by training jobs for this resource pool. Will replace idle_replica_count. |
GoogleCloudAiplatformV1ResourcePoolAutoscalingSpec
The min/max number of replicas allowed if enabling autoscalingFields | |
---|---|
maxReplicaCount |
Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown. |
minReplicaCount |
Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown. |
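For illustration, the replica-count constraints above can be expressed as a small validator. This is a sketch mirroring the documented rules, not part of any official client library:

```python
def validate_autoscaling(replica_count, min_replica_count, max_replica_count):
    """Check the documented ResourcePoolAutoscalingSpec constraints.

    maxReplicaCount must be >= replicaCount and > minReplicaCount;
    minReplicaCount must be <= replicaCount and < maxReplicaCount.
    """
    if not (max_replica_count >= replica_count
            and max_replica_count > min_replica_count):
        raise ValueError("maxReplicaCount must be >= replicaCount "
                         "and > minReplicaCount")
    if not (min_replica_count <= replica_count
            and min_replica_count < max_replica_count):
        raise ValueError("minReplicaCount must be <= replicaCount "
                         "and < maxReplicaCount")
    return True
```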
GoogleCloudAiplatformV1ResourceRuntimeSpec
Configuration for the runtime on a PersistentResource instance, including but not limited to: * Service accounts used to run the workloads. * Whether to make it a dedicated Ray Cluster.Fields | |
---|---|
raySpec |
Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource. |
serviceAccountSpec |
Optional. Configure the use of workload identity on the PersistentResource |
GoogleCloudAiplatformV1ResourcesConsumed
Statistics information about resource consumption.Fields | |
---|---|
replicaHours |
Output only. The number of replica hours used. Note that many replicas may run in parallel, and additionally any given work may be queued for some time. Therefore this value is not strictly related to wall time. |
GoogleCloudAiplatformV1RestoreDatasetVersionOperationMetadata
Runtime operation information for DatasetService.RestoreDatasetVersion.Fields | |
---|---|
genericMetadata |
The common part of the operation metadata. |
GoogleCloudAiplatformV1ResumeScheduleRequest
Request message for ScheduleService.ResumeSchedule.Fields | |
---|---|
catchUp |
Optional. Whether to backfill missed runs when the schedule is resumed from PAUSED state. If set to true, all missed runs will be scheduled. New runs will be scheduled after the backfill is complete. This will also update the Schedule.catch_up field. Defaults to false. |
GoogleCloudAiplatformV1Retrieval
Defines a retrieval tool that model can call to access external knowledge.Fields | |
---|---|
disableAttribution |
Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation. |
vertexAiSearch |
Set to use data source powered by Vertex AI Search. |
GoogleCloudAiplatformV1SafetyRating
Safety rating corresponding to the generated content.Fields | |
---|---|
blocked |
Output only. Indicates whether the content was filtered out because of this rating. |
category |
Output only. Harm category. |
Enum type. Can be one of the following: | |
HARM_CATEGORY_UNSPECIFIED |
The harm category is unspecified. |
HARM_CATEGORY_HATE_SPEECH |
The harm category is hate speech. |
HARM_CATEGORY_DANGEROUS_CONTENT |
The harm category is dangerous content. |
HARM_CATEGORY_HARASSMENT |
The harm category is harassment. |
HARM_CATEGORY_SEXUALLY_EXPLICIT |
The harm category is sexually explicit content. |
probability |
Output only. Harm probability levels in the content. |
Enum type. Can be one of the following: | |
HARM_PROBABILITY_UNSPECIFIED |
Harm probability unspecified. |
NEGLIGIBLE |
Negligible level of harm. |
LOW |
Low level of harm. |
MEDIUM |
Medium level of harm. |
HIGH |
High level of harm. |
probabilityScore |
Output only. Harm probability score. |
severity |
Output only. Harm severity levels in the content. |
Enum type. Can be one of the following: | |
HARM_SEVERITY_UNSPECIFIED |
Harm severity unspecified. |
HARM_SEVERITY_NEGLIGIBLE |
Negligible level of harm severity. |
HARM_SEVERITY_LOW |
Low level of harm severity. |
HARM_SEVERITY_MEDIUM |
Medium level of harm severity. |
HARM_SEVERITY_HIGH |
High level of harm severity. |
severityScore |
Output only. Harm severity score. |
GoogleCloudAiplatformV1SafetySetting
Safety settings.Fields | |
---|---|
category |
Required. Harm category. |
Enum type. Can be one of the following: | |
HARM_CATEGORY_UNSPECIFIED |
The harm category is unspecified. |
HARM_CATEGORY_HATE_SPEECH |
The harm category is hate speech. |
HARM_CATEGORY_DANGEROUS_CONTENT |
The harm category is dangerous content. |
HARM_CATEGORY_HARASSMENT |
The harm category is harassment. |
HARM_CATEGORY_SEXUALLY_EXPLICIT |
The harm category is sexually explicit content. |
method |
Optional. Specify if the threshold is used for probability or severity score. If not specified, the threshold is used for probability score. |
Enum type. Can be one of the following: | |
HARM_BLOCK_METHOD_UNSPECIFIED |
The harm block method is unspecified. |
SEVERITY |
The harm block method uses both probability and severity scores. |
PROBABILITY |
The harm block method uses the probability score. |
threshold |
Required. The harm block threshold. |
Enum type. Can be one of the following: | |
HARM_BLOCK_THRESHOLD_UNSPECIFIED |
Unspecified harm block threshold. |
BLOCK_LOW_AND_ABOVE |
Block low threshold and above (i.e. block more). |
BLOCK_MEDIUM_AND_ABOVE |
Block medium threshold and above. |
BLOCK_ONLY_HIGH |
Block only high threshold (i.e. block less). |
BLOCK_NONE |
Block none. |
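For illustration, a `safetySettings` array in a request body might combine the enum values documented above like this. The JSON field casing follows the field names in this table; treat it as a sketch of the payload shape, not an official sample:

```python
import json

# One SafetySetting entry per harm category. The "method" field is
# optional; when omitted, the threshold applies to the probability score.
safety_settings = [
    {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "threshold": "BLOCK_MEDIUM_AND_ABOVE",
        "method": "SEVERITY",
    },
    {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "threshold": "BLOCK_ONLY_HIGH",
    },
]

payload = json.dumps({"safetySettings": safety_settings})
```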
GoogleCloudAiplatformV1SampleConfig
Active learning data sampling config. For every active learning labeling iteration, it will select a batch of data based on the sampling strategy.Fields | |
---|---|
followingBatchSamplePercentage |
The percentage of data needed to be labeled in each following batch (except the first batch). |
initialBatchSamplePercentage |
The percentage of data needed to be labeled in the first batch. |
sampleStrategy |
Field to choose the sampling strategy, which decides which data should be selected for human labeling in every batch. |
Enum type. Can be one of the following: | |
SAMPLE_STRATEGY_UNSPECIFIED |
Default will be treated as UNCERTAINTY. |
UNCERTAINTY |
Sample the most uncertain data to label. |
GoogleCloudAiplatformV1SampledShapleyAttribution
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.Fields | |
---|---|
pathCount |
Required. The number of feature permutations to consider when approximating the Shapley values. The valid range is [1, 50], inclusive. |
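To show what `pathCount` controls, here is a toy permutation-sampling approximation of Shapley values. It is not the service's implementation, only an illustration of the technique the message configures: each of `path_count` random feature orderings contributes one marginal-contribution estimate per feature:

```python
import random

def sampled_shapley(f, x, baseline, path_count):
    """Approximate Shapley values by averaging marginal contributions
    over `path_count` random feature permutations (cf. pathCount)."""
    n = len(x)
    attributions = [0.0] * n
    for _ in range(path_count):
        perm = random.sample(range(n), n)   # one random feature ordering
        current = list(baseline)
        prev = f(current)
        for i in perm:
            current[i] = x[i]               # switch feature i on
            nxt = f(current)
            attributions[i] += nxt - prev
            prev = nxt
    return [a / path_count for a in attributions]
```

By construction the attributions always sum to f(x) − f(baseline) (the "efficiency" property), and for an additive model the approximation is exact regardless of `path_count`.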
GoogleCloudAiplatformV1SamplingStrategy
Sampling Strategy for logging, can be for both training and prediction dataset.Fields | |
---|---|
randomSampleConfig |
Random sample config. Will support more sampling strategies later. |
GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfig
Requests are randomly selected.Fields | |
---|---|
sampleRate |
Sample rate in the range (0, 1]. |
GoogleCloudAiplatformV1SavedQuery
A SavedQuery is a view of the dataset. It references a subset of annotations by problem type and filters.Fields | |
---|---|
annotationFilter |
Output only. Filters on the Annotations in the dataset. |
annotationSpecCount |
Output only. Number of AnnotationSpecs in the context of the SavedQuery. |
createTime |
Output only. Timestamp when this SavedQuery was created. |
displayName |
Required. The user-defined name of the SavedQuery. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
etag |
Used to perform a consistent read-modify-write update. If not set, a blind "overwrite" update happens. |
metadata |
Some additional information about the SavedQuery. |
name |
Output only. Resource name of the SavedQuery. |
problemType |
Required. Problem type of the SavedQuery. Allowed values: * IMAGE_CLASSIFICATION_SINGLE_LABEL * IMAGE_CLASSIFICATION_MULTI_LABEL * IMAGE_BOUNDING_POLY * IMAGE_BOUNDING_BOX * TEXT_CLASSIFICATION_SINGLE_LABEL * TEXT_CLASSIFICATION_MULTI_LABEL * TEXT_EXTRACTION * TEXT_SENTIMENT * VIDEO_CLASSIFICATION * VIDEO_OBJECT_TRACKING |
supportAutomlTraining |
Output only. If the Annotations belonging to the SavedQuery can be used for AutoML training. |
updateTime |
Output only. Timestamp when SavedQuery was last updated. |
GoogleCloudAiplatformV1Scalar
One point viewable on a scalar metric plot.Fields | |
---|---|
value |
Value of the point at this step / timestamp. |
GoogleCloudAiplatformV1Schedule
An instance of a Schedule periodically schedules runs to make API calls based on a user-specified time specification and API request type.Fields | |
---|---|
allowQueueing |
Optional. Whether new scheduled runs can be queued when the max_concurrent_runs limit is reached. If set to true, new runs will be queued instead of skipped. Defaults to false. |
catchUp |
Output only. Whether to backfill missed runs when the schedule is resumed from PAUSED state. If set to true, all missed runs will be scheduled. New runs will be scheduled after the backfill is complete. Defaults to false. |
createPipelineJobRequest |
Request for PipelineService.CreatePipelineJob. CreatePipelineJobRequest.parent field is required (format: projects/{project}/locations/{location}). |
createTime |
Output only. Timestamp when this Schedule was created. |
cron |
Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone for the cron expression, apply a prefix: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *" or "TZ=America/New_York 1 * * * *". |
displayName |
Required. User provided name of the Schedule. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
endTime |
Optional. Timestamp after which no new runs can be scheduled. If specified, the schedule will be completed when either end_time is reached or when scheduled_run_count >= max_run_count. If not specified, new runs will keep getting scheduled until this Schedule is paused or deleted. Already scheduled runs will be allowed to complete. Unset if not specified. |
lastPauseTime |
Output only. Timestamp when this Schedule was last paused. Unset if never paused. |
lastResumeTime |
Output only. Timestamp when this Schedule was last resumed. Unset if never resumed from pause. |
lastScheduledRunResponse |
Output only. Response of the last scheduled run. This is the response for starting the scheduled requests and not the execution of the operations/jobs created by the requests (if applicable). Unset if no run has been scheduled yet. |
maxConcurrentRunCount |
Required. Maximum number of runs that can be started concurrently for this Schedule. This is the limit for starting the scheduled requests and not the execution of the operations/jobs created by the requests (if applicable). |
maxRunCount |
Optional. Maximum run count of the schedule. If specified, the schedule will be completed when either started_run_count >= max_run_count or when end_time is reached. If not specified, new runs will keep getting scheduled until this Schedule is paused or deleted. Already scheduled runs will be allowed to complete. Unset if not specified. |
name |
Immutable. The resource name of the Schedule. |
nextRunTime |
Output only. Timestamp when this Schedule should schedule the next run. Having a next_run_time in the past means the runs are being started behind schedule. |
startTime |
Optional. Timestamp after which the first run can be scheduled. Default to Schedule create time if not specified. |
startedRunCount |
Output only. The number of runs started by this schedule. |
state |
Output only. The state of this Schedule. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
Unspecified. |
ACTIVE |
The Schedule is active. Runs are being scheduled on the user-specified timespec. |
PAUSED |
The schedule is paused. No new runs will be created until the schedule is resumed. Already started runs will be allowed to complete. |
COMPLETED |
The Schedule is completed. No new runs will be scheduled. Already started runs will be allowed to complete. Schedules in completed state cannot be paused or resumed. |
updateTime |
Output only. Timestamp when this Schedule was updated. |
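The optional `CRON_TZ=`/`TZ=` timezone prefix on the cron field can be split off like this. A parsing sketch only, with a hypothetical helper name, not part of any official client:

```python
def split_cron(cron):
    """Split an optional CRON_TZ=/TZ= prefix from a cron expression,
    returning (timezone_or_None, list_of_5_cron_fields)."""
    tz = None
    if cron.startswith(("CRON_TZ=", "TZ=")):
        prefix, _, rest = cron.partition(" ")
        tz = prefix.split("=", 1)[1]   # the IANA time zone name
        cron = rest
    fields = cron.split()
    if len(fields) != 5:
        raise ValueError("expected 5 cron fields, got %d" % len(fields))
    return tz, fields
```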
GoogleCloudAiplatformV1ScheduleRunResponse
Status of a scheduled run.Fields | |
---|---|
runResponse |
The response of the scheduled run. |
scheduledRunTime |
The scheduled run time based on the user-specified schedule. |
GoogleCloudAiplatformV1Scheduling
All parameters related to queuing and scheduling of custom jobs.Fields | |
---|---|
disableRetries |
Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false. |
restartJobOnWorkerRestart |
Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job. |
timeout |
The maximum job running time. The default is 7 days. |
GoogleCloudAiplatformV1Schema
Schema is used to define the format of input/output data. Represents a select subset of an OpenAPI 3.0 schema object. More fields may be added in the future as needed.Fields | |
---|---|
default |
Optional. Default value of the data. |
description |
Optional. The description of the data. |
enum[] |
Optional. Possible values of the element of Type.STRING with enum format. For example, we can define an Enum Direction as: {type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]} |
example |
Optional. Example of the object. Will only be populated when the object is the root. |
format |
Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc |
items |
Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. |
maxItems |
Optional. Maximum number of the elements for Type.ARRAY. |
maxLength |
Optional. Maximum length of the Type.STRING |
maxProperties |
Optional. Maximum number of the properties for Type.OBJECT. |
maximum |
Optional. Maximum value of the Type.INTEGER and Type.NUMBER |
minItems |
Optional. Minimum number of the elements for Type.ARRAY. |
minLength |
Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING |
minProperties |
Optional. Minimum number of the properties for Type.OBJECT. |
minimum |
Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER |
nullable |
Optional. Indicates if the value may be null. |
pattern |
Optional. Pattern of the Type.STRING to restrict a string to a regular expression. |
properties |
Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. |
required[] |
Optional. Required properties of Type.OBJECT. |
title |
Optional. The title of the Schema. |
type |
Optional. The type of the data. |
Enum type. Can be one of the following: | |
TYPE_UNSPECIFIED |
Not specified, should not be used. |
STRING |
OpenAPI string type |
NUMBER |
OpenAPI number type |
INTEGER |
OpenAPI integer type |
BOOLEAN |
OpenAPI boolean type |
ARRAY |
OpenAPI array type |
OBJECT |
OpenAPI object type |
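For illustration, the enum example from the `enum[]` field above can be written out as the JSON shape this Schema message accepts (field names per the table; an illustrative payload, not an official sample):

```python
import json

# Schema for a string restricted to four compass directions, as in the
# enum field's example above.
direction_schema = {
    "type": "STRING",
    "format": "enum",
    "enum": ["EAST", "NORTH", "SOUTH", "WEST"],
    "description": "A compass direction.",
    "nullable": False,
}

encoded = json.dumps(direction_schema)
```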
GoogleCloudAiplatformV1SchemaAnnotationSpecColor
An entry of mapping between color and AnnotationSpec. The mapping is used in segmentation mask.Fields | |
---|---|
color |
The color of the AnnotationSpec in a segmentation mask. |
displayName |
The display name of the AnnotationSpec represented by the color in the segmentation mask. |
id |
The ID of the AnnotationSpec represented by the color in the segmentation mask. |
GoogleCloudAiplatformV1SchemaImageBoundingBoxAnnotation
Annotation details specific to image object detection.Fields | |
---|---|
annotationSpecId |
The resource Id of the AnnotationSpec that this Annotation pertains to. |
displayName |
The display name of the AnnotationSpec that this Annotation pertains to. |
xMax |
The rightmost coordinate of the bounding box. |
xMin |
The leftmost coordinate of the bounding box. |
yMax |
The bottommost coordinate of the bounding box. |
yMin |
The topmost coordinate of the bounding box. |
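The four coordinates above define an axis-aligned box. A small helper (hypothetical, not part of the API) that validates the min/max ordering and computes the box area:

```python
def bounding_box_area(x_min, y_min, x_max, y_max):
    """Area of the box defined by SchemaImageBoundingBoxAnnotation
    coordinates; xMin/yMin must not exceed xMax/yMax."""
    if x_min > x_max or y_min > y_max:
        raise ValueError("min coordinate exceeds max coordinate")
    return (x_max - x_min) * (y_max - y_min)
```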
GoogleCloudAiplatformV1SchemaImageClassificationAnnotation
Annotation details specific to image classification.Fields | |
---|---|
annotationSpecId |
The resource Id of the AnnotationSpec that this Annotation pertains to. |
displayName |
The display name of the AnnotationSpec that this Annotation pertains to. |
GoogleCloudAiplatformV1SchemaImageDataItem
Payload of Image DataItem.Fields | |
---|---|
gcsUri |
Required. Google Cloud Storage URI that points to the original image in the user's bucket. The image is up to 30MB in size. |
mimeType |
Output only. The mime type of the content of the image. Only the images in below listed mime types are supported. - image/jpeg - image/gif - image/png - image/webp - image/bmp - image/tiff - image/vnd.microsoft.icon |
GoogleCloudAiplatformV1SchemaImageDatasetMetadata
The metadata of Datasets that contain Image DataItems.Fields | |
---|---|
dataItemSchemaUri |
Points to a YAML file stored on Google Cloud Storage describing payload of the Image DataItems that belong to this Dataset. |
gcsBucket |
Google Cloud Storage Bucket name that contains the blob data of this Dataset. |
GoogleCloudAiplatformV1SchemaImageSegmentationAnnotation
Annotation details specific to image segmentation.Fields | |
---|---|
maskAnnotation |
Mask based segmentation annotation. Only one mask annotation can exist for one image. |
polygonAnnotation |
Polygon annotation. |
polylineAnnotation |
Polyline annotation. |
GoogleCloudAiplatformV1SchemaImageSegmentationAnnotationMaskAnnotation
The mask based segmentation annotation.Fields | |
---|---|
annotationSpecColors[] |
The mapping between color and AnnotationSpec for this Annotation. |
maskGcsUri |
Google Cloud Storage URI that points to the mask image. The image must be in PNG format. It must have the same size as the DataItem's image. Each pixel in the image mask represents the AnnotationSpec which the pixel in the image DataItem belongs to. Each color is mapped to one AnnotationSpec based on annotation_spec_colors. |
GoogleCloudAiplatformV1SchemaImageSegmentationAnnotationPolygonAnnotation
Represents a polygon in image.Fields | |
---|---|
annotationSpecId |
The resource Id of the AnnotationSpec that this Annotation pertains to. |
displayName |
The display name of the AnnotationSpec that this Annotation pertains to. |
vertexes[] |
The vertexes are connected one by one and the last vertex is connected to the first one to represent a polygon. |
GoogleCloudAiplatformV1SchemaImageSegmentationAnnotationPolylineAnnotation
Represents a polyline in image.Fields | |
---|---|
annotationSpecId |
The resource Id of the AnnotationSpec that this Annotation pertains to. |
displayName |
The display name of the AnnotationSpec that this Annotation pertains to. |
vertexes[] |
The vertexes are connected one by one and the last vertex is not connected to the first one. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsBoundingBoxMetrics
Bounding box matching model metrics for a single intersection-over-union threshold and multiple label match confidence thresholds.Fields | |
---|---|
confidenceMetrics[] |
Metrics for each label-match confidence_threshold from 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99. Precision-recall curve is derived from them. |
iouThreshold |
The intersection-over-union threshold value used to compute this metrics entry. |
meanAveragePrecision |
The mean average precision, most often close to auPrc. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsBoundingBoxMetricsConfidenceMetrics
Metrics for a single confidence threshold.Fields | |
---|---|
confidenceThreshold |
The confidence threshold value used to compute the metrics. |
f1Score |
The harmonic mean of recall and precision. |
precision |
Precision under the given confidence threshold. |
recall |
Recall under the given confidence threshold. |
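The `f1Score` fields throughout these metrics messages are the harmonic mean of precision and recall, which can be computed directly:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (0.0 when both are zero)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```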
GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetrics
Metrics for classification evaluation results.Fields | |
---|---|
auPrc |
The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation. |
auRoc |
The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation. |
confidenceMetrics[] |
Metrics for each confidenceThreshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99. Precision-recall curves can be derived from them. |
confusionMatrix |
Confusion matrix of the evaluation. |
logLoss |
The Log Loss metric. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics
(No description provided)Fields | |
---|---|
confidenceThreshold |
Metrics are computed with an assumption that the Model never returns predictions with score lower than this value. |
confusionMatrix |
Confusion matrix of the evaluation for this confidence_threshold. |
f1Score |
The harmonic mean of recall and precision. For summary metrics, it computes the micro-averaged F1 score. |
f1ScoreAt1 |
The harmonic mean of recallAt1 and precisionAt1. |
f1ScoreMacro |
Macro-averaged F1 Score. |
f1ScoreMicro |
Micro-averaged F1 Score. |
falseNegativeCount |
The number of ground truth labels that are not matched by a Model created label. |
falsePositiveCount |
The number of Model created labels that do not match a ground truth label. |
falsePositiveRate |
False Positive Rate for the given confidence threshold. |
falsePositiveRateAt1 |
The False Positive Rate when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem. |
maxPredictions |
Metrics are computed with an assumption that the Model always returns at most this many predictions (ordered by their score, descendingly), but they all still need to meet the confidenceThreshold. |
precision |
Precision for the given confidence threshold. |
precisionAt1 |
The precision when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem. |
recall |
Recall (True Positive Rate) for the given confidence threshold. |
recallAt1 |
The Recall (True Positive Rate) when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem. |
trueNegativeCount |
The number of labels that were not created by the Model, but if they would, they would not match a ground truth label. |
truePositiveCount |
The number of Model created labels that match a ground truth label. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix
(No description provided)Fields | |
---|---|
annotationSpecs[] |
AnnotationSpecs used in the confusion matrix. For AutoML Text Extraction, a special negative AnnotationSpec with empty |
rows[] |
Rows in the confusion matrix. The number of rows is equal to the size of annotationSpecs. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrixAnnotationSpecRef
(No description provided)Fields | |
---|---|
displayName |
Display name of the AnnotationSpec. |
id |
ID of the AnnotationSpec. |
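Assuming the usual confusion-matrix layout (rows are ground-truth AnnotationSpecs, columns are predictions), per-class recall and precision fall out of the row and column sums. A sketch under that layout assumption:

```python
def per_class_metrics(matrix):
    """Per-class recall (diagonal / row sum) and precision
    (diagonal / column sum) from a square confusion matrix whose
    rows are ground truth and columns are predictions."""
    n = len(matrix)
    recalls, precisions = [], []
    for i in range(n):
        row_sum = sum(matrix[i])
        col_sum = sum(matrix[r][i] for r in range(n))
        recalls.append(matrix[i][i] / row_sum if row_sum else 0.0)
        precisions.append(matrix[i][i] / col_sum if col_sum else 0.0)
    return recalls, precisions
```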
GoogleCloudAiplatformV1SchemaModelevaluationMetricsForecastingEvaluationMetrics
Metrics for forecasting evaluation results.Fields | |
---|---|
meanAbsoluteError |
Mean Absolute Error (MAE). |
meanAbsolutePercentageError |
Mean absolute percentage error. Infinity when there are zeros in the ground truth. |
quantileMetrics[] |
The quantile metrics entries for each quantile. |
rSquared |
Coefficient of determination as Pearson correlation coefficient. Undefined when ground truth or predictions are constant or near constant. |
rootMeanSquaredError |
Root Mean Squared Error (RMSE). |
rootMeanSquaredLogError |
Root mean squared log error. Undefined when there are negative ground truth values or predictions. |
rootMeanSquaredPercentageError |
Root Mean Square Percentage Error. Square root of MSPE. Undefined/imaginary when MSPE is negative. |
weightedAbsolutePercentageError |
Weighted Absolute Percentage Error. Despite the name, it does not use weights. Undefined if the actual values sum to zero. Will be very large if the actual values sum to a very small number. |
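A few of these error metrics, computed directly from their textbook definitions. A plain-Python sketch; the service's exact aggregation may differ:

```python
import math

def regression_metrics(actuals, predictions):
    """MAE, RMSE and WAPE from their standard definitions."""
    n = len(actuals)
    abs_errors = [abs(a - p) for a, p in zip(actuals, predictions)]
    mae = sum(abs_errors) / n
    rmse = math.sqrt(sum(e * e for e in abs_errors) / n)
    # WAPE: total absolute error over total absolute actuals; raises
    # ZeroDivisionError when the actual values sum to zero (the metric
    # is undefined in that case, per the description above).
    wape = sum(abs_errors) / sum(abs(a) for a in actuals)
    return mae, rmse, wape
```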
GoogleCloudAiplatformV1SchemaModelevaluationMetricsForecastingEvaluationMetricsQuantileMetricsEntry
Entry for the Quantiles loss type optimization objective.Fields | |
---|---|
observedQuantile |
This is a custom metric that calculates the percentage of true values that were less than the predicted value for that quantile. Only populated when optimization_objective is minimize-quantile-loss; each entry corresponds to an entry in quantiles. The percent value can be used to compare with the quantile value, which is the target value. |
quantile |
The quantile for this entry. |
scaledPinballLoss |
The scaled pinball loss of this quantile. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsGeneralTextGenerationEvaluationMetrics
(No description provided)Fields | |
---|---|
bleu |
BLEU (bilingual evaluation understudy) scores based on sacrebleu implementation. |
rougeLSum |
ROUGE-L (Longest Common Subsequence) scoring at summary level. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsImageObjectDetectionEvaluationMetrics
Metrics for image object detection evaluation results.Fields | |
---|---|
boundingBoxMeanAveragePrecision |
The single metric for bounding boxes evaluation: the meanAveragePrecision averaged over all boundingBoxMetrics entries. |
boundingBoxMetrics[] |
The bounding boxes match metrics for each intersection-over-union threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and each label confidence threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 pair. |
evaluatedBoundingBoxCount |
The total number of bounding boxes (summed over all images) in the ground truth used to create this evaluation. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsImageSegmentationEvaluationMetrics
Metrics for image segmentation evaluation results.Fields | |
---|---|
confidenceMetricsEntries[] |
Metrics for each confidenceThreshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 Precision-recall curve can be derived from it. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsImageSegmentationEvaluationMetricsConfidenceMetricsEntry
(No description provided)Fields | |
---|---|
confidenceThreshold |
Metrics are computed with an assumption that the model never returns predictions with score lower than this value. |
confusionMatrix |
Confusion matrix for the given confidence threshold. |
diceScoreCoefficient |
DSC or the F1 score: the harmonic mean of recall and precision. |
iouScore |
The intersection-over-union score. The measure of overlap of the annotation's category mask with ground truth category mask on the DataItem. |
precision |
Precision for the given confidence threshold. |
recall |
Recall (True Positive Rate) for the given confidence threshold. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics
Metrics for general pairwise text generation evaluation results.Fields | |
---|---|
accuracy |
Fraction of cases where the autorater agreed with the human raters. |
baselineModelWinRate |
Percentage of time the autorater decided the baseline model had the better response. |
cohensKappa |
A measurement of agreement between the autorater and human raters that takes the likelihood of random agreement into account. |
f1Score |
Harmonic mean of precision and recall. |
falseNegativeCount |
Number of examples where the autorater chose the baseline model, but humans preferred the model. |
falsePositiveCount |
Number of examples where the autorater chose the model, but humans preferred the baseline model. |
humanPreferenceBaselineModelWinRate |
Percentage of time humans decided the baseline model had the better response. |
humanPreferenceModelWinRate |
Percentage of time humans decided the model had the better response. |
modelWinRate |
Percentage of time the autorater decided the model had the better response. |
precision |
Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the autorater thought the model had a better response. True positive divided by all positive. |
recall |
Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the humans thought the model had a better response. |
trueNegativeCount |
Number of examples where both the autorater and humans decided that the model had the worse response. |
truePositiveCount |
Number of examples where both the autorater and humans decided that the model had the better response. |
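The four count fields above map onto the usual classification identities when "autorater prefers the model" is treated as the positive class. A sketch under that convention:

```python
def pairwise_agreement_metrics(tp, fp, fn, tn):
    """accuracy / precision / recall from the four count fields,
    with 'autorater prefers the model' as the positive class:
    tp=truePositiveCount, fp=falsePositiveCount,
    fn=falseNegativeCount, tn=trueNegativeCount."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall
```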
GoogleCloudAiplatformV1SchemaModelevaluationMetricsQuestionAnsweringEvaluationMetrics
(No description provided)Fields | |
---|---|
exactMatch |
The rate at which the input predicted strings exactly match their references. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsRegressionEvaluationMetrics
Metrics for regression evaluation results.Fields | |
---|---|
meanAbsoluteError |
Mean Absolute Error (MAE). |
meanAbsolutePercentageError |
Mean absolute percentage error. Infinity when there are zeros in the ground truth. |
rSquared |
Coefficient of determination as Pearson correlation coefficient. Undefined when ground truth or predictions are constant or near constant. |
rootMeanSquaredError |
Root Mean Squared Error (RMSE). |
rootMeanSquaredLogError |
Root mean squared log error. Undefined when there are negative ground truth values or predictions. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsSummarizationEvaluationMetrics
(No description provided)Fields | |
---|---|
rougeLSum |
ROUGE-L (Longest Common Subsequence) scoring at summary level. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsTextExtractionEvaluationMetrics
Metrics for text extraction evaluation results.Fields | |
---|---|
confidenceMetrics[] |
Metrics that have confidence thresholds. Precision-recall curve can be derived from them. |
confusionMatrix |
Confusion matrix of the evaluation. Only set for Models where number of AnnotationSpecs is no more than 10. Only set for ModelEvaluations, not for ModelEvaluationSlices. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsTextExtractionEvaluationMetricsConfidenceMetrics
(No description provided)Fields | |
---|---|
confidenceThreshold |
Metrics are computed with an assumption that the Model never returns predictions with score lower than this value. |
f1Score |
The harmonic mean of recall and precision. |
precision |
Precision for the given confidence threshold. |
recall |
Recall (True Positive Rate) for the given confidence threshold. |
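The f1Score field above is defined as the harmonic mean of the precision and recall rows. A minimal sketch of that computation (the zero-division guard is an assumption; the schema does not say what the field holds when both inputs are 0):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall at one confidence threshold."""
    if precision + recall == 0:
        return 0.0  # assumed convention when precision and recall are both 0
    return 2 * precision * recall / (precision + recall)
```

For example, a threshold with precision 1.0 and recall 0.5 yields an F1 of 2/3, which is lower than the arithmetic mean because the harmonic mean penalizes imbalance.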
GoogleCloudAiplatformV1SchemaModelevaluationMetricsTextSentimentEvaluationMetrics
Model evaluation metrics for text sentiment problems.Fields | |
---|---|
confusionMatrix |
Confusion matrix of the evaluation. Only set for ModelEvaluations, not for ModelEvaluationSlices. |
f1Score |
The harmonic mean of recall and precision. |
linearKappa |
Linear weighted kappa. Only set for ModelEvaluations, not for ModelEvaluationSlices. |
meanAbsoluteError |
Mean absolute error. Only set for ModelEvaluations, not for ModelEvaluationSlices. |
meanSquaredError |
Mean squared error. Only set for ModelEvaluations, not for ModelEvaluationSlices. |
precision |
Precision. |
quadraticKappa |
Quadratic weighted kappa. Only set for ModelEvaluations, not for ModelEvaluationSlices. |
recall |
Recall. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsTrackMetrics
UNIMPLEMENTED. Track matching model metrics for a single track match threshold and multiple label match confidence thresholds.Fields | |
---|---|
confidenceMetrics[] |
Metrics for each label-match confidence_threshold from 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99. |
iouThreshold |
The intersection-over-union threshold value between bounding boxes across frames used to compute this metric entry. |
meanBoundingBoxIou |
The mean bounding box iou over all confidence thresholds. |
meanMismatchRate |
The mean mismatch rate over all confidence thresholds. |
meanTrackingAveragePrecision |
The mean average precision over all confidence thresholds. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsTrackMetricsConfidenceMetrics
Metrics for a single confidence threshold.Fields | |
---|---|
boundingBoxIou |
Bounding box intersection-over-union precision. Measures how well the bounding boxes overlap between each other (e.g. complete overlap or just barely above iou_threshold). |
confidenceThreshold |
The confidence threshold value used to compute the metrics. |
mismatchRate |
Mismatch rate, which measures the tracking consistency, i.e. correctness of instance ID continuity. |
trackingPrecision |
Tracking precision. |
trackingRecall |
Tracking recall. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsVideoActionMetrics
The Evaluation metrics given a specific precision_window_length.Fields | |
---|---|
confidenceMetrics[] |
Metrics for each label-match confidence_threshold from 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99. |
meanAveragePrecision |
The mean average precision. |
precisionWindowLength |
This VideoActionMetrics is calculated based on this prediction window length. If the predicted action's timestamp is inside the time window whose center is the ground truth action's timestamp with this specific length, the prediction result is treated as a true positive. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsVideoActionMetricsConfidenceMetrics
Metrics for a single confidence threshold.Fields | |
---|---|
confidenceThreshold |
Output only. The confidence threshold value used to compute the metrics. |
f1Score |
Output only. The harmonic mean of recall and precision. |
precision |
Output only. Precision for the given confidence threshold. |
recall |
Output only. Recall for the given confidence threshold. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsVideoActionRecognitionMetrics
Model evaluation metrics for video action recognition.Fields | |
---|---|
evaluatedActionCount |
The number of ground truth actions used to create this evaluation. |
videoActionMetrics[] |
The metric entries for precision window lengths: 1s,2s,3s. |
GoogleCloudAiplatformV1SchemaModelevaluationMetricsVideoObjectTrackingMetrics
Model evaluation metrics for video object tracking problems. Evaluates prediction quality of both labeled bounding boxes and labeled tracks (i.e. series of bounding boxes sharing same label and instance ID).Fields | |
---|---|
boundingBoxMeanAveragePrecision |
The single metric for bounding box evaluation: the meanAveragePrecision averaged over all boundingBoxMetrics entries. |
boundingBoxMetrics[] |
The bounding boxes match metrics for each intersection-over-union threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and each label confidence threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 pair. |
evaluatedBoundingBoxCount |
UNIMPLEMENTED. The total number of bounding boxes (i.e. summed over all frames) in the ground truth used to create this evaluation. |
evaluatedFrameCount |
UNIMPLEMENTED. The number of video frames used to create this evaluation. |
evaluatedTrackCount |
UNIMPLEMENTED. The total number of tracks (i.e. as seen across all frames) in the ground truth used to create this evaluation. |
trackMeanAveragePrecision |
UNIMPLEMENTED. The single metric for track accuracy evaluation: the meanTrackingAveragePrecision averaged over all trackMetrics entries. |
trackMeanBoundingBoxIou |
UNIMPLEMENTED. The single metric for track bounding box IoU evaluation: the meanBoundingBoxIou averaged over all trackMetrics entries. |
trackMeanMismatchRate |
UNIMPLEMENTED. The single metric for tracking consistency evaluation: the meanMismatchRate averaged over all trackMetrics entries. |
trackMetrics[] |
UNIMPLEMENTED. The tracks match metrics for each intersection-over-union threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and each label confidence threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 pair. |
GoogleCloudAiplatformV1SchemaPredictInstanceImageClassificationPredictionInstance
Prediction input format for Image Classification.Fields | |
---|---|
content |
The image bytes or Cloud Storage URI to make the prediction on. |
mimeType |
The MIME type of the content of the image. Only images in the MIME types listed below are supported. - image/jpeg - image/gif - image/png - image/webp - image/bmp - image/tiff - image/vnd.microsoft.icon |
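As a sketch, one way to assemble such an instance in Python, base64-encoding the image bytes into content (the helper name and the client-side MIME check are illustrative assumptions, not part of the API):

```python
import base64

# MIME types taken from the schema above.
SUPPORTED_IMAGE_MIME_TYPES = {
    "image/jpeg", "image/gif", "image/png", "image/webp",
    "image/bmp", "image/tiff", "image/vnd.microsoft.icon",
}

def make_image_classification_instance(image_bytes: bytes, mime_type: str) -> dict:
    """Build one prediction instance; content may alternatively be a Cloud Storage URI string."""
    if mime_type not in SUPPORTED_IMAGE_MIME_TYPES:
        raise ValueError(f"unsupported MIME type: {mime_type}")
    return {
        "content": base64.b64encode(image_bytes).decode("utf-8"),
        "mimeType": mime_type,
    }

instance = make_image_classification_instance(b"\x89PNG", "image/png")
```

The same shape applies to the object detection instance below, which shares both fields.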
GoogleCloudAiplatformV1SchemaPredictInstanceImageObjectDetectionPredictionInstance
Prediction input format for Image Object Detection.Fields | |
---|---|
content |
The image bytes or Cloud Storage URI to make the prediction on. |
mimeType |
The MIME type of the content of the image. Only images in the MIME types listed below are supported. - image/jpeg - image/gif - image/png - image/webp - image/bmp - image/tiff - image/vnd.microsoft.icon |
GoogleCloudAiplatformV1SchemaPredictInstanceImageSegmentationPredictionInstance
Prediction input format for Image Segmentation.Fields | |
---|---|
content |
The image bytes to make the predictions on. |
mimeType |
The MIME type of the content of the image. Only images in the MIME types listed below are supported. - image/jpeg - image/png |
GoogleCloudAiplatformV1SchemaPredictInstanceTextClassificationPredictionInstance
Prediction input format for Text Classification.Fields | |
---|---|
content |
The text snippet to make the predictions on. |
mimeType |
The MIME type of the text snippet. The supported MIME types are listed below. - text/plain |
GoogleCloudAiplatformV1SchemaPredictInstanceTextExtractionPredictionInstance
Prediction input format for Text Extraction.Fields | |
---|---|
content |
The text snippet to make the predictions on. |
key |
This field is only used for batch prediction. If a key is provided, the batch prediction result will be mapped to this key. If omitted, the batch prediction result will contain the entire input instance. Vertex AI will not check whether keys in the request are duplicates, so it is up to the caller to ensure the keys are unique. |
mimeType |
The MIME type of the text snippet. The supported MIME types are listed below. - text/plain |
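Since the service does not deduplicate keys, a client-side sketch that builds batch instances and enforces key uniqueness before the request goes out (the helper name is hypothetical):

```python
def make_text_extraction_instances(keyed_snippets):
    """keyed_snippets: list of (key, text) pairs for batch prediction.

    The schema warns that Vertex AI does not check for duplicate keys,
    so uniqueness is enforced here before the request is built.
    """
    keys = [key for key, _ in keyed_snippets]
    if len(set(keys)) != len(keys):
        raise ValueError("duplicate keys would silently collide in the output")
    return [
        {"content": text, "mimeType": "text/plain", "key": key}
        for key, text in keyed_snippets
    ]

instances = make_text_extraction_instances(
    [("doc-1", "Alice met Bob."), ("doc-2", "Carol waved.")]
)
```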
GoogleCloudAiplatformV1SchemaPredictInstanceTextSentimentPredictionInstance
Prediction input format for Text Sentiment.Fields | |
---|---|
content |
The text snippet to make the predictions on. |
mimeType |
The MIME type of the text snippet. The supported MIME types are listed below. - text/plain |
GoogleCloudAiplatformV1SchemaPredictInstanceVideoActionRecognitionPredictionInstance
Prediction input format for Video Action Recognition.Fields | |
---|---|
content |
The Google Cloud Storage location of the video on which to perform the prediction. |
mimeType |
The MIME type of the content of the video. Only the following are supported: - video/mp4 - video/avi - video/quicktime |
timeSegmentEnd |
The end, exclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision, and "inf" or "Infinity" is allowed, which means the end of the video. |
timeSegmentStart |
The beginning, inclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision. |
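Both time fields share one format: seconds from the start of the video with "s" appended, with "inf" allowed for the exclusive end. A small sketch of building such an instance (helper name assumed):

```python
def make_video_instance(gcs_uri, mime_type="video/mp4",
                        start_seconds=0.0, end_seconds=None):
    """Build a video prediction instance; end_seconds=None means end of video."""
    return {
        "content": gcs_uri,
        "mimeType": mime_type,
        "timeSegmentStart": f"{start_seconds}s",
        "timeSegmentEnd": "inf" if end_seconds is None else f"{end_seconds}s",
    }

clip = make_video_instance("gs://my-bucket/clip.mp4", start_seconds=1.5, end_seconds=10.0)
whole = make_video_instance("gs://my-bucket/clip.mp4")
```

The video classification and object tracking instances below use the same four fields, so the same sketch applies to them.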
GoogleCloudAiplatformV1SchemaPredictInstanceVideoClassificationPredictionInstance
Prediction input format for Video Classification.Fields | |
---|---|
content |
The Google Cloud Storage location of the video on which to perform the prediction. |
mimeType |
The MIME type of the content of the video. Only the following are supported: - video/mp4 - video/avi - video/quicktime |
timeSegmentEnd |
The end, exclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision, and "inf" or "Infinity" is allowed, which means the end of the video. |
timeSegmentStart |
The beginning, inclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision. |
GoogleCloudAiplatformV1SchemaPredictInstanceVideoObjectTrackingPredictionInstance
Prediction input format for Video Object Tracking.Fields | |
---|---|
content |
The Google Cloud Storage location of the video on which to perform the prediction. |
mimeType |
The MIME type of the content of the video. Only the following are supported: - video/mp4 - video/avi - video/quicktime |
timeSegmentEnd |
The end, exclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision, and "inf" or "Infinity" is allowed, which means the end of the video. |
timeSegmentStart |
The beginning, inclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision. |
GoogleCloudAiplatformV1SchemaPredictParamsGroundingConfig
The configuration for grounding checking.Fields | |
---|---|
disableAttribution |
If set, skip finding claim attributions (i.e., do not generate grounding citations). |
sources[] |
The sources for the grounding checking. |
GoogleCloudAiplatformV1SchemaPredictParamsGroundingConfigSourceEntry
Single source entry for the grounding checking.Fields | |
---|---|
enterpriseDatastore |
The URI of the Vertex AI Search data source. Deprecated. Use vertex_ai_search_datastore instead. |
inlineContext |
The grounding text passed inline with the Predict API. It can support up to 1 million bytes. |
type |
The type of the grounding checking source. |
Enum type. Can be one of the following: | |
UNSPECIFIED |
(No description provided) |
WEB |
Uses Web Search to check the grounding. |
ENTERPRISE |
Uses Vertex AI Search to check the grounding. Deprecated. Use VERTEX_AI_SEARCH instead. |
VERTEX_AI_SEARCH |
Uses Vertex AI Search to check the grounding. |
INLINE |
Uses inline context to check the grounding. |
vertexAiSearchDatastore |
The URI of the Vertex AI Search data source. |
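A sketch of a grounding configuration combining a Vertex AI Search source with inline context (the datastore path is a hypothetical placeholder, not a documented format):

```python
grounding_config = {
    "disableAttribution": False,  # keep grounding citations
    "sources": [
        {
            "type": "VERTEX_AI_SEARCH",
            "vertexAiSearchDatastore": "projects/my-project/locations/global/dataStores/my-store",
        },
        {
            "type": "INLINE",
            # Inline context is limited to 1 million bytes per the schema.
            "inlineContext": "Reference passage the model should ground against.",
        },
    ],
}
```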
GoogleCloudAiplatformV1SchemaPredictParamsImageClassificationPredictionParams
Prediction model parameters for Image Classification.Fields | |
---|---|
confidenceThreshold |
The Model only returns predictions with at least this confidence score. Default value is 0.0 |
maxPredictions |
The Model returns up to this number of top predictions per instance, ranked by confidence score. If this number is very high, the Model may return fewer predictions. Default value is 10. |
GoogleCloudAiplatformV1SchemaPredictParamsImageObjectDetectionPredictionParams
Prediction model parameters for Image Object Detection.Fields | |
---|---|
confidenceThreshold |
The Model only returns predictions with at least this confidence score. Default value is 0.0 |
maxPredictions |
The Model returns up to this number of top predictions per instance, ranked by confidence score. Note that the number of returned predictions is also limited by the metadata's predictionsLimit. Default value is 10. |
GoogleCloudAiplatformV1SchemaPredictParamsImageSegmentationPredictionParams
Prediction model parameters for Image Segmentation.Fields | |
---|---|
confidenceThreshold |
When the model predicts the category of each pixel in the image, it only provides predictions for pixels about which it is at least this confident; all other pixels are classified as background. Default value is 0.5. |
GoogleCloudAiplatformV1SchemaPredictParamsVideoActionRecognitionPredictionParams
Prediction model parameters for Video Action Recognition.Fields | |
---|---|
confidenceThreshold |
The Model only returns predictions with at least this confidence score. Default value is 0.0 |
maxPredictions |
The Model returns up to this number of top predictions per frame of the video, ranked by confidence score. If this number is very high, the Model may return fewer predictions per frame. Default value is 50. |
GoogleCloudAiplatformV1SchemaPredictParamsVideoClassificationPredictionParams
Prediction model parameters for Video Classification.Fields | |
---|---|
confidenceThreshold |
The Model only returns predictions with at least this confidence score. Default value is 0.0 |
maxPredictions |
The Model returns up to this number of top predictions per instance, ranked by confidence score. If this number is very high, the Model may return fewer predictions. Default value is 10,000. |
oneSecIntervalClassification |
Set to true to request classification for the video at one-second intervals. Vertex AI returns labels and their confidence scores for each second of the entire time segment of the video that the user specified in the input instance. WARNING: Model evaluation is not done for this classification type; its quality depends on the training data, and there are no metrics provided to describe that quality. Default value is false. |
segmentClassification |
Set to true to request segment-level classification. Vertex AI returns labels and their confidence scores for the entire time segment of the video that the user specified in the input instance. Default value is true. |
shotClassification |
Set to true to request shot-level classification. Vertex AI determines the boundaries for each camera shot in the entire time segment of the video that the user specified in the input instance. Vertex AI then returns labels and their confidence scores for each detected shot, along with the start and end time of the shot. WARNING: Model evaluation is not done for this classification type; its quality depends on the training data, and there are no metrics provided to describe that quality. Default value is false. |
GoogleCloudAiplatformV1SchemaPredictParamsVideoObjectTrackingPredictionParams
Prediction model parameters for Video Object Tracking.Fields | |
---|---|
confidenceThreshold |
The Model only returns predictions with at least this confidence score. Default value is 0.0 |
maxPredictions |
The Model returns up to this number of top predictions per frame of the video, ranked by confidence score. If this number is very high, the Model may return fewer predictions per frame. Default value is 50. |
minBoundingBoxSize |
Only bounding boxes whose shortest edge is at least this long, measured relative to the video frame size, are returned. Default value is 0.0. |
GoogleCloudAiplatformV1SchemaPredictPredictionClassificationPredictionResult
Prediction output format for Image and Text Classification.Fields | |
---|---|
confidences[] |
The Model's confidences in the correctness of the predicted IDs; a higher value means higher confidence. Order matches the IDs. |
displayNames[] |
The display names of the AnnotationSpecs that had been identified, order matches the IDs. |
ids[] |
The resource IDs of the AnnotationSpecs that had been identified. |
GoogleCloudAiplatformV1SchemaPredictPredictionImageObjectDetectionPredictionResult
Prediction output format for Image Object Detection.Fields | |
---|---|
bboxes[] |
Bounding boxes, i.e. the rectangles over the image, that pinpoint the found AnnotationSpecs. Given in order that matches the IDs. Each bounding box is an array of 4 numbers xMin, xMax, yMin, and yMax, which represent the extremal coordinates of the box. They are relative to the image size, and the point 0,0 is in the top left of the image. |
confidences[] |
The Model's confidences in the correctness of the predicted IDs; a higher value means higher confidence. Order matches the IDs. |
displayNames[] |
The display names of the AnnotationSpecs that had been identified, order matches the IDs. |
ids[] |
The resource IDs of the AnnotationSpecs that had been identified, ordered by confidence score in descending order. |
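The result is four parallel arrays, already sorted by descending confidence. A sketch that zips them back into per-detection records and filters by a score cutoff (the helper name and sample values are illustrative):

```python
def top_detections(prediction: dict, min_confidence: float):
    """Pair up the parallel arrays and keep detections at or above the cutoff."""
    return [
        {"id": id_, "displayName": name, "confidence": score, "bbox": bbox}
        for id_, name, score, bbox in zip(
            prediction["ids"], prediction["displayNames"],
            prediction["confidences"], prediction["bboxes"],
        )
        if score >= min_confidence
    ]

sample = {
    "ids": ["123", "456"],
    "displayNames": ["cat", "dog"],
    "confidences": [0.92, 0.31],
    "bboxes": [[0.1, 0.4, 0.2, 0.5], [0.5, 0.9, 0.5, 0.9]],
}
kept = top_detections(sample, 0.5)
```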
GoogleCloudAiplatformV1SchemaPredictPredictionImageSegmentationPredictionResult
Prediction output format for Image Segmentation.Fields | |
---|---|
categoryMask |
A PNG image where each pixel in the mask represents the category to which the pixel in the original image was predicted to belong. The size of this image will be the same as the original image. The mapping between the AnnotationSpec and the color can be found in the model's metadata. The model will choose the most likely category, and if none of the categories reach the confidence threshold, the pixel will be marked as background. |
confidenceMask |
A one-channel image encoded as an 8-bit lossless PNG. The size of the image will be the same as the original image. For a specific pixel, a darker color means less confidence in the correctness of the category in the categoryMask for the corresponding pixel. Black means no confidence and white means complete confidence. |
GoogleCloudAiplatformV1SchemaPredictPredictionTabularClassificationPredictionResult
Prediction output format for Tabular Classification.Fields | |
---|---|
classes[] |
The name of the classes being classified, contains all possible values of the target column. |
scores[] |
The model's confidence in each class being correct, higher value means higher confidence. The N-th score corresponds to the N-th class in classes. |
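Because the N-th score corresponds to the N-th class, the predicted label is the argmax over scores. A minimal sketch (helper name assumed):

```python
def best_class(result: dict):
    """Return (class, score) for the highest-scoring class."""
    index = max(range(len(result["scores"])), key=result["scores"].__getitem__)
    return result["classes"][index], result["scores"][index]

label, score = best_class({"classes": ["no", "maybe", "yes"], "scores": [0.1, 0.2, 0.7]})
```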
GoogleCloudAiplatformV1SchemaPredictPredictionTabularRegressionPredictionResult
Prediction output format for Tabular Regression.Fields | |
---|---|
lowerBound |
The lower bound of the prediction interval. |
quantilePredictions[] |
Quantile predictions, in 1-1 correspondence with quantile_values. |
quantileValues[] |
Quantile values. |
upperBound |
The upper bound of the prediction interval. |
value |
The regression value. |
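Since quantilePredictions and quantileValues are in 1-1 correspondence, looking up the prediction for a given quantile is a zip-and-index. A sketch with illustrative numbers (the helper name is an assumption):

```python
def prediction_for_quantile(result: dict, quantile: float) -> float:
    """Look up the predicted value for one of the requested quantiles."""
    by_quantile = dict(zip(result["quantileValues"], result["quantilePredictions"]))
    return by_quantile[quantile]  # raises KeyError for a quantile not requested

sample = {
    "value": 100.0,
    "quantileValues": [0.1, 0.5, 0.9],
    "quantilePredictions": [82.0, 101.0, 127.0],
}
median_prediction = prediction_for_quantile(sample, 0.5)
```

The same pairing applies to the time series forecasting result below, which carries the same two arrays.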
GoogleCloudAiplatformV1SchemaPredictPredictionTextExtractionPredictionResult
Prediction output format for Text Extraction.Fields | |
---|---|
confidences[] |
The Model's confidences in the correctness of the predicted IDs; a higher value means higher confidence. Order matches the IDs. |
displayNames[] |
The display names of the AnnotationSpecs that had been identified, order matches the IDs. |
ids[] |
The resource IDs of the AnnotationSpecs that had been identified, ordered by confidence score in descending order. |
textSegmentEndOffsets[] |
The end offsets, inclusive, of the text segment in which the AnnotationSpec has been identified. Expressed as a zero-based number of characters as measured from the start of the text snippet. |
textSegmentStartOffsets[] |
The start offsets, inclusive, of the text segment in which the AnnotationSpec has been identified. Expressed as a zero-based number of characters as measured from the start of the text snippet. |
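Because both offsets are inclusive, recovering each extracted segment from the original snippet needs an end + 1 in the slice. A sketch (helper name and sample data are illustrative):

```python
def extracted_spans(text: str, result: dict):
    """Return (displayName, extracted text) pairs; both offsets are inclusive."""
    return [
        (name, text[start:end + 1])
        for name, start, end in zip(
            result["displayNames"],
            result["textSegmentStartOffsets"],
            result["textSegmentEndOffsets"],
        )
    ]

snippet = "Alice met Bob"
spans = extracted_spans(snippet, {
    "displayNames": ["person", "person"],
    "textSegmentStartOffsets": [0, 10],
    "textSegmentEndOffsets": [4, 12],
})
```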
GoogleCloudAiplatformV1SchemaPredictPredictionTextSentimentPredictionResult
Prediction output format for Text SentimentFields | |
---|---|
sentiment |
The integer sentiment label, between 0 (inclusive) and sentimentMax (inclusive), where 0 maps to the least positive sentiment and sentimentMax maps to the most positive one. The higher the score, the more positive the sentiment in the text snippet. Note: sentimentMax is an integer value between 1 (inclusive) and 10 (inclusive). |
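Because sentimentMax varies per model (between 1 and 10), comparing sentiment across models is easier on a normalized scale. A sketch mapping the label to [0, 1] (the helper and its range checks are assumptions, not part of the API):

```python
def normalized_sentiment(sentiment: int, sentiment_max: int) -> float:
    """Scale the integer label to [0, 1], where 1.0 is most positive."""
    if not 1 <= sentiment_max <= 10:
        raise ValueError("sentimentMax must be between 1 and 10 inclusive")
    if not 0 <= sentiment <= sentiment_max:
        raise ValueError("sentiment must be between 0 and sentimentMax inclusive")
    return sentiment / sentiment_max
```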
GoogleCloudAiplatformV1SchemaPredictPredictionTftFeatureImportance
(No description provided)Fields | |
---|---|
attributeColumns[] |
(No description provided) |
attributeWeights[] |
(No description provided) |
contextColumns[] |
(No description provided) |
contextWeights[] |
TFT feature importance values. Each pair for {context/horizon/attribute} should have the same shape since the weight corresponds to the column names. |
horizonColumns[] |
(No description provided) |
horizonWeights[] |
(No description provided) |
GoogleCloudAiplatformV1SchemaPredictPredictionTimeSeriesForecastingPredictionResult
Prediction output format for Time Series Forecasting.Fields | |
---|---|
quantilePredictions[] |
Quantile predictions, in 1-1 correspondence with quantile_values. |
quantileValues[] |
Quantile values. |
tftFeatureImportance |
Only use these if TFT (Temporal Fusion Transformer) is enabled. |
value |
The regression value. |
GoogleCloudAiplatformV1SchemaPredictPredictionVideoActionRecognitionPredictionResult
Prediction output format for Video Action Recognition.Fields | |
---|---|
confidence |
The Model's confidence in the correctness of this prediction; a higher value means higher confidence. |
displayName |
The display name of the AnnotationSpec that had been identified. |
id |
The resource ID of the AnnotationSpec that had been identified. |
timeSegmentEnd |
The end, exclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end. |
timeSegmentStart |
The beginning, inclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end. |
GoogleCloudAiplatformV1SchemaPredictPredictionVideoClassificationPredictionResult
Prediction output format for Video Classification.Fields | |
---|---|
confidence |
The Model's confidence in the correctness of this prediction; a higher value means higher confidence. |
displayName |
The display name of the AnnotationSpec that had been identified. |
id |
The resource ID of the AnnotationSpec that had been identified. |
timeSegmentEnd |
The end, exclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end. Note that for 'segment-classification' prediction type, this equals the original 'timeSegmentEnd' from the input instance, for other types it is the end of a shot or a 1 second interval respectively. |
timeSegmentStart |
The beginning, inclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end. Note that for 'segment-classification' prediction type, this equals the original 'timeSegmentStart' from the input instance, for other types it is the start of a shot or a 1 second interval respectively. |
type |
The type of the prediction. The requested types can be configured via parameters. This will be one of - segment-classification - shot-classification - one-sec-interval-classification |
GoogleCloudAiplatformV1SchemaPredictPredictionVideoObjectTrackingPredictionResult
Prediction output format for Video Object Tracking.Fields | |
---|---|
confidence |
The Model's confidence in the correctness of this prediction; a higher value means higher confidence. |
displayName |
The display name of the AnnotationSpec that had been identified. |
frames[] |
All of the frames of the video in which a single object instance has been detected. The bounding boxes in the frames identify the same object. |
id |
The resource ID of the AnnotationSpec that had been identified. |
timeSegmentEnd |
The end, inclusive, of the video's time segment in which the object instance has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end. |
timeSegmentStart |
The beginning, inclusive, of the video's time segment in which the object instance has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end. |
GoogleCloudAiplatformV1SchemaPredictPredictionVideoObjectTrackingPredictionResultFrame
The fields xMin, xMax, yMin, and yMax refer to a bounding box, i.e. the rectangle over the video frame pinpointing the found AnnotationSpec. The coordinates are relative to the frame size, and the point 0,0 is in the top left of the frame.
Fields | |
---|---|
timeOffset |
A time (frame) of a video in which the object has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end. |
xMax |
The rightmost coordinate of the bounding box. |
xMin |
The leftmost coordinate of the bounding box. |
yMax |
The bottommost coordinate of the bounding box. |
yMin |
The topmost coordinate of the bounding box. |
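Since the coordinates are relative to the frame size with (0, 0) at the top left, converting a frame's box to pixels is a scale by width and height. A sketch (helper name and sample values are illustrative):

```python
def frame_box_to_pixels(frame: dict, width: int, height: int) -> dict:
    """Convert the relative bounding box of one frame to pixel coordinates."""
    return {
        "left": frame["xMin"] * width,
        "right": frame["xMax"] * width,
        "top": frame["yMin"] * height,
        "bottom": frame["yMax"] * height,
    }

box = frame_box_to_pixels(
    {"timeOffset": "1.5s", "xMin": 0.25, "xMax": 0.75, "yMin": 0.125, "yMax": 0.5},
    width=640, height=480,
)
```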
GoogleCloudAiplatformV1SchemaPredictionResult
Represents a line of JSONL in the batch prediction output file.Fields | |
---|---|
error |
The error result. Do not set prediction if this is set. |
instance |
User's input instance. Struct is used here instead of Any so that JsonFormat does not append an extra "@type" field when we convert the proto to JSON. |
key |
Optional user-provided key from the input instance. |
prediction |
The prediction result. Value is used here instead of Any so that JsonFormat does not append an extra "@type" field when we convert the proto to JSON and so we can represent array of objects. Do not set error if this is set. |
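A sketch of consuming a batch prediction output file: each JSONL line carries either prediction or error, never both, so the lines can be partitioned accordingly (the helper name and sample records are illustrative):

```python
import json

def split_batch_results(jsonl_text: str):
    """Partition batch output lines into (successes, failures)."""
    successes, failures = [], []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue  # tolerate blank lines
        record = json.loads(line)
        (failures if "error" in record else successes).append(record)
    return successes, failures

sample_output = "\n".join([
    json.dumps({"instance": {"content": "good"}, "prediction": [{"label": "positive"}]}),
    json.dumps({"instance": {"content": ""}, "error": {"status": "INVALID_ARGUMENT", "message": "empty input"}}),
])
successes, failures = split_batch_results(sample_output)
```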
GoogleCloudAiplatformV1SchemaPredictionResultError
(No description provided)Fields | |
---|---|
message |
Error message with additional details. |
status |
Error status. This will be serialized into the enum name e.g. "NOT_FOUND". |
Enum type. Can be one of the following: | |
OK |
Not an error; returned on success. HTTP Mapping: 200 OK |
CANCELLED |
The operation was cancelled, typically by the caller. HTTP Mapping: 499 Client Closed Request |
UNKNOWN |
Unknown error. For example, this error may be returned when a Status value received from another address space belongs to an error space that is not known in this address space. Also errors raised by APIs that do not return enough error information may be converted to this error. HTTP Mapping: 500 Internal Server Error |
INVALID_ARGUMENT |
The client specified an invalid argument. Note that this differs from FAILED_PRECONDITION . INVALID_ARGUMENT indicates arguments that are problematic regardless of the state of the system (e.g., a malformed file name). HTTP Mapping: 400 Bad Request |
DEADLINE_EXCEEDED |
The deadline expired before the operation could complete. For operations that change the state of the system, this error may be returned even if the operation has completed successfully. For example, a successful response from a server could have been delayed long enough for the deadline to expire. HTTP Mapping: 504 Gateway Timeout |
NOT_FOUND |
Some requested entity (e.g., file or directory) was not found. Note to server developers: if a request is denied for an entire class of users, such as gradual feature rollout or undocumented allowlist, NOT_FOUND may be used. If a request is denied for some users within a class of users, such as user-based access control, PERMISSION_DENIED must be used. HTTP Mapping: 404 Not Found |
ALREADY_EXISTS |
The entity that a client attempted to create (e.g., file or directory) already exists. HTTP Mapping: 409 Conflict |
PERMISSION_DENIED |
The caller does not have permission to execute the specified operation. PERMISSION_DENIED must not be used for rejections caused by exhausting some resource (use RESOURCE_EXHAUSTED instead for those errors). PERMISSION_DENIED must not be used if the caller can not be identified (use UNAUTHENTICATED instead for those errors). This error code does not imply the request is valid or the requested entity exists or satisfies other pre-conditions. HTTP Mapping: 403 Forbidden |
UNAUTHENTICATED |
The request does not have valid authentication credentials for the operation. HTTP Mapping: 401 Unauthorized |
RESOURCE_EXHAUSTED |
Some resource has been exhausted, perhaps a per-user quota, or perhaps the entire file system is out of space. HTTP Mapping: 429 Too Many Requests |
FAILED_PRECONDITION |
The operation was rejected because the system is not in a state required for the operation's execution. For example, the directory to be deleted is non-empty, an rmdir operation is applied to a non-directory, etc. Service implementors can use the following guidelines to decide between FAILED_PRECONDITION , ABORTED , and UNAVAILABLE : (a) Use UNAVAILABLE if the client can retry just the failing call. (b) Use ABORTED if the client should retry at a higher level. For example, when a client-specified test-and-set fails, indicating the client should restart a read-modify-write sequence. (c) Use FAILED_PRECONDITION if the client should not retry until the system state has been explicitly fixed. For example, if an "rmdir" fails because the directory is non-empty, FAILED_PRECONDITION should be returned since the client should not retry unless the files are deleted from the directory. HTTP Mapping: 400 Bad Request |
ABORTED |
The operation was aborted, typically due to a concurrency issue such as a sequencer check failure or transaction abort. See the guidelines above for deciding between FAILED_PRECONDITION , ABORTED , and UNAVAILABLE . HTTP Mapping: 409 Conflict |
OUT_OF_RANGE |
The operation was attempted past the valid range. E.g., seeking or reading past end-of-file. Unlike INVALID_ARGUMENT , this error indicates a problem that may be fixed if the system state changes. For example, a 32-bit file system will generate INVALID_ARGUMENT if asked to read at an offset that is not in the range [0,2^32-1], but it will generate OUT_OF_RANGE if asked to read from an offset past the current file size. There is a fair bit of overlap between FAILED_PRECONDITION and OUT_OF_RANGE . We recommend using OUT_OF_RANGE (the more specific error) when it applies so that callers who are iterating through a space can easily look for an OUT_OF_RANGE error to detect when they are done. HTTP Mapping: 400 Bad Request |
UNIMPLEMENTED |
The operation is not implemented or is not supported/enabled in this service. HTTP Mapping: 501 Not Implemented |
INTERNAL |
Internal errors. This means that some invariants expected by the underlying system have been broken. This error code is reserved for serious errors. HTTP Mapping: 500 Internal Server Error |
UNAVAILABLE |
The service is currently unavailable. This is most likely a transient condition, which can be corrected by retrying with a backoff. Note that it is not always safe to retry non-idempotent operations. See the guidelines above for deciding between FAILED_PRECONDITION , ABORTED , and UNAVAILABLE . HTTP Mapping: 503 Service Unavailable |
DATA_LOSS |
Unrecoverable data loss or corruption. HTTP Mapping: 500 Internal Server Error |
GoogleCloudAiplatformV1SchemaTablesDatasetMetadata
The metadata of Datasets that contain tables data.Fields | |
---|---|
inputConfig |
(No description provided) |
GoogleCloudAiplatformV1SchemaTablesDatasetMetadataBigQuerySource
(No description provided)Fields | |
---|---|
uri |
The URI of a BigQuery table. e.g. bq://projectId.bqDatasetId.bqTableId |
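The bq:// URI convention above can be illustrated with a small parser. This is a hypothetical helper, not part of the API; it only assumes the documented shape bq://projectId.bqDatasetId.bqTableId:

```python
# Hypothetical helper (not part of the API) that splits a BigQuery table
# URI of the form bq://projectId.bqDatasetId.bqTableId into its parts.
def parse_bq_uri(uri: str) -> tuple[str, str, str]:
    prefix = "bq://"
    if not uri.startswith(prefix):
        raise ValueError(f"not a BigQuery URI: {uri!r}")
    parts = uri[len(prefix):].split(".")
    if len(parts) != 3:
        raise ValueError(f"expected projectId.datasetId.tableId, got {uri!r}")
    project, dataset, table = parts
    return project, dataset, table

print(parse_bq_uri("bq://my-project.sales.orders"))
# ('my-project', 'sales', 'orders')
```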
GoogleCloudAiplatformV1SchemaTablesDatasetMetadataGcsSource
(No description provided)Fields | |
---|---|
uri[] |
Cloud Storage URI of one or more files. Only CSV files are supported. The first line of the CSV file is used as the header. If there are multiple files, the header is the first line of the lexicographically first file, the other files must either contain the exact same header or omit the header. |
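The multi-file header rule above can be sketched as follows. resolve_header is a hypothetical helper operating on in-memory lines; the real importer reads Cloud Storage objects and validates rows against the full schema rather than a simple column count:

```python
# Sketch of the header rule: with multiple CSV files, the header comes
# from the lexicographically first file; every other file must repeat
# that exact header or omit it (its first line is then data).
def resolve_header(files: dict[str, list[str]]) -> str:
    ordered = sorted(files)               # lexicographic order of URIs
    header = files[ordered[0]][0]         # header of the first file
    for uri in ordered[1:]:
        if files[uri] and files[uri][0] == header:
            continue                      # same header: fine
        # Otherwise the first line must be data; crude check that it
        # matches the header's column count.
        if files[uri] and files[uri][0].count(",") != header.count(","):
            raise ValueError(f"{uri}: row does not match header {header!r}")
    return header

files = {
    "gs://bucket/b.csv": ["id,price", "2,5.0"],
    "gs://bucket/a.csv": ["id,price", "1,3.5"],
    "gs://bucket/c.csv": ["3,9.9"],       # header omitted
}
print(resolve_header(files))  # id,price
```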
GoogleCloudAiplatformV1SchemaTablesDatasetMetadataInputConfig
The tables Dataset's data source. The Dataset doesn't store the data directly, but only pointer(s) to its data.Fields | |
---|---|
bigquerySource |
(No description provided) |
gcsSource |
(No description provided) |
GoogleCloudAiplatformV1SchemaTextClassificationAnnotation
Annotation details specific to text classification.Fields | |
---|---|
annotationSpecId |
The resource Id of the AnnotationSpec that this Annotation pertains to. |
displayName |
The display name of the AnnotationSpec that this Annotation pertains to. |
GoogleCloudAiplatformV1SchemaTextDataItem
Payload of Text DataItem.Fields | |
---|---|
gcsUri |
Output only. Google Cloud Storage URI that points to the original text in the user's bucket. The text file can be up to 10MB in size. |
GoogleCloudAiplatformV1SchemaTextDatasetMetadata
The metadata of Datasets that contain Text DataItems.Fields | |
---|---|
dataItemSchemaUri |
Points to a YAML file stored on Google Cloud Storage describing payload of the Text DataItems that belong to this Dataset. |
gcsBucket |
Google Cloud Storage Bucket name that contains the blob data of this Dataset. |
GoogleCloudAiplatformV1SchemaTextExtractionAnnotation
Annotation details specific to text extraction.Fields | |
---|---|
annotationSpecId |
The resource Id of the AnnotationSpec that this Annotation pertains to. |
displayName |
The display name of the AnnotationSpec that this Annotation pertains to. |
textSegment |
The segment of the text content. |
GoogleCloudAiplatformV1SchemaTextPromptDatasetMetadata
The metadata of Datasets that contain Text Prompt data.Fields | |
---|---|
candidateCount |
Number of candidates. |
gcsUri |
The Google Cloud Storage URI that stores the prompt data. |
groundingConfig |
Grounding checking configuration. |
maxOutputTokens |
Value of the maximum number of tokens generated set when the dataset was saved. |
note |
User-created prompt note. Note size limit is 2KB. |
promptType |
Type of the prompt dataset. |
stopSequences[] |
Customized stop sequences. |
systemInstructionGcsUri |
The Google Cloud Storage URI that stores the system instruction, starting with gs://. |
temperature |
Temperature value used for sampling set when the dataset was saved. This value is used to tune the degree of randomness. |
text |
The content of the prompt dataset. |
topK |
Top K value set when the dataset was saved. This value determines how many candidates with highest probability from the vocab would be selected for each decoding step. |
topP |
Top P value set when the dataset was saved. Given topK tokens for decoding, top candidates will be selected until the sum of their probabilities is topP. |
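A minimal sketch of how topK and topP interact during a decoding step, under the usual interpretation (keep the topK most probable tokens, then the smallest prefix of them whose cumulative probability reaches topP); candidate_tokens is illustrative, not a service API:

```python
# Keep the top_k most probable tokens, then within those keep the
# smallest prefix whose cumulative probability reaches top_p.
def candidate_tokens(probs: dict[str, float], top_k: int, top_p: float) -> list[str]:
    ranked = sorted(probs, key=probs.get, reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token in ranked:
        kept.append(token)
        cumulative += probs[token]
        if cumulative >= top_p:
            break
    return kept

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "dog": 0.05}
print(candidate_tokens(probs, top_k=3, top_p=0.8))  # ['the', 'a']
```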
GoogleCloudAiplatformV1SchemaTextSegment
The text segment inside of DataItem.Fields | |
---|---|
content |
Output only. The text content in the segment. |
endOffset |
Zero-based character index of the first character past the end of the text segment (counting characters from the beginning of the text). The character at the end_offset is NOT included in the text segment. |
startOffset |
Zero-based character index of the first character of the text segment (counting characters from the beginning of the text). |
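The startOffset/endOffset semantics above match Python's half-open slice convention, so a segment can be extracted directly:

```python
# startOffset is included, the character at endOffset is not, which is
# exactly Python's slicing behavior.
text = "The quick brown fox"
segment = {"startOffset": 4, "endOffset": 9}
print(text[segment["startOffset"]:segment["endOffset"]])  # quick
```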
GoogleCloudAiplatformV1SchemaTextSentimentAnnotation
Annotation details specific to text sentiment.Fields | |
---|---|
annotationSpecId |
The resource Id of the AnnotationSpec that this Annotation pertains to. |
displayName |
The display name of the AnnotationSpec that this Annotation pertains to. |
sentiment |
The sentiment score for text. |
sentimentMax |
The sentiment max score for text. |
GoogleCloudAiplatformV1SchemaTextSentimentSavedQueryMetadata
The metadata of a SavedQuery that contains TextSentiment Annotations.Fields | |
---|---|
sentimentMax |
The maximum sentiment of the sentiment Annotations in this SavedQuery. |
GoogleCloudAiplatformV1SchemaTimeSegment
A time period inside of a DataItem that has a time dimension (e.g. video).Fields | |
---|---|
endTimeOffset |
End of the time segment (exclusive), represented as the duration since the start of the DataItem. |
startTimeOffset |
Start of the time segment (inclusive), represented as the duration since the start of the DataItem. |
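A small sketch of the interval semantics above, with offsets modeled as datetime.timedelta durations from the start of the DataItem (in_segment is a hypothetical helper):

```python
# startTimeOffset is inclusive, endTimeOffset is exclusive.
from datetime import timedelta

def in_segment(offset: timedelta, start: timedelta, end: timedelta) -> bool:
    return start <= offset < end

start, end = timedelta(seconds=10), timedelta(seconds=20)
print(in_segment(timedelta(seconds=10), start, end))  # True  (inclusive)
print(in_segment(timedelta(seconds=20), start, end))  # False (exclusive)
```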
GoogleCloudAiplatformV1SchemaTimeSeriesDatasetMetadata
The metadata of Datasets that contain time series data.Fields | |
---|---|
inputConfig |
(No description provided) |
timeColumn |
The column name of the time column that identifies time order in the time series. |
timeSeriesIdentifierColumn |
The column name of the time series identifier column that identifies the time series. |
GoogleCloudAiplatformV1SchemaTimeSeriesDatasetMetadataBigQuerySource
(No description provided)Fields | |
---|---|
uri |
The URI of a BigQuery table. |
GoogleCloudAiplatformV1SchemaTimeSeriesDatasetMetadataGcsSource
(No description provided)Fields | |
---|---|
uri[] |
Cloud Storage URI of one or more files. Only CSV files are supported. The first line of the CSV file is used as the header. If there are multiple files, the header is the first line of the lexicographically first file, the other files must either contain the exact same header or omit the header. |
GoogleCloudAiplatformV1SchemaTimeSeriesDatasetMetadataInputConfig
The time series Dataset's data source. The Dataset doesn't store the data directly, but only pointer(s) to its data.Fields | |
---|---|
bigquerySource |
(No description provided) |
gcsSource |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecasting
A TrainingJob that trains and uploads an AutoML Forecasting Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
metadata |
The metadata information. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputs
(No description provided)Fields | |
---|---|
additionalExperiments[] |
Additional experiment flags for the time series forecasting training. |
availableAtForecastColumns[] |
Names of columns that are available and provided when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column column) that is known at forecast. For example, predicted weather for a specific day. |
contextWindow |
The amount of time into the past that training and prediction data is used for model training and prediction respectively. Expressed in number of units defined by the dataGranularity field. |
dataGranularity |
Expected difference in time granularity between rows in the data. |
enableProbabilisticInference |
If probabilistic inference is enabled, the model will fit a distribution that captures the uncertainty of a prediction. At inference time, the predictive distribution is used to make a point prediction that minimizes the optimization objective. For example, the mean of a predictive distribution is the point prediction that minimizes RMSE loss. If quantiles are specified, then the quantiles of the distribution are also returned. The optimization objective cannot be minimize-quantile-loss. |
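The claim above, that the mean of a predictive distribution is the point prediction minimizing RMSE, can be checked numerically; this toy sketch grid-searches candidate point predictions against a small sample:

```python
# Numerical check: over samples from a predictive distribution, the
# point prediction minimizing squared error is the sample mean.
from statistics import mean

samples = [1.0, 2.0, 2.0, 3.0, 10.0]   # draws from a predictive distribution

def sse(point):
    """Sum of squared errors of a point prediction against the samples."""
    return sum((s - point) ** 2 for s in samples)

candidates = [i / 10 for i in range(0, 101)]   # grid of point predictions
best = min(candidates, key=sse)
print(best, mean(samples))  # the grid minimizer coincides with the mean: 3.6 3.6
```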
exportEvaluatedDataItemsConfig |
Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed. |
forecastHorizon |
The amount of time into the future for which forecasted values for the target are returned. Expressed in number of units defined by the dataGranularity field. |
hierarchyConfig |
Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies. |
holidayRegions[] |
The geographical region based on which the holiday effect is applied in modeling, by adding a holiday categorical array feature that includes all holidays matching the date. This option is only allowed when data_granularity is day. By default, holiday effect modeling is disabled. To turn it on, specify the holiday region using this option. |
optimizationObjective |
Objective function the model is optimizing towards. The training process creates a model that optimizes the value of the objective function over the validation set. The supported optimization objectives: * "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). * "minimize-mae" - Minimize mean-absolute error (MAE). * "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE). * "minimize-rmspe" - Minimize root-mean-squared percentage error (RMSPE). * "minimize-wape-mae" - Minimize the combination of weighted absolute percentage error (WAPE) and mean-absolute-error (MAE). * "minimize-quantile-loss" - Minimize the quantile loss at the quantiles defined in quantiles. |
quantiles[] |
Quantiles to use for the "minimize-quantile-loss" objective. |
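For reference, the quantile (pinball) loss that "minimize-quantile-loss" refers to can be written for a single quantile q as follows; quantile_loss is an illustrative helper, not part of the API:

```python
# Pinball loss for one quantile q: under-prediction is penalized with
# weight q, over-prediction with weight (1 - q).
def quantile_loss(actual: float, predicted: float, q: float) -> float:
    error = actual - predicted
    return max(q * error, (q - 1) * error)

# At a high quantile, under-prediction costs more than over-prediction:
print(quantile_loss(10.0, 8.0, q=0.75))   # 1.5
print(quantile_loss(10.0, 12.0, q=0.75))  # 0.5
```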
targetColumn |
The name of the column that the Model is to predict values for. This column must be unavailable at forecast. |
timeColumn |
The name of the column that identifies time order in the time series. This column must be available at forecast. |
timeSeriesAttributeColumns[] |
Column names that should be used as attribute columns. The value of these columns does not vary as a function of time. For example, store ID or item color. |
timeSeriesIdentifierColumn |
The name of the column that identifies the time series. |
trainBudgetMilliNodeHours |
Required. The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The final cost will be attempted to be close to the budget, though it may end up being noticeably smaller at the backend's discretion. This especially may happen when further model training ceases to provide any improvements. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive. |
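The milli-node-hour convention and the 1,000-72,000 range can be sketched as a client-side check (validate_train_budget is a hypothetical helper; the service performs its own validation):

```python
# 1,000 milli node hours = 1 node hour; the budget must lie in
# [1000, 72000] milli node hours, inclusive.
def validate_train_budget(milli_node_hours: int) -> float:
    if not 1000 <= milli_node_hours <= 72000:
        raise ValueError("train budget must be between 1,000 and 72,000 "
                         "milli node hours, inclusive")
    return milli_node_hours / 1000  # equivalent node hours

print(validate_train_budget(8000))  # 8.0
```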
transformations[] |
Each transformation applies a transform function to the given input column, and the result is used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter. |
unavailableAtForecastColumns[] |
Names of columns that are unavailable when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column) that is unknown before the forecast. For example, actual weather on a given day. |
validationOptions |
Validation options for the data validation component. The available options are: * "fail-pipeline" - default; validate against the validation set and fail the pipeline if validation fails. * "ignore-validation" - ignore the results of the validation and continue. |
weightColumn |
Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000 inclusively; 0 means the row is ignored for training. If weight column field is not set, then all rows are assumed to have equal weight of 1. |
windowConfig |
Config containing strategy for generating sliding windows. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsGranularity
A duration of time expressed in time granularity units.Fields | |
---|---|
quantity |
The number of granularity_units between data points in the training data. If |
unit |
The time granularity unit of this time period. The supported units are: * "minute" * "hour" * "day" * "week" * "month" * "year" |
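A hypothetical helper that converts a quantity/unit pair into a concrete duration; "month" and "year" have no fixed length, so they are rejected here and left to the service's calendar-aware handling:

```python
# Turn a Granularity (quantity + unit) into a timedelta for the
# fixed-length units; month/year are not fixed-length.
from datetime import timedelta

UNIT_SECONDS = {"minute": 60, "hour": 3600, "day": 86400, "week": 604800}

def granularity_to_timedelta(quantity: int, unit: str) -> timedelta:
    if unit not in UNIT_SECONDS:
        raise ValueError(f"unit {unit!r} has no fixed duration")
    return timedelta(seconds=quantity * UNIT_SECONDS[unit])

print(granularity_to_timedelta(3, "hour"))  # 3:00:00
```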
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformation
(No description provided)Fields | |
---|---|
auto |
(No description provided) |
categorical |
(No description provided) |
numeric |
(No description provided) |
text |
(No description provided) |
timestamp |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationAutoTransformation
The training pipeline will infer the proper transformation based on the statistics of the dataset.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationCategoricalTransformation
The training pipeline will perform the following transformation functions. * The categorical string as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear less than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationNumericTransformation
The training pipeline will perform the following transformation functions. * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * z_score of log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * A boolean value that indicates whether the value is valid.Fields | |
---|---|
columnName |
(No description provided) |
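The numeric transformation list above can be sketched for a single value, assuming the column statistics (mean and standard deviation of the raw and log-transformed values) were computed over the training data; numeric_features is illustrative, not the service's actual implementation:

```python
import math

def numeric_features(value, col_mean, col_std, log_mean, log_std):
    """Sketch of the per-value numeric transformations (illustrative only)."""
    features = {
        "value": float(value),                    # cast (float32 on the service side)
        "z_score": (value - col_mean) / col_std,  # standardized value
        "is_valid": True,                         # validity indicator
    }
    if value >= 0:
        log_value = math.log(value + 1)           # log(value+1)
        features["log"] = log_value
        features["log_z_score"] = (log_value - log_mean) / log_std
    # For value < 0 the log transforms are skipped and treated as missing.
    return features

print(numeric_features(3.0, col_mean=1.0, col_std=2.0, log_mean=0.5, log_std=1.0))
```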
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationTextTransformation
The training pipeline will perform the following transformation functions. * The text as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationTimestampTransformation
The training pipeline will perform the following transformation functions. * Apply the transformation functions for Numerical columns. * Determine the year, month, day, and weekday. Treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.Fields | |
---|---|
columnName |
(No description provided) |
timeFormat |
The format in which that time field is expressed. The time_format must either be one of: * |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingMetadata
Model metadata specific to AutoML Forecasting.Fields | |
---|---|
evaluatedDataItemsBigqueryUri |
BigQuery destination uri for exported evaluated examples. |
trainCostMilliNodeHours |
Output only. The actual training cost of the model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed the train budget. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageClassification
A TrainingJob that trains and uploads an AutoML Image Classification Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
metadata |
The metadata information. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageClassificationInputs
(No description provided)Fields | |
---|---|
baseModelId |
The ID of the |
budgetMilliNodeHours |
The training budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual metadata.costMilliNodeHours will be equal to or less than this value. If further model training ceases to provide any improvements, it will stop without using the full budget and the metadata.successfulStopReason will be MODEL_CONVERGED. |
disableEarlyStopping |
Use the entire training budget. This disables the early stopping feature. When false the early stopping feature is enabled, which means that AutoML Image Classification might stop training before the entire training budget has been used. |
modelType |
(No description provided) |
Enum type. Can be one of the following: | |
MODEL_TYPE_UNSPECIFIED |
Should not be set. |
CLOUD |
A Model best tailored to be used within Google Cloud, and which cannot be exported. Default. |
CLOUD_1 |
A model type best tailored to be used within Google Cloud, which cannot be exported externally. Compared to the CLOUD model above, it is expected to have higher prediction accuracy. |
MOBILE_TF_LOW_LATENCY_1 |
A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as TensorFlow or Core ML model and used on a mobile or edge device afterwards. Expected to have low latency, but may have lower prediction quality than other mobile models. |
MOBILE_TF_VERSATILE_1 |
A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as TensorFlow or Core ML model and used on a mobile or edge device afterwards. |
MOBILE_TF_HIGH_ACCURACY_1 |
A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as TensorFlow or Core ML model and used on a mobile or edge device afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other mobile models. |
EFFICIENTNET |
EfficientNet model for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally. |
MAXVIT |
MaxViT model for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally. |
VIT |
ViT model for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally. |
COCA |
CoCa model for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally. |
multiLabel |
If false, a single-label (multi-class) Model will be trained (i.e. assuming that for each image just up to one annotation may be applicable). If true, a multi-label Model will be trained (i.e. assuming that for each image multiple annotations may be applicable). |
tunableParameter |
Trainer type for Vision TrainRequest. |
uptrainBaseModelId |
The ID of |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageClassificationMetadata
(No description provided)Fields | |
---|---|
costMilliNodeHours |
The actual training cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed inputs.budgetMilliNodeHours. |
successfulStopReason |
For successful job completions, this is the reason why the job has finished. |
Enum type. Can be one of the following: | |
SUCCESSFUL_STOP_REASON_UNSPECIFIED |
Should not be set. |
BUDGET_REACHED |
The inputs.budgetMilliNodeHours had been reached. |
MODEL_CONVERGED |
Further training of the Model ceased to increase its quality, since it already has converged. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageObjectDetection
A TrainingJob that trains and uploads an AutoML Image Object Detection Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
metadata |
The metadata information |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageObjectDetectionInputs
(No description provided)Fields | |
---|---|
budgetMilliNodeHours |
The training budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual metadata.costMilliNodeHours will be equal to or less than this value. If further model training ceases to provide any improvements, it will stop without using the full budget and the metadata.successfulStopReason will be MODEL_CONVERGED. |
disableEarlyStopping |
Use the entire training budget. This disables the early stopping feature. When false the early stopping feature is enabled, which means that AutoML Image Object Detection might stop training before the entire training budget has been used. |
modelType |
(No description provided) |
Enum type. Can be one of the following: | |
MODEL_TYPE_UNSPECIFIED |
Should not be set. |
CLOUD_HIGH_ACCURACY_1 |
A model best tailored to be used within Google Cloud, and which cannot be exported. Expected to have a higher latency, but should also have a higher prediction quality than other cloud models. |
CLOUD_LOW_LATENCY_1 |
A model best tailored to be used within Google Cloud, and which cannot be exported. Expected to have a low latency, but may have lower prediction quality than other cloud models. |
CLOUD_1 |
A model best tailored to be used within Google Cloud, and which cannot be exported. Compared to the CLOUD_HIGH_ACCURACY_1 and CLOUD_LOW_LATENCY_1 models above, it is expected to have higher prediction quality and lower latency. |
MOBILE_TF_LOW_LATENCY_1 |
A model that, in addition to being available within Google Cloud can also be exported (see ModelService.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other mobile models. |
MOBILE_TF_VERSATILE_1 |
A model that, in addition to being available within Google Cloud can also be exported (see ModelService.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. |
MOBILE_TF_HIGH_ACCURACY_1 |
A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other mobile models. |
CLOUD_STREAMING_1 |
A model best tailored to be used within Google Cloud, and which cannot be exported. Expected to best support predictions in streaming with lower latency and lower prediction quality than other cloud models. |
SPINENET |
SpineNet for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally. |
YOLO |
YOLO for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally. |
tunableParameter |
Trainer type for Vision TrainRequest. |
uptrainBaseModelId |
The ID of |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageObjectDetectionMetadata
(No description provided)Fields | |
---|---|
costMilliNodeHours |
The actual training cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed inputs.budgetMilliNodeHours. |
successfulStopReason |
For successful job completions, this is the reason why the job has finished. |
Enum type. Can be one of the following: | |
SUCCESSFUL_STOP_REASON_UNSPECIFIED |
Should not be set. |
BUDGET_REACHED |
The inputs.budgetMilliNodeHours had been reached. |
MODEL_CONVERGED |
Further training of the Model ceased to increase its quality, since it already has converged. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageSegmentation
A TrainingJob that trains and uploads an AutoML Image Segmentation Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
metadata |
The metadata information. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageSegmentationInputs
(No description provided)Fields | |
---|---|
baseModelId |
The ID of the |
budgetMilliNodeHours |
The training budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual metadata.costMilliNodeHours will be equal to or less than this value. If further model training ceases to provide any improvements, it will stop without using the full budget and the metadata.successfulStopReason will be MODEL_CONVERGED. |
modelType |
(No description provided) |
Enum type. Can be one of the following: | |
MODEL_TYPE_UNSPECIFIED |
Should not be set. |
CLOUD_HIGH_ACCURACY_1 |
A model to be used via prediction calls to uCAIP API. Expected to have a higher latency, but should also have a higher prediction quality than other models. |
CLOUD_LOW_ACCURACY_1 |
A model to be used via prediction calls to uCAIP API. Expected to have a lower latency but relatively lower prediction quality. |
MOBILE_TF_LOW_LATENCY_1 |
A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as TensorFlow model and used on a mobile or edge device afterwards. Expected to have low latency, but may have lower prediction quality than other mobile models. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageSegmentationMetadata
(No description provided)Fields | |
---|---|
costMilliNodeHours |
The actual training cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed inputs.budgetMilliNodeHours. |
successfulStopReason |
For successful job completions, this is the reason why the job has finished. |
Enum type. Can be one of the following: | |
SUCCESSFUL_STOP_REASON_UNSPECIFIED |
Should not be set. |
BUDGET_REACHED |
The inputs.budgetMilliNodeHours had been reached. |
MODEL_CONVERGED |
Further training of the Model ceased to increase its quality, since it already has converged. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTables
A TrainingJob that trains and uploads an AutoML Tables Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
metadata |
The metadata information. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputs
(No description provided)Fields | |
---|---|
additionalExperiments[] |
Additional experiment flags for the Tables training pipeline. |
disableEarlyStopping |
Use the entire training budget. This disables the early stopping feature. By default, the early stopping feature is enabled, which means that AutoML Tables might stop training before the entire training budget has been used. |
exportEvaluatedDataItemsConfig |
Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed. |
optimizationObjective |
Objective function the model is optimizing towards. The training process creates a model that maximizes/minimizes the value of the objective function over the validation set. The supported optimization objectives depend on the prediction type. If the field is not set, a default objective function is used. classification (binary): "maximize-au-roc" (default) - Maximize the area under the receiver operating characteristic (ROC) curve. "minimize-log-loss" - Minimize log loss. "maximize-au-prc" - Maximize the area under the precision-recall curve. "maximize-precision-at-recall" - Maximize precision for a specified recall value. "maximize-recall-at-precision" - Maximize recall for a specified precision value. classification (multi-class): "minimize-log-loss" (default) - Minimize log loss. regression: "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). "minimize-mae" - Minimize mean-absolute error (MAE). "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE). |
optimizationObjectivePrecisionValue |
Required when optimization_objective is "maximize-recall-at-precision". Must be between 0 and 1, inclusive. |
optimizationObjectiveRecallValue |
Required when optimization_objective is "maximize-precision-at-recall". Must be between 0 and 1, inclusive. |
predictionType |
The type of prediction the Model is to produce. "classification" - Predict one out of multiple target values for each row. "regression" - Predict a value based on its relation to other values. This type is available only to columns that contain semantically numeric values, i.e. integers or floating point numbers, even if stored as e.g. strings. |
targetColumn |
The column name of the target column that the model is to predict. |
trainBudgetMilliNodeHours |
Required. The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The final cost will be attempted to be close to the budget, though it may end up being noticeably smaller at the backend's discretion. This especially may happen when further model training ceases to provide any improvements. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive. |
transformations[] |
Each transformation applies a transform function to the given input column, and the result is used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter. |
weightColumnName |
Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000 inclusively; 0 means the row is ignored for training. If weight column field is not set, then all rows are assumed to have equal weight of 1. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformation
(No description provided)Fields | |
---|---|
auto |
(No description provided) |
categorical |
(No description provided) |
numeric |
(No description provided) |
repeatedCategorical |
(No description provided) |
repeatedNumeric |
(No description provided) |
repeatedText |
(No description provided) |
text |
(No description provided) |
timestamp |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationAutoTransformation
The training pipeline will infer the proper transformation based on the statistics of the dataset.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationCategoricalArrayTransformation
Treats the column as a categorical array and performs the following transformation functions. * For each element in the array, convert the category name to a dictionary lookup index and generate an embedding for each index. Combine the embeddings of all elements into a single embedding using the mean. * Empty arrays are treated as an embedding of zeroes.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationCategoricalTransformation
The training pipeline will perform the following transformation functions. * The categorical string as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear less than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationNumericArrayTransformation
Treats the column as a numerical array and performs the following transformation functions. * All transformations for Numerical types are applied to the average of all elements. * The average of an empty array is treated as zero.Fields | |
---|---|
columnName |
(No description provided) |
invalidValuesAllowed |
If invalid values are allowed, the training pipeline will create a boolean feature that indicates whether the value is valid. Otherwise, the training pipeline will discard the input row from the training data. |
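The averaging rule above, including the empty-array case, is simple to sketch (array_average is a hypothetical helper):

```python
# Numeric transformations are applied to the average of the elements;
# an empty array averages to 0.
def array_average(values: list[float]) -> float:
    return sum(values) / len(values) if values else 0.0

print(array_average([2.0, 4.0, 6.0]))  # 4.0
print(array_average([]))               # 0.0
```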
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationNumericTransformation
The training pipeline will perform the following transformations. * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * z_score of log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * A boolean value that indicates whether the value is valid.Fields | |
---|---|
columnName |
(No description provided) |
invalidValuesAllowed |
If invalid values are allowed, the training pipeline will create a boolean feature that indicates whether the value is valid. Otherwise, the training pipeline will discard the input row from the training data. |
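Taken together, the transformation schemas above are supplied as a list on the training inputs, one entry per column, keyed by transformation kind. A minimal sketch of such a payload as a Python dict (the column names here are hypothetical, chosen only to illustrate the structure):

```python
# Hypothetical transformations list for an AutoML Tables training job.
# Each entry wraps exactly one transformation type keyed by its kind.
transformations = [
    {"auto": {"columnName": "store_id"}},
    {"categorical": {"columnName": "product_category"}},
    {"numeric": {"columnName": "price", "invalidValuesAllowed": True}},
    {"timestamp": {"columnName": "sale_date"}},
]

# Each transformation targets exactly one column.
column_names = [list(t.values())[0]["columnName"] for t in transformations]
```

With `invalidValuesAllowed` set to true on the numeric column, invalid prices produce a validity indicator feature instead of dropping the row.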
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTextArrayTransformation
Treats the column as a text array and performs the following transformations. * Concatenate all text values in the array into a single text value using a space (" ") as a delimiter, then treat the result as a single text value and apply the transformations for Text columns. * Empty arrays are treated as empty text.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTextTransformation
The training pipeline will perform the following transformations. * The text as is--no change to case, punctuation, spelling, tense, and so on. * Tokenize the text into words. Convert each word to a dictionary lookup index and generate an embedding for each index. Combine the embeddings of all elements into a single embedding using the mean. * Tokenization is based on unicode script boundaries. * Missing values get their own lookup index and resulting embedding. * Stop-words receive no special treatment and are not removed.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTimestampTransformation
The training pipeline will perform the following transformations. * Apply the transformation functions for Numerical columns. * Determine the year, month, day, and weekday. Treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.Fields | |
---|---|
columnName |
(No description provided) |
invalidValuesAllowed |
If invalid values are allowed, the training pipeline will create a boolean feature that indicates whether the value is valid. Otherwise, the training pipeline will discard the input row from the training data. |
timeFormat |
The format in which that time field is expressed. The time_format must either be one of: * |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesMetadata
Model metadata specific to AutoML Tables.Fields | |
---|---|
evaluatedDataItemsBigqueryUri |
BigQuery destination URI for exported evaluated examples. |
trainCostMilliNodeHours |
Output only. The actual training cost of the model, expressed in milli node hours; a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed the train budget. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextClassification
A TrainingJob that trains and uploads an AutoML Text Classification Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextClassificationInputs
(No description provided)Fields | |
---|---|
multiLabel |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextExtraction
A TrainingJob that trains and uploads an AutoML Text Extraction Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextSentiment
A TrainingJob that trains and uploads an AutoML Text Sentiment Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextSentimentInputs
(No description provided)Fields | |
---|---|
sentimentMax |
A sentiment is expressed as an integer ordinal, where a higher value means a more positive sentiment. The range of sentiments used is between 0 and sentimentMax (inclusive on both ends), and all values in the range must be represented in the dataset before a model can be created. Only the Annotations with this sentimentMax will be used for training. The sentimentMax value must be between 1 and 10 (inclusive). |
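As a concrete reading of the sentimentMax constraint: with sentimentMax = 4, every ordinal 0 through 4 must appear in the dataset. A small sketch of that check as a hypothetical pre-flight helper (not part of the API; the service performs its own validation):

```python
def sentiments_cover_range(labels, sentiment_max):
    """Check that every ordinal 0..sentiment_max appears at least once,
    as required before a sentiment model can be created."""
    if not 1 <= sentiment_max <= 10:
        raise ValueError("sentimentMax must be between 1 and 10 inclusive")
    return set(range(sentiment_max + 1)) <= set(labels)

# All of 0..4 present -> covered; any missing ordinal -> not covered.
```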
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoActionRecognition
A TrainingJob that trains and uploads an AutoML Video Action Recognition Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoActionRecognitionInputs
(No description provided)Fields | |
---|---|
modelType |
(No description provided) |
Enum type. Can be one of the following: | |
MODEL_TYPE_UNSPECIFIED |
Should not be set. |
CLOUD |
A model best tailored to be used within Google Cloud, and which cannot be exported. Default. |
MOBILE_VERSATILE_1 |
A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as a TensorFlow or TensorFlow Lite model and used on a mobile or edge device afterwards. |
MOBILE_JETSON_VERSATILE_1 |
A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) to a Jetson device afterwards. |
MOBILE_CORAL_VERSATILE_1 |
A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as a TensorFlow or TensorFlow Lite model and used on a Coral device afterwards. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoClassification
A TrainingJob that trains and uploads an AutoML Video Classification Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoClassificationInputs
(No description provided)Fields | |
---|---|
modelType |
(No description provided) |
Enum type. Can be one of the following: | |
MODEL_TYPE_UNSPECIFIED |
Should not be set. |
CLOUD |
A model best tailored to be used within Google Cloud, and which cannot be exported. Default. |
MOBILE_VERSATILE_1 |
A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as a TensorFlow or TensorFlow Lite model and used on a mobile or edge device afterwards. |
MOBILE_JETSON_VERSATILE_1 |
A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) to a Jetson device afterwards. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoObjectTracking
A TrainingJob that trains and uploads an AutoML Video ObjectTracking Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoObjectTrackingInputs
(No description provided)Fields | |
---|---|
modelType |
(No description provided) |
Enum type. Can be one of the following: | |
MODEL_TYPE_UNSPECIFIED |
Should not be set. |
CLOUD |
A model best tailored to be used within Google Cloud, and which cannot be exported. Default. |
MOBILE_VERSATILE_1 |
A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as a TensorFlow or TensorFlow Lite model and used on a mobile or edge device afterwards. |
MOBILE_CORAL_VERSATILE_1 |
A versatile model that is meant to be exported (see ModelService.ExportModel) and used on a Google Coral device. |
MOBILE_CORAL_LOW_LATENCY_1 |
A model that trades off quality for low latency, to be exported (see ModelService.ExportModel) and used on a Google Coral device. |
MOBILE_JETSON_VERSATILE_1 |
A versatile model that is meant to be exported (see ModelService.ExportModel) and used on an NVIDIA Jetson device. |
MOBILE_JETSON_LOW_LATENCY_1 |
A model that trades off quality for low latency, to be exported (see ModelService.ExportModel) and used on an NVIDIA Jetson device. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutomlImageTrainingTunableParameter
A wrapper class which contains the tunable parameters in an AutoML Image training job.Fields | |
---|---|
checkpointName |
Optional. A unique name of a pretrained model checkpoint provided in Model Garden; it will be mapped to a GCS location internally. |
datasetConfig |
Customizable dataset settings, used in the |
studySpec |
Optional. StudySpec of the hyperparameter tuning job. Required for |
trainerConfig |
Customizable trainer settings, used in the |
trainerType |
(No description provided) |
Enum type. Can be one of the following: | |
TRAINER_TYPE_UNSPECIFIED |
Default value. |
AUTOML_TRAINER |
(No description provided) |
MODEL_GARDEN_TRAINER |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionCustomJobMetadata
(No description provided)Fields | |
---|---|
backingCustomJob |
The resource name of the CustomJob that has been created to carry out this custom task. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionCustomTask
A TrainingJob that trains a custom code Model.Fields | |
---|---|
inputs |
The input parameters of this CustomTask. |
metadata |
The metadata information. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionExportEvaluatedDataItemsConfig
Configuration for exporting test set predictions to a BigQuery table.Fields | |
---|---|
destinationBigqueryUri |
URI of desired destination BigQuery table. Expected format: |
overrideExistingTable |
If true and an export destination is specified, then the contents of the destination are overwritten. Otherwise, if the export destination already exists, then the export operation fails. |
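A minimal sketch of this config as a Python dict mirroring the JSON payload. The project, dataset, and table names are hypothetical, and the `bq://` URI prefix shown is an assumption (the expected-format note in the schema above is truncated):

```python
export_config = {
    # Destination BigQuery table for exported test-set predictions.
    "destinationBigqueryUri": "bq://my-project.my_dataset.evaluated_examples",
    # Overwrite the destination if it already exists; with False, the
    # export operation fails when the destination table already exists.
    "overrideExistingTable": True,
}
```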
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHierarchyConfig
Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies.Fields | |
---|---|
groupColumns[] |
A list of time series attribute column names that define the time series hierarchy. Only one level of hierarchy is supported, e.g., 'region' for a hierarchy of stores or 'department' for a hierarchy of products. If multiple columns are specified, time series will be grouped by their combined values, e.g., ('blue', 'large') for 'color' and 'size'; up to 5 columns are accepted. If no group columns are specified, all time series are considered to be part of the same group. |
groupTemporalTotalWeight |
The weight of the loss for predictions aggregated over both the horizon and time series in the same hierarchy group. |
groupTotalWeight |
The weight of the loss for predictions aggregated over time series in the same group. |
temporalTotalWeight |
The weight of the loss for predictions aggregated over the horizon for a single time series. |
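A sketch of a hierarchyConfig payload combining the fields above (the column name and weight values are hypothetical; weights are relative loss weights, not probabilities):

```python
hierarchy_config = {
    # Group time series by region; series sharing a region form one group.
    "groupColumns": ["region"],
    # Loss weight for predictions aggregated over series in the same group.
    "groupTotalWeight": 1.0,
    # Loss weight for predictions aggregated over the horizon, per series.
    "temporalTotalWeight": 1.0,
    # Loss weight for predictions aggregated over both at once.
    "groupTemporalTotalWeight": 1.0,
}
```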
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHyperparameterTuningJobMetadata
(No description provided)Fields | |
---|---|
backingHyperparameterTuningJob |
The resource name of the HyperparameterTuningJob that has been created to carry out this HyperparameterTuning task. |
bestTrialBackingCustomJob |
The resource name of the CustomJob that has been created to run the best Trial of this HyperparameterTuning task. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHyperparameterTuningJobSpec
(No description provided)Fields | |
---|---|
maxFailedTrialCount |
The number of failed Trials that need to be seen before failing the HyperparameterTuningJob. If set to 0, Vertex AI decides how many Trials must fail before the whole job fails. |
maxTrialCount |
The desired total number of Trials. |
parallelTrialCount |
The desired number of Trials to run in parallel. |
studySpec |
Study configuration of the HyperparameterTuningJob. |
trialJobSpec |
The spec of a trial job. The same spec applies to the CustomJobs created in all the trials. |
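Putting the fields above together, a skeletal HyperparameterTuningJobSpec might look like this (all values hypothetical; the study configuration and per-trial job spec are left as empty placeholders):

```python
hp_tuning_spec = {
    # Run 20 trials total, 4 at a time.
    "maxTrialCount": 20,
    "parallelTrialCount": 4,
    # 0 lets Vertex AI decide how many failed Trials fail the whole job.
    "maxFailedTrialCount": 0,
    # Study configuration and the CustomJob spec shared by all trials
    # are elided here.
    "studySpec": {},
    "trialJobSpec": {},
}
```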
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHyperparameterTuningTask
A TrainingJob that tunes hyperparameters of a custom code Model.Fields | |
---|---|
inputs |
The input parameters of this HyperparameterTuningTask. |
metadata |
The metadata information. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecasting
A TrainingJob that trains and uploads an AutoML Forecasting Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
metadata |
The metadata information. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputs
(No description provided)Fields | |
---|---|
additionalExperiments[] |
Additional experiment flags for the time series forecasting training. |
availableAtForecastColumns[] |
Names of columns that are available and provided when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column column) that is known at forecast. For example, predicted weather for a specific day. |
contextWindow |
The amount of time into the past training and prediction data is used for model training and prediction respectively. Expressed in number of units defined by the |
dataGranularity |
Expected difference in time granularity between rows in the data. |
exportEvaluatedDataItemsConfig |
Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed. |
forecastHorizon |
The amount of time into the future for which forecasted values for the target are returned. Expressed in number of units defined by the |
hierarchyConfig |
Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies. |
holidayRegions[] |
The geographical region based on which the holiday effect is applied in modeling, by adding a holiday categorical array feature that includes all holidays matching the date. This option is only allowed when data_granularity is day. By default, holiday effect modeling is disabled. To turn it on, specify the holiday region using this option. |
optimizationObjective |
Objective function the model is optimizing towards. The training process creates a model that optimizes the value of the objective function over the validation set. The supported optimization objectives are: * "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). * "minimize-mae" - Minimize mean-absolute error (MAE). * "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE). * "minimize-rmspe" - Minimize root-mean-squared percentage error (RMSPE). * "minimize-wape-mae" - Minimize the combination of weighted absolute percentage error (WAPE) and mean-absolute error (MAE). * "minimize-quantile-loss" - Minimize the quantile loss at the quantiles defined in |
quantiles[] |
Quantiles to use for minimize-quantile-loss |
targetColumn |
The name of the column that the Model is to predict values for. This column must be unavailable at forecast. |
timeColumn |
The name of the column that identifies time order in the time series. This column must be available at forecast. |
timeSeriesAttributeColumns[] |
Column names that should be used as attribute columns. The value of these columns does not vary as a function of time. For example, store ID or item color. |
timeSeriesIdentifierColumn |
The name of the column that identifies the time series. |
trainBudgetMilliNodeHours |
Required. The train budget for creating this model, expressed in milli node hours; a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The final cost will be attempted to be close to the budget, though it may end up being (even) noticeably smaller, at the backend's discretion. This especially may happen when further model training ceases to provide any improvements. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive. |
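The milli node hour arithmetic is simple but easy to get backwards; a tiny conversion sketch (the helper name is illustrative, not part of any SDK):

```python
def node_hours_to_milli(node_hours):
    """Convert node hours to milli node hours: 1 node hour == 1,000."""
    return int(node_hours * 1000)

# An 8 node hour budget, checked against the documented 1,000-72,000 range.
budget = node_hours_to_milli(8)
assert 1000 <= budget <= 72000
```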
transformations[] |
Each transformation applies its transform function to the given input column, and the result is used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter. |
unavailableAtForecastColumns[] |
Names of columns that are unavailable when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column) that is unknown before the forecast. For example, actual weather on a given day. |
validationOptions |
Validation options for the data validation component. The available options are: * "fail-pipeline" (default) - run validation and fail the pipeline if validation fails. * "ignore-validation" - ignore the results of the validation and continue |
weightColumn |
Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000 inclusively; 0 means the row is ignored for training. If weight column field is not set, then all rows are assumed to have equal weight of 1. This column must be available at forecast. |
windowConfig |
Config containing strategy for generating sliding windows. |
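Pulling the fields above together, a skeletal Seq2SeqPlusForecastingInputs payload might look like the following. All column names and values are hypothetical; only the structure follows the schemas above:

```python
forecasting_inputs = {
    "targetColumn": "sales",                       # must be unavailable at forecast
    "timeColumn": "date",                          # must be available at forecast
    "timeSeriesIdentifierColumn": "store_id",
    "timeSeriesAttributeColumns": ["store_format"],
    "availableAtForecastColumns": ["promo_planned"],
    "unavailableAtForecastColumns": ["foot_traffic"],
    # Daily data: look back 30 days, forecast 7 days ahead.
    # contextWindow and forecastHorizon are in dataGranularity units.
    "dataGranularity": {"unit": "day", "quantity": 1},
    "contextWindow": 30,
    "forecastHorizon": 7,
    "optimizationObjective": "minimize-rmse",      # the documented default
    "trainBudgetMilliNodeHours": 1000,             # 1 node hour
    "transformations": [{"auto": {"columnName": "sales"}}],
}
```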
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsGranularity
A duration of time expressed in time granularity units.Fields | |
---|---|
quantity |
The number of granularity_units between data points in the training data. If |
unit |
The time granularity unit of this time period. The supported units are: * "minute" * "hour" * "day" * "week" * "month" * "year" |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformation
(No description provided)Fields | |
---|---|
auto |
(No description provided) |
categorical |
(No description provided) |
numeric |
(No description provided) |
text |
(No description provided) |
timestamp |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationAutoTransformation
The training pipeline will infer the proper transformation based on the statistics of the dataset.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationCategoricalTransformation
The training pipeline will perform the following transformations. * The categorical string as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear fewer than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationNumericTransformation
The training pipeline will perform the following transformations. * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * z_score of log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationTextTransformation
The training pipeline will perform the following transformations. * The text as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationTimestampTransformation
The training pipeline will perform the following transformations. * Apply the transformation functions for Numerical columns. * Determine the year, month, day, and weekday. Treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.Fields | |
---|---|
columnName |
(No description provided) |
timeFormat |
The format in which that time field is expressed. The time_format must either be one of: * |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingMetadata
Model metadata specific to Seq2Seq Plus Forecasting.Fields | |
---|---|
evaluatedDataItemsBigqueryUri |
BigQuery destination URI for exported evaluated examples. |
trainCostMilliNodeHours |
Output only. The actual training cost of the model, expressed in milli node hours; a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed the train budget. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecasting
A TrainingJob that trains and uploads an AutoML Forecasting Model.Fields | |
---|---|
inputs |
The input parameters of this TrainingJob. |
metadata |
The metadata information. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputs
(No description provided)Fields | |
---|---|
additionalExperiments[] |
Additional experiment flags for the time series forecasting training. |
availableAtForecastColumns[] |
Names of columns that are available and provided when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column column) that is known at forecast. For example, predicted weather for a specific day. |
contextWindow |
The amount of time into the past training and prediction data is used for model training and prediction respectively. Expressed in number of units defined by the |
dataGranularity |
Expected difference in time granularity between rows in the data. |
exportEvaluatedDataItemsConfig |
Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed. |
forecastHorizon |
The amount of time into the future for which forecasted values for the target are returned. Expressed in number of units defined by the |
hierarchyConfig |
Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies. |
holidayRegions[] |
The geographical region based on which the holiday effect is applied in modeling, by adding a holiday categorical array feature that includes all holidays matching the date. This option is only allowed when data_granularity is day. By default, holiday effect modeling is disabled. To turn it on, specify the holiday region using this option. |
optimizationObjective |
Objective function the model is optimizing towards. The training process creates a model that optimizes the value of the objective function over the validation set. The supported optimization objectives are: * "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). * "minimize-mae" - Minimize mean-absolute error (MAE). * "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE). * "minimize-rmspe" - Minimize root-mean-squared percentage error (RMSPE). * "minimize-wape-mae" - Minimize the combination of weighted absolute percentage error (WAPE) and mean-absolute error (MAE). * "minimize-quantile-loss" - Minimize the quantile loss at the quantiles defined in |
quantiles[] |
Quantiles to use for minimize-quantile-loss |
targetColumn |
The name of the column that the Model is to predict values for. This column must be unavailable at forecast. |
timeColumn |
The name of the column that identifies time order in the time series. This column must be available at forecast. |
timeSeriesAttributeColumns[] |
Column names that should be used as attribute columns. The value of these columns does not vary as a function of time. For example, store ID or item color. |
timeSeriesIdentifierColumn |
The name of the column that identifies the time series. |
trainBudgetMilliNodeHours |
Required. The train budget for creating this model, expressed in milli node hours; a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The final cost will be attempted to be close to the budget, though it may end up being (even) noticeably smaller, at the backend's discretion. This especially may happen when further model training ceases to provide any improvements. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive. |
transformations[] |
Each transformation applies its transform function to the given input column, and the result is used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter. |
unavailableAtForecastColumns[] |
Names of columns that are unavailable when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column) that is unknown before the forecast. For example, actual weather on a given day. |
validationOptions |
Validation options for the data validation component. The available options are: * "fail-pipeline" (default) - run validation and fail the pipeline if validation fails. * "ignore-validation" - ignore the results of the validation and continue |
weightColumn |
Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000 inclusively; 0 means the row is ignored for training. If weight column field is not set, then all rows are assumed to have equal weight of 1. This column must be available at forecast. |
windowConfig |
Config containing strategy for generating sliding windows. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsGranularity
A duration of time expressed in time granularity units.Fields | |
---|---|
quantity |
The number of granularity_units between data points in the training data. If |
unit |
The time granularity unit of this time period. The supported units are: * "minute" * "hour" * "day" * "week" * "month" * "year" |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformation
(No description provided)Fields | |
---|---|
auto |
(No description provided) |
categorical |
(No description provided) |
numeric |
(No description provided) |
text |
(No description provided) |
timestamp |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationAutoTransformation
The training pipeline will infer the proper transformation based on the statistics of the dataset.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationCategoricalTransformation
The training pipeline will perform the following transformations. * The categorical string as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear fewer than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationNumericTransformation
The training pipeline will perform the following transformations. * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * z_score of log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationTextTransformation
The training pipeline will perform the following transformations. * The text as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index.Fields | |
---|---|
columnName |
(No description provided) |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationTimestampTransformation
The training pipeline will perform the following transformations. * Apply the transformation functions for Numerical columns. * Determine the year, month, day, and weekday. Treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.Fields | |
---|---|
columnName |
(No description provided) |
timeFormat |
The format in which that time field is expressed. The time_format must either be one of: * |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingMetadata
Model metadata specific to TFT Forecasting.Fields | |
---|---|
evaluatedDataItemsBigqueryUri |
BigQuery destination URI for exported evaluated examples. |
trainCostMilliNodeHours |
Output only. The actual training cost of the model, expressed in milli node hours; a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed the train budget. |
GoogleCloudAiplatformV1SchemaTrainingjobDefinitionWindowConfig
Config that contains the strategy used to generate sliding windows in time series training. A window is a series of rows that comprise the context up to the time of prediction, and the horizon following. The corresponding row for each window marks the start of the forecast horizon. Each window is used as an input example for training/evaluation.Fields | |
---|---|
column |
Name of the column that should be used to generate sliding windows. The column should contain either booleans or string booleans; if the value of the row is True, generate a sliding window with the horizon starting at that row. The column will not be used as a feature in training. |
maxCount |
Maximum number of windows that should be generated across all time series. |
strideLength |
Stride length used to generate input examples. Within one time series, every {$STRIDE_LENGTH} rows will be used to generate a sliding window. |
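A sketch of a strideLength-based WindowConfig, plus the window starts it implies. The selection logic below is an illustration of the documented "every strideLength rows seeds a window" behavior, not the service's implementation:

```python
window_config = {"strideLength": 5}  # one window every 5 rows per series

def window_starts(num_rows, stride):
    """Illustrative: row indices at which sliding windows would start
    when every `stride`-th row of a time series seeds a window."""
    return list(range(0, num_rows, stride))

# A 20-row series with stride 5 yields windows starting at rows 0, 5, 10, 15;
# each start row marks the beginning of that window's forecast horizon.
```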
GoogleCloudAiplatformV1SchemaVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.Fields | |
---|---|
x |
X coordinate. |
y |
Y coordinate. |
GoogleCloudAiplatformV1SchemaVideoActionRecognitionAnnotation
Annotation details specific to video action recognition.Fields | |
---|---|
annotationSpecId |
The resource Id of the AnnotationSpec that this Annotation pertains to. |
displayName |
The display name of the AnnotationSpec that this Annotation pertains to. |
timeSegment |
This Annotation applies to the time period represented by the TimeSegment. If it's not set, the Annotation applies to the whole video. |
GoogleCloudAiplatformV1SchemaVideoClassificationAnnotation
Annotation details specific to video classification.Fields | |
---|---|
annotationSpecId |
The resource Id of the AnnotationSpec that this Annotation pertains to. |
displayName |
The display name of the AnnotationSpec that this Annotation pertains to. |
timeSegment |
This Annotation applies to the time period represented by the TimeSegment. If it's not set, the Annotation applies to the whole video. |
GoogleCloudAiplatformV1SchemaVideoDataItem
Payload of Video DataItem.Fields | |
---|---|
gcsUri |
Required. Google Cloud Storage URI pointing to the original video in the user's bucket. The video can be up to 50 GB in size and up to 3 hours in duration. |
mimeType |
Output only. The MIME type of the video content. Only videos in the MIME types listed below are supported. Supported mime_type: - video/mp4 - video/avi - video/quicktime |
GoogleCloudAiplatformV1SchemaVideoDatasetMetadata
The metadata of Datasets that contain Video DataItems.Fields | |
---|---|
dataItemSchemaUri |
Points to a YAML file stored on Google Cloud Storage describing payload of the Video DataItems that belong to this Dataset. |
gcsBucket |
Google Cloud Storage Bucket name that contains the blob data of this Dataset. |
GoogleCloudAiplatformV1SchemaVideoObjectTrackingAnnotation
Annotation details specific to video object tracking.Fields | |
---|---|
annotationSpecId |
The resource Id of the AnnotationSpec that this Annotation pertains to. |
displayName |
The display name of the AnnotationSpec that this Annotation pertains to. |
instanceId |
The instance of the object, expressed as a positive integer. Used to track the same object across different frames. |
timeOffset |
A time (frame) of a video to which this annotation pertains. Represented as the duration since the video's start. |
xMax |
The rightmost coordinate of the bounding box. |
xMin |
The leftmost coordinate of the bounding box. |
yMax |
The bottommost coordinate of the bounding box. |
yMin |
The topmost coordinate of the bounding box. |
GoogleCloudAiplatformV1SchemaVisualInspectionClassificationLabelSavedQueryMetadata
(No description provided)Fields | |
---|---|
multiLabel |
Whether or not the classification label is multi_label. |
GoogleCloudAiplatformV1SearchDataItemsResponse
Response message for DatasetService.SearchDataItems.Fields | |
---|---|
dataItemViews[] |
The DataItemViews read. |
nextPageToken |
A token to retrieve next page of results. Pass to SearchDataItemsRequest.page_token to obtain that page. |
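The page-token contract above is the standard list-pagination loop. The sketch below uses a hypothetical `search_page` callable standing in for the DatasetService.SearchDataItems RPC, since the exact client call depends on the SDK in use:

```python
def iterate_all_data_items(search_page):
    """Drain every page of a SearchDataItems-style response.
    `search_page(page_token)` is a stand-in for the RPC: it returns
    (data_item_views, next_page_token), where an empty token means
    there are no further pages."""
    page_token = None
    while True:
        items, page_token = search_page(page_token)
        yield from items
        if not page_token:
            break
```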
GoogleCloudAiplatformV1SearchEntryPoint
Google search entry point.Fields | |
---|---|
renderedContent |
Optional. Web content snippet that can be embedded in a web page or an app webview. |
sdkBlob |
Optional. Base64 encoded JSON representing array of tuple. |
GoogleCloudAiplatformV1SearchFeaturesResponse
Response message for FeaturestoreService.SearchFeatures.Fields | |
---|---|
features[] |
The Features matching the request. Fields returned: * |
nextPageToken |
A token, which can be sent as SearchFeaturesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
GoogleCloudAiplatformV1SearchMigratableResourcesRequest
Request message for MigrationService.SearchMigratableResources.Fields | |
---|---|
filter |
A filter for your search. You can use the following types of filters: * Resource type filters. The following strings filter for a specific type of MigratableResource: * |
pageSize |
The standard page size. The default and maximum value is 100. |
pageToken |
The standard page token. |
GoogleCloudAiplatformV1SearchMigratableResourcesResponse
Response message for MigrationService.SearchMigratableResources.Fields | |
---|---|
migratableResources[] |
All migratable resources that can be migrated to the location specified in the request. |
nextPageToken |
The standard next-page token. The migratable_resources may not fill page_size in SearchMigratableResourcesRequest even when there are subsequent pages. |
GoogleCloudAiplatformV1SearchModelDeploymentMonitoringStatsAnomaliesRequest
Request message for JobService.SearchModelDeploymentMonitoringStatsAnomalies.Fields | |
---|---|
deployedModelId |
Required. The DeployedModel ID of the [ModelDeploymentMonitoringObjectiveConfig.deployed_model_id]. |
endTime |
The latest timestamp of stats being generated. If not set, indicates fetching stats up to the latest possible time. |
featureDisplayName |
The feature display name. If specified, only return the stats belonging to this feature. Format: ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies.feature_display_name, example: "user_destination". |
objectives[] |
Required. Objectives of the stats to retrieve. |
pageSize |
The standard list page size. |
pageToken |
A page token received from a previous JobService.SearchModelDeploymentMonitoringStatsAnomalies call. |
startTime |
The earliest timestamp of stats being generated. If not set, indicates fetching stats back to the earliest possible time. |
GoogleCloudAiplatformV1SearchModelDeploymentMonitoringStatsAnomaliesRequestStatsAnomaliesObjective
Stats requested for specific objective.Fields | |
---|---|
topFeatureCount |
If set, all attribution scores between SearchModelDeploymentMonitoringStatsAnomaliesRequest.start_time and SearchModelDeploymentMonitoringStatsAnomaliesRequest.end_time are fetched, and the page token does not take effect in this case. Only used to retrieve attribution scores for the top Features, which have the highest attribution scores in the latest monitoring run. |
type |
(No description provided) |
Enum type. Can be one of the following: | |
MODEL_DEPLOYMENT_MONITORING_OBJECTIVE_TYPE_UNSPECIFIED |
Default value, should not be set. |
RAW_FEATURE_SKEW |
Raw feature values' stats to detect skew between Training-Prediction datasets. |
RAW_FEATURE_DRIFT |
Raw feature values' stats to detect drift between Serving-Prediction datasets. |
FEATURE_ATTRIBUTION_SKEW |
Feature attribution scores to detect skew between Training-Prediction datasets. |
FEATURE_ATTRIBUTION_DRIFT |
Feature attribution scores to detect drift between Prediction datasets collected within different time windows. |
GoogleCloudAiplatformV1SearchModelDeploymentMonitoringStatsAnomaliesResponse
Response message for JobService.SearchModelDeploymentMonitoringStatsAnomalies.Fields | |
---|---|
monitoringStats[] |
Stats retrieved for requested objectives. There are at most 1000 ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies.prediction_stats in the response. |
nextPageToken |
The page token that can be used by the next JobService.SearchModelDeploymentMonitoringStatsAnomalies call. |
GoogleCloudAiplatformV1SearchNearestEntitiesRequest
The request message for FeatureOnlineStoreService.SearchNearestEntities.Fields | |
---|---|
query |
Required. The query. |
returnFullEntity |
Optional. If set to true, the full entities (including all vector values and metadata) of the nearest neighbors are returned; otherwise only the entity IDs of the nearest neighbors are returned. Note that returning full entities will significantly increase the latency and cost of the query. |
GoogleCloudAiplatformV1SearchNearestEntitiesResponse
Response message for FeatureOnlineStoreService.SearchNearestEntities.Fields | |
---|---|
nearestNeighbors |
The nearest neighbors of the query entity. |
GoogleCloudAiplatformV1ServiceAccountSpec
Configuration for the use of custom service account to run the workloads.Fields | |
---|---|
enableCustomServiceAccount |
Required. If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used. |
serviceAccount |
Optional. Required when all below conditions are met * |
GoogleCloudAiplatformV1ShieldedVmConfig
A set of Shielded Instance options. See Images using supported Shielded VM features.Fields | |
---|---|
enableSecureBoot |
Defines whether the instance has Secure Boot enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. |
GoogleCloudAiplatformV1SmoothGradConfig
Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdfFields | |
---|---|
featureNoiseSigma |
This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features. |
noiseSigma |
This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about normalization. For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature. |
noisySampleCount |
The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3. |
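The 10%-20% heuristic for noise_sigma above can be computed directly from the training data. A minimal sketch using the standard library; `suggested_noise_sigma` is an illustrative helper, not part of the API:

```python
import statistics

def suggested_noise_sigma(feature_values, fraction=0.15):
    """Pick noise_sigma as roughly 10%-20% of the input feature's
    standard deviation, per the recommendation above (0.15 splits
    the suggested range)."""
    return fraction * statistics.stdev(feature_values)
```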
GoogleCloudAiplatformV1SpecialistPool
SpecialistPool represents customers' own workforce to work on their data labeling jobs. It includes a group of specialist managers and workers. Managers are responsible for managing the workers in this pool as well as customers' data labeling jobs associated with this pool. Customers create a specialist pool as well as start data labeling jobs on Cloud; managers and workers handle the jobs using the CrowdCompute console.Fields | |
---|---|
displayName |
Required. The user-defined name of the SpecialistPool. The name can be up to 128 characters long and can consist of any UTF-8 characters. This field should be unique on project-level. |
name |
Required. The resource name of the SpecialistPool. |
pendingDataLabelingJobs[] |
Output only. The resource name of the pending data labeling jobs. |
specialistManagerEmails[] |
The email addresses of the managers in the SpecialistPool. |
specialistManagersCount |
Output only. The number of managers in this SpecialistPool. |
specialistWorkerEmails[] |
The email addresses of workers in the SpecialistPool. |
GoogleCloudAiplatformV1StartNotebookRuntimeOperationMetadata
Metadata information for NotebookService.StartNotebookRuntime.Fields | |
---|---|
genericMetadata |
The operation generic information. |
progressMessage |
A human-readable message that shows the intermediate progress details of NotebookRuntime. |
GoogleCloudAiplatformV1StratifiedSplit
Assigns input data to the training, validation, and test sets so that the distribution of values found in the categorical column (as specified by the key field) is mirrored within each split. The fraction values determine the relative sizes of the splits. For example, if the specified column has three values, with 50% of the rows having value "A", 25% value "B", and 25% value "C", and the split fractions are specified as 80/10/10, then the training set will constitute 80% of the input data, with about 50% of the training set rows having the value "A" for the specified column, about 25% having the value "B", and about 25% having the value "C". Only the top 500 occurring values are used; any values not in the top 500 are randomly assigned to a split. If fewer than three rows contain a specific value, those rows are randomly assigned. Supported only for tabular Datasets.
Fields | |
---|---|
key |
Required. The key is a name of one of the Dataset's data columns. The key provided must be for a categorical column. |
testFraction |
The fraction of the input data that is to be used to evaluate the Model. |
trainingFraction |
The fraction of the input data that is to be used to train the Model. |
validationFraction |
The fraction of the input data that is to be used to validate the Model. |
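The per-category assignment described above can be sketched as follows. This is an illustrative approximation of the behavior (ignoring the top-500 cap and small-group handling), not the service's actual algorithm:

```python
import random
from collections import defaultdict

def stratified_split(rows, key, fractions=(0.8, 0.1, 0.1), seed=0):
    """Divide rows into train/validation/test so that each split
    mirrors the distribution of values in the categorical `key`
    column: every category is split by the same fractions."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row)
    train, val, test = [], [], []
    for group in groups.values():
        rng.shuffle(group)
        n = len(group)
        n_train = int(fractions[0] * n)
        n_val = int(fractions[1] * n)
        train += group[:n_train]
        val += group[n_train:n_train + n_val]
        test += group[n_train + n_val:]
    return train, val, test
```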
GoogleCloudAiplatformV1StreamRawPredictRequest
Request message for PredictionService.StreamRawPredict.Fields | |
---|---|
httpBody |
The prediction input. Supports HTTP headers and arbitrary data payload. |
GoogleCloudAiplatformV1StreamingPredictRequest
Request message for PredictionService.StreamingPredict. The first message must contain endpoint field and optionally input. The subsequent messages must contain input.Fields | |
---|---|
inputs[] |
The prediction input. |
parameters |
The parameters that govern the prediction. |
GoogleCloudAiplatformV1StreamingPredictResponse
Response message for PredictionService.StreamingPredict.Fields | |
---|---|
outputs[] |
The prediction output. |
parameters |
The parameters that govern the prediction. |
GoogleCloudAiplatformV1StreamingReadFeatureValuesRequest
Request message for FeaturestoreOnlineServingService.StreamingReadFeatureValues.Fields | |
---|---|
entityIds[] |
Required. IDs of entities to read Feature values of. The maximum number of IDs is 100. For example, for a machine learning model predicting user clicks on a website, an entity ID could be |
featureSelector |
Required. Selector choosing Features of the target EntityType. Feature IDs will be deduplicated. |
GoogleCloudAiplatformV1StringArray
A list of string values.Fields | |
---|---|
values[] |
A list of string values. |
GoogleCloudAiplatformV1Study
A message representing a Study.Fields | |
---|---|
createTime |
Output only. Time at which the study was created. |
displayName |
Required. Describes the Study; the default value is an empty string. |
inactiveReason |
Output only. A human readable reason why the Study is inactive. This should be empty if a study is ACTIVE or COMPLETED. |
name |
Output only. The name of a study. The study's globally unique identifier. Format: |
state |
Output only. The detailed state of a Study. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
The study state is unspecified. |
ACTIVE |
The study is active. |
INACTIVE |
The study is stopped due to an internal error. |
COMPLETED |
The study is done when the service exhausts the parameter search space or max_trial_count is reached. |
studySpec |
Required. Configuration of the Study. |
GoogleCloudAiplatformV1StudySpec
Represents specification of a Study.Fields | |
---|---|
algorithm |
The search algorithm specified for the Study. |
Enum type. Can be one of the following: | |
ALGORITHM_UNSPECIFIED |
The default algorithm used by Vertex AI for hyperparameter tuning and Vertex AI Vizier. |
GRID_SEARCH |
Simple grid search within the feasible space. To use grid search, all parameters must be INTEGER , CATEGORICAL , or DISCRETE . |
RANDOM_SEARCH |
Simple random search within the feasible space. |
convexAutomatedStoppingSpec |
The automated early stopping spec using convex stopping rule. |
decayCurveStoppingSpec |
The automated early stopping spec using decay curve rule. |
measurementSelectionType |
Describes which measurement selection type will be used. |
Enum type. Can be one of the following: | |
MEASUREMENT_SELECTION_TYPE_UNSPECIFIED |
Will be treated as LAST_MEASUREMENT. |
LAST_MEASUREMENT |
Use the last measurement reported. |
BEST_MEASUREMENT |
Use the best measurement reported. |
medianAutomatedStoppingSpec |
The automated early stopping spec using median rule. |
metrics[] |
Required. Metric specs for the Study. |
observationNoise |
The observation noise level of the study. Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline. |
Enum type. Can be one of the following: | |
OBSERVATION_NOISE_UNSPECIFIED |
The default noise level chosen by Vertex AI. |
LOW |
Vertex AI assumes that the objective function is (nearly) perfectly reproducible, and will never repeat the same Trial parameters. |
HIGH |
Vertex AI will estimate the amount of noise in metric evaluations; it may repeat the same Trial parameters more than once. |
parameters[] |
Required. The set of parameters to tune. |
studyStoppingConfig |
Conditions for automated stopping of a Study. Enable automated stopping by configuring at least one condition. |
GoogleCloudAiplatformV1StudySpecConvexAutomatedStoppingSpec
Configuration for ConvexAutomatedStoppingSpec. When there are enough completed trials (configured by min_measurement_count), for pending trials with enough measurements and steps, the policy first computes an overestimate of the objective value at max_num_steps according to the slope of the incomplete objective value curve. No prediction can be made if the curve is completely flat. If the overestimation is worse than the best objective value of the completed trials, this pending trial will be early-stopped, but a last measurement will be added to the pending trial with max_num_steps and predicted objective value from the autoregression model.Fields | |
---|---|
learningRateParameterName |
The hyper-parameter name used in the tuning job that stands for learning rate. Leave it blank if learning rate is not a parameter of the tuning job. The learning_rate is used to estimate the objective value of the ongoing trial. |
maxStepCount |
Steps used in predicting the final objective for early stopped trials. In general, it's set to be the same as the defined steps in training / tuning. If not defined, it will learn it from the completed trials. When use_steps is false, this field is set to the maximum elapsed seconds. |
minMeasurementCount |
The minimal number of measurements in a Trial. Early-stopping checks will not trigger if there are fewer than min_measurement_count+1 completed trials, or for pending trials with fewer than min_measurement_count measurements. If not defined, the default value is 5. |
minStepCount |
Minimum number of steps for a trial to complete. Trials which do not have a measurement with step_count > min_step_count won't be considered for early stopping. It's ok to set it to 0, and a trial can be early stopped at any stage. By default, min_step_count is set to be one-tenth of the max_step_count. When use_elapsed_duration is true, this field is set to the minimum elapsed seconds. |
updateAllStoppedTrials |
ConvexAutomatedStoppingSpec by default only updates the trials that need to be early stopped using a newly trained auto-regressive model. When this flag is set to True, all stopped trials from the beginning are potentially updated in terms of their |
useElapsedDuration |
This bool determines whether or not the rule is applied based on elapsed_secs or steps. If use_elapsed_duration==false, the early stopping decision is made according to the predicted objective values according to the target steps. If use_elapsed_duration==true, elapsed_secs is used instead of steps. Also, in this case, the parameters max_num_steps and min_num_steps are overloaded to contain max_elapsed_seconds and min_elapsed_seconds. |
GoogleCloudAiplatformV1StudySpecDecayCurveAutomatedStoppingSpec
The decay curve automated stopping rule builds a Gaussian Process Regressor to predict the final objective value of a Trial based on the already completed Trials and the intermediate measurements of the current Trial. Early stopping is requested for the current Trial if there is very low probability to exceed the optimal value found so far.Fields | |
---|---|
useElapsedDuration |
True if Measurement.elapsed_duration is used as the x-axis of each Trial's Decay Curve. Otherwise, Measurement.step_count will be used as the x-axis. |
GoogleCloudAiplatformV1StudySpecMedianAutomatedStoppingSpec
The median automated stopping rule stops a pending Trial if the Trial's best objective_value is strictly below the median 'performance' of all completed Trials reported up to the Trial's last measurement. Currently, 'performance' refers to the running average of the objective values reported by the Trial in each measurement.Fields | |
---|---|
useElapsedDuration |
True if the median automated stopping rule applies to Measurement.elapsed_duration. In that case, the elapsed_duration field of the current Trial's latest measurement is used to compute the median objective value for each completed Trial. |
GoogleCloudAiplatformV1StudySpecMetricSpec
Represents a metric to optimize.Fields | |
---|---|
goal |
Required. The optimization goal of the metric. |
Enum type. Can be one of the following: | |
GOAL_TYPE_UNSPECIFIED |
Goal Type will default to maximize. |
MAXIMIZE |
Maximize the goal metric. |
MINIMIZE |
Minimize the goal metric. |
metricId |
Required. The ID of the metric. Must not contain whitespaces and must be unique amongst all MetricSpecs. |
safetyConfig |
Used for safe search. In this case, the metric will be a safety metric. You must provide a separate metric for the objective metric. |
GoogleCloudAiplatformV1StudySpecMetricSpecSafetyMetricConfig
Used in safe optimization to specify threshold levels and risk tolerance.Fields | |
---|---|
desiredMinSafeTrialsFraction |
Desired minimum fraction of safe trials (over total number of trials) that should be targeted by the algorithm at any time during the study (best effort). This should be between 0.0 and 1.0 and a value of 0.0 means that there is no minimum and an algorithm proceeds without targeting any specific fraction. A value of 1.0 means that the algorithm attempts to only Suggest safe Trials. |
safetyThreshold |
Safety threshold (boundary value between safe and unsafe). NOTE that if you leave SafetyMetricConfig unset, a default value of 0 will be used. |
GoogleCloudAiplatformV1StudySpecParameterSpec
Represents a single parameter to optimize.Fields | |
---|---|
categoricalValueSpec |
The value spec for a 'CATEGORICAL' parameter. |
conditionalParameterSpecs[] |
A conditional parameter node is active if the parameter's value matches the conditional node's parent_value_condition. If two items in conditional_parameter_specs have the same name, they must have disjoint parent_value_condition. |
discreteValueSpec |
The value spec for a 'DISCRETE' parameter. |
doubleValueSpec |
The value spec for a 'DOUBLE' parameter. |
integerValueSpec |
The value spec for an 'INTEGER' parameter. |
parameterId |
Required. The ID of the parameter. Must not contain whitespaces and must be unique amongst all ParameterSpecs. |
scaleType |
How the parameter should be scaled. Leave unset for |
Enum type. Can be one of the following: | |
SCALE_TYPE_UNSPECIFIED |
By default, no scaling is applied. |
UNIT_LINEAR_SCALE |
Scales the feasible space to (0, 1) linearly. |
UNIT_LOG_SCALE |
Scales the feasible space logarithmically to (0, 1). The entire feasible space must be strictly positive. |
UNIT_REVERSE_LOG_SCALE |
Scales the feasible space "reverse" logarithmically to (0, 1). The result is that values close to the top of the feasible space are spread out more than points near the bottom. The entire feasible space must be strictly positive. |
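The three scalings map a parameter's feasible range [lo, hi] into (0, 1). As a sketch (these helpers are illustrative, not SDK functions):

```python
import math

def unit_linear_scale(value, lo, hi):
    """UNIT_LINEAR_SCALE: linear map of [lo, hi] onto [0, 1]."""
    return (value - lo) / (hi - lo)

def unit_log_scale(value, lo, hi):
    """UNIT_LOG_SCALE: logarithmic map; the entire feasible
    space must be strictly positive."""
    return (math.log(value) - math.log(lo)) / (math.log(hi) - math.log(lo))

def unit_reverse_log_scale(value, lo, hi):
    """UNIT_REVERSE_LOG_SCALE: mirror of the log scale, so values
    near the top of the range are spread out more than values
    near the bottom."""
    return 1.0 - unit_log_scale(lo + hi - value, lo, hi)
```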
GoogleCloudAiplatformV1StudySpecParameterSpecCategoricalValueSpec
Value specification for a parameter in CATEGORICAL type.
Fields | |
---|---|
defaultValue |
A default value for a |
values[] |
Required. The list of possible categories. |
GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpec
Represents a parameter spec with condition from its parent parameter.Fields | |
---|---|
parameterSpec |
Required. The spec for a conditional parameter. |
parentCategoricalValues |
The spec for matching values from a parent parameter of |
parentDiscreteValues |
The spec for matching values from a parent parameter of |
parentIntValues |
The spec for matching values from a parent parameter of |
GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpecCategoricalValueCondition
Represents the spec to match categorical values from parent parameter.Fields | |
---|---|
values[] |
Required. Matches values of the parent parameter of 'CATEGORICAL' type. All values must exist in |
GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpecDiscreteValueCondition
Represents the spec to match discrete values from parent parameter.Fields | |
---|---|
values[] |
Required. Matches values of the parent parameter of 'DISCRETE' type. All values must exist in |
GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpecIntValueCondition
Represents the spec to match integer values from parent parameter.Fields | |
---|---|
values[] |
Required. Matches values of the parent parameter of 'INTEGER' type. All values must lie in |
GoogleCloudAiplatformV1StudySpecParameterSpecDiscreteValueSpec
Value specification for a parameter in DISCRETE type.
Fields | |
---|---|
defaultValue |
A default value for a |
values[] |
Required. A list of possible values. The list should be in increasing order and at least 1e-10 apart. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values. |
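The constraints on values[] (increasing order, entries at least 1e-10 apart, at most 1,000 values) can be checked client-side before submitting a spec. A hypothetical validation helper:

```python
def is_valid_discrete_values(values):
    """Check the documented constraints on a DISCRETE value list:
    non-empty, at most 1,000 entries, strictly increasing with
    consecutive values at least 1e-10 apart."""
    if not values or len(values) > 1000:
        return False
    return all(b - a >= 1e-10 for a, b in zip(values, values[1:]))
```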
GoogleCloudAiplatformV1StudySpecParameterSpecDoubleValueSpec
Value specification for a parameter in DOUBLE type.
Fields | |
---|---|
defaultValue |
A default value for a |
maxValue |
Required. Inclusive maximum value of the parameter. |
minValue |
Required. Inclusive minimum value of the parameter. |
GoogleCloudAiplatformV1StudySpecParameterSpecIntegerValueSpec
Value specification for a parameter in INTEGER type.
Fields | |
---|---|
defaultValue |
A default value for an |
maxValue |
Required. Inclusive maximum value of the parameter. |
minValue |
Required. Inclusive minimum value of the parameter. |
GoogleCloudAiplatformV1StudySpecStudyStoppingConfig
The configuration (stopping conditions) for automated stopping of a Study. Conditions include trial budgets, time budgets, and convergence detection.Fields | |
---|---|
maxDurationNoProgress |
If the objective value has not improved for this much time, stop the study. WARNING: Effective only for single-objective studies. |
maxNumTrials |
If there are more than this many trials, stop the study. |
maxNumTrialsNoProgress |
If the objective value has not improved for this many consecutive trials, stop the study. WARNING: Effective only for single-objective studies. |
maximumRuntimeConstraint |
If the specified time or duration has passed, stop the study. |
minNumTrials |
If there are fewer than this many COMPLETED trials, do not stop the study. |
minimumRuntimeConstraint |
Each "stopping rule" in this proto specifies an "if" condition. Before Vizier generates a new suggestion, it first checks each specified stopping rule, from top to bottom in this list. Note that the first few rules (e.g. minimum_runtime_constraint, min_num_trials) will prevent other stopping rules from being evaluated until they are met. For example, setting |
shouldStopAsap |
If true, a Study enters the STOPPING_ASAP state whenever it would normally enter the STOPPING state. The bottom line is: set to true if you want to interrupt ongoing evaluations of Trials as soon as the study stopping condition is met. (Please see Study.State documentation for the source of truth). |
GoogleCloudAiplatformV1StudyTimeConstraint
Time-based Constraint for StudyFields | |
---|---|
endTime |
Compares the wallclock time to this time. Must use UTC timezone. |
maxDuration |
Counts the wallclock time passed since the creation of this Study. |
GoogleCloudAiplatformV1SuggestTrialsMetadata
Details of operations that perform Trials suggestion.Fields | |
---|---|
clientId |
The identifier of the client that is requesting the suggestion. If multiple SuggestTrialsRequests have the same |
genericMetadata |
Operation metadata for suggesting Trials. |
GoogleCloudAiplatformV1SuggestTrialsRequest
Request message for VizierService.SuggestTrials.Fields | |
---|---|
clientId |
Required. The identifier of the client that is requesting the suggestion. If multiple SuggestTrialsRequests have the same |
contexts[] |
Optional. This allows you to specify the "context" for a Trial; a context is a slice (a subspace) of the search space. Typical uses for contexts: 1) You are using Vizier to tune a server for best performance, but there's a strong weekly cycle. The context specifies the day-of-week. This allows Tuesday to generalize from Wednesday without assuming that everything is identical. 2) Imagine you're optimizing some medical treatment for people. As they walk in the door, you know certain facts about them (e.g. sex, weight, height, blood-pressure). Put that information in the context, and Vizier will adapt its suggestions to the patient. 3) You want to do a fair A/B test efficiently. Specify the "A" and "B" conditions as contexts, and Vizier will generalize between "A" and "B" conditions. If they are similar, this will allow Vizier to converge to the optimum faster than if "A" and "B" were separate Studies. NOTE: You can also enter contexts as REQUESTED Trials, e.g. via the CreateTrial() RPC; that's the asynchronous option where you don't need a close association between contexts and suggestions. NOTE: All the Parameters you set in a context MUST be defined in the Study. NOTE: You must supply 0 or $suggestion_count contexts. If you don't supply any contexts, Vizier will make suggestions from the full search space specified in the StudySpec; if you supply a full set of contexts, each suggestion will match the corresponding context. NOTE: A Context with no features set matches anything, and allows suggestions from the full search space. NOTE: Contexts MUST lie within the search space specified in the StudySpec. It's an error if they don't. NOTE: Contexts preferentially match ACTIVE then REQUESTED trials before new suggestions are generated. NOTE: Generation of suggestions involves a match between a Context and (optionally) a REQUESTED trial; if that match is not fully specified, a suggestion will be generated in the merged subspace. |
suggestionCount |
Required. The number of suggestions requested. It must be positive. |
GoogleCloudAiplatformV1SuggestTrialsResponse
Response message for VizierService.SuggestTrials.Fields | |
---|---|
endTime |
The time at which operation processing completed. |
startTime |
The time at which the operation was started. |
studyState |
The state of the Study. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
The study state is unspecified. |
ACTIVE |
The study is active. |
INACTIVE |
The study is stopped due to an internal error. |
COMPLETED |
The study is done when the service exhausts the parameter search space or max_trial_count is reached. |
trials[] |
A list of Trials. |
GoogleCloudAiplatformV1SupervisedHyperParameters
Hyperparameters for SFT.Fields | |
---|---|
adapterSize |
Optional. Adapter size for tuning. |
Enum type. Can be one of the following: | |
ADAPTER_SIZE_UNSPECIFIED |
Adapter size is unspecified. |
ADAPTER_SIZE_ONE |
Adapter size 1. |
ADAPTER_SIZE_FOUR |
Adapter size 4. |
ADAPTER_SIZE_EIGHT |
Adapter size 8. |
ADAPTER_SIZE_SIXTEEN |
Adapter size 16. |
epochCount |
Optional. Number of complete passes the model makes over the entire training dataset during training. |
learningRateMultiplier |
Optional. Multiplier for adjusting the default learning rate. |
GoogleCloudAiplatformV1SupervisedTuningDataStats
Tuning data statistics for Supervised Tuning.Fields | |
---|---|
totalBillableCharacterCount |
Output only. Number of billable characters in the tuning dataset. |
totalTuningCharacterCount |
Output only. Number of tuning characters in the tuning dataset. |
tuningDatasetExampleCount |
Output only. Number of examples in the tuning dataset. |
tuningStepCount |
Output only. Number of tuning steps for this Tuning Job. |
userDatasetExamples[] |
Output only. Sample user messages from the training dataset URI. |
userInputTokenDistribution |
Output only. Dataset distributions for the user input tokens. |
userMessagePerExampleDistribution |
Output only. Dataset distributions for the messages per example. |
userOutputTokenDistribution |
Output only. Dataset distributions for the user output tokens. |
GoogleCloudAiplatformV1SupervisedTuningDatasetDistribution
Dataset distribution for Supervised Tuning.Fields | |
---|---|
buckets[] |
Output only. Defines the histogram bucket. |
max |
Output only. The maximum of the population values. |
mean |
Output only. The arithmetic mean of the values in the population. |
median |
Output only. The median of the values in the population. |
min |
Output only. The minimum of the population values. |
p5 |
Output only. The 5th percentile of the values in the population. |
p95 |
Output only. The 95th percentile of the values in the population. |
sum |
Output only. Sum of a given population of values. |
GoogleCloudAiplatformV1SupervisedTuningDatasetDistributionDatasetBucket
Dataset bucket used to create a histogram for the distribution given a population of values.Fields | |
---|---|
count |
Output only. Number of values in the bucket. |
left |
Output only. Left bound of the bucket. |
right |
Output only. Right bound of the bucket. |
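The distribution and bucket fields above can be mirrored with a small helper that summarizes a population of values. The nearest-rank percentile method and the [left, right) bucket convention are illustrative assumptions; the service's exact binning is not specified here.

```python
import statistics

def summarize(values, bucket_bounds):
    """Compute SupervisedTuningDatasetDistribution-style fields
    (min/max/mean/median/p5/p95/sum plus histogram buckets) for a
    population of values. bucket_bounds are (left, right) pairs,
    counted as half-open intervals [left, right)."""
    ordered = sorted(values)

    def pct(p):  # nearest-rank percentile, an illustrative choice
        k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
        return ordered[k]

    buckets = [
        {"left": lo, "right": hi,
         "count": sum(1 for v in values if lo <= v < hi)}
        for lo, hi in bucket_bounds
    ]
    return {
        "min": ordered[0], "max": ordered[-1],
        "mean": statistics.fmean(values),
        "median": statistics.median(values),
        "p5": pct(5), "p95": pct(95),
        "sum": sum(values), "buckets": buckets,
    }
```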
GoogleCloudAiplatformV1SupervisedTuningSpec
Tuning Spec for Supervised Tuning.Fields | |
---|---|
hyperParameters |
Optional. Hyperparameters for SFT. |
trainingDatasetUri |
Required. Cloud Storage path to file containing training dataset for tuning. The dataset must be formatted as a JSONL file. |
validationDatasetUri |
Optional. Cloud Storage path to file containing validation dataset for tuning. The dataset must be formatted as a JSONL file. |
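Putting the three fields together, a SupervisedTuningSpec might look like the following sketch; the `gs://` paths are placeholders and the hyperparameter values are arbitrary.

```python
# Sketch of a SupervisedTuningSpec as it could appear in a request body.
supervised_tuning_spec = {
    "trainingDatasetUri": "gs://my-bucket/train.jsonl",       # required, JSONL
    "validationDatasetUri": "gs://my-bucket/validate.jsonl",  # optional, JSONL
    "hyperParameters": {                                      # optional
        "adapterSize": "ADAPTER_SIZE_EIGHT",
        "epochCount": "4",
        "learningRateMultiplier": 1.0,
    },
}
```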
GoogleCloudAiplatformV1SyncFeatureViewResponse
Response message for FeatureOnlineStoreAdminService.SyncFeatureView.Fields | |
---|---|
featureViewSync |
Format: |
GoogleCloudAiplatformV1TFRecordDestination
The storage details for TFRecord output content.Fields | |
---|---|
gcsDestination |
Required. Google Cloud Storage location. |
GoogleCloudAiplatformV1Tensor
A tensor value type.Fields | |
---|---|
boolVal[] |
Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order. BOOL |
bytesVal[] |
STRING |
doubleVal[] |
DOUBLE |
dtype |
The data type of tensor. |
Enum type. Can be one of the following: | |
DATA_TYPE_UNSPECIFIED |
Not a legal value for DataType. Used to indicate a DataType field has not been set. |
BOOL |
Data types that all computation devices are expected to be capable to support. |
STRING |
(No description provided) |
FLOAT |
(No description provided) |
DOUBLE |
(No description provided) |
INT8 |
(No description provided) |
INT16 |
(No description provided) |
INT32 |
(No description provided) |
INT64 |
(No description provided) |
UINT8 |
(No description provided) |
UINT16 |
(No description provided) |
UINT32 |
(No description provided) |
UINT64 |
(No description provided) |
floatVal[] |
FLOAT |
int64Val[] |
INT64 |
intVal[] |
INT8 INT16 INT32 |
listVal[] |
A list of tensor values. |
shape[] |
Shape of the tensor. |
stringVal[] |
STRING |
structVal |
A map of string to tensor. |
tensorVal |
Serialized raw tensor content. |
uint64Val[] |
UINT64 |
uintVal[] |
UINT8 UINT16 UINT32 |
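To illustrate the row-major flattening rule for the `*Val[]` fields, here is a sketch of encoding a 2x3 float matrix; populating only the value list matching `dtype` follows the rule above, while the string-encoded int64 shape is an assumption about proto-JSON serialization.

```python
# Flatten a 2x3 matrix into a Tensor message in row-major order;
# only the value list corresponding to "dtype" (here floatVal) is set.
matrix = [[1.0, 2.0, 3.0],
          [4.0, 5.0, 6.0]]
tensor = {
    "dtype": "FLOAT",
    "shape": ["2", "3"],  # int64 dimensions serialize as strings in JSON
    "floatVal": [v for row in matrix for v in row],
}
```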
GoogleCloudAiplatformV1Tensorboard
Tensorboard is a physical database that stores users' training metrics. A default Tensorboard is provided in each region of a Google Cloud project. If needed, users can also create extra Tensorboards in their projects.Fields | |
---|---|
blobStoragePathPrefix |
Output only. Consumer project Cloud Storage path prefix used to store blob data, which can either be a bucket or directory. Does not end with a '/'. |
createTime |
Output only. Timestamp when this Tensorboard was created. |
description |
Description of this Tensorboard. |
displayName |
Required. User provided name of this Tensorboard. |
encryptionSpec |
Customer-managed encryption key spec for a Tensorboard. If set, this Tensorboard and all sub-resources of this Tensorboard will be secured by this key. |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
isDefault |
Used to indicate if the TensorBoard instance is the default one. Each project & region can have at most one default TensorBoard instance. Creation of a default TensorBoard instance and updating an existing TensorBoard instance to be default will mark all other TensorBoard instances (if any) as non default. |
labels |
The labels with user-defined metadata to organize your Tensorboards. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Tensorboard (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. |
name |
Output only. Name of the Tensorboard. Format: |
runCount |
Output only. The number of Runs stored in this Tensorboard. |
updateTime |
Output only. Timestamp when this Tensorboard was last updated. |
GoogleCloudAiplatformV1TensorboardBlob
One blob (e.g, image, graph) viewable on a blob metric plot.Fields | |
---|---|
data |
Optional. The bytes of the blob are not present unless they're returned by the ReadTensorboardBlobData endpoint. |
id |
Output only. A URI safe key uniquely identifying a blob. Can be used to locate the blob stored in the Cloud Storage bucket of the consumer project. |
GoogleCloudAiplatformV1TensorboardBlobSequence
One point viewable on a blob metric plot, but mostly just a wrapper message to work around the limitation that repeated fields can't be used directly within oneof fields.
Fields | |
---|---|
values[] |
List of blobs contained within the sequence. |
GoogleCloudAiplatformV1TensorboardExperiment
A TensorboardExperiment is a group of TensorboardRuns that are typically the results of a training job run, in a Tensorboard.Fields | |
---|---|
createTime |
Output only. Timestamp when this TensorboardExperiment was created. |
description |
Description of this TensorboardExperiment. |
displayName |
User provided name of this TensorboardExperiment. |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
The labels with user-defined metadata to organize your TensorboardExperiment. Label keys and values cannot be longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one TensorboardExperiment (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with |
name |
Output only. Name of the TensorboardExperiment. Format: |
source |
Immutable. Source of the TensorboardExperiment. Example: a custom training job. |
updateTime |
Output only. Timestamp when this TensorboardExperiment was last updated. |
GoogleCloudAiplatformV1TensorboardRun
TensorboardRun maps to a specific execution of a training job with a given set of hyperparameter values, model definition, dataset, etc.Fields | |
---|---|
createTime |
Output only. Timestamp when this TensorboardRun was created. |
description |
Description of this TensorboardRun. |
displayName |
Required. User provided name of this TensorboardRun. This value must be unique among all TensorboardRuns belonging to the same parent TensorboardExperiment. |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
labels |
The labels with user-defined metadata to organize your TensorboardRuns. This field will be used to filter and visualize Runs in the Tensorboard UI. For example, a Vertex AI training job can set a label aiplatform.googleapis.com/training_job_id=xxxxx to all the runs created within that job. An end user can set a label experiment_id=xxxxx for all the runs produced in a Jupyter notebook. These runs can be grouped by a label value and visualized together in the Tensorboard UI. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one TensorboardRun (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. |
name |
Output only. Name of the TensorboardRun. Format: |
updateTime |
Output only. Timestamp when this TensorboardRun was last updated. |
GoogleCloudAiplatformV1TensorboardTensor
One point viewable on a tensor metric plot.Fields | |
---|---|
value |
Required. Serialized form of https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/tensor.proto |
versionNumber |
Optional. Version number of TensorProto used to serialize value. |
GoogleCloudAiplatformV1TensorboardTimeSeries
TensorboardTimeSeries maps to time series produced in training runs.Fields | |
---|---|
createTime |
Output only. Timestamp when this TensorboardTimeSeries was created. |
description |
Description of this TensorboardTimeSeries. |
displayName |
Required. User provided name of this TensorboardTimeSeries. This value should be unique among all TensorboardTimeSeries resources belonging to the same TensorboardRun resource (parent resource). |
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
metadata |
Output only. Scalar, Tensor, or Blob metadata for this TensorboardTimeSeries. |
name |
Output only. Name of the TensorboardTimeSeries. |
pluginData |
Data of the current plugin, with the size limited to 65KB. |
pluginName |
Immutable. Name of the plugin this time series pertains to, such as Scalar, Tensor, or Blob. |
updateTime |
Output only. Timestamp when this TensorboardTimeSeries was last updated. |
valueType |
Required. Immutable. Type of TensorboardTimeSeries value. |
Enum type. Can be one of the following: | |
VALUE_TYPE_UNSPECIFIED |
The value type is unspecified. |
SCALAR |
Used for TensorboardTimeSeries that is a list of scalars. E.g. accuracy of a model over epochs/time. |
TENSOR |
Used for TensorboardTimeSeries that is a list of tensors. E.g. histograms of weights of layer in a model over epoch/time. |
BLOB_SEQUENCE |
Used for TensorboardTimeSeries that is a list of blob sequences. E.g. set of sample images with labels over epochs/time. |
GoogleCloudAiplatformV1TensorboardTimeSeriesMetadata
Describes metadata for a TensorboardTimeSeries.Fields | |
---|---|
maxBlobSequenceLength |
Output only. The largest blob sequence length (number of blobs) of all data points in this time series, if its ValueType is BLOB_SEQUENCE. |
maxStep |
Output only. Max step index of all data points within a TensorboardTimeSeries. |
maxWallTime |
Output only. Max wall clock timestamp of all data points within a TensorboardTimeSeries. |
GoogleCloudAiplatformV1ThresholdConfig
The config for feature monitoring threshold.Fields | |
---|---|
value |
Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical features, the distribution distance is calculated by L-infinity norm. 2. For numerical features, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature. |
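As a sketch of the two distance measures named above, the helpers below compute an L-infinity distance for categorical distributions and a base-2 Jensen-Shannon divergence for numerical ones, then apply the non-zero-threshold rule; the exact binning and log base used by the service are assumptions.

```python
import math

def l_infinity(p, q):
    """L-infinity distance between two categorical distributions,
    given as {category: probability} dicts."""
    cats = set(p) | set(q)
    return max(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in cats)

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two aligned
    probability vectors, as for binned numerical features."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def should_alert(distance, threshold):
    # A zero threshold means the feature is not monitored.
    return threshold > 0 and distance > threshold
```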
GoogleCloudAiplatformV1TimeSeriesData
All the data stored in a TensorboardTimeSeries.Fields | |
---|---|
tensorboardTimeSeriesId |
Required. The ID of the TensorboardTimeSeries, which will become the final component of the TensorboardTimeSeries' resource name |
valueType |
Required. Immutable. The value type of this time series. All the values in this time series data must match this value type. |
Enum type. Can be one of the following: | |
VALUE_TYPE_UNSPECIFIED |
The value type is unspecified. |
SCALAR |
Used for TensorboardTimeSeries that is a list of scalars. E.g. accuracy of a model over epochs/time. |
TENSOR |
Used for TensorboardTimeSeries that is a list of tensors. E.g. histograms of weights of layer in a model over epoch/time. |
BLOB_SEQUENCE |
Used for TensorboardTimeSeries that is a list of blob sequences. E.g. set of sample images with labels over epochs/time. |
values[] |
Required. Data points in this time series. |
GoogleCloudAiplatformV1TimeSeriesDataPoint
A TensorboardTimeSeries data point.Fields | |
---|---|
blobs |
A blob sequence value. |
scalar |
A scalar value. |
step |
Step index of this data point within the run. |
tensor |
A tensor value. |
wallTime |
Wall clock timestamp when this data point is generated by the end user. |
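A minimal sketch of a SCALAR TimeSeriesData payload carrying one data point, as a plain dict; the `loss` ID, the timestamp, and the value are placeholders.

```python
# Sketch of TimeSeriesData holding scalar points for one run metric.
time_series_data = {
    "tensorboardTimeSeriesId": "loss",
    "valueType": "SCALAR",  # all values below must match this type
    "values": [
        {
            "step": "100",                        # step index within the run
            "wallTime": "2024-01-01T00:00:00Z",   # placeholder timestamp
            "scalar": {"value": 0.42},
        },
    ],
}
```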
GoogleCloudAiplatformV1TimestampSplit
Assigns input data to training, validation, and test sets based on a provided timestamp. The youngest data pieces are assigned to the training set, the next to the validation set, and the oldest to the test set. Supported only for tabular Datasets.Fields | |
---|---|
key |
Required. The key is a name of one of the Dataset's data columns. The values of the key (the values in the column) must be in RFC 3339 |
testFraction |
The fraction of the input data that is to be used to evaluate the Model. |
trainingFraction |
The fraction of the input data that is to be used to train the Model. |
validationFraction |
The fraction of the input data that is to be used to validate the Model. |
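One way to read the timestamp-based split described above is the sketch below (youngest rows to training, oldest to test); the service's exact rounding and tie-breaking are not specified here, so treat this as illustrative. RFC 3339 timestamps with a common UTC offset sort correctly as strings.

```python
def timestamp_split(rows, key, training=0.8, validation=0.1, test=0.1):
    """Order rows by the timestamp column `key`, youngest first, then
    carve off training/validation/test fractions in that order."""
    ordered = sorted(rows, key=lambda r: r[key], reverse=True)  # youngest first
    n = len(ordered)
    n_train = int(n * training)
    n_val = int(n * validation)
    return (ordered[:n_train],                  # training: youngest rows
            ordered[n_train:n_train + n_val],   # validation
            ordered[n_train + n_val:])          # test: oldest rows
```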
GoogleCloudAiplatformV1TokensInfo
Tokens info with a list of tokens and the corresponding list of token ids.Fields | |
---|---|
tokenIds[] |
A list of token ids from the input. |
tokens[] |
A list of tokens from the input. |
GoogleCloudAiplatformV1Tool
Tool details that the model may use to generate a response. A Tool
is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g., FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
Fields | |
---|---|
functionDeclarations[] |
Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. |
retrieval |
Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation. |
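A sketch of a Tool that carries exactly one tool type, a single function declaration; the `get_weather` function and its parameter schema are invented for illustration.

```python
# One Tool, one tool type (functionDeclarations), well under the
# 64-declaration limit noted above.
tool = {
    "functionDeclarations": [
        {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "OBJECT",
                "properties": {"city": {"type": "STRING"}},
                "required": ["city"],
            },
        }
    ]
}
```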
GoogleCloudAiplatformV1TrainingConfig
CMLE training config. For every active learning labeling iteration, the system will train a machine learning model on CMLE. The trained model will be used by the data sampling algorithm to select DataItems.Fields | |
---|---|
timeoutTrainingMilliHours |
The timeout for the CMLE training job, expressed in milli-hours; i.e., a value of 1,000 in this field means 1 hour. |
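Since the field is expressed in milli-hours rather than hours, a tiny conversion helper avoids off-by-1000 mistakes; the helper name is ours, not part of the API.

```python
def hours_to_milli_hours(hours: float) -> int:
    """Convert hours to the milli-hour units used by
    timeoutTrainingMilliHours (1 hour == 1,000 milli-hours)."""
    return int(hours * 1000)
```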
GoogleCloudAiplatformV1TrainingPipeline
The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and optionally may also export data from Vertex AI's Dataset which becomes the training input, upload the Model to Vertex AI, and evaluate the Model.Fields | |
---|---|
createTime |
Output only. Time when the TrainingPipeline was created. |
displayName |
Required. The user-defined name of this TrainingPipeline. |
encryptionSpec |
Customer-managed encryption key spec for a TrainingPipeline. If set, this TrainingPipeline will be secured by this key. Note: Model trained by this TrainingPipeline is also secured by this key if model_to_upload is not set separately. |
endTime |
Output only. Time when the TrainingPipeline entered any of the following states: |
error |
Output only. Only populated when the pipeline's state is |
inputDataConfig |
Specifies Vertex AI owned input data that may be used for training the Model. The TrainingPipeline's training_task_definition should make clear whether this config is used and if there are any special requirements on how it should be filled. If nothing about this config is mentioned in the training_task_definition, then it should be assumed that the TrainingPipeline does not depend on this configuration. |
labels |
The labels with user-defined metadata to organize TrainingPipelines. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
modelId |
Optional. The ID to use for the uploaded Model, which will become the final component of the model resource name. This value may be up to 63 characters, and valid characters are |
modelToUpload |
Describes the Model that may be uploaded (via ModelService.UploadModel) by this TrainingPipeline. The TrainingPipeline's training_task_definition should make clear whether this Model description should be populated, and if there are any special requirements regarding how it should be filled. If nothing is mentioned in the training_task_definition, then it should be assumed that this field should not be filled and the training task either uploads the Model without a need of this information, or that training task does not support uploading a Model as part of the pipeline. When the Pipeline's state becomes |
name |
Output only. Resource name of the TrainingPipeline. |
parentModel |
Optional. When this field is specified, the |
startTime |
Output only. Time when the TrainingPipeline for the first time entered the |
state |
Output only. The detailed state of the pipeline. |
Enum type. Can be one of the following: | |
PIPELINE_STATE_UNSPECIFIED |
The pipeline state is unspecified. |
PIPELINE_STATE_QUEUED |
The pipeline has been created or resumed, and processing has not yet begun. |
PIPELINE_STATE_PENDING |
The service is preparing to run the pipeline. |
PIPELINE_STATE_RUNNING |
The pipeline is in progress. |
PIPELINE_STATE_SUCCEEDED |
The pipeline completed successfully. |
PIPELINE_STATE_FAILED |
The pipeline failed. |
PIPELINE_STATE_CANCELLING |
The pipeline is being cancelled. From this state, the pipeline may only go to either PIPELINE_STATE_SUCCEEDED, PIPELINE_STATE_FAILED or PIPELINE_STATE_CANCELLED. |
PIPELINE_STATE_CANCELLED |
The pipeline has been cancelled. |
PIPELINE_STATE_PAUSED |
The pipeline has been stopped, and can be resumed. |
trainingTaskDefinition |
Required. A Google Cloud Storage path to the YAML file that defines the training task which is responsible for producing the model artifact, and may also include additional auxiliary work. The definition files that can be used here are found in gs://google-cloud-aiplatform/schema/trainingjob/definition/. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access. |
trainingTaskInputs |
Required. The training task's parameter(s), as specified in the training_task_definition's |
trainingTaskMetadata |
Output only. The metadata information as specified in the training_task_definition's |
updateTime |
Output only. Time when the TrainingPipeline was most recently updated. |
GoogleCloudAiplatformV1Trial
A message representing a Trial. A Trial contains a unique set of Parameters that has been or will be evaluated, along with the objective metrics obtained by running the Trial.Fields | |
---|---|
clientId |
Output only. The identifier of the client that originally requested this Trial. Each client is identified by a unique client_id. When a client asks for a suggestion, Vertex AI Vizier will assign it a Trial. The client should evaluate the Trial, complete it, and report back to Vertex AI Vizier. If a suggestion is requested again by the same client_id before the Trial is completed, the same Trial will be returned. Multiple clients with different client_ids can ask for suggestions simultaneously; each of them will get their own Trial. |
customJob |
Output only. The CustomJob name linked to the Trial. It's set for a HyperparameterTuningJob's Trial. |
endTime |
Output only. Time when the Trial's status changed to |
finalMeasurement |
Output only. The final measurement containing the objective value. |
id |
Output only. The identifier of the Trial assigned by the service. |
infeasibleReason |
Output only. A human readable string describing why the Trial is infeasible. This is set only if Trial state is |
measurements[] |
Output only. A list of measurements that are strictly lexicographically ordered by their induced tuples (steps, elapsed_duration). These are used for early stopping computations. |
name |
Output only. Resource name of the Trial assigned by the service. |
parameters[] |
Output only. The parameters of the Trial. |
startTime |
Output only. Time when the Trial was started. |
state |
Output only. The detailed state of the Trial. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
The Trial state is unspecified. |
REQUESTED |
Indicates that a specific Trial has been requested, but it has not yet been suggested by the service. |
ACTIVE |
Indicates that the Trial has been suggested. |
STOPPING |
Indicates that the Trial should stop according to the service. |
SUCCEEDED |
Indicates that the Trial is completed successfully. |
INFEASIBLE |
Indicates that the Trial should not be attempted again. The service will set a Trial to INFEASIBLE when it's done but missing the final_measurement. |
webAccessUris |
Output only. URIs for accessing interactive shells (one URI for each training node). Only available if this trial is part of a HyperparameterTuningJob and the job's trial_job_spec.enable_web_access field is |
GoogleCloudAiplatformV1TrialContext
Next ID: 3Fields | |
---|---|
description |
A human-readable field which can store a description of this context. This will become part of the resulting Trial's description field. |
parameters[] |
If/when a Trial is generated or selected from this Context, its Parameters will match any parameters specified here. (I.e. if this context specifies parameter name:'a' int_value:3, then a resulting Trial will have int_value:3 for its parameter named 'a'.) Note that we first attempt to match existing REQUESTED Trials with contexts, and if there are no matches, we generate suggestions in the subspace defined by the parameters specified here. NOTE: a Context without any Parameters matches the entire feasible search space. |
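The matching rule described above (a context pins down some parameters; a context with no parameters matches anything) can be sketched as follows; the dict shapes and parameter names are hypothetical illustrations of TrialParameter entries.

```python
def matches(context_params, trial_params):
    """A Trial matches a context if it agrees on every parameter the
    context specifies; an empty context matches any Trial."""
    trial = {p["parameterId"]: p["value"] for p in trial_params}
    return all(trial.get(p["parameterId"]) == p["value"]
               for p in context_params)

# A context specifying parameter 'a' with int_value 3, and a Trial
# whose parameters include that value.
ctx = [{"parameterId": "a", "value": {"intValue": "3"}}]
trial = [{"parameterId": "a", "value": {"intValue": "3"}},
         {"parameterId": "lr", "value": {"doubleValue": 0.1}}]
```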
GoogleCloudAiplatformV1TrialParameter
A message representing a parameter to be tuned.Fields | |
---|---|
parameterId |
Output only. The ID of the parameter. The parameter should be defined in StudySpec's Parameters. |
value |
Output only. The value of the parameter. |
GoogleCloudAiplatformV1TunedModel
The Model Registry Model and Online Prediction Endpoint associated with this TuningJob.Fields | |
---|---|
endpoint |
Output only. A resource name of an Endpoint. Format: |
model |
Output only. The resource name of the TunedModel. Format: |
GoogleCloudAiplatformV1TuningDataStats
The tuning data statistic values for TuningJob.Fields | |
---|---|
supervisedTuningDataStats |
The SFT Tuning data stats. |
GoogleCloudAiplatformV1TuningJob
Represents a TuningJob that runs with Google owned models.Fields | |
---|---|
baseModel |
The base model that is being tuned, e.g., "gemini-1.0-pro-002". |
createTime |
Output only. Time when the TuningJob was created. |
description |
Optional. The description of the TuningJob. |
encryptionSpec |
Customer-managed encryption key options for a TuningJob. If this is set, then all resources created by the TuningJob will be encrypted with the provided encryption key. |
endTime |
Output only. Time when the TuningJob entered any of the following JobStates: |
error |
Output only. Only populated when job's state is |
experiment |
Output only. The Experiment associated with this TuningJob. |
labels |
Optional. The labels with user-defined metadata to organize TuningJob and generated resources such as Model and Endpoint. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
name |
Output only. Identifier. Resource name of a TuningJob. Format: |
startTime |
Output only. Time when the TuningJob for the first time entered the |
state |
Output only. The detailed state of the job. |
Enum type. Can be one of the following: | |
JOB_STATE_UNSPECIFIED |
The job state is unspecified. |
JOB_STATE_QUEUED |
The job has just been created or resumed, and processing has not yet begun. |
JOB_STATE_PENDING |
The service is preparing to run the job. |
JOB_STATE_RUNNING |
The job is in progress. |
JOB_STATE_SUCCEEDED |
The job completed successfully. |
JOB_STATE_FAILED |
The job failed. |
JOB_STATE_CANCELLING |
The job is being cancelled. From this state, the job may only go to either JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, or JOB_STATE_CANCELLED. |
JOB_STATE_CANCELLED |
The job has been cancelled. |
JOB_STATE_PAUSED |
The job has been stopped, and can be resumed. |
JOB_STATE_EXPIRED |
The job has expired. |
JOB_STATE_UPDATING |
The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state. |
JOB_STATE_PARTIALLY_SUCCEEDED |
The job has partially succeeded; some results may be missing due to errors. |
supervisedTuningSpec |
Tuning Spec for Supervised Fine Tuning. |
tunedModel |
Output only. The tuned model resources associated with this TuningJob. |
tunedModelDisplayName |
Optional. The display name of the TunedModel. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
tuningDataStats |
Output only. The tuning data statistics associated with this TuningJob. |
updateTime |
Output only. Time when the TuningJob was most recently updated. |
GoogleCloudAiplatformV1UndeployIndexOperationMetadata
Runtime operation information for IndexEndpointService.UndeployIndex.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1UndeployIndexRequest
Request message for IndexEndpointService.UndeployIndex.Fields | |
---|---|
deployedIndexId |
Required. The ID of the DeployedIndex to be undeployed from the IndexEndpoint. |
GoogleCloudAiplatformV1UndeployModelOperationMetadata
Runtime operation information for EndpointService.UndeployModel.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1UndeployModelRequest
Request message for EndpointService.UndeployModel.Fields | |
---|---|
deployedModelId |
Required. The ID of the DeployedModel to be undeployed from the Endpoint. |
trafficSplit |
If this field is provided, then the Endpoint's traffic_split will be overwritten with it. If the last DeployedModel is being undeployed from the Endpoint, the Endpoint.traffic_split will always end up empty when this call returns. A DeployedModel will be successfully undeployed only if it doesn't have any traffic assigned to it when this method executes, or if this field unassigns any traffic from it. |
GoogleCloudAiplatformV1UnmanagedContainerModel
Contains model information necessary to perform batch prediction without requiring a full model import.Fields | |
---|---|
artifactUri |
The path to the directory containing the Model artifact and any of its supporting files. |
containerSpec |
Input only. The specification of the container that is to be used when deploying this Model. |
predictSchemata |
Contains the schemata used in Model's predictions and explanations |
GoogleCloudAiplatformV1UpdateDeploymentResourcePoolOperationMetadata
Runtime operation information for UpdateDeploymentResourcePool method.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1UpdateExplanationDatasetOperationMetadata
Runtime operation information for ModelService.UpdateExplanationDataset.Fields | |
---|---|
genericMetadata |
The common part of the operation metadata. |
GoogleCloudAiplatformV1UpdateExplanationDatasetRequest
Request message for ModelService.UpdateExplanationDataset.Fields | |
---|---|
examples |
The example config containing the location of the dataset. |
GoogleCloudAiplatformV1UpdateFeatureGroupOperationMetadata
Details of operations that update a FeatureGroup.Fields | |
---|---|
genericMetadata |
Operation metadata for FeatureGroup. |
GoogleCloudAiplatformV1UpdateFeatureOnlineStoreOperationMetadata
Details of operations that update a FeatureOnlineStore.Fields | |
---|---|
genericMetadata |
Operation metadata for FeatureOnlineStore. |
GoogleCloudAiplatformV1UpdateFeatureOperationMetadata
Details of operations that update a Feature.Fields | |
---|---|
genericMetadata |
Operation metadata for Feature Update. |
GoogleCloudAiplatformV1UpdateFeatureViewOperationMetadata
Details of operations that update a FeatureView.Fields | |
---|---|
genericMetadata |
Operation metadata for FeatureView Update. |
GoogleCloudAiplatformV1UpdateFeaturestoreOperationMetadata
Details of operations that update a Featurestore.Fields | |
---|---|
genericMetadata |
Operation metadata for Featurestore. |
GoogleCloudAiplatformV1UpdateIndexOperationMetadata
Runtime operation information for IndexService.UpdateIndex.Fields | |
---|---|
genericMetadata |
The operation generic information. |
nearestNeighborSearchOperationMetadata |
The operation metadata with regard to Matching Engine Index operation. |
GoogleCloudAiplatformV1UpdateModelDeploymentMonitoringJobOperationMetadata
Runtime operation information for JobService.UpdateModelDeploymentMonitoringJob.Fields | |
---|---|
genericMetadata |
The operation generic information. |
GoogleCloudAiplatformV1UpdatePersistentResourceOperationMetadata
Details of operations that update a PersistentResource.Fields | |
---|---|
genericMetadata |
Operation metadata for PersistentResource. |
progressMessage |
Progress message for the update LRO. |
GoogleCloudAiplatformV1UpdateSpecialistPoolOperationMetadata
Runtime operation metadata for SpecialistPoolService.UpdateSpecialistPool.Fields | |
---|---|
genericMetadata |
The operation generic information. |
specialistPool |
Output only. The name of the SpecialistPool to which the specialists are being added. Format: |
GoogleCloudAiplatformV1UpdateTensorboardOperationMetadata
Details of operations that update a Tensorboard.Fields | |
---|---|
genericMetadata |
Operation metadata for Tensorboard. |
GoogleCloudAiplatformV1UpgradeNotebookRuntimeOperationMetadata
Metadata information for NotebookService.UpgradeNotebookRuntime.Fields | |
---|---|
genericMetadata |
The operation generic information. |
progressMessage |
A human-readable message that shows the intermediate progress details of NotebookRuntime. |
GoogleCloudAiplatformV1UploadModelOperationMetadata
Details of ModelService.UploadModel operation.Fields | |
---|---|
genericMetadata |
The common part of the operation metadata. |
GoogleCloudAiplatformV1UploadModelRequest
Request message for ModelService.UploadModel.Fields | |
---|---|
model |
Required. The Model to create. |
modelId |
Optional. The ID to use for the uploaded Model, which will become the final component of the model resource name. This value may be up to 63 characters, and valid characters are |
parentModel |
Optional. The resource name of the model into which to upload the version. Only specify this field when uploading a new version. |
serviceAccount |
Optional. The user-provided custom service account to use to do the model upload. If empty, Vertex AI Service Agent will be used to access resources needed to upload the model. This account must belong to the target project where the model is uploaded to, i.e., the project specified in the |
GoogleCloudAiplatformV1UploadModelResponse
Response message of ModelService.UploadModel operation.Fields | |
---|---|
model |
The name of the uploaded Model resource. Format: |
modelVersionId |
Output only. The version ID of the model that is uploaded. |
GoogleCloudAiplatformV1UpsertDatapointsRequest

Request message for IndexService.UpsertDatapoints.

Field | Description
---|---
datapoints[] | A list of datapoints to be created/updated.
updateMask | Optional. Update mask is used to specify the fields to be overwritten in the datapoints by the update. The fields specified in the update_mask are relative to each IndexDatapoint inside datapoints, not the full request. Updatable fields: * Use
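A minimal sketch of how such a request body can be assembled as JSON-style dicts. The index datapoint ID, feature vector, and `restricts` mask below are illustrative placeholders, not real resources:

```python
# Build an UpsertDatapoints request body as a plain dict, mirroring the
# JSON the REST endpoint accepts. The update mask is relative to each
# IndexDatapoint inside datapoints, not to the full request.
def build_upsert_body(datapoints, update_fields=None):
    body = {"datapoints": datapoints}
    if update_fields:
        body["updateMask"] = ",".join(update_fields)
    return body

body = build_upsert_body(
    [{"datapointId": "dp-1", "featureVector": [0.1, 0.2, 0.3]}],
    update_fields=["restricts"],
)
```

With an update mask set, only the named fields of each existing datapoint are overwritten; without one, the whole datapoint is replaced.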
GoogleCloudAiplatformV1UserActionReference

References an API call. It contains more information about long-running operations and Jobs that are triggered by the API call.

Field | Description
---|---
dataLabelingJob | For API calls that start a LabelingJob. Resource name of the LabelingJob. Format:
method | The method name of the API RPC call. For example, "/google.cloud.aiplatform.{apiVersion}.DatasetService.CreateDataset"
operation | For API calls that return a long-running operation. Resource name of the long-running operation. Format:

GoogleCloudAiplatformV1Value

Value is the value of the field.

Field | Description
---|---
doubleValue | A double value.
intValue | An integer value.
stringValue | A string value.

GoogleCloudAiplatformV1VertexAISearch

Retrieve from a Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation

Field | Description
---|---
datastore | Required. Fully qualified Vertex AI Search datastore resource ID. Format:

GoogleCloudAiplatformV1VideoMetadata

Metadata describing the input video content.

Field | Description
---|---
endOffset | Optional. The end offset of the video.
startOffset | Optional. The start offset of the video.
GoogleCloudAiplatformV1WorkerPoolSpec

Represents the spec of a worker pool in a job.

Field | Description
---|---
containerSpec | The custom container task.
diskSpec | Disk spec.
machineSpec | Optional. Immutable. The specification of a single machine.
nfsMounts[] | Optional. List of NFS mount specs.
pythonPackageSpec | The Python packaged task.
replicaCount | Optional. The number of worker replicas to use for this worker pool.

GoogleCloudAiplatformV1WriteFeatureValuesPayload

Contains Feature values to be written for a specific entity.

Field | Description
---|---
entityId | Required. The ID of the entity.
featureValues | Required. Feature values to be written, mapping from Feature ID to value. Up to 100,000
GoogleCloudAiplatformV1WriteFeatureValuesRequest

Request message for FeaturestoreOnlineServingService.WriteFeatureValues.

Field | Description
---|---
payloads[] | Required. The entities to be written. Up to 100,000 feature values can be written across all
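The payload shape above can be sketched as follows. The entity ID and feature IDs are hypothetical, and the value-wrapping rule (`stringValue` vs. `doubleValue`) is a simplifying assumption for illustration; the real FeatureValue message supports more value types:

```python
# Assemble a WriteFeatureValues request body as JSON-style dicts:
# one payload per entity, each mapping Feature ID -> wrapped value.
def build_write_request(entity_values):
    payloads = []
    for entity_id, features in entity_values.items():
        payloads.append({
            "entityId": entity_id,
            "featureValues": {
                fid: {"stringValue": v} if isinstance(v, str)
                     else {"doubleValue": float(v)}
                for fid, v in features.items()
            },
        })
    return {"payloads": payloads}

req = build_write_request({"user-123": {"age": 31, "country": "DE"}})
```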
GoogleCloudAiplatformV1WriteTensorboardExperimentDataRequest

Request message for TensorboardService.WriteTensorboardExperimentData.

Field | Description
---|---
writeRunDataRequests[] | Required. Requests containing per-run TensorboardTimeSeries data to write.

GoogleCloudAiplatformV1WriteTensorboardRunDataRequest

Request message for TensorboardService.WriteTensorboardRunData.

Field | Description
---|---
tensorboardRun | Required. The resource name of the TensorboardRun to write data to. Format:
timeSeriesData[] | Required. The TensorboardTimeSeries data to write. Values within a time series are indexed by their step value. Repeated writes to the same step will overwrite the existing value for that step. The upper limit of data points per write request is 5000.
GoogleCloudAiplatformV1XraiAttribution

An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models.

Field | Description
---|---
blurBaselineConfig | Config for XRAI with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
smoothGradConfig | Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
stepCount | Required. The number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range. The valid range is [1, 100], inclusive.
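A hedged sketch of assembling such a config as a JSON-style dict. The `maxBlurSigma` key is assumed here as the blur-baseline knob for illustration; the bounds check mirrors the documented [1, 100] range, with 50 as the suggested starting point:

```python
# Build an XraiAttribution-style config dict; stepCount must stay
# within the documented [1, 100] range.
def make_xrai_config(step_count=50, blur_max_sigma=None):
    if not 1 <= step_count <= 100:
        raise ValueError("stepCount must be in [1, 100]")
    config = {"stepCount": step_count}
    if blur_max_sigma is not None:
        # Assumed field name for the blur-baseline configuration.
        config["blurBaselineConfig"] = {"maxBlurSigma": blur_max_sigma}
    return config
```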
GoogleCloudLocationListLocationsResponse

The response message for Locations.ListLocations.

Field | Description
---|---
locations[] | A list of locations that matches the specified filter in the request.
nextPageToken | The standard List next-page token.

GoogleCloudLocationLocation

A resource that represents a Google Cloud location.

Field | Description
---|---
displayName | The friendly name for this location, typically a nearby city name. For example, "Tokyo".
labels | Cross-service attributes for the location. For example, {"cloud.googleapis.com/region": "us-east1"}
locationId | The canonical id for this location. For example:
metadata | Service-specific metadata. For example, the available capacity at the given location.
name | Resource name for the location, which may vary between implementations. For example:
GoogleIamV1Binding

Associates `members`, or principals, with a `role`.

Field | Description
---|---
condition | The condition that is associated with this binding. If the condition evaluates to
members[] | Specifies the principals requesting access for a Google Cloud resource.
role | Role that is assigned to the list of
GoogleIamV1Policy

An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources. A `Policy` is a collection of `bindings`. A `binding` binds one or more `members`, or principals, to a single `role`. Principals can be user accounts, service accounts, Google groups, and domains (such as G Suite). A `role` is a named list of permissions; each `role` can be an IAM predefined role or a user-created custom role. For some types of Google Cloud resources, a `binding` can also specify a `condition`, which is a logical expression that allows access to a resource only if the expression evaluates to `true`. A condition can add constraints based on attributes of the request, the resource, or both. To learn which resources support conditions in their IAM policies, see the IAM documentation.

JSON example:

```json
{
  "bindings": [
    {
      "role": "roles/resourcemanager.organizationAdmin",
      "members": [
        "user:mike@example.com",
        "group:admins@example.com",
        "domain:google.com",
        "serviceAccount:my-project-id@appspot.gserviceaccount.com"
      ]
    },
    {
      "role": "roles/resourcemanager.organizationViewer",
      "members": ["user:eve@example.com"],
      "condition": {
        "title": "expirable access",
        "description": "Does not grant access after Sep 2020",
        "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')"
      }
    }
  ],
  "etag": "BwWWja0YfJA=",
  "version": 3
}
```

YAML example:

```yaml
bindings:
- members:
  - user:mike@example.com
  - group:admins@example.com
  - domain:google.com
  - serviceAccount:my-project-id@appspot.gserviceaccount.com
  role: roles/resourcemanager.organizationAdmin
- members:
  - user:eve@example.com
  role: roles/resourcemanager.organizationViewer
  condition:
    title: expirable access
    description: Does not grant access after Sep 2020
    expression: request.time < timestamp('2020-10-01T00:00:00.000Z')
etag: BwWWja0YfJA=
version: 3
```

For a description of IAM and its features, see the IAM documentation.

Field | Description
---|---
bindings[] | Associates a list of
etag |
version | Specifies the format of the policy. Valid values are
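The safe way to change a policy is a read-modify-write cycle keyed on `etag`: fetch the policy, edit it locally, and send it back with the original etag so a concurrent change is rejected rather than silently overwritten. A minimal sketch of the modify step, operating on the JSON shape shown above (the role and member values are illustrative):

```python
# Add a member to the unconditional binding for a role, creating the
# binding if it does not exist. The policy's etag is left untouched so
# the subsequent setIamPolicy call can detect concurrent modification.
def add_member(policy, role, member):
    for binding in policy.setdefault("bindings", []):
        if binding["role"] == role and "condition" not in binding:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    policy["bindings"].append({"role": role, "members": [member]})
    return policy

policy = {"bindings": [], "etag": "BwWWja0YfJA=", "version": 3}
add_member(policy, "roles/viewer", "user:eve@example.com")
```

Adding the same member twice is a no-op, which keeps the operation safe to retry.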
GoogleIamV1SetIamPolicyRequest

Request message for the `SetIamPolicy` method.

Field | Description
---|---
policy | REQUIRED: The complete policy to be applied to the

GoogleIamV1TestIamPermissionsResponse

Response message for the `TestIamPermissions` method.

Field | Description
---|---
permissions[] | A subset of

GoogleLongrunningListOperationsResponse

The response message for Operations.ListOperations.

Field | Description
---|---
nextPageToken | The standard List next-page token.
operations[] | A list of operations that matches the specified filter in the request.
GoogleLongrunningOperation

This resource represents a long-running operation that is the result of a network API call.

Field | Description
---|---
done | If the value is
error | The error result of the operation in case of failure or cancellation.
metadata | Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
name | The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the
response | The normal, successful response of the operation. If the original method returns no data on success, such as
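The usual client pattern is to poll the operation by `name` until `done` is true, then read either `error` or `response`. A minimal sketch, with a caller-supplied (hypothetical) fetch function standing in for the HTTP GET on the operation resource:

```python
import time

# Poll a long-running operation until it completes. `fetch` is any
# callable that returns the operation's current JSON state by name.
def wait_for_operation(fetch, name, poll_seconds=0, max_polls=100):
    for _ in range(max_polls):
        op = fetch(name)
        if op.get("done"):
            if "error" in op:
                raise RuntimeError(op["error"].get("message", "operation failed"))
            return op.get("response")
        time.sleep(poll_seconds)
    raise TimeoutError(f"operation {name} did not finish")

# Simulated transport: the operation succeeds on the third poll.
states = iter([{"done": False}, {"done": False},
               {"done": True, "response": {"name": "models/123"}}])
result = wait_for_operation(lambda n: next(states), "operations/op-1")
```

Production clients typically add exponential backoff between polls instead of a fixed sleep.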
GoogleRpcStatus

The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.

Field | Description
---|---
code | The status code, which should be an enum value of google.rpc.Code.
details[] | A list of messages that carry the error details. There is a common set of message types for APIs to use.
message | A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
GoogleTypeColor

Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to and from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor's `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn't have information about the absolute color space that should be used to interpret the RGB value (for example, sRGB, Adobe RGB, DCI-P3, and BT.2020). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most `1e-5`.

Example (Java):

```java
import com.google.type.Color;

// ...
public static java.awt.Color fromProto(Color protocolor) {
  float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f;
  return new java.awt.Color(
      protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha);
}

public static Color toProto(java.awt.Color color) {
  float red = (float) color.getRed();
  float green = (float) color.getGreen();
  float blue = (float) color.getBlue();
  float denominator = 255.0f;
  Color.Builder resultBuilder = Color.newBuilder()
      .setRed(red / denominator)
      .setGreen(green / denominator)
      .setBlue(blue / denominator);
  int alpha = color.getAlpha();
  if (alpha != 255) {
    resultBuilder.setAlpha(
        FloatValue.newBuilder().setValue(((float) alpha) / denominator).build());
  }
  return resultBuilder.build();
}
// ...
```

Example (iOS / Obj-C):

```objectivec
// ...
static UIColor* fromProto(Color* protocolor) {
  float red = [protocolor red];
  float green = [protocolor green];
  float blue = [protocolor blue];
  FloatValue* alpha_wrapper = [protocolor alpha];
  float alpha = 1.0;
  if (alpha_wrapper != nil) {
    alpha = [alpha_wrapper value];
  }
  return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}

static Color* toProto(UIColor* color) {
  CGFloat red, green, blue, alpha;
  if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) {
    return nil;
  }
  Color* result = [[Color alloc] init];
  [result setRed:red];
  [result setGreen:green];
  [result setBlue:blue];
  if (alpha <= 0.9999) {
    [result setAlpha:floatWrapperWithValue(alpha)];
  }
  [result autorelease];
  return result;
}
// ...
```

Example (JavaScript):

```javascript
// ...
var protoToCssColor = function(rgb_color) {
  var redFrac = rgb_color.red || 0.0;
  var greenFrac = rgb_color.green || 0.0;
  var blueFrac = rgb_color.blue || 0.0;
  var red = Math.floor(redFrac * 255);
  var green = Math.floor(greenFrac * 255);
  var blue = Math.floor(blueFrac * 255);
  if (!('alpha' in rgb_color)) {
    return rgbToCssColor(red, green, blue);
  }
  var alphaFrac = rgb_color.alpha.value || 0.0;
  var rgbParams = [red, green, blue].join(',');
  return ['rgba(', rgbParams, ',', alphaFrac, ')'].join('');
};

var rgbToCssColor = function(red, green, blue) {
  var rgbNumber = new Number((red << 16) | (green << 8) | blue);
  var hexString = rgbNumber.toString(16);
  var missingZeros = 6 - hexString.length;
  var resultBuilder = ['#'];
  for (var i = 0; i < missingZeros; i++) {
    resultBuilder.push('0');
  }
  resultBuilder.push(hexString);
  return resultBuilder.join('');
};
// ...
```

Field | Description
---|---
alpha | The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation:
blue | The amount of blue in the color as a value in the interval [0, 1].
green | The amount of green in the color as a value in the interval [0, 1].
red | The amount of red in the color as a value in the interval [0, 1].
GoogleTypeDate

Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following:

* A full date, with non-zero year, month, and day values.
* A month and day, with a zero year (for example, an anniversary).
* A year on its own, with a zero month and a zero day.
* A year and month, with a zero day (for example, a credit card expiration date).

Related types: google.type.TimeOfDay, google.type.DateTime, google.protobuf.Timestamp.

Field | Description
---|---
day | Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
month | Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
year | Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
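The zero-value conventions above can be checked mechanically. A sketch of a validator for the dict form of this message; the leap-year fallback for a zero year is an assumption for illustration, so that "Feb 29" without a year is accepted:

```python
import calendar

# Validate a google.type.Date-style dict, honoring the zero-value rules:
# year 0 = no year, month 0 = no month and no day, day 0 = day not significant.
def is_valid_date(d):
    year, month, day = d.get("year", 0), d.get("month", 0), d.get("day", 0)
    if not (0 <= year <= 9999 and 0 <= month <= 12 and 0 <= day <= 31):
        return False
    if month == 0:
        return day == 0          # a day without a month is meaningless
    if day == 0:
        return True              # year+month, or a bare month
    # Assume a leap year when the year is unspecified so Feb 29 passes.
    return day <= calendar.monthrange(year or 2000, month)[1]

is_valid_date({"year": 2021, "month": 2, "day": 29})  # False: 2021 is not a leap year
```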
GoogleTypeExpr

Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec.

Example (Comparison):

```yaml
title: "Summary size limit"
description: "Determines if a summary is less than 100 chars"
expression: "document.summary.size() < 100"
```

Example (Equality):

```yaml
title: "Requestor is owner"
description: "Determines if requestor is the document owner"
expression: "document.owner == request.auth.claims.email"
```

Example (Logic):

```yaml
title: "Public documents"
description: "Determine whether the document should be publicly visible"
expression: "document.type != 'private' && document.type != 'internal'"
```

Example (Data Manipulation):

```yaml
title: "Notification string"
description: "Create a notification string with a timestamp."
expression: "'New message received at ' + string(document.create_time)"
```

The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.

Field | Description
---|---
description | Optional. Description of the expression. This is longer text that describes the expression, e.g. when it is hovered over in a UI.
expression | Textual representation of an expression in Common Expression Language syntax.
location | Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
title | Optional. Title for the expression, i.e. a short string describing its purpose. This can be used, for example, in UIs which allow entering the expression.
GoogleTypeInterval

Represents a time interval, encoded as a Timestamp start (inclusive) and a Timestamp end (exclusive). The start must be less than or equal to the end. When the start equals the end, the interval is empty (matches no time). When both start and end are unspecified, the interval matches any time.

Field | Description
---|---
endTime | Optional. Exclusive end of the interval. If specified, a Timestamp matching this interval must be before the end.
startTime | Optional. Inclusive start of the interval. If specified, a Timestamp matching this interval must be the same as or after the start.
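The inclusive-start/exclusive-end semantics can be sketched as a containment check, with plain datetimes standing in for Timestamps:

```python
from datetime import datetime, timezone

# Check whether a timestamp falls inside a google.type.Interval:
# start is inclusive, end is exclusive, and a missing bound is open.
def interval_contains(start, end, ts):
    if start is not None and ts < start:
        return False
    if end is not None and ts >= end:
        return False
    return True

start = datetime(2024, 1, 1, tzinfo=timezone.utc)
end = datetime(2024, 2, 1, tzinfo=timezone.utc)
interval_contains(start, end, start)  # True: start is inclusive
interval_contains(start, end, end)    # False: end is exclusive
```

Note that when `start == end` the two checks reject every timestamp, which matches the documented empty-interval behavior.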
GoogleTypeMoney

Represents an amount of money with its currency type.

Field | Description
---|---
currencyCode | The three-letter currency code defined in ISO 4217.
nanos | Number of nano (10^-9) units of the amount. The value must be between -999,999,999 and +999,999,999 inclusive. If
units | The whole units of the amount. For example, if
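Splitting a decimal amount into `units` and `nanos`, with both fields carrying the sign of the amount, can be sketched as:

```python
# Convert a decimal amount string to a google.type.Money-style dict and
# back. units and nanos must share the sign of the overall amount.
def to_money(amount_str, currency="USD"):
    sign = -1 if amount_str.startswith("-") else 1
    whole, _, frac = amount_str.lstrip("+-").partition(".")
    units = sign * int(whole or 0)
    nanos = sign * int((frac + "000000000")[:9])  # pad fraction to 9 digits
    return {"currencyCode": currency, "units": units, "nanos": nanos}

def to_decimal(money):
    return money["units"] + money["nanos"] / 1e9

m = to_money("-1.75", "EUR")  # units=-1, nanos=-750000000
```

Keeping `units` and `nanos` as integers avoids the rounding errors of binary floating point; `to_decimal` is only for display.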