Resource: ModelEvaluation
A collection of metrics calculated by comparing the Model's predictions on all of the test data against the ground-truth annotations from the test data.
name
string
Output only. The resource name of the ModelEvaluation.
displayName
string
The display name of the ModelEvaluation.
metricsSchemaUri
string
Points to a YAML file stored on Google Cloud Storage describing the metrics
of this ModelEvaluation. The schema is defined as an OpenAPI 3.0.2 Schema Object.
metrics
value
Output only. Evaluation metrics of the Model. The schema of the metrics is stored in metricsSchemaUri.
createTime
string
Output only. Timestamp when this ModelEvaluation was created.
A timestamp in RFC 3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".
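For illustration, a createTime value can be parsed with the standard library by truncating the nanosecond fraction to the microseconds that Python's datetime can store (a sketch; the helper name is ours, not part of the API):

```python
from datetime import datetime, timezone

def parse_create_time(ts: str) -> datetime:
    # RFC 3339 "Zulu" timestamps may carry up to nine fractional digits;
    # datetime only stores microseconds, so truncate/pad the fraction to six.
    ts = ts.rstrip("Z")
    if "." in ts:
        base, frac = ts.split(".")
        ts = f"{base}.{frac[:6].ljust(6, '0')}"
        fmt = "%Y-%m-%dT%H:%M:%S.%f"
    else:
        fmt = "%Y-%m-%dT%H:%M:%S"
    return datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc)

parse_create_time("2014-10-02T15:01:23.045123456Z")
```

Both example forms from the description above round-trip through this helper.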
sliceDimensions[]
string
All possible dimensions of ModelEvaluationSlices. The dimensions can be used as the filter of the ModelService.ListModelEvaluationSlices request, in the form of slice.dimension = <dimension>.
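A filter string for one of the returned dimensions could be assembled like this (a sketch; the dimension value "annotationSpec" is illustrative, and whether the value needs quoting depends on the service's filter grammar):

```python
def slice_filter(dimension: str) -> str:
    # Build a ListModelEvaluationSlices filter of the documented form
    # slice.dimension = <dimension>.
    return f'slice.dimension = "{dimension}"'

print(slice_filter("annotationSpec"))
```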
modelExplanation
object (ModelExplanation)
Aggregated explanation metrics for the Model's prediction output over the data this ModelEvaluation uses. This field is populated only if the Model is evaluated with explanations, and only for AutoML tabular Models.
explanationSpecs[]
object (ModelEvaluationExplanationSpec)
Describes the values of ExplanationSpec that are used for explaining the predicted values on the evaluated data.
metadata
value
The metadata of the ModelEvaluation. For a ModelEvaluation uploaded from Managed Pipeline, metadata contains a structured value with keys of "pipelineJobId", "evaluation_dataset_type", "evaluation_dataset_path", "row_based_metrics_path".
biasConfigs
object (BiasConfig)
Specifies the configuration for bias detection.
JSON representation

{
  "name": string,
  "displayName": string,
  "metricsSchemaUri": string,
  "metrics": value,
  "createTime": string,
  "sliceDimensions": [string],
  "modelExplanation": { object (ModelExplanation) },
  "explanationSpecs": [ { object (ModelEvaluationExplanationSpec) } ],
  "metadata": value,
  "biasConfigs": { object (BiasConfig) }
}
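As a sketch of how a returned resource body might be consumed (the field names follow the JSON representation; the resource name, schema URI, and metric key below are hypothetical placeholders, since the keys under "metrics" are defined by the schema that metricsSchemaUri points to):

```python
import json

# Hypothetical ModelEvaluation body with illustrative values.
body = json.loads("""
{
  "name": "projects/123/locations/us-central1/models/456/evaluations/789",
  "displayName": "my-eval",
  "metricsSchemaUri": "gs://my-bucket/metrics_schema.yaml",
  "metrics": {"exampleMetric": 0.97},
  "createTime": "2014-10-02T15:01:23Z",
  "sliceDimensions": ["annotationSpec"]
}
""")

# Metric keys are schema-dependent, not fixed by the resource itself.
print(body["metrics"]["exampleMetric"])
```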
ModelEvaluationExplanationSpec
explanationType
string
Explanation type.
For AutoML Image Classification models, possible values are:
image-integrated-gradients
image-xrai
explanationSpec
object (ExplanationSpec)
Explanation spec details.
JSON representation

{
  "explanationType": string,
  "explanationSpec": {
    object (ExplanationSpec)
  }
}
BiasConfig
Configuration for bias detection.
biasSlices
object (SliceSpec)
Specification for how the data should be sliced for bias. It contains a list of slices, with a limit of two slices. The first slice in the list is slice_a; the second slice in the list (slice_b) is compared against the first. If only a single slice is provided, slice_a is compared against "not slice_a". Below are examples with a feature "education" whose values in the dataset are "low", "medium", and "high":
Example 1:
biasSlices = [{'education': 'low'}]
A single slice provided. In this case, slice_a is the collection of data with 'education' equals 'low', and slice_b is the collection of data with 'education' equals 'medium' or 'high'.
Example 2:
biasSlices = [{'education': 'low'},
{'education': 'high'}]
Two slices provided. In this case, slice_a is the collection of data with 'education' equals 'low', and slice_b is the collection of data with 'education' equals 'high'.
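The slice_a / slice_b semantics described above can be sketched as a small helper (our own illustration over in-memory rows, not a client library call):

```python
def split_bias_slices(rows, bias_slices):
    # rows: list of feature dicts; bias_slices: one or two feature-value
    # dicts, as in the examples above.
    def matches(row, spec):
        return all(row.get(k) == v for k, v in spec.items())

    slice_a = [r for r in rows if matches(r, bias_slices[0])]
    if len(bias_slices) == 2:
        slice_b = [r for r in rows if matches(r, bias_slices[1])]
    else:
        # Single slice provided: slice_b is "not slice_a".
        slice_b = [r for r in rows if not matches(r, bias_slices[0])]
    return slice_a, slice_b

data = [{"education": "low"}, {"education": "medium"}, {"education": "high"}]
a, b = split_bias_slices(data, [{"education": "low"}])
```

With a single slice, b collects the "medium" and "high" rows; passing both {'education': 'low'} and {'education': 'high'} instead restricts slice_b to the "high" rows, matching Example 2.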
labels[]
string
The positive labels selected on the target field.
JSON representation

{
  "biasSlices": {
    object (SliceSpec)
  },
  "labels": [string]
}
Methods

get
Gets a ModelEvaluation.

import
Imports an externally generated ModelEvaluation.

list
Lists ModelEvaluations in a Model.