Output only. IDs of the annotation specs used in the confusion matrix. For the Tables CLASSIFICATION prediction_type, only the list of [annotation_spec_display_name-s][] is populated.
The bytes of the annotationSpecId at the given index.

getAnnotationSpecIdCount()

public int getAnnotationSpecIdCount()
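Generated protobuf messages expose repeated string fields through a family of accessors: a count method, an indexed getter, and a bytes getter. The stand-in class below is a hypothetical sketch of that pattern, not the generated AutoML class itself; the real bytes getter returns a protobuf `ByteString`, which is approximated here with `byte[]` to keep the example self-contained.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

// Hypothetical stand-in illustrating the accessor trio a generated message
// provides for a repeated string field such as annotation_spec_id.
public class AnnotationSpecIds {
    private final List<String> annotationSpecId;

    public AnnotationSpecIds(List<String> ids) {
        this.annotationSpecId = ids;
    }

    // Number of elements in the repeated field.
    public int getAnnotationSpecIdCount() {
        return annotationSpecId.size();
    }

    // Element at the given index.
    public String getAnnotationSpecId(int index) {
        return annotationSpecId.get(index);
    }

    // UTF-8 bytes of the element at the given index (the real generated
    // API returns com.google.protobuf.ByteString instead of byte[]).
    public byte[] getAnnotationSpecIdBytes(int index) {
        return annotationSpecId.get(index).getBytes(StandardCharsets.UTF_8);
    }
}
```

Iterating the field is then the usual count-plus-index loop: `for (int i = 0; i < m.getAnnotationSpecIdCount(); i++) { use(m.getAnnotationSpecId(i)); }`.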
Output only. Display name of the annotation specs used in the confusion matrix, as they were at the moment of the evaluation. For Tables CLASSIFICATION prediction_type-s, distinct values of the target column at the moment of the model evaluation are populated here.
Output only. Rows in the confusion matrix. The number of rows is equal to the size of annotation_spec_id. row[i].example_count[j] is the number of examples that have ground truth of the annotation_spec_id[i] and are predicted as annotation_spec_id[j] by the model being evaluated.
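The indexing convention above can be sketched with plain arrays standing in for the message: row index `i` is the ground-truth annotation spec, column index `j` is the predicted one, so diagonal cells count correct predictions. This is an illustrative sketch only, not the generated client API.

```java
// Illustrative sketch: a plain int[][] mirrors row[i].example_count[j],
// where i indexes the ground-truth spec and j the predicted spec.
public class ConfusionMatrixSketch {

    // Total number of evaluated examples: the sum of every cell.
    static int totalExamples(int[][] rows) {
        int total = 0;
        for (int[] row : rows) {
            for (int count : row) {
                total += count;
            }
        }
        return total;
    }

    // Correctly classified examples: cells where ground truth i equals
    // prediction j, i.e. the main diagonal.
    static int correctPredictions(int[][] rows) {
        int correct = 0;
        for (int i = 0; i < rows.length; i++) {
            correct += rows[i][i];
        }
        return correct;
    }

    public static void main(String[] args) {
        // Two annotation specs: 3 and 4 examples classified correctly,
        // 1 and 2 misclassified.
        int[][] rows = { {3, 1}, {2, 4} };
        System.out.println(totalExamples(rows));      // 10
        System.out.println(correctPredictions(rows)); // 7
    }
}
```

Accuracy then falls out as `correctPredictions(rows) / (double) totalExamples(rows)`.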
Last updated 2025-01-27 UTC.