ImageClassificationModelMetadata(
mapping=None, *, ignore_unknown_fields=False, **kwargs
)
Model metadata for image classification.
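The sketch below shows one plausible way this metadata is attached to a Model and submitted for training; it is illustrative rather than part of this reference. It assumes the google-cloud-automl automl_v1 client, and the project ID, location, dataset ID, display name, and budget value are placeholders to adapt.

```python
# Minimal sketch: training an AutoML image classification model with this
# metadata. Assumes google-cloud-automl (automl_v1) is installed and that the
# project, location, and dataset IDs below are replaced with real values.
from google.cloud import automl_v1 as automl

client = automl.AutoMlClient()
parent = "projects/my-project/locations/us-central1"  # placeholder

metadata = automl.ImageClassificationModelMetadata(
    model_type="cloud",                    # default model type
    train_budget_milli_node_hours=24_000,  # 24 node hours (1,000 = 1 node hour)
)

model = automl.Model(
    display_name="flowers_classifier",             # placeholder name
    dataset_id="ICN0000000000000000000",           # placeholder dataset ID
    image_classification_model_metadata=metadata,
)

# create_model returns a long-running operation; once it finishes, the trained
# Model carries the output-only fields (train_cost_milli_node_hours, stop_reason, ...).
operation = client.create_model(parent=parent, model=model)
print("Started training:", operation.operation.name)
```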
Attributes
base_model_id (str)
    Optional. The ID of the base model. If it is specified, the new model will be created based on the base model; otherwise, the new model will be created from scratch. The base model must be in the same project and location as the new model, and must have the same model_type.
train_budget_milli_node_hours (int)
    Optional. The train budget for creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual train_cost will be equal to or less than this value. If further model training ceases to provide any improvements, training will stop without using the full budget and the stop_reason will be MODEL_CONVERGED. Note that node_hour = actual_hour * number_of_nodes_involved. For model type cloud (the default), the train budget must be between 8,000 and 800,000 milli node hours, inclusive; the default value is 192,000, which represents one day in wall time. For model types mobile-low-latency-1, mobile-versatile-1, mobile-high-accuracy-1, mobile-core-ml-low-latency-1, mobile-core-ml-versatile-1, and mobile-core-ml-high-accuracy-1, the train budget must be between 1,000 and 100,000 milli node hours, inclusive; the default value is 24,000, which represents one day in wall time.
train_cost_milli_node_hours (int)
    Output only. The actual train cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed not to exceed the train budget.
stop_reason (str)
    Output only. The reason that this create model operation stopped, e.g. BUDGET_REACHED, MODEL_CONVERGED.
model_type (str)
    Optional. Type of the model. The available values are listed below; a minimal export sketch for the mobile/edge types follows this attributes list.
    - cloud - Model to be used via prediction calls to the AutoML API. This is the default value.
    - mobile-low-latency-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
    - mobile-versatile-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards.
    - mobile-high-accuracy-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
    - mobile-core-ml-low-latency-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have low latency, but may have lower prediction quality than other models.
    - mobile-core-ml-versatile-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards.
    - mobile-core-ml-high-accuracy-1 - A model that, in addition to providing prediction via the AutoML API, can also be exported (see AutoMl.ExportModel) and used on a mobile device with Core ML afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
node_qps (float)
    Output only. An approximate number of online prediction QPS that can be supported by this model per node on which it is deployed.
node_count (int)
    Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the node_qps field.
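After training completes, the output-only fields above (train_cost_milli_node_hours, stop_reason, node_qps, node_count) can be read back from the trained model. A minimal sketch, assuming the automl_v1 client and a placeholder model resource name:

```python
# Sketch: inspecting output-only metadata on a trained model. The model
# resource name below is a placeholder; substitute your own.
from google.cloud import automl_v1 as automl

client = automl.AutoMlClient()
model_name = "projects/my-project/locations/us-central1/models/ICN0000000000000000000"

model = client.get_model(name=model_name)
meta = model.image_classification_model_metadata

print("Actual cost (milli node hours):", meta.train_cost_milli_node_hours)
print("Stop reason:", meta.stop_reason)   # e.g. BUDGET_REACHED or MODEL_CONVERGED
print("Approx. QPS per node:", meta.node_qps)
print("Deployed node count:", meta.node_count)
```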
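For the mobile-* model types, the export path referenced under model_type (AutoMl.ExportModel) might look roughly like the following. This is a hedged sketch: the Cloud Storage bucket, model resource name, and the "tflite" format string are assumptions to verify against the export documentation for your client version.

```python
# Sketch: exporting a mobile/edge model (e.g. mobile-low-latency-1) to Cloud
# Storage as TensorFlow Lite. Resource name, bucket, and format are placeholders.
from google.cloud import automl_v1 as automl

client = automl.AutoMlClient()
model_name = "projects/my-project/locations/us-central1/models/ICN0000000000000000000"

output_config = automl.ModelExportOutputConfig(
    gcs_destination=automl.GcsDestination(output_uri_prefix="gs://my-bucket/export/"),
    model_format="tflite",  # e.g. "core_ml" for the mobile-core-ml-* model types
)

operation = client.export_model(name=model_name, output_config=output_config)
operation.result()  # blocks until the export completes
```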