
AutoML Vision Edge

Train your own image labeling models with AutoML Vision Edge.

ML Kit's base on-device image labeling API is built for general-purpose use, and is trained to recognize around 400 categories covering the concepts most commonly found in photos. If you need a more specialized image labeling model, covering a narrower domain of concepts in more detail—for example, a model that distinguishes between species of flowers or types of food—you can use AutoML Vision Edge to train a model with your own images, and use that model instead.

Get started

Key capabilities

Train models based on your data

Automatically train custom image labeling models to recognize the labels you care about, using your training data.

Built-in model hosting

Host your models with Firebase, and load them at run time with the iOS and Android SDKs. By hosting the model on Firebase, you can make sure users have the latest model without releasing a new app version.

And, of course, you can also bundle the model with your app, so it's immediately available on install.
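As a minimal Android sketch, both options can be set up with the ML Kit for Firebase AutoML API. The model name `my_automl_model` and the asset path are placeholders for your own model:

```kotlin
import com.google.firebase.ml.common.modeldownload.FirebaseModelDownloadConditions
import com.google.firebase.ml.common.modeldownload.FirebaseModelManager
import com.google.firebase.ml.vision.automl.FirebaseAutoMLLocalModel
import com.google.firebase.ml.vision.automl.FirebaseAutoMLRemoteModel

// Option 1: model hosted on Firebase, downloaded at run time, so
// users get the latest model without a new app release.
// "my_automl_model" is a placeholder for the name you published under.
val remoteModel = FirebaseAutoMLRemoteModel.Builder("my_automl_model").build()
val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi() // only download the model over Wi-Fi
    .build()
FirebaseModelManager.getInstance().download(remoteModel, conditions)
    .addOnSuccessListener { /* model downloaded and ready to use */ }

// Option 2: model bundled with the app, immediately available on install.
// The path points at the model manifest under app/src/main/assets.
val localModel = FirebaseAutoMLLocalModel.Builder()
    .setAssetFilePath("model/manifest.json")
    .build()
```

Bundling and hosting are not mutually exclusive: a common pattern is to bundle a model as a fallback and prefer the hosted copy once it has downloaded.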

Implementation path

1. Assemble training data: put together a dataset of examples of each label you want your model to recognize.
2. Train a new model: in the Firebase console, import your training data and use it to train a new model.
3. Use the model in your app: bundle the model with your app or let the ML Kit SDK download it from Firebase, then use the model to label images on the device.
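The last step can be sketched on Android with the ML Kit for Firebase AutoML labeler. This assumes a bundled model under `assets/model/` (a hosted model, via `FirebaseAutoMLRemoteModel`, is used the same way); the asset path and threshold are illustrative:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.automl.FirebaseAutoMLLocalModel
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceAutoMLImageLabelerOptions

fun labelImage(bitmap: Bitmap) {
    // Load the model bundled with the app.
    val localModel = FirebaseAutoMLLocalModel.Builder()
        .setAssetFilePath("model/manifest.json") // placeholder asset path
        .build()

    // Configure the labeler; drop labels below 50% confidence.
    val options = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.5f)
        .build()
    val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(options)

    // Run on-device inference and read back the labels.
    labeler.processImage(FirebaseVisionImage.fromBitmap(bitmap))
        .addOnSuccessListener { labels ->
            for (label in labels) {
                println("${label.text}: ${label.confidence}")
            }
        }
        .addOnFailureListener { e -> /* handle the error */ }
}
```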

Pricing & Limits

Spark & Flame plans:
  • Datasets: 1
  • Images per dataset: 1,000
  • Training hours: 3 free hours per project; 1 hour per model

Blaze plan:
  • Datasets: billed according to Cloud Storage rates
  • Images per dataset: 1,000,000
  • Training hours: 15 hours of free training per billed project; subsequent training hours billed at 4.95 USD per hour; no per-model limit

Next steps

Learn how to train an image labeling model.