AutoML Vision Edge (iOS, Android)

Create custom image classification models from your own training data with AutoML Vision Edge.

If you want to recognize the contents of an image, one option is to use ML Kit's on-device image labeling API. The model used by the API is built for general-purpose use and is trained to recognize around 400 categories covering the concepts most commonly found in photos.
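As a rough illustration, the general-purpose labeler can be called with just a few lines on Android. The Kotlin sketch below assumes the firebase-ml-vision library of that era (class names differ in later standalone ML Kit releases); the function name and log tag are placeholders.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Minimal sketch: label a Bitmap with the general-purpose on-device labeler.
fun labelImage(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val labeler = FirebaseVision.getInstance().onDeviceImageLabeler

    labeler.processImage(image)
        .addOnSuccessListener { labels ->
            // Each result carries a human-readable label and a confidence score.
            for (label in labels) {
                Log.d("ImageLabeling", "${label.text}: ${label.confidence}")
            }
        }
        .addOnFailureListener { e ->
            Log.e("ImageLabeling", "Labeling failed", e)
        }
}
```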

If you need a more specialized image labeling model that covers a narrower domain of concepts in more detail (for example, a model that distinguishes between species of flowers or types of food), you can use Firebase ML and AutoML Vision Edge to train a model with your own images and categories. The custom model is trained in Google Cloud, and once it's ready, it runs entirely on the device.


Key capabilities

Train models based on your data

Automatically train custom image labeling models to recognize the labels you care about, using your training data.

Built-in model hosting

Host your models with Firebase, and load them at run time with ML Kit. By hosting the model on Firebase, you can make sure users have the latest model without releasing a new app version.

And, of course, you can also bundle the model with your app, so it's immediately available on install.
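The two options look roughly like the following Kotlin sketch, which assumes the firebase-ml-vision AutoML add-on (APIs may differ in newer ML Kit releases); the model name "your_model_name", the manifest.json asset path, and the function name are placeholders you would replace with your own.

```kotlin
import com.google.firebase.ml.common.modeldownload.FirebaseModelDownloadConditions
import com.google.firebase.ml.common.modeldownload.FirebaseModelManager
import com.google.firebase.ml.vision.automl.FirebaseAutoMLLocalModel
import com.google.firebase.ml.vision.automl.FirebaseAutoMLRemoteModel

fun prepareModels() {
    // Reference the hosted model by the name it was published under in the Firebase console.
    val remoteModel = FirebaseAutoMLRemoteModel.Builder("your_model_name").build()

    // Download (or update) the hosted model in the background, here only over Wi-Fi.
    val conditions = FirebaseModelDownloadConditions.Builder()
        .requireWifi()
        .build()
    FirebaseModelManager.getInstance().download(remoteModel, conditions)
        .addOnSuccessListener {
            // The latest hosted model is now available on the device.
        }

    // Alternatively (or as a fallback for first launch), bundle the model files
    // in the app's assets and reference them through their manifest.
    val localModel = FirebaseAutoMLLocalModel.Builder()
        .setAssetFilePath("manifest.json")
        .build()
}
```

A common pattern is to do both: ship a bundled model so labeling works immediately after install, and switch to the Firebase-hosted model once it has downloaded, so users pick up updates without an app release.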

Implementation path

1. Assemble training data: Put together a dataset of examples of each label you want your model to recognize.
2. Train a new model: In the Firebase console, import your training data and use it to train a new model.
3. Use the model in your app: Bundle the model with your app or let ML Kit download it from Firebase when it's needed. Then, use the model to label images on the device, as in the sketch after this list.
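To make step 3 concrete, here is a hedged Kotlin sketch of creating a labeler backed by the custom model and running it on a Bitmap. It again assumes the firebase-ml-vision AutoML add-on; "your_model_name", the 0.5 confidence threshold, the function name, and the log tag are illustrative placeholders.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.automl.FirebaseAutoMLRemoteModel
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceAutoMLImageLabelerOptions

// Sketch: label a Bitmap with a custom AutoML Vision Edge model hosted on Firebase.
fun labelWithCustomModel(bitmap: Bitmap) {
    val remoteModel = FirebaseAutoMLRemoteModel.Builder("your_model_name").build()

    val options = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(remoteModel)
        .setConfidenceThreshold(0.5f) // drop labels the model is less than 50% confident about
        .build()
    val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(options)

    labeler.processImage(FirebaseVisionImage.fromBitmap(bitmap))
        .addOnSuccessListener { labels ->
            // The labels are the categories you defined when training the model.
            for (label in labels) {
                Log.d("AutoML", "${label.text}: ${label.confidence}")
            }
        }
        .addOnFailureListener { e ->
            Log.e("AutoML", "Labeling failed", e)
        }
}
```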

Pricing & Limits

Spark & Flame plans
  • Datasets: 1
  • Images per dataset: 1,000
  • Training hours: 3 free hours per project; 1 hour per model

Blaze plan
  • Datasets: billed according to Cloud Storage rates
  • Images per dataset: 1,000,000
  • Training hours: 15 free hours of training per billed project; subsequent training hours are billed at 4.95 USD per hour; no per-model limit

Next steps

Learn how to train an image labeling model.