Firebase Machine Learning

Use machine learning in your apps to solve real-world problems.

Firebase Machine Learning is a mobile SDK that brings Google's machine learning expertise to Android and Apple apps in a powerful yet easy-to-use package. Whether you're new or experienced in machine learning, you can implement the functionality you need in just a few lines of code. There's no need to have deep knowledge of neural networks or model optimization to get started. On the other hand, if you are an experienced ML developer, Firebase ML provides convenient APIs that help you use your custom TensorFlow Lite models in your mobile apps.

Key capabilities

Host and deploy custom models

Use your own TensorFlow Lite models for on-device inference. Just deploy your model to Firebase, and we'll take care of hosting and serving it to your app. Firebase will dynamically serve the latest version of the model to your users, allowing you to update your models regularly without having to push a new version of your app to users.
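
For example, on Android a hosted model can be fetched with the Firebase ML model downloader and handed to the TensorFlow Lite interpreter. This is a minimal Kotlin sketch; the model name "your_model" is a placeholder for whatever name you gave the model when you deployed it to Firebase.

```kotlin
import com.google.firebase.ml.modeldownloader.CustomModel
import com.google.firebase.ml.modeldownloader.CustomModelDownloadConditions
import com.google.firebase.ml.modeldownloader.DownloadType
import com.google.firebase.ml.modeldownloader.FirebaseModelDownloader
import org.tensorflow.lite.Interpreter

// Only download over Wi-Fi; other conditions (such as device charging) can be added here.
val conditions = CustomModelDownloadConditions.Builder()
    .requireWifi()
    .build()

// "your_model" is the name the model was given when it was deployed to Firebase.
// LOCAL_MODEL_UPDATE_IN_BACKGROUND returns a cached copy immediately (if one exists)
// and fetches newer versions in the background.
FirebaseModelDownloader.getInstance()
    .getModel("your_model", DownloadType.LOCAL_MODEL_UPDATE_IN_BACKGROUND, conditions)
    .addOnSuccessListener { model: CustomModel ->
        // Hand the downloaded TensorFlow Lite model file to the TFLite interpreter.
        model.file?.let { modelFile ->
            val interpreter = Interpreter(modelFile)
            // Run inference with interpreter.run(input, output) ...
        }
    }
    .addOnFailureListener { e ->
        // Handle download errors (for example, no network and no cached model).
    }
```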

When you use Firebase ML with Remote Config, you can serve different models to different user segments, and with A/B Testing, you can run experiments to find the best-performing model (see the Apple and Android guides).
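
One way to wire this up on Android is to let a Remote Config parameter decide which hosted model to download, so each segment or experiment variant can receive a different model. A minimal Kotlin sketch, assuming a Remote Config parameter named `ml_model_name` that you define in the console:

```kotlin
import com.google.firebase.ml.modeldownloader.CustomModelDownloadConditions
import com.google.firebase.ml.modeldownloader.DownloadType
import com.google.firebase.ml.modeldownloader.FirebaseModelDownloader
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

val remoteConfig = FirebaseRemoteConfig.getInstance()

// Fetch and activate the latest Remote Config values, then download whichever
// model the current user's segment or experiment variant has been assigned.
remoteConfig.fetchAndActivate().addOnSuccessListener {
    // "ml_model_name" is a placeholder parameter key; define it in the Remote
    // Config console and vary its value per segment or A/B experiment variant.
    val modelName = remoteConfig.getString("ml_model_name")

    FirebaseModelDownloader.getInstance()
        .getModel(
            modelName,
            DownloadType.LATEST_MODEL,
            CustomModelDownloadConditions.Builder().build()
        )
        .addOnSuccessListener { model ->
            // Use model.file with the TensorFlow Lite interpreter as before.
        }
}
```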

Production-ready for common use cases

Firebase ML comes with a set of ready-to-use APIs for common mobile use cases: recognizing text, labeling images, and identifying landmarks. Simply pass in data to the Firebase ML library and it gives you the information you need. These APIs leverage the power of Google Cloud's machine learning technology to give you a high level of accuracy.
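
The Apple and Android guides walk through calling these cloud-based APIs from your app; one common pattern on Android is to send a Cloud Vision request through a callable Cloud Function. A minimal Kotlin sketch, assuming you have deployed a callable function named `annotateImage` that forwards the request to the Cloud Vision API (the function name and its deployment are up to you):

```kotlin
import android.graphics.Bitmap
import android.util.Base64
import com.google.firebase.functions.FirebaseFunctions
import org.json.JSONArray
import org.json.JSONObject
import java.io.ByteArrayOutputStream

fun recognizeTextInCloud(bitmap: Bitmap) {
    // Base64-encode the image in the format the Cloud Vision API expects.
    val output = ByteArrayOutputStream()
    bitmap.compress(Bitmap.CompressFormat.JPEG, 100, output)
    val base64Image = Base64.encodeToString(output.toByteArray(), Base64.NO_WRAP)

    // Build a Cloud Vision annotate request asking for text detection.
    val request = JSONObject()
        .put("image", JSONObject().put("content", base64Image))
        .put("features", JSONArray().put(JSONObject().put("type", "TEXT_DETECTION")))

    // "annotateImage" is the name of a callable Cloud Function you deploy yourself;
    // it forwards the request to the Cloud Vision API and returns the response.
    FirebaseFunctions.getInstance()
        .getHttpsCallable("annotateImage")
        .call(request.toString())
        .addOnSuccessListener { result ->
            // Parse the recognized text out of the Cloud Vision response here.
        }
        .addOnFailureListener { e ->
            // Handle errors (for example, the function is not deployed or the user is not signed in).
        }
}
```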

Cloud vs. on-device

Firebase ML has APIs that work either in the cloud or on the device. When we describe an ML API as being a cloud API or on-device API, we are describing which machine performs inference: that is, which machine uses the ML model to discover insights about the data you provide it. In Firebase ML, this happens either on Google Cloud, or on your users' mobile devices.

The text recognition, image labeling, and landmark recognition APIs perform inference in the cloud. These models have more computational power and memory available to them than a comparable on-device model and, as a result, can perform inference with greater accuracy and precision. On the other hand, every request to these APIs requires a network round trip, which makes them unsuitable for real-time, low-latency applications such as video processing.

The custom model APIs deal with ML models that run on the device. The models used and produced by these features are TensorFlow Lite models, which are optimized to run on mobile devices. The biggest advantage of these models is that they don't require a network connection and can run very quickly: fast enough, for example, to process frames of video in real time.
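
To make that concrete, here is a minimal Kotlin sketch of running a TensorFlow Lite model on a single preprocessed frame. The 224x224x3 float input and the 1000-class output are illustrative assumptions, not properties of any particular model; use the shapes your own model was trained with.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File
import java.nio.ByteBuffer
import java.nio.ByteOrder

// pixels is assumed to hold 224 * 224 * 3 preprocessed float values for one frame.
fun classifyFrame(interpreter: Interpreter, pixels: FloatArray): FloatArray {
    // Pack the pixel values into the interpreter's direct input buffer.
    val input = ByteBuffer.allocateDirect(4 * 224 * 224 * 3).order(ByteOrder.nativeOrder())
    pixels.forEach { input.putFloat(it) }
    input.rewind()

    // One row of 1000 class scores, filled in by the interpreter.
    val output = Array(1) { FloatArray(1000) }
    interpreter.run(input, output)
    return output[0]
}

// Create the interpreter once from the downloaded model file and reuse it across frames.
fun buildInterpreter(modelFile: File): Interpreter = Interpreter(modelFile)
```

Creating the interpreter once and reusing it across frames (rather than rebuilding it per call) is what keeps per-frame latency low enough for real-time video.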

Firebase ML provides the ability to deploy custom models to your users' devices by uploading them to our servers. Your Firebase-enabled app will download the model to the device on demand. This allows you to keep your app's initial install size small, and you can swap the ML model without having to republish your app.

ML Kit: Ready-to-use on-device models

If you're looking for pre-trained models that run on the device, check out ML Kit. ML Kit is available for iOS and Android, and has APIs for many use cases:

  • Text recognition
  • Image labeling
  • Object detection and tracking
  • Face detection and contour tracing
  • Barcode scanning
  • Language identification
  • Translation
  • Smart Reply

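As an illustration of the first item above, here is a minimal Kotlin sketch of on-device text recognition with ML Kit on Android; the bitmap input and zero rotation are assumptions made for the example.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

fun recognizeTextOnDevice(bitmap: Bitmap) {
    // Wrap the bitmap for ML Kit; pass the real rotation if the image came from a camera.
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees= */ 0)
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

    recognizer.process(image)
        .addOnSuccessListener { visionText ->
            // Each text block contains lines and elements along with bounding boxes.
            for (block in visionText.textBlocks) {
                println(block.text)
            }
        }
        .addOnFailureListener { e ->
            // Handle recognition errors.
        }
}
```
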
Next steps