Recognize Text in Images with ML Kit on Android

You can use ML Kit to recognize text in images, using either an on-device model or a cloud model. See the overview to learn about the benefits of each approach.

See the ML Kit quickstart sample on GitHub for an example of this API in use, or try the codelab.

Before you begin

  1. If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
  2. Include the dependencies for ML Kit in your app-level build.gradle file:
    dependencies {
      // ...
    
      implementation 'com.google.firebase:firebase-ml-vision:16.0.0'
    }
    
  3. Optional but recommended: If you use the on-device API, configure your app to automatically download the ML model to the device after your app is installed from the Play Store.

    To do so, add the following declaration to your app's AndroidManifest.xml file:

    <application ...>
      ...
      <meta-data
          android:name="com.google.firebase.ml.vision.DEPENDENCIES"
          android:value="ocr" />
      <!-- To use multiple models: android:value="ocr,model2,model3" -->
    </application>
    
    If you do not enable install-time model downloads, the model is downloaded the first time you run the on-device detector. Requests you make before the download has completed produce no results; see the sketch after this list for one way to handle that case.
  4. If you want to use the cloud-based model, and you have not upgraded your project to a Blaze plan, do so in the Firebase console. Only Blaze-level projects can use the Cloud Vision APIs.
  5. If you want to use the cloud-based model, also enable the Cloud Vision API:
    1. Open the Cloud Vision API in the Cloud Console API library.
    2. Ensure that your Firebase project is selected in the menu at the top of the page.
    3. If the API is not already enabled, click Enable.
    If you want to use only the on-device model, you can skip this step.
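
This SDK version does not appear to expose a download-status API for the vision models, so one pragmatic way to handle the not-yet-downloaded case is to treat an empty result as "no text yet" and offer a retry. A minimal sketch, assuming detector and image are created as described in the on-device section below:

    detector.detectInImage(image)
            .addOnSuccessListener(result -> {
                if (result.getBlocks().isEmpty()) {
                    // Either the image contains no text, or the on-device model
                    // has not finished downloading. Consider prompting the user
                    // to retry after a short delay.
                } else {
                    // Process the recognized text as usual.
                }
            });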

Now you are ready to start recognizing text in an image, using either the on-device model or the Cloud-based model.


On-device text recognition

To use the on-device text recognition model, run the text detector as described below.

1. Run the text detector

To recognize text in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionTextDetector's detectInImage method.

  1. Create a FirebaseVisionImage object from your image.

    • To create a FirebaseVisionImage object from a Bitmap object:
      FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
      The image represented by the Bitmap object must be upright, with no additional rotation required.
    • To create a FirebaseVisionImage object from a media.Image object, such as when capturing an image with a device's camera, first determine the angle the image must be rotated to compensate for both the device's rotation and the orientation of the camera sensor in the device:
      private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
      static {
          ORIENTATIONS.append(Surface.ROTATION_0, 90);
          ORIENTATIONS.append(Surface.ROTATION_90, 0);
          ORIENTATIONS.append(Surface.ROTATION_180, 270);
          ORIENTATIONS.append(Surface.ROTATION_270, 180);
      }
      
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      private int getRotationCompensation(String cameraId, Activity activity, Context context)
              throws CameraAccessException {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
          int rotationCompensation = ORIENTATIONS.get(deviceRotation);
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
          int sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION);
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          int result;
          switch (rotationCompensation) {
              case 0:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  break;
              case 90:
                  result = FirebaseVisionImageMetadata.ROTATION_90;
                  break;
              case 180:
                  result = FirebaseVisionImageMetadata.ROTATION_180;
                  break;
              case 270:
                  result = FirebaseVisionImageMetadata.ROTATION_270;
                  break;
              default:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  Log.e(TAG, "Bad rotation value: " + rotationCompensation);
          }
          return result;
      }

      Then, pass the media.Image object and the rotation value to FirebaseVisionImage.fromMediaImage():

      FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
    • To create a FirebaseVisionImage object from a ByteBuffer or a byte array, first calculate the image rotation as described above.

      Then, create a FirebaseVisionImageMetadata object that contains the image's height, width, color encoding format, and rotation:

      FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
              .setWidth(1280)
              .setHeight(720)
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build();

      Use the buffer or array, and the metadata object, to create a FirebaseVisionImage object:

      FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
      // Or: FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);
      
    • To create a FirebaseVisionImage object from a file, pass the app context and file URI to FirebaseVisionImage.fromFilePath():
      FirebaseVisionImage image = null;
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri);
      } catch (IOException e) {
          // The file could not be opened; handle the error appropriately.
          e.printStackTrace();
      }

  2. Get an instance of FirebaseVisionTextDetector:

    FirebaseVisionTextDetector detector = FirebaseVision.getInstance()
            .getVisionTextDetector();

  3. Finally, pass the image to the detectInImage method:

    Task<FirebaseVisionText> result =
            detector.detectInImage(image)
                    .addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
                        @Override
                        public void onSuccess(FirebaseVisionText firebaseVisionText) {
                            // Task completed successfully
                            // ...
                        }
                    })
                    .addOnFailureListener(
                            new OnFailureListener() {
                                @Override
                                public void onFailure(@NonNull Exception e) {
                                    // Task failed with an exception
                                    // ...
                                }
                            });
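
For reference, here is how the three steps fit together in one place. This is a minimal sketch, not part of the API: the runTextRecognition method name and TAG constant are placeholders, and the lambdas assume Java 8 language support.

    private void runTextRecognition(Bitmap bitmap) {
        // Step 1: wrap the Bitmap (which must already be upright)
        FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

        // Step 2: get the on-device text detector
        FirebaseVisionTextDetector detector = FirebaseVision.getInstance()
                .getVisionTextDetector();

        // Step 3: run detection asynchronously
        detector.detectInImage(image)
                .addOnSuccessListener(firebaseVisionText -> {
                    for (FirebaseVisionText.Block block : firebaseVisionText.getBlocks()) {
                        Log.d(TAG, "Recognized block: " + block.getText());
                    }
                })
                .addOnFailureListener(e -> Log.e(TAG, "Text recognition failed", e));
    }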

2. Extract text from blocks of recognized text

If the text recognition operation succeeds, a FirebaseVisionText object will be passed to the success listener. From this object, you can get each block of text that was recognized, its bounding box in the image, and the text it contains. In addition, for each block, you can get the lines of text that make up the block, and the elements, such as words or punctuation marks, that make up each line of text:

for (FirebaseVisionText.Block block : firebaseVisionText.getBlocks()) {
    Rect boundingBox = block.getBoundingBox();      // bounding box of the block in the image
    Point[] cornerPoints = block.getCornerPoints(); // the four corner points of the block
    String text = block.getText();                  // the text recognized in the block

    for (FirebaseVisionText.Line line : block.getLines()) {
        // Each line of text that makes up the block
        // ...
        for (FirebaseVisionText.Element element : line.getElements()) {
            // Each element (such as a word or punctuation mark) in the line
            // ...
        }
    }
}
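
If you only need the full recognized string, you can, for example, concatenate the block texts (a sketch; blocks are returned in the order produced by the detector, which may not match reading order):

    StringBuilder stringBuilder = new StringBuilder();
    for (FirebaseVisionText.Block block : firebaseVisionText.getBlocks()) {
        stringBuilder.append(block.getText()).append('\n');
    }
    String recognizedText = stringBuilder.toString();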

Cloud text recognition

To use the Cloud-based text recognition model, configure and run the text detector as described below.

1. Configure the text detector

By default, the Cloud detector uses the STABLE version of the model and returns up to 10 results. If you want to change either of these settings, specify them with a FirebaseVisionCloudDetectorOptions object.

For example, to change both of the default settings, build a FirebaseVisionCloudDetectorOptions object as in the following example:

FirebaseVisionCloudDetectorOptions options =
    new FirebaseVisionCloudDetectorOptions.Builder()
        .setModelType(FirebaseVisionCloudDetectorOptions.LATEST_MODEL)
        .setMaxResults(15)
        .build();

To use the default settings, you can use FirebaseVisionCloudDetectorOptions.DEFAULT in the next step.
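
For example, passing the defaults explicitly is equivalent to requesting a detector with no options:

    FirebaseVisionCloudTextDetector detector = FirebaseVision.getInstance()
            .getVisionCloudTextDetector(FirebaseVisionCloudDetectorOptions.DEFAULT);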

2. Run the text detector

To recognize text in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the detectInImage method of either FirebaseVisionCloudTextDetector or, if the image is of a document, FirebaseVisionCloudDocumentTextDetector.

  1. Create a FirebaseVisionImage object from your image, following the same steps as in step 1 of the on-device section above: create the image from a Bitmap, from a media.Image (computing the rotation compensation first), from a ByteBuffer or byte array (together with a FirebaseVisionImageMetadata object), or from a file on the device.

  2. Get an instance of FirebaseVisionCloudTextDetector or FirebaseVisionCloudDocumentTextDetector:

    FirebaseVisionCloudTextDetector detector = FirebaseVision.getInstance()
            .getVisionCloudTextDetector();
    // Or, to change the default settings:
    // FirebaseVisionCloudTextDetector detector = FirebaseVision.getInstance()
    //         .getVisionCloudTextDetector(options);
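    // Or, if the image is of a document, get a document text detector. The
    // getter name below follows the 16.0.0 API surface; treat it as an
    // assumption and confirm against the FirebaseVision reference docs:
    // FirebaseVisionCloudDocumentTextDetector documentDetector = FirebaseVision.getInstance()
    //         .getVisionCloudDocumentTextDetector();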

  3. Finally, pass the image to the detectInImage method:

    Task<FirebaseVisionCloudText> result = detector.detectInImage(image)
            .addOnSuccessListener(new OnSuccessListener<FirebaseVisionCloudText>() {
                @Override
                public void onSuccess(FirebaseVisionCloudText firebaseVisionCloudText) {
                    // Task completed successfully
                    // ...
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception e) {
                    // Task failed with an exception
                    // ...
                }
            });

3. Extract text from blocks of recognized text

If the text recognition operation succeeds, a FirebaseVisionCloudText object will be passed to the success listener. This object contains the text that was recognized in the image.

You can also get information about the structure of the text. The text is organized into pages, blocks, paragraphs, words, and symbols. For each unit of organization, you can get information such as its dimensions and the languages it contains.

For example:

String recognizedText = firebaseVisionCloudText.getText();

for (FirebaseVisionCloudText.Page page: firebaseVisionCloudText.getPages()) {
    List<FirebaseVisionCloudText.DetectedLanguage> languages =
            page.getTextProperty().getDetectedLanguages();
    int height = page.getHeight();
    int width = page.getWidth();
    float confidence = page.getConfidence();

    for (FirebaseVisionCloudText.Block block: page.getBlocks()) {
        Rect boundingBox = block.getBoundingBox();
        List<FirebaseVisionCloudText.DetectedLanguage> blockLanguages =
                block.getTextProperty().getDetectedLanguages();
        float blockConfidence = block.getConfidence();
        // And so on: Paragraph, Word, Symbol
    }
}
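
The nested traversal hinted at by the "And so on" comment above might look like the following, placed inside the block loop. This is a sketch: the getter names (getParagraphs, getWords, getSymbols) mirror the page/block pattern and the underlying Cloud Vision hierarchy, but confirm them against the FirebaseVisionCloudText reference docs.

for (FirebaseVisionCloudText.Paragraph paragraph : block.getParagraphs()) {
    for (FirebaseVisionCloudText.Word word : paragraph.getWords()) {
        for (FirebaseVisionCloudText.Symbol symbol : word.getSymbols()) {
            // Examine each symbol's bounding box, confidence, and so on.
        }
    }
}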
