Recognize Text in Images with ML Kit on Android

You can use ML Kit to recognize text in images. ML Kit has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents. The general-purpose API has both on-device and cloud-based models. Document text recognition is available only as a cloud-based model. See the overview for a comparison of the cloud and on-device models.

See the ML Kit quickstart sample on GitHub for an example of this API in use, or try the codelab.

Before you begin

  1. If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
  2. Include the dependencies for ML Kit in your app-level build.gradle file:
    dependencies {
      // ...
    
      implementation 'com.google.firebase:firebase-ml-vision:18.0.1'
    }
    
  3. Optional but recommended: If you use the on-device API, configure your app to automatically download the ML model to the device after your app is installed from the Play Store.

    To do so, add the following declaration to your app's AndroidManifest.xml file:

    <application ...>
      ...
      <meta-data
          android:name="com.google.firebase.ml.vision.DEPENDENCIES"
          android:value="ocr" />
      <!-- To use multiple models: android:value="ocr,model2,model3" -->
    </application>
    
    If you do not enable install-time model downloads, the model will be downloaded the first time you run the on-device detector. Requests you make before the download has completed will produce no results.
  4. If you want to use the cloud-based model and you have not already enabled the Cloud-based APIs for your project, do so now:

    1. Open the ML Kit APIs page of the Firebase console.
    2. If you have not already upgraded your project to a Blaze plan, click Upgrade to do so. (You will be prompted to upgrade only if your project isn't on the Blaze plan.)

      Only Blaze-level projects can use Cloud-based APIs.

    3. If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.

    If you want to use only the on-device model, you can skip this step.

Now you are ready to start recognizing text in images.


Recognize text in images

To recognize text in an image using either an on-device or cloud-based model, run the text recognizer as described below.

1. Run the text recognizer

To recognize text in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionTextRecognizer's processImage method.

  1. Create a FirebaseVisionImage object from your image.

    • To create a FirebaseVisionImage object from a Bitmap object:
      FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
      The image represented by the Bitmap object must be upright, with no additional rotation required.
    • To create a FirebaseVisionImage object from a media.Image object, such as when capturing an image from a device's camera, first determine the angle the image must be rotated to compensate for both the device's rotation and the orientation of the camera sensor in the device:
      private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
      static {
          ORIENTATIONS.append(Surface.ROTATION_0, 90);
          ORIENTATIONS.append(Surface.ROTATION_90, 0);
          ORIENTATIONS.append(Surface.ROTATION_180, 270);
          ORIENTATIONS.append(Surface.ROTATION_270, 180);
      }
      
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      private int getRotationCompensation(String cameraId, Activity activity, Context context)
              throws CameraAccessException {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
          int rotationCompensation = ORIENTATIONS.get(deviceRotation);
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
          int sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION);
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          int result;
          switch (rotationCompensation) {
              case 0:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  break;
              case 90:
                  result = FirebaseVisionImageMetadata.ROTATION_90;
                  break;
              case 180:
                  result = FirebaseVisionImageMetadata.ROTATION_180;
                  break;
              case 270:
                  result = FirebaseVisionImageMetadata.ROTATION_270;
                  break;
              default:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  Log.e(TAG, "Bad rotation value: " + rotationCompensation);
          }
          return result;
      }

      Then, pass the media.Image object and the rotation value to FirebaseVisionImage.fromMediaImage():

      FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
    • To create a FirebaseVisionImage object from a ByteBuffer or a byte array, first calculate the image rotation as described above.

      Then, create a FirebaseVisionImageMetadata object that contains the image's height, width, color encoding format, and rotation:

      FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
              .setWidth(480)   // 480x360 is typically sufficient for
              .setHeight(360)  // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build();

      Use the buffer or array, and the metadata object, to create a FirebaseVisionImage object:

      FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
      // Or: FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);
      
    • To create a FirebaseVisionImage object from a file, pass the app context and file URI to FirebaseVisionImage.fromFilePath() (a fuller usage example follows this list):
      FirebaseVisionImage image = null;
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri);
      } catch (IOException e) {
          e.printStackTrace();
      }
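
    For example, here is a minimal sketch of letting the user pick an image with the system picker and creating a FirebaseVisionImage from the returned URI. The PICK_IMAGE_REQUEST constant and the onActivityResult wiring are illustrative, not part of the ML Kit API:

      private static final int PICK_IMAGE_REQUEST = 1;  // illustrative request code

      private void pickImage() {
          // Launch the system picker to let the user choose an image.
          Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
          intent.setType("image/*");
          startActivityForResult(intent, PICK_IMAGE_REQUEST);
      }

      @Override
      protected void onActivityResult(int requestCode, int resultCode, Intent data) {
          super.onActivityResult(requestCode, resultCode, data);
          if (requestCode == PICK_IMAGE_REQUEST && resultCode == RESULT_OK && data != null) {
              try {
                  FirebaseVisionImage image =
                          FirebaseVisionImage.fromFilePath(this, data.getData());
                  // Pass the image to the recognizer, as shown in the next steps.
              } catch (IOException e) {
                  e.printStackTrace();
              }
          }
      }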

  2. Get an instance of FirebaseVisionTextRecognizer.

    To use the on-device model:

    FirebaseVisionTextRecognizer textRecognizer = FirebaseVision.getInstance()
        .getOnDeviceTextRecognizer();
    

    To use the cloud-based model:

    FirebaseVisionTextRecognizer textRecognizer = FirebaseVision.getInstance()
            .getCloudTextRecognizer();
    
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    FirebaseVisionCloudTextRecognizerOptions options =
            new FirebaseVisionCloudTextRecognizerOptions.Builder()
                    .setLanguageHints(Arrays.asList("en", "hi"))
                    .build();
    FirebaseVisionTextRecognizer textRecognizer = FirebaseVision.getInstance()
            .getCloudTextRecognizer(options);
    
  3. Finally, pass the image to the processImage method:

    textRecognizer.processImage(image)
            .addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
                @Override
                public void onSuccess(FirebaseVisionText result) {
                    // Task completed successfully
                    // ...
                }
            })
            .addOnFailureListener(
                    new OnFailureListener() {
                        @Override
                        public void onFailure(@NonNull Exception e) {
                            // Task failed with an exception
                            // ...
                        }
                    });
    

2. Extract text from blocks of recognized text

If the text recognition operation succeeds, a FirebaseVisionText object will be passed to the success listener. A FirebaseVisionText object contains the full text recognized in the image and zero or more TextBlock objects.

Each TextBlock represents a rectangular block of text, which contains zero or more Line objects. Each Line object contains zero or more Element objects, which represent words and word-like entities (dates, numbers, and so on).

For each TextBlock, Line, and Element object, you can get the text recognized in the region and the bounding coordinates of the region.

For example:

String resultText = result.getText();
for (FirebaseVisionText.TextBlock block : result.getTextBlocks()) {
    String blockText = block.getText();
    Float blockConfidence = block.getConfidence();
    List<RecognizedLanguage> blockLanguages = block.getRecognizedLanguages();
    Point[] blockCornerPoints = block.getCornerPoints();
    Rect blockFrame = block.getBoundingBox();
    for (FirebaseVisionText.Line line : block.getLines()) {
        String lineText = line.getText();
        Float lineConfidence = line.getConfidence();
        List<RecognizedLanguage> lineLanguages = line.getRecognizedLanguages();
        Point[] lineCornerPoints = line.getCornerPoints();
        Rect lineFrame = line.getBoundingBox();
        for (FirebaseVisionText.Element element : line.getElements()) {
            String elementText = element.getText();
            Float elementConfidence = element.getConfidence();
            List<RecognizedLanguage> elementLanguages = element.getRecognizedLanguages();
            Point[] elementCornerPoints = element.getCornerPoints();
            Rect elementFrame = element.getBoundingBox();
        }
    }
}
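
For example, if you created the FirebaseVisionImage from a Bitmap, you can use the bounding boxes to highlight each recognized element. The following is a minimal sketch; the bitmap and imageView variables and the Paint settings are illustrative:

Bitmap annotated = bitmap.copy(Bitmap.Config.ARGB_8888, true);
Canvas canvas = new Canvas(annotated);
Paint paint = new Paint();
paint.setStyle(Paint.Style.STROKE);
paint.setStrokeWidth(4f);
paint.setColor(Color.GREEN);
for (FirebaseVisionText.TextBlock block : result.getTextBlocks()) {
    for (FirebaseVisionText.Line line : block.getLines()) {
        for (FirebaseVisionText.Element element : line.getElements()) {
            Rect box = element.getBoundingBox();
            if (box != null) {  // the bounding box can be null
                canvas.drawRect(box, paint);
            }
        }
    }
}
// Display the annotated copy, for example: imageView.setImageBitmap(annotated);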

Tips to improve real-time performance

If you want to use the on-device model to recognize text in a real-time application, follow these guidelines to achieve the best frame rates:

  • Throttle calls to the text recognizer. If a new video frame becomes available while the text recognizer is running, drop the frame. (One way to do this is shown in the sketch after this list.)
  • If you are using the output of the text recognizer to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. By doing so, you render to the display surface only once for each input frame. See the CameraSourcePreview and GraphicOverlay classes in the quickstart sample app for an example.
  • If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format.

    If you use the older Camera API, capture images in ImageFormat.NV21 format.

  • Capture images at a lower resolution. A 480x360 pixel image is typically sufficient.
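
For example, the following sketch shows one way to implement frame throttling. The onFrameAvailable callback, the isProcessing flag (a java.util.concurrent.atomic.AtomicBoolean), and the rotation value are illustrative names, not part of the ML Kit API:

private final AtomicBoolean isProcessing = new AtomicBoolean(false);

void onFrameAvailable(Image mediaImage, int rotation) {
    // Drop the frame if the recognizer is still busy with the previous one.
    if (!isProcessing.compareAndSet(false, true)) {
        mediaImage.close();
        return;
    }
    FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
    textRecognizer.processImage(image)
            .addOnCompleteListener(new OnCompleteListener<FirebaseVisionText>() {
                @Override
                public void onComplete(@NonNull Task<FirebaseVisionText> task) {
                    mediaImage.close();       // release the camera frame
                    isProcessing.set(false);  // ready for the next frame
                }
            });
}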

Recognize text in images of documents

To recognize the text of a document, configure and run the cloud-based document text recognizer as described below.

The document text recognition API, described below, provides an interface that is intended to be more convenient for working with images of documents. However, if you prefer the interface provided by the FirebaseVisionTextRecognizer API, you can use it instead to scan documents by configuring the cloud text recognizer to use the dense text model.
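
For example, a sketch of that configuration (this assumes the DENSE_MODEL option of FirebaseVisionCloudTextRecognizerOptions is available in your SDK version):

FirebaseVisionCloudTextRecognizerOptions options =
        new FirebaseVisionCloudTextRecognizerOptions.Builder()
                .setModelType(FirebaseVisionCloudTextRecognizerOptions.DENSE_MODEL)
                .build();
FirebaseVisionTextRecognizer textRecognizer = FirebaseVision.getInstance()
        .getCloudTextRecognizer(options);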

To use the document text recognition API:

1. Run the text recognizer

To recognize text in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionDocumentTextRecognizer's processImage method.

  1. Create a FirebaseVisionImage object from your image, following the same steps described in the Recognize text in images section above: from a Bitmap, from a media.Image (calculating the rotation compensation first), from a ByteBuffer or byte array (with a FirebaseVisionImageMetadata object), or from a file on the device.

  2. Get an instance of FirebaseVisionDocumentTextRecognizer:

    FirebaseVisionDocumentTextRecognizer textRecognizer = FirebaseVision.getInstance()
            .getCloudDocumentTextRecognizer();
    
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    FirebaseVisionCloudDocumentRecognizerOptions options =
            new FirebaseVisionCloudDocumentRecognizerOptions.Builder()
                    .setLanguageHints(Arrays.asList("en", "hi"))
                    .build();
    FirebaseVisionDocumentTextRecognizer textRecognizer = FirebaseVision.getInstance()
            .getCloudDocumentTextRecognizer(options);
    
  3. Finally, pass the image to the processImage method:

    textRecognizer.processImage(image)
            .addOnSuccessListener(new OnSuccessListener<FirebaseVisionDocumentText>() {
                @Override
                public void onSuccess(FirebaseVisionDocumentText result) {
                    // Task completed successfully
                    // ...
                }
            })
            .addOnFailureListener(
                    new OnFailureListener() {
                        @Override
                        public void onFailure(@NonNull Exception e) {
                            // Task failed with an exception
                            // ...
                        }
                    });
    

2. Extract text from blocks of recognized text

If the text recognition operation succeeds, a FirebaseVisionDocumentText object will be passed to the success listener. A FirebaseVisionDocumentText object contains the full text recognized in the image and a hierarchy of objects that reflect the structure of the recognized document: blocks, paragraphs, words, and symbols.

For each Block, Paragraph, Word, and Symbol object, you can get the text recognized in the region and the bounding coordinates of the region.

For example:

String resultText = result.getText();
for (FirebaseVisionDocumentText.Block block : result.getBlocks()) {
    String blockText = block.getText();
    Float blockConfidence = block.getConfidence();
    List<RecognizedLanguage> blockRecognizedLanguages = block.getRecognizedLanguages();
    Rect blockFrame = block.getBoundingBox();
    for (FirebaseVisionDocumentText.Paragraph paragraph : block.getParagraphs()) {
        String paragraphText = paragraph.getText();
        Float paragraphConfidence = paragraph.getConfidence();
        List<RecognizedLanguage> paragraphRecognizedLanguages = paragraph.getRecognizedLanguages();
        Rect paragraphFrame = paragraph.getBoundingBox();
        for (FirebaseVisionDocumentText.Word word : paragraph.getWords()) {
            String wordText = word.getText();
            Float wordConfidence = word.getConfidence();
            List<RecognizedLanguage> wordRecognizedLanguages = word.getRecognizedLanguages();
            Rect wordFrame = word.getBoundingBox();
            for (FirebaseVisionDocumentText.Symbol symbol : word.getSymbols()) {
                String symbolText = symbol.getText();
                Float symbolConfidence = symbol.getConfidence();
                List<RecognizedLanguage> symbolRecognizedLanguages = symbol.getRecognizedLanguages();
                Rect symbolFrame = symbol.getBoundingBox();
            }
        }
    }
}
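
Because the document model reports a confidence value for each node, you can, for example, keep only high-confidence words when post-processing the result. A minimal sketch, with an illustrative 0.8 threshold:

StringBuilder filtered = new StringBuilder();
for (FirebaseVisionDocumentText.Block block : result.getBlocks()) {
    for (FirebaseVisionDocumentText.Paragraph paragraph : block.getParagraphs()) {
        for (FirebaseVisionDocumentText.Word word : paragraph.getWords()) {
            Float confidence = word.getConfidence();  // may be null
            if (confidence != null && confidence >= 0.8f) {
                filtered.append(word.getText()).append(' ');
            }
        }
    }
}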

Next steps

Before you deploy an app that uses a Cloud API to production, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.
