Recognize text in images with ML Kit on Android

You can use ML Kit to recognize text in images. ML Kit has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents. The general-purpose API has both on-device and cloud-based models. Document text recognition is available only as a cloud-based model. See the overview for a comparison of the cloud and on-device models.

Before you begin

  1. If you have not already, add Firebase to your Android project.
  2. Add the dependency for the ML Kit Android library to your module (app-level) Gradle file (usually app/build.gradle):
    apply plugin: 'com.android.application'
    apply plugin: 'com.google.gms.google-services'
    
    dependencies {
      // ...
    
      implementation 'com.google.firebase:firebase-ml-vision:24.0.3'
    }
    
  3. Optional but recommended: If you use the on-device API, configure your app to automatically download the ML model to the device after your app is installed from the Play Store.

    To do so, add the following declaration to your app's AndroidManifest.xml file:

    <application ...>
      ...
      <meta-data
          android:name="com.google.firebase.ml.vision.DEPENDENCIES"
          android:value="ocr" />
      <!-- To use multiple models: android:value="ocr,model2,model3" -->
    </application>
    
    If you do not enable install-time model downloads, the model is downloaded the first time you run the on-device detector. Requests you make before the download has completed produce no results.
  4. If you want to use the cloud-based model, and you have not already enabled the cloud-based APIs for your project, do so now:

    1. Open the ML Kit APIs page of the Firebase console.
    2. If you have not already upgraded your project to the Blaze pricing plan, click Upgrade to do so (you will be prompted to upgrade only if your project is not on the Blaze plan).

      Only Blaze-level projects can use cloud-based APIs.

    3. If cloud-based APIs are not already enabled, click Enable Cloud-based APIs.

    If you want to use only the on-device model, you can skip this step.

Now you are ready to start recognizing text in images.

Input image guidelines

  • For ML Kit to accurately recognize text, input images must contain text that is represented by sufficient pixel data. Ideally, for Latin text, each character should be at least 16x16 pixels. For Chinese, Japanese, and Korean text (supported only by the cloud-based APIs), each character should be 24x24 pixels. For all languages, there is generally no accuracy benefit from characters being larger than 24x24 pixels.

    So, for example, a 640x480 image might work well to scan a business card that occupies the full width of the image. To scan a document printed on letter-size paper, a 720x1280 pixel image might be required.

  • Poor image focus can hurt text recognition accuracy. If you are not getting acceptable results, try asking the user to recapture the image.

  • If you are recognizing text in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions (keeping in mind the accuracy requirements above) and make sure that the text occupies as much of the image as possible (a downscaling sketch follows this list). Also see Tips to improve real-time performance.
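
As an illustration of the guidelines above, the following is a minimal sketch of downscaling a captured frame before detection while keeping characters legible. It is not part of the ML Kit API: the downscaleForOcr helper and the estimatedCharHeightPx parameter are hypothetical, and the 24-pixel threshold simply follows the guideline above.

Kotlin+KTX

import android.graphics.Bitmap

// Hypothetical helper: shrink a frame so that the smallest character stays
// near the 24 px recommended above (16 px is enough for Latin text).
// estimatedCharHeightPx is an estimate you must supply yourself, for example
// from the known layout of the document being scanned.
fun downscaleForOcr(source: Bitmap, estimatedCharHeightPx: Int): Bitmap {
    val minCharHeightPx = 24
    if (estimatedCharHeightPx <= minCharHeightPx) {
        // Already at or below the useful limit; shrinking further would hurt accuracy.
        return source
    }
    // Scale factor that brings the smallest character down to about 24 px.
    val scale = minCharHeightPx.toFloat() / estimatedCharHeightPx
    val targetWidth = (source.width * scale).toInt().coerceAtLeast(1)
    val targetHeight = (source.height * scale).toInt().coerceAtLeast(1)
    return Bitmap.createScaledBitmap(source, targetWidth, targetHeight, true)
}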


Recognize text in images

To recognize text in an image using either an on-device or cloud-based model, run the text recognizer as described below.

1. Run the text recognizer

To recognize text in an image, create a FirebaseVisionImage object from a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionTextRecognizer's processImage method.

  1. Create a FirebaseVisionImage object from your image.

    • To create a FirebaseVisionImage object from a media.Image object, such as when capturing an image from a device's camera, pass the media.Image object and the image's rotation to FirebaseVisionImage.fromMediaImage().

      If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you, so you just need to convert the rotation to one of ML Kit's ROTATION_ constants before calling FirebaseVisionImage.fromMediaImage():

      Java

      private class YourAnalyzer implements ImageAnalysis.Analyzer {
      
          private int degreesToFirebaseRotation(int degrees) {
              switch (degrees) {
                  case 0:
                      return FirebaseVisionImageMetadata.ROTATION_0;
                  case 90:
                      return FirebaseVisionImageMetadata.ROTATION_90;
                  case 180:
                      return FirebaseVisionImageMetadata.ROTATION_180;
                  case 270:
                      return FirebaseVisionImageMetadata.ROTATION_270;
                  default:
                      throw new IllegalArgumentException(
                              "Rotation must be 0, 90, 180, or 270.");
              }
          }
      
          @Override
          public void analyze(ImageProxy imageProxy, int degrees) {
              if (imageProxy == null || imageProxy.getImage() == null) {
                  return;
              }
              Image mediaImage = imageProxy.getImage();
              int rotation = degreesToFirebaseRotation(degrees);
              FirebaseVisionImage image =
                      FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
              // Pass image to an ML Kit Vision API
              // ...
          }
      }
      

      Kotlin+KTX

      private class YourImageAnalyzer : ImageAnalysis.Analyzer {
          private fun degreesToFirebaseRotation(degrees: Int): Int = when(degrees) {
              0 -> FirebaseVisionImageMetadata.ROTATION_0
              90 -> FirebaseVisionImageMetadata.ROTATION_90
              180 -> FirebaseVisionImageMetadata.ROTATION_180
              270 -> FirebaseVisionImageMetadata.ROTATION_270
              else -> throw Exception("Rotation must be 0, 90, 180, or 270.")
          }
      
          override fun analyze(imageProxy: ImageProxy?, degrees: Int) {
              val mediaImage = imageProxy?.image
              val imageRotation = degreesToFirebaseRotation(degrees)
              if (mediaImage != null) {
                  val image = FirebaseVisionImage.fromMediaImage(mediaImage, imageRotation)
                  // Pass image to an ML Kit Vision API
                  // ...
              }
          }
      }
      

      If you do not use a camera library that gives you the image's rotation, you can calculate it from the device's rotation and the orientation of the camera sensor in the device:

      Java

      private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
      static {
          ORIENTATIONS.append(Surface.ROTATION_0, 90);
          ORIENTATIONS.append(Surface.ROTATION_90, 0);
          ORIENTATIONS.append(Surface.ROTATION_180, 270);
          ORIENTATIONS.append(Surface.ROTATION_270, 180);
      }
      
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      private int getRotationCompensation(String cameraId, Activity activity, Context context)
              throws CameraAccessException {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
          int rotationCompensation = ORIENTATIONS.get(deviceRotation);
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
          int sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION);
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          int result;
          switch (rotationCompensation) {
              case 0:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  break;
              case 90:
                  result = FirebaseVisionImageMetadata.ROTATION_90;
                  break;
              case 180:
                  result = FirebaseVisionImageMetadata.ROTATION_180;
                  break;
              case 270:
                  result = FirebaseVisionImageMetadata.ROTATION_270;
                  break;
              default:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  Log.e(TAG, "Bad rotation value: " + rotationCompensation);
          }
          return result;
      }

      Kotlin+KTX

      private val ORIENTATIONS = SparseIntArray()
      
      init {
          ORIENTATIONS.append(Surface.ROTATION_0, 90)
          ORIENTATIONS.append(Surface.ROTATION_90, 0)
          ORIENTATIONS.append(Surface.ROTATION_180, 270)
          ORIENTATIONS.append(Surface.ROTATION_270, 180)
      }
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      @Throws(CameraAccessException::class)
      private fun getRotationCompensation(cameraId: String, activity: Activity, context: Context): Int {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          val deviceRotation = activity.windowManager.defaultDisplay.rotation
          var rotationCompensation = ORIENTATIONS.get(deviceRotation)
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          val cameraManager = context.getSystemService(CAMERA_SERVICE) as CameraManager
          val sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          val result: Int
          when (rotationCompensation) {
              0 -> result = FirebaseVisionImageMetadata.ROTATION_0
              90 -> result = FirebaseVisionImageMetadata.ROTATION_90
              180 -> result = FirebaseVisionImageMetadata.ROTATION_180
              270 -> result = FirebaseVisionImageMetadata.ROTATION_270
              else -> {
                  result = FirebaseVisionImageMetadata.ROTATION_0
                  Log.e(TAG, "Bad rotation value: $rotationCompensation")
              }
          }
          return result
      }

      Then, pass the media.Image object and the rotation value to FirebaseVisionImage.fromMediaImage():

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);

      Kotlin+KTX

      val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
    • To create a FirebaseVisionImage object from a file URI, pass the app context and file URI to FirebaseVisionImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

      Java

      FirebaseVisionImage image;
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri);
      } catch (IOException e) {
          e.printStackTrace();
      }

      Kotlin+KTX

      val image: FirebaseVisionImage
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri)
      } catch (e: IOException) {
          e.printStackTrace()
      }
    • To create a FirebaseVisionImage object from a ByteBuffer or a byte array, first calculate the image rotation as described above for media.Image input.

      Then, create a FirebaseVisionImageMetadata object that contains the image's height, width, color encoding format, and rotation:

      Java

      FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
              .setWidth(480)   // 480x360 is typically sufficient for
              .setHeight(360)  // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build();

      Kotlin+KTX

      val metadata = FirebaseVisionImageMetadata.Builder()
              .setWidth(480) // 480x360 is typically sufficient for
              .setHeight(360) // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build()

      Create a FirebaseVisionImage object from the buffer or array, together with the metadata object:

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
      // Or: FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);

      Kotlin+KTX

      val image = FirebaseVisionImage.fromByteBuffer(buffer, metadata)
      // Or: val image = FirebaseVisionImage.fromByteArray(byteArray, metadata)
    • To create a FirebaseVisionImage object from a Bitmap object:

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

      Kotlin+KTX

      val image = FirebaseVisionImage.fromBitmap(bitmap)
      The image represented by the Bitmap object must be upright, with no additional rotation required.

  2. Get an instance of FirebaseVisionTextRecognizer.

    To use the on-device model:

    Java

    FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
            .getOnDeviceTextRecognizer();

    Kotlin+KTX

    val detector = FirebaseVision.getInstance()
            .onDeviceTextRecognizer

    To use the cloud-based model:

    Java

    FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
            .getCloudTextRecognizer();
    // Or, to change the default settings:
    //   FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
    //          .getCloudTextRecognizer(options);
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    FirebaseVisionCloudTextRecognizerOptions options = new FirebaseVisionCloudTextRecognizerOptions.Builder()
            .setLanguageHints(Arrays.asList("en", "hi"))
            .build();
    

    Kotlin+KTX

    val detector = FirebaseVision.getInstance().cloudTextRecognizer
    // Or, to change the default settings:
    // val detector = FirebaseVision.getInstance().getCloudTextRecognizer(options)
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    val options = FirebaseVisionCloudTextRecognizerOptions.Builder()
            .setLanguageHints(listOf("en", "hi"))
            .build()
    
  3. Finally, pass the image to the processImage method:

    Java

    Task<FirebaseVisionText> result =
            detector.processImage(image)
                    .addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
                        @Override
                        public void onSuccess(FirebaseVisionText firebaseVisionText) {
                            // Task completed successfully
                            // ...
                        }
                    })
                    .addOnFailureListener(
                            new OnFailureListener() {
                                @Override
                                public void onFailure(@NonNull Exception e) {
                                    // Task failed with an exception
                                    // ...
                                }
                            });

    Kotlin+KTX

    val result = detector.processImage(image)
            .addOnSuccessListener { firebaseVisionText ->
                // Task completed successfully
                // ...
            }
            .addOnFailureListener { e ->
                // Task failed with an exception
                // ...
            }

2. Extract text from blocks of recognized text

If the text recognition operation succeeds, a FirebaseVisionText object is passed to the success listener. A FirebaseVisionText object contains the full text recognized in the image, as well as zero or more TextBlock objects.

Each TextBlock represents a rectangular block of text that contains zero or more Line objects. Each Line object contains zero or more Element objects, which represent words and word-like entities (dates, numbers, and so on).

For each TextBlock, Line, and Element object, you can get the text recognized in the region and the bounding coordinates of the region.

For example:

Java

String resultText = result.getText();
for (FirebaseVisionText.TextBlock block: result.getTextBlocks()) {
    String blockText = block.getText();
    Float blockConfidence = block.getConfidence();
    List<RecognizedLanguage> blockLanguages = block.getRecognizedLanguages();
    Point[] blockCornerPoints = block.getCornerPoints();
    Rect blockFrame = block.getBoundingBox();
    for (FirebaseVisionText.Line line: block.getLines()) {
        String lineText = line.getText();
        Float lineConfidence = line.getConfidence();
        List<RecognizedLanguage> lineLanguages = line.getRecognizedLanguages();
        Point[] lineCornerPoints = line.getCornerPoints();
        Rect lineFrame = line.getBoundingBox();
        for (FirebaseVisionText.Element element: line.getElements()) {
            String elementText = element.getText();
            Float elementConfidence = element.getConfidence();
            List<RecognizedLanguage> elementLanguages = element.getRecognizedLanguages();
            Point[] elementCornerPoints = element.getCornerPoints();
            Rect elementFrame = element.getBoundingBox();
        }
    }
}

Kotlin+KTX

val resultText = result.text
for (block in result.textBlocks) {
    val blockText = block.text
    val blockConfidence = block.confidence
    val blockLanguages = block.recognizedLanguages
    val blockCornerPoints = block.cornerPoints
    val blockFrame = block.boundingBox
    for (line in block.lines) {
        val lineText = line.text
        val lineConfidence = line.confidence
        val lineLanguages = line.recognizedLanguages
        val lineCornerPoints = line.cornerPoints
        val lineFrame = line.boundingBox
        for (element in line.elements) {
            val elementText = element.text
            val elementConfidence = element.confidence
            val elementLanguages = element.recognizedLanguages
            val elementCornerPoints = element.cornerPoints
            val elementFrame = element.boundingBox
        }
    }
}

Tips to improve real-time performance

If you want to recognize text in a real-time application using the on-device model, follow these guidelines to achieve the best frame rates:

  • Throttle calls to the text recognizer. If a new video frame becomes available while the text recognizer is running, drop the frame (a minimal sketch of this follows the list).
  • If you are using the output of the text recognizer to overlay graphics on the input image, first get the result from ML Kit, then render the image and the overlay in a single step. By doing so, you render to the display surface only once for each input frame.
  • If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format.

    If you use the older Camera API, capture images in ImageFormat.NV21 format.

  • Consider capturing images at a lower resolution. However, also keep in mind this API's image dimension requirements.
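
The following is a minimal sketch of the first tip: dropping camera frames while the recognizer is still busy. It reuses the CameraX analyzer signature and the on-device recognizer shown earlier on this page; the YourThrottledAnalyzer class name and the isProcessing flag are illustrative, not part of the ML Kit API.

Kotlin+KTX

private class YourThrottledAnalyzer : ImageAnalysis.Analyzer {
    // True while a frame is being processed by the text recognizer.
    private val isProcessing = java.util.concurrent.atomic.AtomicBoolean(false)

    private val detector = FirebaseVision.getInstance().onDeviceTextRecognizer

    private fun degreesToFirebaseRotation(degrees: Int): Int = when (degrees) {
        0 -> FirebaseVisionImageMetadata.ROTATION_0
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        270 -> FirebaseVisionImageMetadata.ROTATION_270
        else -> throw IllegalArgumentException("Rotation must be 0, 90, 180, or 270.")
    }

    override fun analyze(imageProxy: ImageProxy?, degrees: Int) {
        val mediaImage = imageProxy?.image ?: return

        // Drop this frame if the previous one is still being recognized.
        if (!isProcessing.compareAndSet(false, true)) {
            return
        }

        val rotation = degreesToFirebaseRotation(degrees)
        val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
        detector.processImage(image)
                .addOnSuccessListener { firebaseVisionText ->
                    // Render the camera frame and the text overlay in a single step here.
                    isProcessing.set(false)
                }
                .addOnFailureListener {
                    isProcessing.set(false)
                }
    }
}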

Next steps


Recognize text in images of documents

To recognize the text of a document, configure and run the cloud-based document text recognizer as described below.

The document text recognition API, described below, provides an interface that is intended to be more convenient for working with images of documents. However, if you prefer the interface provided by the FirebaseVisionTextRecognizer API, you can use it to scan documents instead by configuring the cloud text recognizer to use the dense text model, as in the sketch below.
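
For example, the following is a minimal sketch of that configuration, assuming the FirebaseVisionCloudTextRecognizerOptions builder exposes setModelType and the DENSE_MODEL constant as in the firebase-ml-vision library:

Kotlin+KTX

// Configure the cloud text recognizer to use the dense text model, which is
// tuned for documents, instead of the default sparse text model.
val options = FirebaseVisionCloudTextRecognizerOptions.Builder()
        .setModelType(FirebaseVisionCloudTextRecognizerOptions.DENSE_MODEL)
        .build()
val detector = FirebaseVision.getInstance().getCloudTextRecognizer(options)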

To use the document text recognition API:

1. Run the text recognizer

To recognize text in an image, create a FirebaseVisionImage object from a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionDocumentTextRecognizer's processImage method.

  1. Create a FirebaseVisionImage object from your image.

    • To create a FirebaseVisionImage object from a media.Image object, such as when capturing an image from a device's camera, pass the media.Image object and the image's rotation to FirebaseVisionImage.fromMediaImage().

      If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you, so you just need to convert the rotation to one of ML Kit's ROTATION_ constants before calling FirebaseVisionImage.fromMediaImage():

      Java

      private class YourAnalyzer implements ImageAnalysis.Analyzer {
      
          private int degreesToFirebaseRotation(int degrees) {
              switch (degrees) {
                  case 0:
                      return FirebaseVisionImageMetadata.ROTATION_0;
                  case 90:
                      return FirebaseVisionImageMetadata.ROTATION_90;
                  case 180:
                      return FirebaseVisionImageMetadata.ROTATION_180;
                  case 270:
                      return FirebaseVisionImageMetadata.ROTATION_270;
                  default:
                      throw new IllegalArgumentException(
                              "Rotation must be 0, 90, 180, or 270.");
              }
          }
      
          @Override
          public void analyze(ImageProxy imageProxy, int degrees) {
              if (imageProxy == null || imageProxy.getImage() == null) {
                  return;
              }
              Image mediaImage = imageProxy.getImage();
              int rotation = degreesToFirebaseRotation(degrees);
              FirebaseVisionImage image =
                      FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
              // Pass image to an ML Kit Vision API
              // ...
          }
      }
      

      Kotlin+KTX

      private class YourImageAnalyzer : ImageAnalysis.Analyzer {
          private fun degreesToFirebaseRotation(degrees: Int): Int = when(degrees) {
              0 -> FirebaseVisionImageMetadata.ROTATION_0
              90 -> FirebaseVisionImageMetadata.ROTATION_90
              180 -> FirebaseVisionImageMetadata.ROTATION_180
              270 -> FirebaseVisionImageMetadata.ROTATION_270
              else -> throw Exception("Rotation must be 0, 90, 180, or 270.")
          }
      
          override fun analyze(imageProxy: ImageProxy?, degrees: Int) {
              val mediaImage = imageProxy?.image
              val imageRotation = degreesToFirebaseRotation(degrees)
              if (mediaImage != null) {
                  val image = FirebaseVisionImage.fromMediaImage(mediaImage, imageRotation)
                  // Pass image to an ML Kit Vision API
                  // ...
              }
          }
      }
      

      If you do not use a camera library that gives you the image's rotation, you can calculate it from the device's rotation and the orientation of the camera sensor in the device:

      Java

      private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
      static {
          ORIENTATIONS.append(Surface.ROTATION_0, 90);
          ORIENTATIONS.append(Surface.ROTATION_90, 0);
          ORIENTATIONS.append(Surface.ROTATION_180, 270);
          ORIENTATIONS.append(Surface.ROTATION_270, 180);
      }
      
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      private int getRotationCompensation(String cameraId, Activity activity, Context context)
              throws CameraAccessException {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
          int rotationCompensation = ORIENTATIONS.get(deviceRotation);
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
          int sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION);
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          int result;
          switch (rotationCompensation) {
              case 0:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  break;
              case 90:
                  result = FirebaseVisionImageMetadata.ROTATION_90;
                  break;
              case 180:
                  result = FirebaseVisionImageMetadata.ROTATION_180;
                  break;
              case 270:
                  result = FirebaseVisionImageMetadata.ROTATION_270;
                  break;
              default:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  Log.e(TAG, "Bad rotation value: " + rotationCompensation);
          }
          return result;
      }

      Kotlin+KTX

      private val ORIENTATIONS = SparseIntArray()
      
      init {
          ORIENTATIONS.append(Surface.ROTATION_0, 90)
          ORIENTATIONS.append(Surface.ROTATION_90, 0)
          ORIENTATIONS.append(Surface.ROTATION_180, 270)
          ORIENTATIONS.append(Surface.ROTATION_270, 180)
      }
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      @Throws(CameraAccessException::class)
      private fun getRotationCompensation(cameraId: String, activity: Activity, context: Context): Int {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          val deviceRotation = activity.windowManager.defaultDisplay.rotation
          var rotationCompensation = ORIENTATIONS.get(deviceRotation)
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          val cameraManager = context.getSystemService(CAMERA_SERVICE) as CameraManager
          val sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          val result: Int
          when (rotationCompensation) {
              0 -> result = FirebaseVisionImageMetadata.ROTATION_0
              90 -> result = FirebaseVisionImageMetadata.ROTATION_90
              180 -> result = FirebaseVisionImageMetadata.ROTATION_180
              270 -> result = FirebaseVisionImageMetadata.ROTATION_270
              else -> {
                  result = FirebaseVisionImageMetadata.ROTATION_0
                  Log.e(TAG, "Bad rotation value: $rotationCompensation")
              }
          }
          return result
      }

      Then, pass the media.Image object and the rotation value to FirebaseVisionImage.fromMediaImage():

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);

      Kotlin+KTX

      val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
    • To create a FirebaseVisionImage object from a file URI, pass the app context and file URI to FirebaseVisionImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

      Java

      FirebaseVisionImage image;
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri);
      } catch (IOException e) {
          e.printStackTrace();
      }

      Kotlin+KTX

      val image: FirebaseVisionImage
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri)
      } catch (e: IOException) {
          e.printStackTrace()
      }
    • To create a FirebaseVisionImage object from a ByteBuffer or a byte array, first calculate the image rotation as described above for media.Image input.

      Then, create a FirebaseVisionImageMetadata object that contains the image's height, width, color encoding format, and rotation:

      Java

      FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
              .setWidth(480)   // 480x360 is typically sufficient for
              .setHeight(360)  // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build();

      Kotlin+KTX

      val metadata = FirebaseVisionImageMetadata.Builder()
              .setWidth(480) // 480x360 is typically sufficient for
              .setHeight(360) // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build()

      Create a FirebaseVisionImage object from the buffer or array, together with the metadata object:

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
      // Or: FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);

      Kotlin+KTX

      val image = FirebaseVisionImage.fromByteBuffer(buffer, metadata)
      // Or: val image = FirebaseVisionImage.fromByteArray(byteArray, metadata)
    • To create a FirebaseVisionImage object from a Bitmap object:

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

      Kotlin+KTX

      val image = FirebaseVisionImage.fromBitmap(bitmap)
      The image represented by the Bitmap object must be upright, with no additional rotation required.

  2. Get an instance of FirebaseVisionDocumentTextRecognizer:

    Java

    FirebaseVisionDocumentTextRecognizer detector = FirebaseVision.getInstance()
            .getCloudDocumentTextRecognizer();
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    FirebaseVisionCloudDocumentRecognizerOptions options =
            new FirebaseVisionCloudDocumentRecognizerOptions.Builder()
                    .setLanguageHints(Arrays.asList("en", "hi"))
                    .build();
    FirebaseVisionDocumentTextRecognizer detector = FirebaseVision.getInstance()
            .getCloudDocumentTextRecognizer(options);

    Kotlin+KTX

    val detector = FirebaseVision.getInstance()
            .cloudDocumentTextRecognizer
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    val options = FirebaseVisionCloudDocumentRecognizerOptions.Builder()
            .setLanguageHints(listOf("en", "hi"))
            .build()
    val detector = FirebaseVision.getInstance()
            .getCloudDocumentTextRecognizer(options)

  3. Finally, pass the image to the processImage method:

    Java

    detector.processImage(myImage)
            .addOnSuccessListener(new OnSuccessListener<FirebaseVisionDocumentText>() {
                @Override
                public void onSuccess(FirebaseVisionDocumentText result) {
                    // Task completed successfully
                    // ...
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception e) {
                    // Task failed with an exception
                    // ...
                }
            });

    Kotlin+KTX

    detector.processImage(myImage)
            .addOnSuccessListener { firebaseVisionDocumentText ->
                // Task completed successfully
                // ...
            }
            .addOnFailureListener { e ->
                // Task failed with an exception
                // ...
            }

2. Extract text from blocks of recognized text

If the text recognition operation succeeds, it returns a FirebaseVisionDocumentText object. A FirebaseVisionDocumentText object contains the full text recognized in the image and a hierarchy of objects that reflects the structure of the recognized document:

For each Block, Paragraph, Word, and Symbol object, you can get the text recognized in the region and the bounding coordinates of the region.

For example:

Java

String resultText = result.getText();
for (FirebaseVisionDocumentText.Block block: result.getBlocks()) {
    String blockText = block.getText();
    Float blockConfidence = block.getConfidence();
    List<RecognizedLanguage> blockRecognizedLanguages = block.getRecognizedLanguages();
    Rect blockFrame = block.getBoundingBox();
    for (FirebaseVisionDocumentText.Paragraph paragraph: block.getParagraphs()) {
        String paragraphText = paragraph.getText();
        Float paragraphConfidence = paragraph.getConfidence();
        List<RecognizedLanguage> paragraphRecognizedLanguages = paragraph.getRecognizedLanguages();
        Rect paragraphFrame = paragraph.getBoundingBox();
        for (FirebaseVisionDocumentText.Word word: paragraph.getWords()) {
            String wordText = word.getText();
            Float wordConfidence = word.getConfidence();
            List<RecognizedLanguage> wordRecognizedLanguages = word.getRecognizedLanguages();
            Rect wordFrame = word.getBoundingBox();
            for (FirebaseVisionDocumentText.Symbol symbol: word.getSymbols()) {
                String symbolText = symbol.getText();
                Float symbolConfidence = symbol.getConfidence();
                List<RecognizedLanguage> symbolRecognizedLanguages = symbol.getRecognizedLanguages();
                Rect symbolFrame = symbol.getBoundingBox();
            }
        }
    }
}

Kotlin+KTX

val resultText = result.text
for (block in result.blocks) {
    val blockText = block.text
    val blockConfidence = block.confidence
    val blockRecognizedLanguages = block.recognizedLanguages
    val blockFrame = block.boundingBox
    for (paragraph in block.paragraphs) {
        val paragraphText = paragraph.text
        val paragraphConfidence = paragraph.confidence
        val paragraphRecognizedLanguages = paragraph.recognizedLanguages
        val paragraphFrame = paragraph.boundingBox
        for (word in paragraph.words) {
            val wordText = word.text
            val wordConfidence = word.confidence
            val wordRecognizedLanguages = word.recognizedLanguages
            val wordFrame = word.boundingBox
            for (symbol in word.symbols) {
                val symbolText = symbol.text
                val symbolConfidence = symbol.confidence
                val symbolRecognizedLanguages = symbol.recognizedLanguages
                val symbolFrame = symbol.boundingBox
            }
        }
    }
}

Next steps