
Recognize Text in Images with ML Kit on iOS

You can use ML Kit to recognize text in images. ML Kit has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents. The general-purpose API has both on-device and cloud-based models. Document text recognition is available only as a cloud-based model. See the overview for a comparison of the cloud and on-device models.

Before you begin

  1. If you haven't already added Firebase to your app, do so by following the steps in the getting started guide.
  2. Include the ML Kit libraries in your Podfile:
    pod 'Firebase/MLVision', '6.25.0'
    # If using an on-device API:
    pod 'Firebase/MLVisionTextModel', '6.25.0'
    
    After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace.
  3. In your app, import Firebase:

    Swift

    import Firebase

    Objective-C

    @import Firebase;
  4. If you want to use the cloud-based model, and you haven't already enabled the cloud-based APIs for your project, do so now:

    1. Open the ML Kit APIs page of the Firebase console.
    2. If you haven't already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project isn't on the Blaze plan.)

      Only Blaze-level projects can use cloud-based APIs.

    3. If cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.

    If you want to use only the on-device model, you can skip this step.

Now you are ready to start recognizing text in images.

Input image guidelines

  • For ML Kit to accurately recognize text, input images must contain text that is represented by sufficient pixel data. Ideally, for Latin text, each character should be at least 16x16 pixels. For Chinese, Japanese, and Korean text (supported only by the cloud-based APIs), each character should be 24x24 pixels. For all languages, there is generally no accuracy benefit for characters to be larger than 24x24 pixels.

    So, for example, a 640x480 image might work well to scan a business card that occupies the full width of the image. To scan a document printed on letter-sized paper, a 720x1280 pixel image might be required.

  • Poor image focus can hurt text recognition accuracy. If you aren't getting acceptable results, try asking the user to recapture the image.

  • If you are recognizing text in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions (keeping in mind the accuracy requirements above) and ensure that the text occupies as much of the image as possible (see the downscaling sketch below). Also see Tips to improve real-time performance.
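
A minimal downscaling sketch, using a hypothetical helper of our own (not part of the ML Kit API), that resizes a captured UIImage so its longest side fits a target resolution:

import UIKit

// Hypothetical helper: shrink large captures to reduce latency while
// keeping enough pixels for recognition (see the guidelines above).
func downscaledForRecognition(_ image: UIImage,
                              maxDimension: CGFloat = 1280) -> UIImage {
    let longest = max(image.size.width, image.size.height)
    guard longest > maxDimension else { return image }  // already small enough
    let scale = maxDimension / longest
    let newSize = CGSize(width: image.size.width * scale,
                         height: image.size.height * scale)
    return UIGraphicsImageRenderer(size: newSize).image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
}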


Recognize text in images

To recognize text in an image using either an on-device or cloud-based model, run the text recognizer as described below.

1. Run the text recognizer

Pass the image as a UIImage or a CMSampleBufferRef to VisionTextRecognizer's process(_:completion:) method:
  1. Get an instance of VisionTextRecognizer by calling onDeviceTextRecognizer or cloudTextRecognizer:

    Swift

    To use the on-device model:

    let vision = Vision.vision()
    let textRecognizer = vision.onDeviceTextRecognizer()
    

    To use the cloud model:

    let vision = Vision.vision()
    let textRecognizer = vision.cloudTextRecognizer()
    
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    let options = VisionCloudTextRecognizerOptions()
    options.languageHints = ["en", "hi"]
    let textRecognizer = vision.cloudTextRecognizer(options: options)
    

    Objective-C

    To use the on-device model:

    FIRVision *vision = [FIRVision vision];
    FIRVisionTextRecognizer *textRecognizer = [vision onDeviceTextRecognizer];
    

    To use the cloud model:

    FIRVision *vision = [FIRVision vision];
    FIRVisionTextRecognizer *textRecognizer = [vision cloudTextRecognizer];
    
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    FIRVisionCloudTextRecognizerOptions *options =
            [[FIRVisionCloudTextRecognizerOptions alloc] init];
    options.languageHints = @[@"en", @"hi"];
    FIRVisionTextRecognizer *textRecognizer = [vision cloudTextRecognizerWithOptions:options];
    
  2. 使用UIImageCMSampleBufferRef創建VisionImage對象。

    要使用UIImage

    1. 如有必要,旋轉圖像,使其imageOrientation屬性為.up
    2. 使用正確旋轉的UIImage創建一個VisionImage對象。不要指定任何旋轉元數據-必須使用默認值.topLeft

      Swift

      let image = VisionImage(image: uiImage)

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];

    To use a CMSampleBufferRef:

    1. Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer.

      To get the image orientation:

      Swift

      // Swift equivalent of the Objective-C helper below.
      func imageOrientation(
          deviceOrientation: UIDeviceOrientation,
          cameraPosition: AVCaptureDevice.Position
      ) -> VisionDetectorImageOrientation {
          switch deviceOrientation {
          case .portrait:
              return cameraPosition == .front ? .leftTop : .rightTop
          case .landscapeLeft:
              return cameraPosition == .front ? .bottomLeft : .topLeft
          case .portraitUpsideDown:
              return cameraPosition == .front ? .rightBottom : .leftBottom
          case .landscapeRight:
              return cameraPosition == .front ? .topRight : .bottomRight
          default:
              return .topLeft
          }
      }

      Objective-C

      - (FIRVisionDetectorImageOrientation)
          imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                                 cameraPosition:(AVCaptureDevicePosition)cameraPosition {
        switch (deviceOrientation) {
          case UIDeviceOrientationPortrait:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationLeftTop;
            } else {
              return FIRVisionDetectorImageOrientationRightTop;
            }
          case UIDeviceOrientationLandscapeLeft:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationBottomLeft;
            } else {
              return FIRVisionDetectorImageOrientationTopLeft;
            }
          case UIDeviceOrientationPortraitUpsideDown:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationRightBottom;
            } else {
              return FIRVisionDetectorImageOrientationLeftBottom;
            }
          case UIDeviceOrientationLandscapeRight:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationTopRight;
            } else {
              return FIRVisionDetectorImageOrientationBottomRight;
            }
          default:
            return FIRVisionDetectorImageOrientationTopLeft;
        }
      }

      Then, create the metadata object:

      Swift

      let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
      let metadata = VisionImageMetadata()
      metadata.orientation = imageOrientation(
          deviceOrientation: UIDevice.current.orientation,
          cameraPosition: cameraPosition
      )

      Objective-C

      FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
      AVCaptureDevicePosition cameraPosition =
          AVCaptureDevicePositionBack;  // Set to the capture device you used.
      metadata.orientation =
          [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                       cameraPosition:cameraPosition];
    2. Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:

      Swift

      let image = VisionImage(buffer: sampleBuffer)
      image.metadata = metadata

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
      image.metadata = metadata;
  3. Then, pass the image to the process(_:completion:) method:

    Swift

    textRecognizer.process(image) { result, error in
      guard error == nil, let result = result else {
        // ...
        return
      }
    
      // Recognized text
    }
    

    Objective-C

    [textRecognizer processImage:image
                      completion:^(FIRVisionText *_Nullable result,
                                   NSError *_Nullable error) {
      if (error != nil || result == nil) {
        // ...
        return;
      }
    
      // Recognized text
    }];
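
If you start from a UIImage whose imageOrientation is not .up, you can normalize it before creating the VisionImage. A minimal normalization sketch, using a hypothetical helper of our own (not part of the ML Kit API), that bakes the orientation into the pixel data by redrawing the image:

import UIKit

// Hypothetical helper: redraw the image so its imageOrientation becomes .up.
// UIImage.draw(in:) honors the original orientation, producing upright pixels.
func normalizedToUpOrientation(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    return UIGraphicsImageRenderer(size: image.size).image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}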
    

2. Extract text from blocks of recognized text

If the text recognition operation succeeds, it returns a VisionText object. A VisionText object contains the full text recognized in the image and zero or more VisionTextBlock objects. Each VisionTextBlock represents a rectangular block of text, which contains zero or more VisionTextLine objects. Each VisionTextLine object contains zero or more VisionTextElement objects, which represent words and word-like entities (dates, numbers, and so on). For each VisionTextBlock, VisionTextLine, and VisionTextElement object, you can get the text recognized in the region and the bounding coordinates of the region. For example:

Swift

let resultText = result.text
for block in result.blocks {
    let blockText = block.text
    let blockConfidence = block.confidence
    let blockLanguages = block.recognizedLanguages
    let blockCornerPoints = block.cornerPoints
    let blockFrame = block.frame
    for line in block.lines {
        let lineText = line.text
        let lineConfidence = line.confidence
        let lineLanguages = line.recognizedLanguages
        let lineCornerPoints = line.cornerPoints
        let lineFrame = line.frame
        for element in line.elements {
            let elementText = element.text
            let elementConfidence = element.confidence
            let elementLanguages = element.recognizedLanguages
            let elementCornerPoints = element.cornerPoints
            let elementFrame = element.frame
        }
    }
}

Objective-C

NSString *resultText = result.text;
for (FIRVisionTextBlock *block in result.blocks) {
  NSString *blockText = block.text;
  NSNumber *blockConfidence = block.confidence;
  NSArray<FIRVisionTextRecognizedLanguage *> *blockLanguages = block.recognizedLanguages;
  NSArray<NSValue *> *blockCornerPoints = block.cornerPoints;
  CGRect blockFrame = block.frame;
  for (FIRVisionTextLine *line in block.lines) {
    NSString *lineText = line.text;
    NSNumber *lineConfidence = line.confidence;
    NSArray<FIRVisionTextRecognizedLanguage *> *lineLanguages = line.recognizedLanguages;
    NSArray<NSValue *> *lineCornerPoints = line.cornerPoints;
    CGRect lineFrame = line.frame;
    for (FIRVisionTextElement *element in line.elements) {
      NSString *elementText = element.text;
      NSNumber *elementConfidence = element.confidence;
      NSArray<FIRVisionTextRecognizedLanguage *> *elementLanguages = element.recognizedLanguages;
      NSArray<NSValue *> *elementCornerPoints = element.cornerPoints;
      CGRect elementFrame = element.frame;
    }
  }
}
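
For instance, you could use each element's frame to highlight recognized words on the original image. A minimal sketch with a hypothetical helper of our own; the frames are in the image's coordinate space, so scale them if you draw into a differently sized view:

import UIKit
import Firebase

// Hypothetical helper: draw a red box around every recognized element.
func annotated(image: UIImage, with result: VisionText) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        image.draw(at: .zero)
        context.cgContext.setStrokeColor(UIColor.red.cgColor)
        context.cgContext.setLineWidth(2)
        for block in result.blocks {
            for line in block.lines {
                for element in line.elements {
                    context.cgContext.stroke(element.frame)
                }
            }
        }
    }
}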

Tips to improve real-time performance

If you want to use the on-device model to recognize text in a real-time application, follow these guidelines to achieve the best frame rates:

  • Throttle calls to the text recognizer. If a new video frame becomes available while the text recognizer is running, drop the frame. (See the sketch after this list.)
  • If you are using the output of the text recognizer to overlay graphics on the input image, first get the result from ML Kit, then render the image and the overlay in a single step. By doing so, you render to the display surface only once for each input frame. See the previewOverlayView and FIRDetectionOverlayView classes in the showcase sample app for an example.
  • Consider capturing images at a lower resolution. However, also keep in mind this API's image dimension requirements.
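
A minimal throttling sketch (our own illustration, not taken from the showcase app) that drops frames while a recognition request is still in flight:

import AVFoundation
import Firebase

// Assumes FirebaseApp.configure() has been called at app startup.
class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let textRecognizer = Vision.vision().onDeviceTextRecognizer()
    // Set on the capture queue, cleared in the completion handler; in
    // production, guard this flag with a serial queue or an atomic.
    private var isRecognizing = false

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard !isRecognizing else { return }  // recognizer busy: drop this frame
        isRecognizing = true
        let image = VisionImage(buffer: sampleBuffer)
        // Set image.metadata with the correct orientation, as shown earlier.
        textRecognizer.process(image) { [weak self] result, error in
            self?.isRecognizing = false
            // Render the frame and the overlay in a single step here.
        }
    }
}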

Next steps


Recognize text in images of documents

To recognize the text of a document, configure and run the cloud-based document text recognizer as described below.

The document text recognition API, described below, provides an interface that is intended to be more convenient for working with images of documents. However, if you prefer the interface provided by the sparse text API, you can use it to scan documents instead by configuring the cloud text recognizer to use the dense text model (see the sketch below).
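
A minimal sketch of that configuration in Swift, assuming the modelType option on VisionCloudTextRecognizerOptions selects the dense text model:

let options = VisionCloudTextRecognizerOptions()
options.modelType = .dense  // assumption: selects the dense model; .sparse is the default
let textRecognizer = Vision.vision().cloudTextRecognizer(options: options)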

To use the document text recognition API, do the following:

1. Run the text recognizer

Pass the image as a UIImage or a CMSampleBufferRef to VisionDocumentTextRecognizer's process(_:completion:) method:

  1. Get an instance of VisionDocumentTextRecognizer by calling cloudDocumentTextRecognizer:

    Swift

    let vision = Vision.vision()
    let textRecognizer = vision.cloudDocumentTextRecognizer()
    
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    let options = VisionCloudDocumentTextRecognizerOptions()
    options.languageHints = ["en", "hi"]
    let textRecognizer = vision.cloudDocumentTextRecognizer(options: options)
    

    Objective-C

    FIRVision *vision = [FIRVision vision];
    FIRVisionDocumentTextRecognizer *textRecognizer = [vision cloudDocumentTextRecognizer];
    
    // Or, to provide language hints to assist with language detection:
    // See https://cloud.google.com/vision/docs/languages for supported languages
    FIRVisionCloudDocumentTextRecognizerOptions *options =
            [[FIRVisionCloudDocumentTextRecognizerOptions alloc] init];
    options.languageHints = @[@"en", @"hi"];
    FIRVisionDocumentTextRecognizer *textRecognizer = [vision cloudDocumentTextRecognizerWithOptions:options];
    
  2. 使用UIImageCMSampleBufferRef創建VisionImage對象。

    要使用UIImage

    1. 如有必要,旋轉圖像,使其imageOrientation屬性為.up
    2. 使用正確旋轉的UIImage創建一個VisionImage對象。不要指定任何旋轉元數據-必須使用默認值.topLeft

      Swift

      let image = VisionImage(image: uiImage)

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];

    To use a CMSampleBufferRef:

    1. Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer.

      To get the image orientation:

      Swift

      // Swift equivalent of the Objective-C helper below.
      func imageOrientation(
          deviceOrientation: UIDeviceOrientation,
          cameraPosition: AVCaptureDevice.Position
      ) -> VisionDetectorImageOrientation {
          switch deviceOrientation {
          case .portrait:
              return cameraPosition == .front ? .leftTop : .rightTop
          case .landscapeLeft:
              return cameraPosition == .front ? .bottomLeft : .topLeft
          case .portraitUpsideDown:
              return cameraPosition == .front ? .rightBottom : .leftBottom
          case .landscapeRight:
              return cameraPosition == .front ? .topRight : .bottomRight
          default:
              return .topLeft
          }
      }

      Objective-C

      - (FIRVisionDetectorImageOrientation)
          imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                                 cameraPosition:(AVCaptureDevicePosition)cameraPosition {
        switch (deviceOrientation) {
          case UIDeviceOrientationPortrait:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationLeftTop;
            } else {
              return FIRVisionDetectorImageOrientationRightTop;
            }
          case UIDeviceOrientationLandscapeLeft:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationBottomLeft;
            } else {
              return FIRVisionDetectorImageOrientationTopLeft;
            }
          case UIDeviceOrientationPortraitUpsideDown:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationRightBottom;
            } else {
              return FIRVisionDetectorImageOrientationLeftBottom;
            }
          case UIDeviceOrientationLandscapeRight:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationTopRight;
            } else {
              return FIRVisionDetectorImageOrientationBottomRight;
            }
          default:
            return FIRVisionDetectorImageOrientationTopLeft;
        }
      }

      Then, create the metadata object:

      Swift

      let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
      let metadata = VisionImageMetadata()
      metadata.orientation = imageOrientation(
          deviceOrientation: UIDevice.current.orientation,
          cameraPosition: cameraPosition
      )

      Objective-C

      FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
      AVCaptureDevicePosition cameraPosition =
          AVCaptureDevicePositionBack;  // Set to the capture device you used.
      metadata.orientation =
          [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                       cameraPosition:cameraPosition];
    2. Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:

      Swift

      let image = VisionImage(buffer: sampleBuffer)
      image.metadata = metadata

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
      image.metadata = metadata;
  3. Then, pass the image to the process(_:completion:) method:

    Swift

    textRecognizer.process(image) { result, error in
      guard error == nil, let result = result else {
        // ...
        return
      }
    
      // Recognized text
    }
    

    Objective-C

    [textRecognizer processImage:image
                      completion:^(FIRVisionDocumentText *_Nullable result,
                                   NSError *_Nullable error) {
      if (error != nil || result == nil) {
        // ...
        return;
      }
    
      // Recognized text
    }];
    

2. Extract text from blocks of recognized text

If the text recognition operation succeeds, it returns a VisionDocumentText object. A VisionDocumentText object contains the full text recognized in the image and a hierarchy of objects that reflect the structure of the recognized document:

  • VisionDocumentTextBlock
  • VisionDocumentTextParagraph
  • VisionDocumentTextWord
  • VisionDocumentTextSymbol

For each VisionDocumentTextBlock, VisionDocumentTextParagraph, VisionDocumentTextWord, and VisionDocumentTextSymbol object, you can get the text recognized in the region and the bounding coordinates of the region.

For example:

Swift

let resultText = result.text
for block in result.blocks {
    let blockText = block.text
    let blockConfidence = block.confidence
    let blockRecognizedLanguages = block.recognizedLanguages
    let blockBreak = block.recognizedBreak
    let blockCornerPoints = block.cornerPoints
    let blockFrame = block.frame
    for paragraph in block.paragraphs {
        let paragraphText = paragraph.text
        let paragraphConfidence = paragraph.confidence
        let paragraphRecognizedLanguages = paragraph.recognizedLanguages
        let paragraphBreak = paragraph.recognizedBreak
        let paragraphCornerPoints = paragraph.cornerPoints
        let paragraphFrame = paragraph.frame
        for word in paragraph.words {
            let wordText = word.text
            let wordConfidence = word.confidence
            let wordRecognizedLanguages = word.recognizedLanguages
            let wordBreak = word.recognizedBreak
            let wordCornerPoints = word.cornerPoints
            let wordFrame = word.frame
            for symbol in word.symbols {
                let symbolText = symbol.text
                let symbolConfidence = symbol.confidence
                let symbolRecognizedLanguages = symbol.recognizedLanguages
                let symbolBreak = symbol.recognizedBreak
                let symbolCornerPoints = symbol.cornerPoints
                let symbolFrame = symbol.frame
            }
        }
    }
}

Objective-C

NSString *resultText = result.text;
for (FIRVisionDocumentTextBlock *block in result.blocks) {
  NSString *blockText = block.text;
  NSNumber *blockConfidence = block.confidence;
  NSArray<FIRVisionTextRecognizedLanguage *> *blockRecognizedLanguages = block.recognizedLanguages;
  FIRVisionTextRecognizedBreak *blockBreak = block.recognizedBreak;
  CGRect blockFrame = block.frame;
  for (FIRVisionDocumentTextParagraph *paragraph in block.paragraphs) {
    NSString *paragraphText = paragraph.text;
    NSNumber *paragraphConfidence = paragraph.confidence;
    NSArray<FIRVisionTextRecognizedLanguage *> *paragraphRecognizedLanguages = paragraph.recognizedLanguages;
    FIRVisionTextRecognizedBreak *paragraphBreak = paragraph.recognizedBreak;
    CGRect paragraphFrame = paragraph.frame;
    for (FIRVisionDocumentTextWord *word in paragraph.words) {
      NSString *wordText = word.text;
      NSNumber *wordConfidence = word.confidence;
      NSArray<FIRVisionTextRecognizedLanguage *> *wordRecognizedLanguages = word.recognizedLanguages;
      FIRVisionTextRecognizedBreak *wordBreak = word.recognizedBreak;
      CGRect wordFrame = word.frame;
      for (FIRVisionDocumentTextSymbol *symbol in word.symbols) {
        NSString *symbolText = symbol.text;
        NSNumber *symbolConfidence = symbol.confidence;
        NSArray<FIRVisionTextRecognizedLanguage *> *symbolRecognizedLanguages = symbol.recognizedLanguages;
        FIRVisionTextRecognizedBreak *symbolBreak = symbol.recognizedBreak;
        CGRect symbolFrame = symbol.frame;
      }
    }
  }
}
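
For instance, to keep only the words the model is confident about (a minimal sketch; the 0.9 threshold is an arbitrary choice of ours):

var confidentWords: [String] = []
for block in result.blocks {
    for paragraph in block.paragraphs {
        // confidence is an optional NSNumber; treat a missing value as 0.
        for word in paragraph.words where (word.confidence?.doubleValue ?? 0) > 0.9 {
            confidentWords.append(word.text)
        }
    }
}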

Next steps