You can use Firebase ML to recognize text in images. Firebase ML has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents.
Before you begin
- If you haven't already added Firebase to your app, do so by following the steps in the getting started guide.
- Use Swift Package Manager to install and manage Firebase dependencies:
  - In Xcode, with your app project open, navigate to File > Add Packages.
  - When prompted, add the Firebase Apple platforms SDK repository:
    https://github.com/firebase/firebase-ios-sdk.git
  - Choose the Firebase ML library.
  - Add the -ObjC flag to the Other Linker Flags section of your target's build settings.
  - When finished, Xcode will automatically begin resolving and downloading your dependencies in the background.
- Next, perform some in-app setup. In your app, import Firebase:
Swift
import FirebaseMLModelDownloader
Objective-C
@import FirebaseMLModelDownloader;
- If you haven't already enabled Cloud-based APIs for your project, do so now:
  - Open the Firebase ML APIs page of the Firebase console.
  - If you haven't already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project isn't on the Blaze plan.)
    Only Blaze-level projects can use Cloud-based APIs.
  - If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
Now you are ready to start recognizing text in images.
Input image guidelines
- For Firebase ML to accurately recognize text, input images must contain text represented by sufficient pixel data. Ideally, for Latin text, each character should be at least 16x16 pixels. For Chinese, Japanese, and Korean text, each character should be 24x24 pixels. For all languages, there is generally no accuracy benefit in characters being larger than 24x24 pixels.
  So, for example, a 640x480 image might work well to scan a business card that occupies the full width of the image. To scan a document printed on letter-size paper, a 720x1280 pixel image might be required.
- Poor image focus can hurt text recognition accuracy. If you aren't getting acceptable results, try asking the user to recapture the image; a sketch of a size check that could drive such a prompt follows this list.
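As a rough illustration of the size guidance above, the hypothetical helper below checks whether a captured UIImage reaches the 720x1280-pixel size suggested for document scans, so the app can prompt the user to recapture when it does not. The helper name and thresholds are illustrative assumptions, not part of the Firebase ML API.
Swift
import UIKit

/// Hypothetical check: does the image have at least `minShortSide` x `minLongSide` pixels?
/// UIImage.size is in points, so multiply by the image's scale factor to get pixels.
func meetsRecommendedSize(_ image: UIImage,
                          minShortSide: CGFloat = 720,
                          minLongSide: CGFloat = 1280) -> Bool {
    let pixelWidth = image.size.width * image.scale
    let pixelHeight = image.size.height * image.scale
    // Accept either orientation (portrait or landscape).
    return min(pixelWidth, pixelHeight) >= minShortSide
        && max(pixelWidth, pixelHeight) >= minLongSide
}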
Recognize text in images
To recognize text in an image, run the text recognizer as described below.
1. Run the text recognizer
Pass the image as a UIImage or a CMSampleBufferRef to the VisionTextRecognizer's process(_:completion:) method:
- Get an instance of VisionTextRecognizer by calling cloudTextRecognizer:
Swift
let vision = Vision.vision()
let textRecognizer = vision.cloudTextRecognizer()

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
let options = VisionCloudTextRecognizerOptions()
options.languageHints = ["en", "hi"]
let textRecognizer = vision.cloudTextRecognizer(options: options)
Objective-C
FIRVision *vision = [FIRVision vision];
FIRVisionTextRecognizer *textRecognizer = [vision cloudTextRecognizer];

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
FIRVisionCloudTextRecognizerOptions *options = [[FIRVisionCloudTextRecognizerOptions alloc] init];
options.languageHints = @[@"en", @"hi"];
FIRVisionTextRecognizer *textRecognizer = [vision cloudTextRecognizerWithOptions:options];
- In order to call Cloud Vision, the image must be formatted as a base64-encoded string. To process a UIImage:
Swift
guard let imageData = uiImage.jpegData(compressionQuality: 1.0) else { return }
let base64encodedImage = imageData.base64EncodedString()
Objective-C
NSData *imageData = UIImageJPEGRepresentation(uiImage, 1.0f);
NSString *base64encodedImage = [imageData base64EncodedStringWithOptions:NSDataBase64Encoding76CharacterLineLength];
- Then, pass the image to the process(_:completion:) method (a sketch of how the visionImage value can be constructed follows these steps):
Swift
textRecognizer.process(visionImage) { result, error in
  guard error == nil, let result = result else {
    // ...
    return
  }

  // Recognized text
}
Objective-C
[textRecognizer processImage:image
                  completion:^(FIRVisionText *_Nullable result,
                               NSError *_Nullable error) {
  if (error != nil || result == nil) {
    // ...
    return;
  }

  // Recognized text
}];
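The snippets above pass a visionImage value to process(_:completion:), but its construction isn't shown in this section. A minimal sketch, assuming the VisionImage type from the same Firebase ML Vision SDK and a UIImage stored in uiImage:
Swift
// Assumed setup: wrap the UIImage in a VisionImage before calling
// process(_:completion:). For camera frames, an initializer that takes a
// CMSampleBuffer is used instead.
let visionImage = VisionImage(image: uiImage)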
2. Extract text from blocks of recognized text
If the text recognition operation succeeds, it returns a VisionText object. A VisionText object contains the full text recognized in the image and zero or more VisionTextBlock objects.
Each VisionTextBlock represents a rectangular block of text, which contains zero or more VisionTextLine objects. Each VisionTextLine object contains zero or more VisionTextElement objects, which represent words and word-like entities (dates, numbers, and so on).
For each VisionTextBlock, VisionTextLine, and VisionTextElement object, you can get the text recognized in the region and the bounding coordinates of the region.
For example:
Swift
let resultText = result.text
for block in result.blocks {
    let blockText = block.text
    let blockConfidence = block.confidence
    let blockLanguages = block.recognizedLanguages
    let blockCornerPoints = block.cornerPoints
    let blockFrame = block.frame

    for line in block.lines {
        let lineText = line.text
        let lineConfidence = line.confidence
        let lineLanguages = line.recognizedLanguages
        let lineCornerPoints = line.cornerPoints
        let lineFrame = line.frame

        for element in line.elements {
            let elementText = element.text
            let elementConfidence = element.confidence
            let elementLanguages = element.recognizedLanguages
            let elementCornerPoints = element.cornerPoints
            let elementFrame = element.frame
        }
    }
}
Objective-C
NSString *resultText = result.text;
for (FIRVisionTextBlock *block in result.blocks) {
  NSString *blockText = block.text;
  NSNumber *blockConfidence = block.confidence;
  NSArray<FIRVisionTextRecognizedLanguage *> *blockLanguages = block.recognizedLanguages;
  NSArray<NSValue *> *blockCornerPoints = block.cornerPoints;
  CGRect blockFrame = block.frame;

  for (FIRVisionTextLine *line in block.lines) {
    NSString *lineText = line.text;
    NSNumber *lineConfidence = line.confidence;
    NSArray<FIRVisionTextRecognizedLanguage *> *lineLanguages = line.recognizedLanguages;
    NSArray<NSValue *> *lineCornerPoints = line.cornerPoints;
    CGRect lineFrame = line.frame;

    for (FIRVisionTextElement *element in line.elements) {
      NSString *elementText = element.text;
      NSNumber *elementConfidence = element.confidence;
      NSArray<FIRVisionTextRecognizedLanguage *> *elementLanguages = element.recognizedLanguages;
      NSArray<NSValue *> *elementCornerPoints = element.cornerPoints;
      CGRect elementFrame = element.frame;
    }
  }
}
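As a usage sketch (not part of the official sample), you can flatten the result into a list of line strings for display, using only the blocks and lines properties shown above:
Swift
// Minimal sketch: gather every recognized line of text into an array,
// for example to show in a table view. `result` is the VisionText from above.
func recognizedLines(from result: VisionText) -> [String] {
    var lines: [String] = []
    for block in result.blocks {
        for line in block.lines {
            lines.append(line.text)
        }
    }
    return lines
}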
Next steps
- Before you deploy an app that uses a Cloud API to production, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.
Recognize text in images of documents
To recognize the text of a document, configure and run the document text recognizer as described below.
The document text recognition API, described below, provides an interface that is intended to be more convenient for working with images of documents. However, if you prefer the interface provided by the sparse text API, you can use it to scan documents instead by configuring the cloud text recognizer to use the dense text model.
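For illustration, here is a sketch of that configuration. The modelType property and .dense value are assumed names and should be checked against the SDK reference:
Swift
// Hypothetical configuration: run the sparse-text recognizer with the dense
// text model so it behaves more like the document recognizer.
let options = VisionCloudTextRecognizerOptions()
options.modelType = .dense  // Assumed enum case; verify in the SDK reference.
let textRecognizer = Vision.vision().cloudTextRecognizer(options: options)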
To use the document text recognition API:
1. Run the text recognizer
Pass the image as a UIImage or a CMSampleBufferRef to the VisionDocumentTextRecognizer's process(_:completion:) method:
- Get an instance of VisionDocumentTextRecognizer by calling cloudDocumentTextRecognizer:
Swift
let vision = Vision.vision()
let textRecognizer = vision.cloudDocumentTextRecognizer()

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
let options = VisionCloudDocumentTextRecognizerOptions()
options.languageHints = ["en", "hi"]
let textRecognizer = vision.cloudDocumentTextRecognizer(options: options)
Objective-C
FIRVision *vision = [FIRVision vision];
FIRVisionDocumentTextRecognizer *textRecognizer = [vision cloudDocumentTextRecognizer];

// Or, to provide language hints to assist with language detection:
// See https://cloud.google.com/vision/docs/languages for supported languages
FIRVisionCloudDocumentTextRecognizerOptions *options = [[FIRVisionCloudDocumentTextRecognizerOptions alloc] init];
options.languageHints = @[@"en", @"hi"];
FIRVisionDocumentTextRecognizer *textRecognizer = [vision cloudDocumentTextRecognizerWithOptions:options];
- In order to call Cloud Vision, the image must be formatted as a base64-encoded string. To process a UIImage:
Swift
guard let imageData = uiImage.jpegData(compressionQuality: 1.0) else { return }
let base64encodedImage = imageData.base64EncodedString()
Objective-C
NSData *imageData = UIImageJPEGRepresentation(uiImage, 1.0f);
NSString *base64encodedImage = [imageData base64EncodedStringWithOptions:NSDataBase64Encoding76CharacterLineLength];
- Then, pass the image to the process(_:completion:) method:
Swift
textRecognizer.process(visionImage) { result, error in
  guard error == nil, let result = result else {
    // ...
    return
  }

  // Recognized text
}
Objective-C
[textRecognizer processImage:image
                  completion:^(FIRVisionDocumentText *_Nullable result,
                               NSError *_Nullable error) {
  if (error != nil || result == nil) {
    // ...
    return;
  }

  // Recognized text
}];
2. Extract text from blocks of recognized text
If the text recognition operation succeeds, it returns a VisionDocumentText object. A VisionDocumentText object contains the full text recognized in the image and a hierarchy of objects that reflects the structure of the recognized document.
For each VisionDocumentTextBlock, VisionDocumentTextParagraph, VisionDocumentTextWord, and VisionDocumentTextSymbol object, you can get the text recognized in the region and the bounding coordinates of the region.
For example:
Swift
let resultText = result.text
for block in result.blocks {
    let blockText = block.text
    let blockConfidence = block.confidence
    let blockRecognizedLanguages = block.recognizedLanguages
    let blockBreak = block.recognizedBreak
    let blockCornerPoints = block.cornerPoints
    let blockFrame = block.frame

    for paragraph in block.paragraphs {
        let paragraphText = paragraph.text
        let paragraphConfidence = paragraph.confidence
        let paragraphRecognizedLanguages = paragraph.recognizedLanguages
        let paragraphBreak = paragraph.recognizedBreak
        let paragraphCornerPoints = paragraph.cornerPoints
        let paragraphFrame = paragraph.frame

        for word in paragraph.words {
            let wordText = word.text
            let wordConfidence = word.confidence
            let wordRecognizedLanguages = word.recognizedLanguages
            let wordBreak = word.recognizedBreak
            let wordCornerPoints = word.cornerPoints
            let wordFrame = word.frame

            for symbol in word.symbols {
                let symbolText = symbol.text
                let symbolConfidence = symbol.confidence
                let symbolRecognizedLanguages = symbol.recognizedLanguages
                let symbolBreak = symbol.recognizedBreak
                let symbolCornerPoints = symbol.cornerPoints
                let symbolFrame = symbol.frame
            }
        }
    }
}
Objective-C
NSString *resultText = result.text;
for (FIRVisionDocumentTextBlock *block in result.blocks) {
  NSString *blockText = block.text;
  NSNumber *blockConfidence = block.confidence;
  NSArray<FIRVisionTextRecognizedLanguage *> *blockRecognizedLanguages = block.recognizedLanguages;
  FIRVisionTextRecognizedBreak *blockBreak = block.recognizedBreak;
  CGRect blockFrame = block.frame;

  for (FIRVisionDocumentTextParagraph *paragraph in block.paragraphs) {
    NSString *paragraphText = paragraph.text;
    NSNumber *paragraphConfidence = paragraph.confidence;
    NSArray<FIRVisionTextRecognizedLanguage *> *paragraphRecognizedLanguages = paragraph.recognizedLanguages;
    FIRVisionTextRecognizedBreak *paragraphBreak = paragraph.recognizedBreak;
    CGRect paragraphFrame = paragraph.frame;

    for (FIRVisionDocumentTextWord *word in paragraph.words) {
      NSString *wordText = word.text;
      NSNumber *wordConfidence = word.confidence;
      NSArray<FIRVisionTextRecognizedLanguage *> *wordRecognizedLanguages = word.recognizedLanguages;
      FIRVisionTextRecognizedBreak *wordBreak = word.recognizedBreak;
      CGRect wordFrame = word.frame;

      for (FIRVisionDocumentTextSymbol *symbol in word.symbols) {
        NSString *symbolText = symbol.text;
        NSNumber *symbolConfidence = symbol.confidence;
        NSArray<FIRVisionTextRecognizedLanguage *> *symbolRecognizedLanguages = symbol.recognizedLanguages;
        FIRVisionTextRecognizedBreak *symbolBreak = symbol.recognizedBreak;
        CGRect symbolFrame = symbol.frame;
      }
    }
  }
}
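As a follow-up sketch (a hypothetical helper, not part of the API sample), you could use the per-word confidence values shown above to flag words that may need manual review:
Swift
// Minimal sketch: collect words whose confidence falls below a threshold.
// Assumes `confidence` is reported as an optional NSNumber, as suggested by
// the Objective-C example above.
func lowConfidenceWords(in result: VisionDocumentText, threshold: Float = 0.5) -> [String] {
    var flagged: [String] = []
    for block in result.blocks {
        for paragraph in block.paragraphs {
            for word in paragraph.words {
                if let confidence = word.confidence?.floatValue, confidence < threshold {
                    flagged.append(word.text)
                }
            }
        }
    }
    return flagged
}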
Next steps
- Before you deploy an app that uses a Cloud API to production, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.