You can use ML Kit to recognize well-known landmarks in an image.
Before you begin
- If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
- Include the ML Kit libraries in your Podfile:
  pod 'Firebase/MLVision', '6.25.0'
  After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace file.
- In your app, import Firebase:
Swift
import Firebase
Objective-C
@import Firebase;
- If you have not already enabled Cloud-based APIs for your project, do so now:
  - Open the ML Kit APIs page of the Firebase console.
  - If you have not already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project is not on the Blaze plan.)
    Only Blaze-level projects can use Cloud-based APIs.
  - If Cloud-based APIs are not already enabled, click Enable Cloud-based APIs.
Configure the landmark detector
By default, the Cloud detector uses the stable version of the model and returns up to 10 results. If you want to change either of these settings, specify them with a VisionCloudDetectorOptions object, as in the following example:
Swift
let options = VisionCloudDetectorOptions()
options.modelType = .latest
options.maxResults = 20
Objective-C
FIRVisionCloudDetectorOptions *options = [[FIRVisionCloudDetectorOptions alloc] init];
options.modelType = FIRVisionCloudModelTypeLatest;
options.maxResults = 20;
In the next step, pass the VisionCloudDetectorOptions object when you create the Cloud detector object.
Run the landmark detector
To recognize landmarks in an image, pass the image as a UIImage or a CMSampleBufferRef to the VisionCloudLandmarkDetector's detect(in:) method:
- Get an instance of VisionCloudLandmarkDetector:
Swift
lazy var vision = Vision.vision()

let cloudDetector = vision.cloudLandmarkDetector(options: options)
// Or, to use the default settings:
// let cloudDetector = vision.cloudLandmarkDetector()
Objective-C
FIRVision *vision = [FIRVision vision];
FIRVisionCloudLandmarkDetector *landmarkDetector = [vision cloudLandmarkDetector];
// Or, to change the default settings:
// FIRVisionCloudLandmarkDetector *landmarkDetector =
//     [vision cloudLandmarkDetectorWithOptions:options];
- Create a VisionImage object using a UIImage or a CMSampleBufferRef.
  To use a UIImage:
  - If necessary, rotate the image so that its imageOrientation property is .up (a rotation sketch follows the snippets below).
  - Create a VisionImage object using the correctly rotated UIImage. Do not specify any rotation metadata; the default value, .topLeft, must be used.
Swift
let image = VisionImage(image: uiImage)
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
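If your source image might not already be oriented .up, one common way to normalize it is to redraw it. The following helper is a minimal sketch, not part of the ML Kit API; the function name is our own:
Swift
import UIKit

// Sketch: redraw a UIImage so that its imageOrientation becomes .up.
// UIImage.draw(in:) applies the stored orientation while rendering.
func normalizedImage(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    let format = UIGraphicsImageRendererFormat.default()
    format.scale = image.scale
    let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}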
To use a CMSampleBufferRef:
  - Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer.
    To get the image orientation:
Swift
func imageOrientation(
    deviceOrientation: UIDeviceOrientation,
    cameraPosition: AVCaptureDevice.Position
) -> VisionDetectorImageOrientation {
    switch deviceOrientation {
    case .portrait:
        return cameraPosition == .front ? .leftTop : .rightTop
    case .landscapeLeft:
        return cameraPosition == .front ? .bottomLeft : .topLeft
    case .portraitUpsideDown:
        return cameraPosition == .front ? .rightBottom : .leftBottom
    case .landscapeRight:
        return cameraPosition == .front ? .topRight : .bottomRight
    case .faceDown, .faceUp, .unknown:
        return .leftTop
    }
}
Objective-C
- (FIRVisionDetectorImageOrientation)
    imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                           cameraPosition:(AVCaptureDevicePosition)cameraPosition {
  switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationLeftTop;
      } else {
        return FIRVisionDetectorImageOrientationRightTop;
      }
    case UIDeviceOrientationLandscapeLeft:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationBottomLeft;
      } else {
        return FIRVisionDetectorImageOrientationTopLeft;
      }
    case UIDeviceOrientationPortraitUpsideDown:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationRightBottom;
      } else {
        return FIRVisionDetectorImageOrientationLeftBottom;
      }
    case UIDeviceOrientationLandscapeRight:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationTopRight;
      } else {
        return FIRVisionDetectorImageOrientationBottomRight;
      }
    default:
      return FIRVisionDetectorImageOrientationTopLeft;
  }
}
Then, create the metadata object:
Swift
let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
let metadata = VisionImageMetadata()
metadata.orientation = imageOrientation(
    deviceOrientation: UIDevice.current.orientation,
    cameraPosition: cameraPosition
)
Objective-C
FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
AVCaptureDevicePosition cameraPosition =
    AVCaptureDevicePositionBack;  // Set to the capture device you used.
metadata.orientation =
    [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                 cameraPosition:cameraPosition];
  - Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata (a capture-delegate sketch follows the snippets below):
Swift
let image = VisionImage(buffer: sampleBuffer)
image.metadata = metadata
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
image.metadata = metadata;
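For context, the sampleBuffer above typically comes from an AVCaptureVideoDataOutput delegate callback. The following is a minimal sketch under that assumption; the class name is hypothetical:
Swift
import AVFoundation
import Firebase

// Sketch: a video-capture delegate that wraps each frame in a VisionImage.
class CaptureHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let image = VisionImage(buffer: sampleBuffer)
        // Attach the orientation metadata built as shown above,
        // then pass the image to the detector.
    }
}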
- Then, pass the image to the detect(in:) method:
Swift
cloudDetector.detect(in: visionImage) { landmarks, error in
    guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
        // ...
        return
    }

    // Recognized landmarks
    // ...
}
Objective-C
[landmarkDetector detectInImage:image
                     completion:^(NSArray<FIRVisionCloudLandmark *> *landmarks,
                                  NSError *error) {
    if (error != nil) {
        return;
    } else if (landmarks != nil) {
        // Got landmarks
    }
}];
Get information about the recognized landmarks
If landmark recognition succeeds, an array of VisionCloudLandmark objects is passed to the completion handler. From each object, you can get information about a landmark recognized in the image. For example:
Swift
for landmark in landmarks {
    let landmarkDesc = landmark.landmark
    let boundingPoly = landmark.frame
    let entityId = landmark.entityId

    // A landmark can have multiple locations: for example, the location the image
    // was taken, and the location of the landmark depicted.
    for location in landmark.locations {
        let latitude = location.latitude
        let longitude = location.longitude
    }

    let confidence = landmark.confidence
}
Objective-C
for (FIRVisionCloudLandmark *landmark in landmarks) {
    NSString *landmarkDesc = landmark.landmark;
    CGRect frame = landmark.frame;
    NSString *entityId = landmark.entityId;

    // A landmark can have multiple locations: for example, the location the image
    // was taken, and the location of the landmark depicted.
    for (FIRVisionLatitudeLongitude *location in landmark.locations) {
        double latitude = [location.latitude doubleValue];
        double longitude = [location.longitude doubleValue];
    }

    float confidence = [landmark.confidence floatValue];
}
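If you want to hand these results to UI or storage code, one option is to map them into a simple value type first. A minimal sketch; the RecognizedLandmark struct is our own, not part of the API (confidence is assumed to be an NSNumber, as the Objective-C snippet above suggests):
Swift
// Sketch: collect the fields shown above into a plain Swift struct.
struct RecognizedLandmark {
    let name: String?
    let frame: CGRect
    let confidence: Float?
}

let results = landmarks.map { landmark in
    RecognizedLandmark(
        name: landmark.landmark,
        frame: landmark.frame,
        confidence: landmark.confidence?.floatValue
    )
}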
Next steps
- Before you deploy to production an app that uses a Cloud API, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.