Recognize landmarks with ML Kit on iOS

You can use ML Kit to recognize well-known landmarks in an image.

Before you begin

  1. If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
  2. Include the ML Kit libraries in your Podfile:
    pod 'Firebase/MLVision', '6.25.0'
    
    After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace.
  3. In your app, import Firebase:

    Swift

    import Firebase

    Objective-C

    @import Firebase;
  4. If you have not already enabled Cloud-based APIs for your project, do so now:

    1. Open the ML Kit APIs page of the Firebase console.
    2. If you have not already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project is not on the Blaze plan.)

      Only Blaze-level projects can use Cloud-based APIs.

    3. If Cloud-based APIs are not already enabled, click Enable Cloud-based APIs.

Configure the landmark detector

By default, the Cloud detector uses the stable version of the model and returns up to 10 results. If you want to change either of these settings, specify them with a VisionCloudDetectorOptions object, as in the following example:

Swift

let options = VisionCloudDetectorOptions()
options.modelType = .latest
options.maxResults = 20

Objective-C

  FIRVisionCloudDetectorOptions *options =
      [[FIRVisionCloudDetectorOptions alloc] init];
  options.modelType = FIRVisionCloudModelTypeLatest;
  options.maxResults = 20;
  

In the next step, pass this VisionCloudDetectorOptions object when you create the Cloud detector object.

Run the landmark detector

To recognize landmarks in an image, pass the image as a UIImage or a CMSampleBufferRef to the VisionCloudLandmarkDetector's detect(in:) method, as described in the steps below (a combined end-to-end sketch follows the list):

  1. Get an instance of VisionCloudLandmarkDetector:

    Swift

    lazy var vision = Vision.vision()
    
    let cloudDetector = vision.cloudLandmarkDetector(options: options)
    // Or, to use the default settings:
    // let cloudDetector = vision.cloudLandmarkDetector()
    

    Objective-C

    FIRVision *vision = [FIRVision vision];
    FIRVisionCloudLandmarkDetector *landmarkDetector = [vision cloudLandmarkDetector];
    // Or, to change the default settings:
    // FIRVisionCloudLandmarkDetector *landmarkDetector =
    //     [vision cloudLandmarkDetectorWithOptions:options];
    
  2. Create a VisionImage object using a UIImage or a CMSampleBufferRef.

    Using a UIImage

    1. If necessary, rotate the image so that its imageOrientation property is .up (one way to do this is sketched after this sub-list).
    2. Create a VisionImage object using the correctly rotated UIImage. Do not specify any rotation metadata; the default value, .topLeft, must be used.

      Swift

      let image = VisionImage(image: uiImage)

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
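
    The guide does not spell out the rotation from step 1 above. The following is a minimal Swift sketch (not from the original guide) of one common way to redraw a UIImage so that its imageOrientation becomes .up; the helper name normalizedForMLKit is hypothetical:

      import UIKit

      extension UIImage {
          // Returns an equivalent image whose imageOrientation is .up,
          // redrawing the pixels if the original carries any other orientation.
          func normalizedForMLKit() -> UIImage {
              guard imageOrientation != .up else { return self }
              UIGraphicsBeginImageContextWithOptions(size, false, scale)
              defer { UIGraphicsEndImageContext() }
              draw(in: CGRect(origin: .zero, size: size))
              return UIGraphicsGetImageFromCurrentImageContext() ?? self
          }
      }

    You could then build the VisionImage from uiImage.normalizedForMLKit() as shown in step 2.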

    Using a CMSampleBufferRef

    1. Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer.

      To get the image orientation:

      Swift

      func imageOrientation(
          deviceOrientation: UIDeviceOrientation,
          cameraPosition: AVCaptureDevice.Position
          ) -> VisionDetectorImageOrientation {
          switch deviceOrientation {
          case .portrait:
              return cameraPosition == .front ? .leftTop : .rightTop
          case .landscapeLeft:
              return cameraPosition == .front ? .bottomLeft : .topLeft
          case .portraitUpsideDown:
              return cameraPosition == .front ? .rightBottom : .leftBottom
          case .landscapeRight:
              return cameraPosition == .front ? .topRight : .bottomRight
          case .faceDown, .faceUp, .unknown:
              return .leftTop
          }
      }

      Objective-C

      - (FIRVisionDetectorImageOrientation)
          imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                                 cameraPosition:(AVCaptureDevicePosition)cameraPosition {
        switch (deviceOrientation) {
          case UIDeviceOrientationPortrait:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationLeftTop;
            } else {
              return FIRVisionDetectorImageOrientationRightTop;
            }
          case UIDeviceOrientationLandscapeLeft:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationBottomLeft;
            } else {
              return FIRVisionDetectorImageOrientationTopLeft;
            }
          case UIDeviceOrientationPortraitUpsideDown:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationRightBottom;
            } else {
              return FIRVisionDetectorImageOrientationLeftBottom;
            }
          case UIDeviceOrientationLandscapeRight:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationTopRight;
            } else {
              return FIRVisionDetectorImageOrientationBottomRight;
            }
          default:
            return FIRVisionDetectorImageOrientationTopLeft;
        }
      }

      Then, create the metadata object:

      Swift

      let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
      let metadata = VisionImageMetadata()
      metadata.orientation = imageOrientation(
          deviceOrientation: UIDevice.current.orientation,
          cameraPosition: cameraPosition
      )

      Objective-C

      FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
      AVCaptureDevicePosition cameraPosition =
          AVCaptureDevicePositionBack;  // Set to the capture device you used.
      metadata.orientation =
          [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                       cameraPosition:cameraPosition];
    2. Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:

      Swift

      let image = VisionImage(buffer: sampleBuffer)
      image.metadata = metadata

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
      image.metadata = metadata;
  3. Then, pass the image to the detect(in:) method:

    Swift

    cloudDetector.detect(in: image) { landmarks, error in
      guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
        // ...
        return
      }
    
      // Recognized landmarks
      // ...
    }
    

    Objective-C

    [landmarkDetector detectInImage:image
                         completion:^(NSArray<FIRVisionCloudLandmark *> *landmarks,
                                      NSError *error) {
      if (error != nil) {
        return;
      } else if (landmarks != nil) {
        // Got landmarks
      }
    }];
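
Putting the steps above together, the following is a minimal end-to-end sketch in Swift. It simply combines the API calls already shown in this guide; the parameter uiImage is a hypothetical UIImage whose imageOrientation is already .up:

import Firebase
import UIKit

func recognizeLandmarks(in uiImage: UIImage) {
    // Configure the detector to return more results than the default 10.
    let options = VisionCloudDetectorOptions()
    options.maxResults = 20

    // Get a detector instance that uses those options.
    let vision = Vision.vision()
    let cloudDetector = vision.cloudLandmarkDetector(options: options)

    // Wrap the UIImage and run detection.
    let image = VisionImage(image: uiImage)
    cloudDetector.detect(in: image) { landmarks, error in
        guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
            // Handle the error or the empty result.
            return
        }
        for landmark in landmarks {
            // See the next section for the information available on each result.
            print(landmark.landmark ?? "Unknown landmark")
        }
    }
}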
    

Get information about the recognized landmarks

If landmark recognition succeeds, an array of VisionCloudLandmark objects is passed to the completion handler. From each object, you can get information about a landmark recognized in the image.

For example:

Swift

for landmark in landmarks {
  let landmarkDesc = landmark.landmark
  let boundingPoly = landmark.frame
  let entityId = landmark.entityId

  // A landmark can have multiple locations: for example, the location the image
  // was taken, and the location of the landmark depicted.
  for location in landmark.locations {
    let latitude = location.latitude
    let longitude = location.longitude
  }

  let confidence = landmark.confidence
}

Objective-C

for (FIRVisionCloudLandmark *landmark in landmarks) {
   NSString *landmarkDesc = landmark.landmark;
   CGRect frame = landmark.frame;
   NSString *entityId = landmark.entityId;

   // A landmark can have multiple locations: for example, the location the image
   // was taken, and the location of the landmark depicted.
   for (FIRVisionLatitudeLongitude *location in landmark.locations) {
     double latitude = [location.latitude doubleValue];
     double longitude = [location.longitude doubleValue];
   }

   float confidence = [landmark.confidence floatValue];
}
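
As a usage example, you might rank the results by confidence and surface only the best match. The following Swift snippet is a sketch, not part of the original guide; it assumes landmarks is the [VisionCloudLandmark] array delivered to the completion handler, and that confidence, landmark, and entityId are optional properties, as the Objective-C snippet above suggests:

let ranked = landmarks.sorted {
    ($0.confidence?.floatValue ?? 0) > ($1.confidence?.floatValue ?? 0)
}
if let best = ranked.first {
    print("Best match: \(best.landmark ?? "unknown") " +
          "(entity ID: \(best.entityId ?? "n/a"))")
}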

Next steps