Label images with Firebase ML on Apple platforms
You can use Firebase ML to label objects recognized in an image. See the overview for information about this API's features.
Before you begin
If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
Use Swift Package Manager to install and manage Firebase dependencies.
In Xcode, with your app project open, navigate to File > Add Packages.
When prompted, add the Firebase Apple platforms SDK repository:
https://github.com/firebase/firebase-ios-sdk.git
Choose the Firebase ML library.
Add the -ObjC flag to the Other Linker Flags section of your target's build settings.
When finished, Xcode will automatically begin resolving and downloading your dependencies in the background.
Next, perform some in-app setup. In your app, import Firebase:
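Swift

import FirebaseMLModelDownloader

Objective-C

@import FirebaseMLModelDownloader;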
If you haven't already enabled Cloud-based APIs for your project, do so now:

1. Open the Firebase ML APIs page in the Firebase console.
2. If you haven't already upgraded your project to the pay-as-you-go Blaze pricing plan, click Upgrade to do so. (You'll be prompted to upgrade only if your project isn't on the Blaze pricing plan.) Only projects on the Blaze pricing plan can use Cloud-based APIs.
3. If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
Now you are ready to label images.
1. Prepare the input image
Create a VisionImage object using a UIImage or a CMSampleBufferRef.
To use a UIImage:
If necessary, rotate the image so that its imageOrientation property is .up.
Create a VisionImage object using the correctly rotated UIImage. Do not specify any rotation metadata; the default value, .topLeft, must be used:
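Swift

let image = VisionImage(image: uiImage)

Objective-C

FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];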
To use a CMSampleBufferRef:

First, create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer:

Swift

let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
let metadata = VisionImageMetadata()
metadata.orientation = imageOrientation(
    deviceOrientation: UIDevice.current.orientation,
    cameraPosition: cameraPosition
)

Objective-C

FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
AVCaptureDevicePosition cameraPosition = AVCaptureDevicePositionBack;  // Set to the capture device you used.
metadata.orientation =
    [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                 cameraPosition:cameraPosition];
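Here, imageOrientation(deviceOrientation:cameraPosition:) (and its Objective-C counterpart) is a helper that maps the device orientation and camera position to the corresponding detector image orientation:

Swift

func imageOrientation(
    deviceOrientation: UIDeviceOrientation,
    cameraPosition: AVCaptureDevice.Position
) -> VisionDetectorImageOrientation {
    switch deviceOrientation {
    case .portrait:
        return cameraPosition == .front ? .leftTop : .rightTop
    case .landscapeLeft:
        return cameraPosition == .front ? .bottomLeft : .topLeft
    case .portraitUpsideDown:
        return cameraPosition == .front ? .rightBottom : .leftBottom
    case .landscapeRight:
        return cameraPosition == .front ? .topRight : .bottomRight
    case .faceDown, .faceUp, .unknown:
        return .leftTop
    }
}

Objective-C

- (FIRVisionDetectorImageOrientation)
    imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                           cameraPosition:(AVCaptureDevicePosition)cameraPosition {
  switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
      return cameraPosition == AVCaptureDevicePositionFront
          ? FIRVisionDetectorImageOrientationLeftTop
          : FIRVisionDetectorImageOrientationRightTop;
    case UIDeviceOrientationLandscapeLeft:
      return cameraPosition == AVCaptureDevicePositionFront
          ? FIRVisionDetectorImageOrientationBottomLeft
          : FIRVisionDetectorImageOrientationTopLeft;
    case UIDeviceOrientationPortraitUpsideDown:
      return cameraPosition == AVCaptureDevicePositionFront
          ? FIRVisionDetectorImageOrientationRightBottom
          : FIRVisionDetectorImageOrientationLeftBottom;
    case UIDeviceOrientationLandscapeRight:
      return cameraPosition == AVCaptureDevicePositionFront
          ? FIRVisionDetectorImageOrientationTopRight
          : FIRVisionDetectorImageOrientationBottomRight;
    default:
      return FIRVisionDetectorImageOrientationTopLeft;
  }
}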
Then, create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:
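Swift

let image = VisionImage(buffer: sampleBuffer)
image.metadata = metadata

Objective-C

FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
image.metadata = metadata;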
2. Configure and run the image labeler
To label objects in an image, pass the VisionImage object to the VisionImageLabeler's processImage() method.
First, get an instance of VisionImageLabeler:
Swift
let labeler = Vision.vision().cloudImageLabeler()

// Or, to set the minimum confidence required:
// let options = VisionCloudImageLabelerOptions()
// options.confidenceThreshold = 0.7
// let labeler = Vision.vision().cloudImageLabeler(options: options)
Objective-C
FIRVisionImageLabeler *labeler = [[FIRVision vision] cloudImageLabeler];

// Or, to set the minimum confidence required:
// FIRVisionCloudImageLabelerOptions *options =
//     [[FIRVisionCloudImageLabelerOptions alloc] init];
// options.confidenceThreshold = 0.7;
// FIRVisionImageLabeler *labeler =
//     [[FIRVision vision] cloudImageLabelerWithOptions:options];
Then, pass the image to the processImage() method:
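Swift

labeler.process(image) { labels, error in
    guard error == nil, let labels = labels else { return }

    // Task succeeded.
    // ...
}

Objective-C

[labeler processImage:image
           completion:^(NSArray<FIRVisionImageLabel *> *_Nullable labels,
                        NSError *_Nullable error) {
  if (error != nil) { return; }

  // Task succeeded.
  // ...
}];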
3. Get information about labeled objects

If image labeling succeeds, an array of VisionImageLabel objects will be passed to the completion handler. From each object, you can get information about a feature recognized in the image. For example:
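Swift

for label in labels {
    let labelText = label.text
    let entityId = label.entityID
    let confidence = label.confidence
}

Objective-C

for (FIRVisionImageLabel *label in labels) {
  NSString *labelText = label.text;
  NSString *entityId = label.entityID;
  NSNumber *confidence = label.confidence;
}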
[[["Facile da capire","easyToUnderstand","thumb-up"],["Il problema è stato risolto","solvedMyProblem","thumb-up"],["Altra","otherUp","thumb-up"]],[["Mancano le informazioni di cui ho bisogno","missingTheInformationINeed","thumb-down"],["Troppo complicato/troppi passaggi","tooComplicatedTooManySteps","thumb-down"],["Obsoleti","outOfDate","thumb-down"],["Problema di traduzione","translationIssue","thumb-down"],["Problema relativo a esempi/codice","samplesCodeIssue","thumb-down"],["Altra","otherDown","thumb-down"]],["Ultimo aggiornamento 2025-09-06 UTC."],[],[],null,["| This page describes an old version of labeling objects recognized in an image using the\n| deprecated Firebase ML Vision SDK. As an alternative, you may\n| [call\n| Cloud Vision APIs using Firebase Auth and Callable Functions](/docs/ml/ios/label-images) to allow only users logged\n| into your app to access the API.\n\nYou can use Firebase ML to label objects recognized in an image. See the\n[overview](/docs/ml/label-images) for information about this API's\nfeatures.\n| Use of the Cloud Vision APIs is subject to the [Google Cloud Platform License\n| Agreement](https://cloud.google.com/terms/) and [Service\n| Specific Terms](https://cloud.google.com/terms/service-terms), and billed accordingly. For billing information, see the [Pricing](https://cloud.google.com/vision/pricing) page.\n| **Looking for on-device image labeling?** Try the [standalone ML Kit library](https://developers.google.com/ml-kit/vision/image-labeling).\n\n\u003cbr /\u003e\n\nBefore you begin\n\nIf you have not already added Firebase to your app, do so by following the steps in the [getting started guide](/docs/ios/setup).\n1. Use Swift Package Manager to install and manage Firebase dependencies.\n| Visit [our installation guide](/docs/ios/installation-methods) to learn about the different ways you can add Firebase SDKs to your Apple project, including importing frameworks directly and using CocoaPods.\n1. In Xcode, with your app project open, navigate to **File \\\u003e Add Packages**.\n2. When prompted, add the Firebase Apple platforms SDK repository: \n\n```text\n https://github.com/firebase/firebase-ios-sdk.git\n```\n| **Note:** New projects should use the default (latest) SDK version, but you can choose an older version if needed.\n3. Choose the Firebase ML library.\n4. Add the `-ObjC` flag to the *Other Linker Flags* section of your target's build settings.\n5. When finished, Xcode will automatically begin resolving and downloading your dependencies in the background.\n2. Next, perform some in-app setup:\n1. In your app, import Firebase:\n\n Swift \n\n ```swift\n import FirebaseMLModelDownloader\n ```\n\n Objective-C \n\n ```objective-c\n @import FirebaseMLModelDownloader;\n ```\n3. If you haven't already enabled Cloud-based APIs for your project, do so\n now:\n\n 1. Open the [Firebase ML\n APIs page](//console.firebase.google.com/project/_/ml/apis) in the Firebase console.\n 2. If you haven't already upgraded your project to the\n [pay-as-you-go Blaze pricing plan](/pricing), click **Upgrade** to do so. (You'll be\n prompted to upgrade only if your project isn't on the\n Blaze pricing plan.)\n\n Only projects on the Blaze pricing plan can use\n Cloud-based APIs.\n 3. 
If Cloud-based APIs aren't already enabled, click **Enable Cloud-based APIs**.\n\n | Before you deploy to production an app that uses a Cloud API, you should take some additional steps to [prevent and mitigate the\n | effect of unauthorized API access](./secure-api-key).\n\nNow you are ready to label images.\n\n1. Prepare the input image\n\nCreate a [`VisionImage`](/docs/reference/swift/firebasemlvision/api/reference/Classes/VisionImage) object using a `UIImage` or a\n`CMSampleBufferRef`.\n\nTo use a `UIImage`:\n\n1. If necessary, rotate the image so that its `imageOrientation` property is `.up`.\n2. Create a `VisionImage` object using the correctly-rotated `UIImage`. Do not specify any rotation metadata---the default value, `.topLeft`, must be used. \n\n Swift \n\n ```swift\n let image = VisionImage(image: uiImage)\n ```\n\n Objective-C \n\n ```objective-c\n FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];\n ```\n\nTo use a `CMSampleBufferRef`:\n\n1. Create a [`VisionImageMetadata`](/docs/reference/swift/firebasemlvision/api/reference/Classes/VisionImageMetadata) object that specifies the\n orientation of the image data contained in the\n `CMSampleBufferRef` buffer.\n\n To get the image orientation: \n\n Swift \n\n ```swift\n func imageOrientation(\n deviceOrientation: UIDeviceOrientation,\n cameraPosition: AVCaptureDevice.Position\n ) -\u003e VisionDetectorImageOrientation {\n switch deviceOrientation {\n case .portrait:\n return cameraPosition == .front ? .leftTop : .rightTop\n case .landscapeLeft:\n return cameraPosition == .front ? .bottomLeft : .topLeft\n case .portraitUpsideDown:\n return cameraPosition == .front ? .rightBottom : .leftBottom\n case .landscapeRight:\n return cameraPosition == .front ? .topRight : .bottomRight\n case .faceDown, .faceUp, .unknown:\n return .leftTop\n }\n }\n ```\n\n Objective-C \n\n ```objective-c\n - (FIRVisionDetectorImageOrientation)\n imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation\n cameraPosition:(AVCaptureDevicePosition)cameraPosition {\n switch (deviceOrientation) {\n case UIDeviceOrientationPortrait:\n if (cameraPosition == AVCaptureDevicePositionFront) {\n return FIRVisionDetectorImageOrientationLeftTop;\n } else {\n return FIRVisionDetectorImageOrientationRightTop;\n }\n case UIDeviceOrientationLandscapeLeft:\n if (cameraPosition == AVCaptureDevicePositionFront) {\n return FIRVisionDetectorImageOrientationBottomLeft;\n } else {\n return FIRVisionDetectorImageOrientationTopLeft;\n }\n case UIDeviceOrientationPortraitUpsideDown:\n if (cameraPosition == AVCaptureDevicePositionFront) {\n return FIRVisionDetectorImageOrientationRightBottom;\n } else {\n return FIRVisionDetectorImageOrientationLeftBottom;\n }\n case UIDeviceOrientationLandscapeRight:\n if (cameraPosition == AVCaptureDevicePositionFront) {\n return FIRVisionDetectorImageOrientationTopRight;\n } else {\n return FIRVisionDetectorImageOrientationBottomRight;\n }\n default:\n return FIRVisionDetectorImageOrientationTopLeft;\n }\n }\n ```\n\n Then, create the metadata object: \n\n Swift \n\n ```swift\n let cameraPosition = AVCaptureDevice.Position.back // Set to the capture device you used.\n let metadata = VisionImageMetadata()\n metadata.orientation = imageOrientation(\n deviceOrientation: UIDevice.current.orientation,\n cameraPosition: cameraPosition\n )\n ```\n\n Objective-C \n\n ```objective-c\n FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];\n AVCaptureDevicePosition cameraPosition =\n 
AVCaptureDevicePositionBack; // Set to the capture device you used.\n metadata.orientation =\n [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation\n cameraPosition:cameraPosition];\n ```\n2. Create a `VisionImage` object using the `CMSampleBufferRef` object and the rotation metadata: \n\n Swift \n\n ```swift\n let image = VisionImage(buffer: sampleBuffer)\n image.metadata = metadata\n ```\n\n Objective-C \n\n ```objective-c\n FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];\n image.metadata = metadata;\n ```\n\n2. Configure and run the image labeler To label objects in an image, pass the `VisionImage` object to the `VisionImageLabeler`'s `processImage()` method.\n\n\u003cbr /\u003e\n\n1. First, get an instance of `VisionImageLabeler`:\n\n Swift \n\n let labeler = Vision.vision().cloudImageLabeler()\n\n // Or, to set the minimum confidence required:\n // let options = VisionCloudImageLabelerOptions()\n // options.confidenceThreshold = 0.7\n // let labeler = Vision.vision().cloudImageLabeler(options: options)\n\n Objective-C \n\n FIRVisionImageLabeler *labeler = [[FIRVision vision] cloudImageLabeler];\n\n // Or, to set the minimum confidence required:\n // FIRVisionCloudImageLabelerOptions *options =\n // [[FIRVisionCloudImageLabelerOptions alloc] init];\n // options.confidenceThreshold = 0.7;\n // FIRVisionImageLabeler *labeler =\n // [[FIRVision vision] cloudImageLabelerWithOptions:options];\n\n2. Then, pass the image to the `processImage()` method:\n\n Swift \n\n labeler.process(image) { labels, error in\n guard error == nil, let labels = labels else { return }\n\n // Task succeeded.\n // ...\n }\n\n Objective-C \n\n [labeler processImage:image\n completion:^(NSArray\u003cFIRVisionImageLabel *\u003e *_Nullable labels,\n NSError *_Nullable error) {\n if (error != nil) { return; }\n\n // Task succeeded.\n // ...\n }];\n\n3. Get information about labeled objects If image labeling succeeds, an array of `VisionImageLabel` objects will be passed to the completion handler. From each object, you can get information about a feature recognized in the image.\n\n\u003cbr /\u003e\n\nFor example: \n\nSwift \n\n for label in labels {\n let labelText = label.text\n let entityId = label.entityID\n let confidence = label.confidence\n }\n\nObjective-C \n\n for (FIRVisionImageLabel *label in labels) {\n NSString *labelText = label.text;\n NSString *entityId = label.entityID;\n NSNumber *confidence = label.confidence;\n }\n\nNext steps\n\n- Before you deploy to production an app that uses a Cloud API, you should take some additional steps to [prevent and mitigate the\n effect of unauthorized API access](./secure-api-key)."]]