You can use ML Kit to recognize and decode barcodes.
See the ML Kit quickstart sample on GitHub for an example of this API in use.
Before you begin
- If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
- Include the ML Kit libraries in your Podfile:
pod 'Firebase/Core'
pod 'Firebase/MLVision'
pod 'Firebase/MLVisionBarcodeModel'
After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace file.
- In your app, import Firebase:
Swift
import Firebase
Objective-C
@import Firebase;
Input image guidelines
- For ML Kit to accurately read barcodes, input images must contain barcodes that are represented by sufficient pixel data. In general, the smallest meaningful unit of the barcode should be at least 2 pixels wide (and, for 2-dimensional codes, 2 pixels tall).

  For example, EAN-13 barcodes are made up of bars and spaces that are 1, 2, 3, or 4 units wide, so an EAN-13 barcode image ideally has bars and spaces that are at least 2, 4, 6, and 8 pixels wide. Because an EAN-13 barcode is 95 units wide in total, the barcode should be at least 190 pixels wide.

  Denser formats, such as PDF417, need greater pixel dimensions for ML Kit to read them reliably. For example, a PDF417 code can have up to 34 17-unit-wide "words" in a single row, which would ideally be at least 1156 pixels wide.

- Poor image focus can hurt scanning accuracy. If you aren't getting acceptable results, try asking the user to recapture the image.

- If you are scanning barcodes in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions (keeping in mind the above accuracy requirements) and ensure that the barcode occupies as much of the image as possible. Also see Tips to improve real-time performance.
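The unit arithmetic above can be sketched as a small helper. This is a hypothetical function for illustration, not part of the ML Kit API; it assumes the 2-pixels-per-unit rule of thumb described above:

```swift
// Hypothetical helper: estimate the minimum image width (in pixels)
// for a barcode, given its total width in modules ("units") and a
// minimum number of pixels per module.
func minimumBarcodeWidthPixels(totalUnits: Int, pixelsPerUnit: Int = 2) -> Int {
    return totalUnits * pixelsPerUnit
}

// EAN-13 is 95 units wide, so it needs at least 190 pixels.
let ean13MinWidth = minimumBarcodeWidthPixels(totalUnits: 95)

// A dense PDF417 row can be 34 words * 17 units = 578 units wide,
// so it would ideally be at least 1156 pixels wide.
let pdf417MinWidth = minimumBarcodeWidthPixels(totalUnits: 34 * 17)
```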
1. Configure the barcode detector
If you know which barcode formats you expect to read, you can improve the speed of the barcode detector by configuring it to only detect those formats. For example, to detect only Aztec codes and QR codes, build a VisionBarcodeDetectorOptions object as in the following example:
Swift
let format: VisionBarcodeFormat = [.qrCode, .aztec]
let barcodeOptions = VisionBarcodeDetectorOptions(formats: format)
The following formats are supported:
- Code128
- Code39
- Code93
- CodaBar
- EAN13
- EAN8
- ITF
- UPCA
- UPCE
- QRCode
- PDF417
- Aztec
- DataMatrix
Objective-C
FIRVisionBarcodeDetectorOptions *options =
    [[FIRVisionBarcodeDetectorOptions alloc]
        initWithFormats:FIRVisionBarcodeFormatQRCode | FIRVisionBarcodeFormatAztec];
The following formats are supported:
- Code 128 (FIRVisionBarcodeFormatCode128)
- Code 39 (FIRVisionBarcodeFormatCode39)
- Code 93 (FIRVisionBarcodeFormatCode93)
- Codabar (FIRVisionBarcodeFormatCodaBar)
- EAN-13 (FIRVisionBarcodeFormatEAN13)
- EAN-8 (FIRVisionBarcodeFormatEAN8)
- ITF (FIRVisionBarcodeFormatITF)
- UPC-A (FIRVisionBarcodeFormatUPCA)
- UPC-E (FIRVisionBarcodeFormatUPCE)
- QR Code (FIRVisionBarcodeFormatQRCode)
- PDF417 (FIRVisionBarcodeFormatPDF417)
- Aztec (FIRVisionBarcodeFormatAztec)
- Data Matrix (FIRVisionBarcodeFormatDataMatrix)
2. Run the barcode detector
To scan barcodes in an image, pass the image as a UIImage or a CMSampleBufferRef to the VisionBarcodeDetector's detect(in:) method:
- Get an instance of VisionBarcodeDetector:
  Swift
  lazy var vision = Vision.vision()
  let barcodeDetector = vision.barcodeDetector(options: barcodeOptions)
Objective-C
FIRVision *vision = [FIRVision vision];
FIRVisionBarcodeDetector *barcodeDetector = [vision barcodeDetector];
// Or, to change the default settings:
// FIRVisionBarcodeDetector *barcodeDetector =
//     [vision barcodeDetectorWithOptions:options];
- Create a VisionImage object using a UIImage or a CMSampleBufferRef.

  To use a UIImage:

  1. If necessary, rotate the image so that its imageOrientation property is .up.
  2. Create a VisionImage object using the correctly-rotated UIImage. Do not specify any rotation metadata; the default value, .topLeft, must be used.

  Swift
let image = VisionImage(image: uiImage)
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
  To use a CMSampleBufferRef:

  1. Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer.

     For example, if you are using image data captured from the device's back-facing camera:

  Swift
let metadata = VisionImageMetadata()

// Using back-facing camera
let devicePosition: AVCaptureDevice.Position = .back

let deviceOrientation = UIDevice.current.orientation
switch deviceOrientation {
case .portrait:
    metadata.orientation = devicePosition == .front ? .leftTop : .rightTop
case .landscapeLeft:
    metadata.orientation = devicePosition == .front ? .bottomLeft : .topLeft
case .portraitUpsideDown:
    metadata.orientation = devicePosition == .front ? .rightBottom : .leftBottom
case .landscapeRight:
    metadata.orientation = devicePosition == .front ? .topRight : .bottomRight
case .faceDown, .faceUp, .unknown:
    metadata.orientation = .leftTop
}
Objective-C
// Calculate the image orientation
FIRVisionDetectorImageOrientation orientation;

// Using front-facing camera
AVCaptureDevicePosition devicePosition = AVCaptureDevicePositionFront;

UIDeviceOrientation deviceOrientation = UIDevice.currentDevice.orientation;
switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
        if (devicePosition == AVCaptureDevicePositionFront) {
            orientation = FIRVisionDetectorImageOrientationLeftTop;
        } else {
            orientation = FIRVisionDetectorImageOrientationRightTop;
        }
        break;
    case UIDeviceOrientationLandscapeLeft:
        if (devicePosition == AVCaptureDevicePositionFront) {
            orientation = FIRVisionDetectorImageOrientationBottomLeft;
        } else {
            orientation = FIRVisionDetectorImageOrientationTopLeft;
        }
        break;
    case UIDeviceOrientationPortraitUpsideDown:
        if (devicePosition == AVCaptureDevicePositionFront) {
            orientation = FIRVisionDetectorImageOrientationRightBottom;
        } else {
            orientation = FIRVisionDetectorImageOrientationLeftBottom;
        }
        break;
    case UIDeviceOrientationLandscapeRight:
        if (devicePosition == AVCaptureDevicePositionFront) {
            orientation = FIRVisionDetectorImageOrientationTopRight;
        } else {
            orientation = FIRVisionDetectorImageOrientationBottomRight;
        }
        break;
    default:
        orientation = FIRVisionDetectorImageOrientationTopLeft;
        break;
}

FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
metadata.orientation = orientation;
  2. Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:

  Swift
let image = VisionImage(buffer: bufferRef)
image.metadata = metadata
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:buffer];
image.metadata = metadata;
- Then, pass the image to the detect(in:) method:

  Swift
barcodeDetector.detect(in: visionImage) { features, error in
    guard error == nil, let features = features, !features.isEmpty else {
        // ...
        return
    }

    // ...
}
Objective-C
[barcodeDetector detectInImage:image
                    completion:^(NSArray<FIRVisionBarcode *> *barcodes,
                                 NSError *error) {
    if (error != nil) {
        return;
    } else if (barcodes != nil) {
        // Recognized barcodes
        // ...
    }
}];
3. Get information from barcodes
If the barcode recognition operation succeeds, the detector returns an array of VisionBarcode objects. Each VisionBarcode object represents a barcode that was detected in the image. For each barcode, you can get its bounding coordinates in the input image, as well as the raw data encoded by the barcode. Also, if the barcode detector was able to determine the type of data encoded by the barcode, you can get an object containing parsed data.
For example:
Swift
for barcode in barcodes {
    let corners = barcode.cornerPoints
    let displayValue = barcode.displayValue
    let rawValue = barcode.rawValue

    let valueType = barcode.valueType
    switch valueType {
    case .wiFi:
        let ssid = barcode.wifi!.ssid
        let password = barcode.wifi!.password
        let encryptionType = barcode.wifi!.type
    case .URL:
        let title = barcode.url!.title
        let url = barcode.url!.url
    default:
        // See API reference for all supported value types
        break
    }
}
Objective-C
for (FIRVisionBarcode *barcode in barcodes) {
    NSArray *corners = barcode.cornerPoints;
    NSString *displayValue = barcode.displayValue;
    NSString *rawValue = barcode.rawValue;

    FIRVisionBarcodeValueType valueType = barcode.valueType;
    switch (valueType) {
        case FIRVisionBarcodeValueTypeWiFi:
            // ssid = barcode.wifi.ssid;
            // password = barcode.wifi.password;
            // encryptionType = barcode.wifi.type;
            break;
        case FIRVisionBarcodeValueTypeURL:
            // url = barcode.URL.url;
            // title = barcode.URL.title;
            break;
        // ...
        default:
            break;
    }
}
Tips to improve real-time performance
If you want to scan barcodes in a real-time application, follow these guidelines to achieve the best framerates:
- Throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame.
- If you are using the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. By doing so, you render to the display surface only once for each input frame. See the CameraSourcePreview and GraphicOverlay classes in the quickstart sample app for an example.
- If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format.

  If you use the older Camera API, capture images in ImageFormat.NV21 format.
- Consider capturing images at a lower resolution. However, also keep in mind this API's image dimension requirements.
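The frame-dropping advice above can be sketched with a small helper class. This is a hypothetical sketch, not part of ML Kit, and it assumes all calls arrive on a single capture queue, so no extra locking is shown:

```swift
// Hypothetical helper: tracks whether a detection is in flight so the
// capture callback can drop new frames instead of queuing them up.
final class FrameThrottler {
    private var isDetecting = false

    // Returns true if this frame should be handed to the detector,
    // or false if it should be dropped because one is still running.
    func shouldProcessFrame() -> Bool {
        guard !isDetecting else { return false }
        isDetecting = true
        return true
    }

    // Call this from the detector's completion handler.
    func detectionFinished() {
        isDetecting = false
    }
}
```

In a capture delegate you would check shouldProcessFrame() before calling detect(in:), and call detectionFinished() inside the detector's completion handler so the next available frame is processed.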