Use a TensorFlow Lite model for inference with ML Kit on iOS

You can use ML Kit to perform on-device inference with a TensorFlow Lite model.

ML Kit can use TensorFlow Lite models only on devices running iOS 9 and newer.

See the ML Kit quickstart sample on GitHub for an example of this API in use.

Before you begin

  1. If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
  2. Include the ML Kit libraries in your Podfile:
    pod 'Firebase/Core'
    pod 'Firebase/MLModelInterpreter'
    
    After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace.
  3. In your app, import Firebase:

    Swift

    import Firebase

    Objective-C

    @import Firebase;
  4. Convert the TensorFlow model you want to use to TensorFlow Lite (tflite) format. See TOCO: TensorFlow Lite Optimizing Converter.

Host or bundle your model

Before you can use a TensorFlow Lite model for inference in your app, you must make the model available to ML Kit. ML Kit can use TensorFlow Lite models hosted remotely using Firebase, stored locally on the device, or both.

By both hosting the model on Firebase and storing the model locally, you can ensure that the most recent version of the model is used when it is available, but your app's ML features still work when the Firebase-hosted model isn't available.

Model security

Regardless of how you make your TensorFlow Lite models available to ML Kit, ML Kit stores them in the standard serialized protobuf format in local storage.

In theory, this means that anybody can copy your model. However, in practice, most models are so application-specific and obfuscated by optimizations that the risk is similar to that of competitors disassembling and reusing your code. Nevertheless, you should be aware of this risk before you use a custom model in your app.

Host models on Firebase

To host your TensorFlow Lite model on Firebase:

  1. In the ML Kit section of the Firebase console, click the Custom tab.
  2. Click Add custom model (or Add another model).
  3. Specify a name that will be used to identify your model in your Firebase project, then upload the .tflite file.

After you add a custom model to your Firebase project, you can reference the model in your apps using the name you specified. At any time, you can upload a new .tflite file for a model, and your app will download the new model and start using it when the app next restarts. You can define the device conditions required for your app to attempt to update the model (see below).

Make models available locally

To make your TensorFlow Lite model locally available, you can either bundle the model with your app, or download the model from your own server at run time.

To bundle your TensorFlow Lite model with your app, add the .tflite file to your Xcode project, taking care to select Copy bundle resources when you do so. The .tflite file will be included in the app bundle and available to ML Kit.

If you instead host the model on your own server, you can download the model to local storage at an appropriate point in your app. Then, the model will be available to ML Kit as a local file.
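For example, a minimal sketch of downloading a model from your own server with URLSession and saving it to the app's Application Support directory might look like the following. The URL and file name are placeholders for illustration, not part of the ML Kit API.

Swift

import Foundation

// Downloads a .tflite model from your own server and saves it locally.
// The URL and file name below are placeholders for illustration only.
func downloadModel(completion: @escaping (String?) -> Void) {
  guard let remoteURL = URL(string: "https://example.com/models/my_model.tflite") else {
    completion(nil)
    return
  }
  let task = URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
    guard let tempURL = tempURL, error == nil else {
      completion(nil)
      return
    }
    do {
      let supportDirectory = try FileManager.default.url(
        for: .applicationSupportDirectory,
        in: .userDomainMask,
        appropriateFor: nil,
        create: true
      )
      let destination = supportDirectory.appendingPathComponent("my_model.tflite")
      // Replace any previously downloaded copy of the model.
      try? FileManager.default.removeItem(at: destination)
      try FileManager.default.moveItem(at: tempURL, to: destination)
      completion(destination.path)
    } catch {
      completion(nil)
    }
  }
  task.resume()
}

The path passed to the completion handler can then be used when you register a LocalModelSource, as described in the next section.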

Load the model

To use a TensorFlow Lite model for inference, first specify where the .tflite file is available: hosted on Firebase, stored locally on the device, or both.

If you hosted your model with Firebase, register a CloudModelSource object, specifying the name you assigned the model when you uploaded it, and the conditions under which ML Kit should download the model initially and when updates are available.

Swift

let conditions = ModelDownloadConditions(isWiFiRequired: true, canDownloadInBackground: true)
let cloudModelSource = CloudModelSource(
  modelName: "my_cloud_model",
  enableModelUpdates: true,
  initialConditions: conditions,
  updateConditions: conditions
)
let registrationSuccessful = ModelManager.modelManager().register(cloudModelSource)

Objective-C

FIRModelDownloadConditions *conditions =
    [[FIRModelDownloadConditions alloc] initWithIsWiFiRequired:YES
                                       canDownloadInBackground:YES];
FIRCloudModelSource *cloudModelSource =
    [[FIRCloudModelSource alloc] initWithModelName:@"my_cloud_model"
                                enableModelUpdates:YES
                                 initialConditions:conditions
                                  updateConditions:conditions];
BOOL registrationSuccess =
    [[FIRModelManager modelManager] registerCloudModelSource:cloudModelSource];

If you bundled the model with your app, or downloaded the model from your own host at run time, register a LocalModelSource object, specifying the local path of the .tflite model and assigning the local source a unique name that identifies it in your app.

Swift

guard let modelPath = Bundle.main.path(
  forResource: "my_model",
  ofType: "tflite"
) else {
  // Invalid model path
  return
}
let localModelSource = LocalModelSource(modelName: "my_local_model",
                                        path: modelPath)
let registrationSuccessful = ModelManager.modelManager().register(localModelSource)

Objective-C

NSString *modelPath = [NSBundle.mainBundle pathForResource:@"my_model"
                                                    ofType:@"tflite"];
FIRLocalModelSource *localModelSource =
    [[FIRLocalModelSource alloc] initWithModelName:@"my_local_model"
                                              path:modelPath];
BOOL registrationSuccess =
      [[FIRModelManager modelManager] registerLocalModelSource:localModelSource];

Then, create a ModelOptions object with the Cloud source, the local source, or both, and use it to get an instance of ModelInterpreter. If you only have one source, specify nil for the source type you don't use.

Swift

let options = ModelOptions(
  cloudModelName: "my_cloud_model",
  localModelName: "my_local_model"
)
let interpreter = ModelInterpreter(options: options)

Objective-C

FIRModelOptions *options = [[FIRModelOptions alloc] initWithCloudModelName:@"my_cloud_model"
                                                            localModelName:@"my_local_model"];
FIRModelInterpreter *interpreter = [FIRModelInterpreter modelInterpreterWithOptions:options];

If you specify both a Cloud model source and a local model source, the model interpreter will use the Cloud model if it's available, and fall back to the local model when it is not.

Specify the model's input and output

Next, you must specify the format of the model's input and output by creating a ModelInputOutputOptions object.

A TensorFlow Lite model takes one or more multidimensional arrays as input and produces one or more as output. These arrays contain UInt8, Int32, Int64, or Float32 values. You must configure ML Kit with the number and dimensions ("shape") of the arrays your model uses.

For example, an image classification model might take as input a 1x640x480x3 array of bytes, representing a single 640x480 truecolor (24-bit) image, and produce as output a list of 1000 Float32 values, each representing the probability the image is a member of one of the 1000 categories the model predicts.

Swift

let ioOptions = ModelInputOutputOptions()
do {
  try ioOptions.setInputFormat(index: 0, type: .uInt8, dimensions: [1, 640, 480, 3])
  try ioOptions.setOutputFormat(index: 0, type: .float32, dimensions: [1, 1000])
} catch let error as NSError {
  print("Failed to set input or output format with error: \(error.localizedDescription)")
}

Objective-C

FIRModelInputOutputOptions *ioOptions = [[FIRModelInputOutputOptions alloc] init];
NSError *error;
[ioOptions setInputFormatForIndex:0
                             type:FIRModelElementTypeUInt8
                       dimensions:@[@1, @640, @480, @3]
                            error:&error];
if (error != nil) { return; }
[ioOptions setOutputFormatForIndex:0
                              type:FIRModelElementTypeFloat32
                        dimensions:@[@1, @1000]
                             error:&error];
if (error != nil) { return; }

Perform inference on input data

Finally, to perform inference using the model, create a ModelInputs object with your model inputs, and pass it, along with the model's input and output options, to your model interpreter's run(inputs:options:completion:) method. For the best performance, pass your model inputs as a Data (NSData) object.

Swift

let input = ModelInputs()
do {
  var data: Data  // or var data: Array
  // Store input data in `data`
  // ...
  try input.addInput(data)
  // Repeat as necessary for each input index
} catch let error as NSError {
  print("Failed to add input: \(error.localizedDescription)")
}

interpreter.run(inputs: input, options: ioOptions) { outputs, error in
  guard error == nil, let outputs = outputs else { return }
  // Process outputs
  // ...
}

Objective-C

FIRModelInputs *inputs = [[FIRModelInputs alloc] init];
NSData *data;  // Or NSArray *data;
// ...
[inputs addInput:data error:&error];  // Repeat as necessary.
if (error != nil) { return; }
[interpreter runWithInputs:inputs
                   options:ioOptions
                completion:^(FIRModelOutputs * _Nullable outputs,
                             NSError * _Nullable error) {
  if (error != nil || outputs == nil) {
    return;
  }
  // Process outputs
  // ...
}];
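
For example, for the 1x640x480x3 UInt8 input described earlier, you might pack interleaved RGB pixel bytes into a Data object before adding it to ModelInputs. This is an illustrative sketch; image and extractPixelBytes(from:) are hypothetical placeholders, not ML Kit APIs.

Swift

// Illustrative sketch: pack interleaved RGB bytes into a Data object that
// matches the [1, 640, 480, 3] UInt8 input shape configured above.
// `image` and `extractPixelBytes(from:)` are hypothetical placeholders.
let pixelBytes: [UInt8] = extractPixelBytes(from: image)
assert(pixelBytes.count == 1 * 640 * 480 * 3)

let inputData = Data(pixelBytes)
let input = ModelInputs()
do {
  try input.addInput(inputData)
} catch let error as NSError {
  print("Failed to add input: \(error.localizedDescription)")
}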

You can get the output by calling the output(index:) method of the object that is returned. For example:

Swift

// Get first and only output of inference with a batch size of 1
let probabilities = try? outputs.output(index: 0)

Objective-C

// Get first and only output of inference with a batch size of 1
NSError *outputError;
[outputs outputAtIndex:0 error:&outputError];

How you use the output depends on the model you are using. For example, if you are performing classification, as a next step, you might map the indices of the result to the labels they represent.
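
For example, a sketch of that mapping, assuming a labels.txt file bundled with the app that lists one category label per line (a hypothetical file, not part of the sample), and assuming the [1, 1000] Float32 output comes back as a nested array of NSNumber values:

Swift

// Sketch of mapping classification output to labels. Assumes a bundled
// labels.txt file (hypothetical) and that the [1, 1000] output is returned
// as a nested array of NSNumber values.
guard let labelsPath = Bundle.main.path(forResource: "labels", ofType: "txt"),
      let labelsContent = try? String(contentsOfFile: labelsPath, encoding: .utf8) else {
  return
}
let labels = labelsContent.components(separatedBy: .newlines)

if let probabilities = (try? outputs.output(index: 0)) as? [[NSNumber]],
   let batch = probabilities.first {
  for (index, probability) in batch.enumerated() where index < labels.count {
    print("\(labels[index]): \(probability.floatValue)")
  }
}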
