This class is deprecated.
The standalone ML Kit SDK replaces this API. For more information, refer to the migration guide.
Detector for finding FirebaseVisionObjects in a supplied image.

An object detector is created via getOnDeviceObjectDetector(FirebaseVisionObjectDetectorOptions), or getOnDeviceObjectDetector() if you wish to use the default options. For example, the code below creates an object detector with the default options.
FirebaseVisionObjectDetector objectDetector =
    FirebaseVision.getInstance().getOnDeviceObjectDetector();
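A detector can also be configured explicitly by passing FirebaseVisionObjectDetectorOptions. The sketch below assumes the options builder from the firebase-ml-vision library (detector mode and classification flags); treat the specific settings as placeholders for your use case.

FirebaseVisionObjectDetectorOptions options =
    new FirebaseVisionObjectDetectorOptions.Builder()
        .setDetectorMode(FirebaseVisionObjectDetectorOptions.SINGLE_IMAGE_MODE)
        .enableClassification() // classify detected objects into coarse categories
        .build();
FirebaseVisionObjectDetector objectDetector =
    FirebaseVision.getInstance().getOnDeviceObjectDetector(options);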
To detect objects in an image, first create a FirebaseVisionImage from a Bitmap, ByteBuffer, etc. See the FirebaseVisionImage documentation for more details. For example, the code below creates a FirebaseVisionImage from a ByteBuffer.
FirebaseVisionImage image =
    FirebaseVisionImage.fromByteBuffer(byteBuffer, imageMetadata);
Finally, the code below detects objects in the supplied FirebaseVisionImage.
Task<List<FirebaseVisionObject>> task = objectDetector.processImage(image);
task.addOnSuccessListener(...).addOnFailureListener(...);
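For illustration only, the listeners above might be filled in as follows. The FirebaseVisionObject accessors shown (getBoundingBox(), getTrackingId()) are assumed from the same library; adapt the result handling to your app.

objectDetector.processImage(image)
    .addOnSuccessListener(detectedObjects -> {
        for (FirebaseVisionObject detectedObject : detectedObjects) {
            Rect bounds = detectedObject.getBoundingBox();        // object location in the image
            Integer trackingId = detectedObject.getTrackingId();  // may be null outside stream mode
            // Use the results, e.g. draw the bounds on an overlay view.
        }
    })
    .addOnFailureListener(e -> {
        // Detection failed; log or surface the error.
    });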
Public Method Summary
| Return type | Method |
|---|---|
| void | close() |
| Task<List<FirebaseVisionObject>> | processImage(FirebaseVisionImage image) |
Public Methods
public void close ()
Throws
| IOException |
|---|
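Because close() releases the detector's resources and may throw IOException, callers typically close the detector once it is no longer needed. A minimal sketch, assuming the detector is released in an Activity's onDestroy() (any other teardown point works equally well):

@Override
protected void onDestroy() {
    super.onDestroy();
    try {
        objectDetector.close(); // release the detector's underlying resources
    } catch (IOException e) {
        // Closing failed; log the error.
    }
}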
public Task<List<FirebaseVisionObject>> processImage (FirebaseVisionImage image)
Detects objects from the supplied image.

For best efficiency, create the FirebaseVisionImage object in the following way:

- fromByteBuffer(ByteBuffer, FirebaseVisionImageMetadata) if you need to pre-process the image, e.g. allocate a direct ByteBuffer and write the processed pixels into the ByteBuffer.

Other FirebaseVisionImage factory methods will work as well, but are possibly slightly slower.
Note that the width and height of the provided image cannot be less than 32.
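As an illustration of the guidance above, a FirebaseVisionImage can be built from a direct ByteBuffer plus a FirebaseVisionImageMetadata object describing its layout. The width, height, format, and rotation values below are placeholders; substitute the values that match your buffer.

ByteBuffer byteBuffer = ByteBuffer.allocateDirect(480 * 360 * 3 / 2); // e.g. one NV21 frame
// ... write the pre-processed pixel data into byteBuffer ...
FirebaseVisionImageMetadata imageMetadata =
    new FirebaseVisionImageMetadata.Builder()
        .setWidth(480)
        .setHeight(360)
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(FirebaseVisionImageMetadata.ROTATION_0)
        .build();
FirebaseVisionImage image =
    FirebaseVisionImage.fromByteBuffer(byteBuffer, imageMetadata);
Task<List<FirebaseVisionObject>> task = objectDetector.processImage(image);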
Returns
- A Task that asynchronously returns a List of detected FirebaseVisionObjects.