You can use ML Kit to detect faces in images and video.
See the ML Kit quickstart sample on GitHub for an example of this API in use.
Before you begin
- If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
- Include the dependencies for ML Kit in your app-level build.gradle file:

  dependencies {
    // ...
    implementation 'com.google.firebase:firebase-ml-vision:17.0.1'
  }
- Optional but recommended: Configure your app to automatically download the ML model to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:

  <application ...>
    ...
    <meta-data
        android:name="com.google.firebase.ml.vision.DEPENDENCIES"
        android:value="face" />
    <!-- To use multiple models: android:value="face,model2,model3" -->
  </application>

  If you do not enable install-time model downloads, the model is downloaded the first time you run the detector. Requests you make before the download completes produce no results.
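  Because of this, an empty result on first run can mean "model not downloaded yet" rather than "no faces". The following is only a sketch of one way to retry in that situation, not part of the ML Kit API; the two-second delay and the attempt count are arbitrary assumptions:

  // Sketch: retry detection a few times, since early requests can return no
  // results while the model is still downloading.
  void detectWithRetry(final FirebaseVisionFaceDetector detector,
                       final FirebaseVisionImage image,
                       final int attemptsLeft) {
      detector.detectInImage(image).addOnSuccessListener(faces -> {
          if (faces.isEmpty() && attemptsLeft > 0) {
              // The model may still be downloading; try again shortly.
              new Handler(Looper.getMainLooper()).postDelayed(
                      () -> detectWithRetry(detector, image, attemptsLeft - 1),
                      2000L);
          } else {
              // Handle the (possibly empty) list of detected faces.
          }
      });
  }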
1. Configure the face detector
If you want to change any of the face detector's default settings, specify them with a FirebaseVisionFaceDetectorOptions object before you apply face detection to an image.
You can change the following settings:
| Setting | Options | Description |
|---|---|---|
| Detection mode | FAST_MODE (default), ACCURATE_MODE | Favor speed or accuracy when detecting faces. |
| Detect landmarks | NO_LANDMARKS (default), ALL_LANDMARKS | Whether to attempt to identify facial "landmarks": eyes, ears, nose, cheeks, mouth. |
| Classify faces | NO_CLASSIFICATIONS (default), ALL_CLASSIFICATIONS | Whether to classify faces into categories such as "smiling" and "eyes open". |
| Minimum face size | float (default: 0.1f) | The minimum size, relative to the image, of faces to detect. |
| Enable face tracking | false (default), true | Whether to assign faces an ID, which can be used to track faces across images. |
For example:
FirebaseVisionFaceDetectorOptions options =
new FirebaseVisionFaceDetectorOptions.Builder()
.setModeType(FirebaseVisionFaceDetectorOptions.ACCURATE_MODE)
.setLandmarkType(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
.setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
.setMinFaceSize(0.15f)
.setTrackingEnabled(true)
.build();
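If you plan to run the detector on live camera frames, you will usually want the opposite trade-off. The following variant is a sketch of a speed-oriented configuration; the 0.2f minimum face size is an arbitrary illustration, not a value recommended by this guide:

FirebaseVisionFaceDetectorOptions realTimeOptions =
        new FirebaseVisionFaceDetectorOptions.Builder()
                .setModeType(FirebaseVisionFaceDetectorOptions.FAST_MODE)
                .setLandmarkType(FirebaseVisionFaceDetectorOptions.NO_LANDMARKS)
                .setClassificationType(FirebaseVisionFaceDetectorOptions.NO_CLASSIFICATIONS)
                .setMinFaceSize(0.2f)
                .setTrackingEnabled(true)
                .build();

Skipping landmarks and classification reduces per-frame work, while face tracking keeps IDs stable across frames.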
2. Run the face detector
To detect faces in an image, create a FirebaseVisionImage object from a Bitmap, media.Image, ByteBuffer, byte array, or file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionFaceDetector's detectInImage method.
1. Create a FirebaseVisionImage object from your image.

   - To create a FirebaseVisionImage object from a Bitmap object:

     FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

     The image represented by the Bitmap object must be upright, with no additional rotation required.

   - To create a FirebaseVisionImage object from a media.Image object, such as when capturing an image from a device's camera, first determine the angle the image must be rotated to compensate for both the device's rotation and the orientation of the camera sensor in the device (a complete call-site sketch follows this list):

     private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
     static {
         ORIENTATIONS.append(Surface.ROTATION_0, 90);
         ORIENTATIONS.append(Surface.ROTATION_90, 0);
         ORIENTATIONS.append(Surface.ROTATION_180, 270);
         ORIENTATIONS.append(Surface.ROTATION_270, 180);
     }

     /**
      * Get the angle by which an image must be rotated given the device's current
      * orientation.
      */
     @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
     private int getRotationCompensation(String cameraId, Activity activity, Context context)
             throws CameraAccessException {
         // Get the device's current rotation relative to its "native" orientation.
         // Then, from the ORIENTATIONS table, look up the angle the image must be
         // rotated to compensate for the device's rotation.
         int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
         int rotationCompensation = ORIENTATIONS.get(deviceRotation);

         // On most devices, the sensor orientation is 90 degrees, but for some
         // devices it is 270 degrees. For devices with a sensor orientation of
         // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
         CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
         int sensorOrientation = cameraManager
                 .getCameraCharacteristics(cameraId)
                 .get(CameraCharacteristics.SENSOR_ORIENTATION);
         rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;

         // Return the corresponding FirebaseVisionImageMetadata rotation value.
         int result;
         switch (rotationCompensation) {
             case 0:
                 result = FirebaseVisionImageMetadata.ROTATION_0;
                 break;
             case 90:
                 result = FirebaseVisionImageMetadata.ROTATION_90;
                 break;
             case 180:
                 result = FirebaseVisionImageMetadata.ROTATION_180;
                 break;
             case 270:
                 result = FirebaseVisionImageMetadata.ROTATION_270;
                 break;
             default:
                 result = FirebaseVisionImageMetadata.ROTATION_0;
                 Log.e(TAG, "Bad rotation value: " + rotationCompensation);
         }
         return result;
     }

     Then, pass the media.Image object and the rotation value to FirebaseVisionImage.fromMediaImage():

     FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
   - To create a FirebaseVisionImage object from a ByteBuffer or a byte array, first calculate the image rotation as described above.

     Then, create a FirebaseVisionImageMetadata object that contains the image's height, width, color encoding format, and rotation:

     FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
             .setWidth(480)   // 480x360 is typically sufficient for
             .setHeight(360)  // image recognition
             .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
             .setRotation(rotation)
             .build();

     Use the buffer or array, and the metadata object, to create a FirebaseVisionImage object:

     FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
     // Or:
     FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);
   - To create a FirebaseVisionImage object from a file, pass the app context and file URI to FirebaseVisionImage.fromFilePath():

     FirebaseVisionImage image;
     try {
         image = FirebaseVisionImage.fromFilePath(context, uri);
     } catch (IOException e) {
         e.printStackTrace();
     }
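   As referenced above, the following is a minimal call-site sketch showing how the rotation helper and FirebaseVisionImage.fromMediaImage() fit together when processing a camera2 frame. It is not part of the official sample; cameraId, activity, context, and mediaImage are placeholders for values from your own camera setup:

   try {
       int rotation = getRotationCompensation(cameraId, activity, context);
       FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
       // Pass `image` to the detector, as shown in the next step.
   } catch (CameraAccessException e) {
       Log.e(TAG, "Could not determine rotation compensation", e);
   }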
2. Get an instance of FirebaseVisionFaceDetector:

   FirebaseVisionFaceDetector detector = FirebaseVision.getInstance()
           .getVisionFaceDetector(options);

3. Finally, pass the image to the detectInImage method:

   Task<List<FirebaseVisionFace>> result =
           detector.detectInImage(image)
                   .addOnSuccessListener(
                           new OnSuccessListener<List<FirebaseVisionFace>>() {
                               @Override
                               public void onSuccess(List<FirebaseVisionFace> faces) {
                                   // Task completed successfully
                                   // ...
                               }
                           })
                   .addOnFailureListener(
                           new OnFailureListener() {
                               @Override
                               public void onFailure(@NonNull Exception e) {
                                   // Task failed with an exception
                                   // ...
                               }
                           });
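If you are calling the detector from a background thread, you can instead block on the returned Task. This is a sketch using the Play services Tasks.await helper rather than listeners; it is not shown in the ML Kit sample:

// Do not call on the main thread: Tasks.await blocks until the detection
// Task completes, and throws if it fails, so wrap it in a try/catch.
try {
    List<FirebaseVisionFace> faces =
            Tasks.await(detector.detectInImage(image));
    // Work with the detected faces.
} catch (ExecutionException | InterruptedException e) {
    // Detection failed or the wait was interrupted.
}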
3. Get information about detected faces
If the face detection operation succeeds, a list of FirebaseVisionFace objects is passed to the success listener. Each FirebaseVisionFace object represents a face that was detected in the image.
For each face, you can get its bounding coordinates in the input image, as well
as any other information you configured the face detector to find. For example:
for (FirebaseVisionFace face : faces) {
    Rect bounds = face.getBoundingBox();
    float rotY = face.getHeadEulerAngleY();  // Head is rotated to the right rotY degrees
    float rotZ = face.getHeadEulerAngleZ();  // Head is tilted sideways rotZ degrees

    // If landmark detection was enabled (mouth, ears, eyes, cheeks, and
    // nose available):
    FirebaseVisionFaceLandmark leftEar = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EAR);
    if (leftEar != null) {
        FirebaseVisionPoint leftEarPos = leftEar.getPosition();
    }

    // If classification was enabled:
    if (face.getSmilingProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        float smileProb = face.getSmilingProbability();
    }
    if (face.getRightEyeOpenProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        float rightEyeOpenProb = face.getRightEyeOpenProbability();
    }

    // If face tracking was enabled:
    if (face.getTrackingId() != FirebaseVisionFace.INVALID_ID) {
        int id = face.getTrackingId();
    }
}
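To visualize the results, you might draw each bounding box back onto the source image. This sketch uses standard Android Canvas APIs and assumes the original image is available as a Bitmap in a variable named bitmap:

// Draw each detected face's bounding box onto a mutable copy of the bitmap.
Bitmap annotated = bitmap.copy(Bitmap.Config.ARGB_8888, true);
Canvas canvas = new Canvas(annotated);
Paint paint = new Paint();
paint.setStyle(Paint.Style.STROKE);
paint.setStrokeWidth(4f);
for (FirebaseVisionFace face : faces) {
    canvas.drawRect(face.getBoundingBox(), paint);
}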