Detect Faces with ML Kit on Android

You can use ML Kit to detect faces in images and video.

Before you begin

  1. If you haven't already, add Firebase to your Android project.
  2. Add the dependencies for the ML Kit Android libraries to your module (app-level) Gradle file (usually app/build.gradle):
    apply plugin: 'com.android.application'
    apply plugin: 'com.google.gms.google-services'
    
    dependencies {
      // ...
    
      implementation 'com.google.firebase:firebase-ml-vision:24.0.3'
      // If you want to detect face contours (landmark detection and classification
      // don't require this additional model):
      implementation 'com.google.firebase:firebase-ml-vision-face-model:20.0.1'
    }
    
  3. Optional but recommended: Configure your app to automatically download the ML model to the device after your app is installed from the Play Store.

    To do so, add the following declaration to your app's AndroidManifest.xml file:

    <application ...>
      ...
      <meta-data
          android:name="com.google.firebase.ml.vision.DEPENDENCIES"
          android:value="face" />
      <!-- To use multiple models: android:value="face,model2,model3" -->
    </application>
    
    If you don't enable install-time model downloads, the model is downloaded the first time you run the detector. Requests you make before the download completes produce no results.

Input image guidelines

For ML Kit to accurately detect faces, input images must contain faces that are represented by sufficient pixel data. In general, each face you want to detect in an image should be at least 100x100 pixels. If you want to detect the contours of faces, ML Kit requires higher-resolution input: each face should be at least 200x200 pixels.

If you are detecting faces in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at a lower resolution (keeping in mind the accuracy requirements above) and make sure the subject's face occupies as much of the image as possible (see the downscaling sketch at the end of these guidelines). Also see Tips to improve real-time performance.

Poor image focus can hurt accuracy. If you aren't getting acceptable results, try asking the user to recapture the image.

The orientation of a face relative to the camera can also affect which facial features ML Kit detects. See Face Detection Concepts.
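
For example, here is a minimal downscaling sketch for the lower-resolution advice above. It assumes a full-resolution Bitmap named fullResBitmap (a hypothetical variable), and the 480x360 target is illustrative only:

Kotlin

// Downscale before detection to reduce latency, while keeping faces above the
// 100x100-pixel minimum (200x200 if you need contours). Adjust the target size
// to preserve your frames' aspect ratio.
// `fullResBitmap` is a placeholder for whatever Bitmap your camera or gallery
// code produced.
val scaled = Bitmap.createScaledBitmap(fullResBitmap, 480, 360, /* filter = */ true)
val image = FirebaseVisionImage.fromBitmap(scaled)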

1. Configure the face detector

Before you apply face detection to an image, if you want to change any of the face detector's default settings, specify those settings with a FirebaseVisionFaceDetectorOptions object. You can change the following settings:

Settings

Performance mode: FAST (default) | ACCURATE
  Favor speed or accuracy when detecting faces.

Detect landmarks: NO_LANDMARKS (default) | ALL_LANDMARKS
  Whether to attempt to identify facial "landmarks": eyes, ears, nose, cheeks, mouth, and so on.

Detect contours: NO_CONTOURS (default) | ALL_CONTOURS
  Whether to detect the contours of facial features. Contours are detected for only the most prominent face in an image.

Classify faces: NO_CLASSIFICATIONS (default) | ALL_CLASSIFICATIONS
  Whether to classify faces into categories such as "smiling" and "eyes open".

Minimum face size: float (default: 0.1f)
  The minimum size of faces to detect, relative to the image.

Enable face tracking: false (default) | true
  Whether to assign faces an ID that can be used to track faces across images.

Note that when contour detection is enabled, only one face is detected, so face tracking doesn't produce useful results. For this reason, and to improve detection speed, don't enable both contour detection and face tracking.

For example:

Java

// High-accuracy landmark detection and face classification
FirebaseVisionFaceDetectorOptions highAccuracyOpts =
        new FirebaseVisionFaceDetectorOptions.Builder()
                .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
                .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                .build();

// Real-time contour detection of multiple faces
FirebaseVisionFaceDetectorOptions realTimeOpts =
        new FirebaseVisionFaceDetectorOptions.Builder()
                .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
                .build();

Kotlin+KTX

// High-accuracy landmark detection and face classification
val highAccuracyOpts = FirebaseVisionFaceDetectorOptions.Builder()
        .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
        .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
        .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
        .build()

// Real-time contour detection of multiple faces
val realTimeOpts = FirebaseVisionFaceDetectorOptions.Builder()
        .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
        .build()
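
The settings table above also lists a minimum face size and face tracking, which the examples don't show. As a rough sketch (the 0.15f threshold is illustrative, not from the source), a tracking-oriented configuration might look like this:

Kotlin

// Tracking-oriented configuration: ignore faces smaller than 15% of the image
// and assign tracking IDs so faces can be followed across frames. Remember
// not to combine tracking with contour detection.
val trackingOpts = FirebaseVisionFaceDetectorOptions.Builder()
        .setMinFaceSize(0.15f)
        .enableTracking()
        .build()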

2. Run the face detector

To detect faces in an image, create a FirebaseVisionImage object from a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionFaceDetector's detectInImage method.

For face recognition, you should use an image with dimensions of at least 480x360 pixels. If you are recognizing faces in real time, capturing frames at this minimum resolution can help reduce latency.

  1. Create a FirebaseVisionImage object from your image.

    • To create a FirebaseVisionImage object from a media.Image object, such as when capturing an image with a device's camera, pass the media.Image object and the image's rotation to FirebaseVisionImage.fromMediaImage().

      If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you, so you just need to convert the rotation to one of ML Kit's ROTATION_ constants before calling FirebaseVisionImage.fromMediaImage():

      Java

      private class YourAnalyzer implements ImageAnalysis.Analyzer {
      
          private int degreesToFirebaseRotation(int degrees) {
              switch (degrees) {
                  case 0:
                      return FirebaseVisionImageMetadata.ROTATION_0;
                  case 90:
                      return FirebaseVisionImageMetadata.ROTATION_90;
                  case 180:
                      return FirebaseVisionImageMetadata.ROTATION_180;
                  case 270:
                      return FirebaseVisionImageMetadata.ROTATION_270;
                  default:
                      throw new IllegalArgumentException(
                              "Rotation must be 0, 90, 180, or 270.");
              }
          }
      
          @Override
          public void analyze(ImageProxy imageProxy, int degrees) {
              if (imageProxy == null || imageProxy.getImage() == null) {
                  return;
              }
              Image mediaImage = imageProxy.getImage();
              int rotation = degreesToFirebaseRotation(degrees);
              FirebaseVisionImage image =
                      FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
              // Pass image to an ML Kit Vision API
              // ...
          }
      }
      

      Kotlin+KTX

      private class YourImageAnalyzer : ImageAnalysis.Analyzer {
          private fun degreesToFirebaseRotation(degrees: Int): Int = when(degrees) {
              0 -> FirebaseVisionImageMetadata.ROTATION_0
              90 -> FirebaseVisionImageMetadata.ROTATION_90
              180 -> FirebaseVisionImageMetadata.ROTATION_180
              270 -> FirebaseVisionImageMetadata.ROTATION_270
              else -> throw Exception("Rotation must be 0, 90, 180, or 270.")
          }
      
          override fun analyze(imageProxy: ImageProxy?, degrees: Int) {
              val mediaImage = imageProxy?.image
              val imageRotation = degreesToFirebaseRotation(degrees)
              if (mediaImage != null) {
                  val image = FirebaseVisionImage.fromMediaImage(mediaImage, imageRotation)
                  // Pass image to an ML Kit Vision API
                  // ...
              }
          }
      }
      

      If you don't use a camera library that gives you the image's rotation, you can calculate it from the device's rotation and the orientation of the camera sensor in the device:

      Java

      private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
      static {
          ORIENTATIONS.append(Surface.ROTATION_0, 90);
          ORIENTATIONS.append(Surface.ROTATION_90, 0);
          ORIENTATIONS.append(Surface.ROTATION_180, 270);
          ORIENTATIONS.append(Surface.ROTATION_270, 180);
      }
      
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      private int getRotationCompensation(String cameraId, Activity activity, Context context)
              throws CameraAccessException {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
          int rotationCompensation = ORIENTATIONS.get(deviceRotation);
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
          int sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION);
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          int result;
          switch (rotationCompensation) {
              case 0:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  break;
              case 90:
                  result = FirebaseVisionImageMetadata.ROTATION_90;
                  break;
              case 180:
                  result = FirebaseVisionImageMetadata.ROTATION_180;
                  break;
              case 270:
                  result = FirebaseVisionImageMetadata.ROTATION_270;
                  break;
              default:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  Log.e(TAG, "Bad rotation value: " + rotationCompensation);
          }
          return result;
      }

      Kotlin+KTX

      private val ORIENTATIONS = SparseIntArray()
      
      init {
          ORIENTATIONS.append(Surface.ROTATION_0, 90)
          ORIENTATIONS.append(Surface.ROTATION_90, 0)
          ORIENTATIONS.append(Surface.ROTATION_180, 270)
          ORIENTATIONS.append(Surface.ROTATION_270, 180)
      }
      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      @Throws(CameraAccessException::class)
      private fun getRotationCompensation(cameraId: String, activity: Activity, context: Context): Int {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          val deviceRotation = activity.windowManager.defaultDisplay.rotation
          var rotationCompensation = ORIENTATIONS.get(deviceRotation)
      
          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          val cameraManager = context.getSystemService(CAMERA_SERVICE) as CameraManager
          val sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360
      
          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          val result: Int
          when (rotationCompensation) {
              0 -> result = FirebaseVisionImageMetadata.ROTATION_0
              90 -> result = FirebaseVisionImageMetadata.ROTATION_90
              180 -> result = FirebaseVisionImageMetadata.ROTATION_180
              270 -> result = FirebaseVisionImageMetadata.ROTATION_270
              else -> {
                  result = FirebaseVisionImageMetadata.ROTATION_0
                  Log.e(TAG, "Bad rotation value: $rotationCompensation")
              }
          }
          return result
      }

      Then, pass the media.Image object and the rotation value to FirebaseVisionImage.fromMediaImage():

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);

      Kotlin+KTX

      val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
    • To create a FirebaseVisionImage object from a file URI, pass the app context and file URI to FirebaseVisionImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

      Java

      FirebaseVisionImage image;
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri);
      } catch (IOException e) {
          e.printStackTrace();
      }

      Kotlin+KTX

      val image: FirebaseVisionImage
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri)
      } catch (e: IOException) {
          e.printStackTrace()
      }
    • To create a FirebaseVisionImage object from a ByteBuffer or a byte array, first calculate the image rotation as described above for media.Image input.

      Then, create a FirebaseVisionImageMetadata object that contains the image's height, width, color encoding format, and rotation:

      Java

      FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
              .setWidth(480)   // 480x360 is typically sufficient for
              .setHeight(360)  // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build();

      Kotlin+KTX

      val metadata = FirebaseVisionImageMetadata.Builder()
              .setWidth(480) // 480x360 is typically sufficient for
              .setHeight(360) // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build()

      Use the buffer or array, and the metadata object, to create a FirebaseVisionImage object:

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
      // Or: FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);

      Kotlin+KTX

      val image = FirebaseVisionImage.fromByteBuffer(buffer, metadata)
      // Or: val image = FirebaseVisionImage.fromByteArray(byteArray, metadata)
    • To create a FirebaseVisionImage object from a Bitmap object:

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

      Kotlin+KTX

      val image = FirebaseVisionImage.fromBitmap(bitmap)
      The image represented by the Bitmap object must be upright, with no additional rotation required.
  2. Get an instance of FirebaseVisionFaceDetector (see the cleanup note after these steps):

    Java

    FirebaseVisionFaceDetector detector = FirebaseVision.getInstance()
            .getVisionFaceDetector(options);

    Kotlin+KTX

    val detector = FirebaseVision.getInstance()
            .getVisionFaceDetector(options)
  3. Finally, pass the image to the detectInImage method:

    Java

    Task<List<FirebaseVisionFace>> result =
            detector.detectInImage(image)
                    .addOnSuccessListener(
                            new OnSuccessListener<List<FirebaseVisionFace>>() {
                                @Override
                                public void onSuccess(List<FirebaseVisionFace> faces) {
                                    // Task completed successfully
                                    // ...
                                }
                            })
                    .addOnFailureListener(
                            new OnFailureListener() {
                                @Override
                                public void onFailure(@NonNull Exception e) {
                                    // Task failed with an exception
                                    // ...
                                }
                            });

    Kotlin+KTX

    val result = detector.detectInImage(image)
            .addOnSuccessListener { faces ->
                // Task completed successfully
                // ...
            }
            .addOnFailureListener { e ->
                // Task failed with an exception
                // ...
            }
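
When you no longer need the detector, it is reasonable to release its resources: FirebaseVisionFaceDetector implements Closeable. A minimal cleanup sketch follows; tying it to onDestroy is an assumption, not something this guide prescribes, and detector and TAG are hypothetical names:

Kotlin

override fun onDestroy() {
    super.onDestroy()
    try {
        // Release the detector's resources once this screen goes away.
        detector.close()
    } catch (e: IOException) {
        // Closing is best-effort; log and move on.
        Log.e(TAG, "Failed to close the face detector", e)
    }
}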

3. Get information about the detected faces

If the face detection operation succeeds, a list of FirebaseVisionFace objects is passed to the success listener. Each FirebaseVisionFace object represents a face that was detected in the image. For each face, you can get its bounding coordinates in the input image, as well as any other information you configured the face detector to find. For example:

Java

for (FirebaseVisionFace face : faces) {
    Rect bounds = face.getBoundingBox();
    float rotY = face.getHeadEulerAngleY();  // Head is rotated to the right rotY degrees
    float rotZ = face.getHeadEulerAngleZ();  // Head is tilted sideways rotZ degrees

    // If landmark detection was enabled (mouth, ears, eyes, cheeks, and
    // nose available):
    FirebaseVisionFaceLandmark leftEar = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EAR);
    if (leftEar != null) {
        FirebaseVisionPoint leftEarPos = leftEar.getPosition();
    }

    // If contour detection was enabled:
    List<FirebaseVisionPoint> leftEyeContour =
            face.getContour(FirebaseVisionFaceContour.LEFT_EYE).getPoints();
    List<FirebaseVisionPoint> upperLipBottomContour =
            face.getContour(FirebaseVisionFaceContour.UPPER_LIP_BOTTOM).getPoints();

    // If classification was enabled:
    if (face.getSmilingProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        float smileProb = face.getSmilingProbability();
    }
    if (face.getRightEyeOpenProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        float rightEyeOpenProb = face.getRightEyeOpenProbability();
    }

    // If face tracking was enabled:
    if (face.getTrackingId() != FirebaseVisionFace.INVALID_ID) {
        int id = face.getTrackingId();
    }
}

Kotlin+KTX

for (face in faces) {
    val bounds = face.boundingBox
    val rotY = face.headEulerAngleY // Head is rotated to the right rotY degrees
    val rotZ = face.headEulerAngleZ // Head is tilted sideways rotZ degrees

    // If landmark detection was enabled (mouth, ears, eyes, cheeks, and
    // nose available):
    val leftEar = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EAR)
    leftEar?.let {
        val leftEarPos = leftEar.position
    }

    // If contour detection was enabled:
    val leftEyeContour = face.getContour(FirebaseVisionFaceContour.LEFT_EYE).points
    val upperLipBottomContour = face.getContour(FirebaseVisionFaceContour.UPPER_LIP_BOTTOM).points

    // If classification was enabled:
    if (face.smilingProbability != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        val smileProb = face.smilingProbability
    }
    if (face.rightEyeOpenProbability != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        val rightEyeOpenProb = face.rightEyeOpenProbability
    }

    // If face tracking was enabled:
    if (face.trackingId != FirebaseVisionFace.INVALID_ID) {
        val id = face.trackingId
    }
}
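
As a small follow-up sketch (the 0.7 threshold is illustrative, not from the source), the classification probabilities can be turned into a simple yes/no decision:

Kotlin

// Hypothetical helper: treat a face as smiling only when classification ran
// and the classifier is reasonably confident.
fun isSmiling(face: FirebaseVisionFace): Boolean {
    val p = face.smilingProbability
    return p != FirebaseVisionFace.UNCOMPUTED_PROBABILITY && p > 0.7f
}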

Example of face contours

When face contour detection is enabled, you get a list of points for each facial feature that was detected. These points represent the shape of the feature. See the Face Detection Concepts overview for details about how contours are represented.

(Figure: illustration of how the contour points map to a face.)
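
For instance, here is a minimal sketch, assuming contour detection is enabled, that turns the FACE contour into an android.graphics.Path you could draw on an overlay. Note that the points are in the input image's coordinate space, so scale them to your view before drawing:

Kotlin

// Build a closed outline of the most prominent face from its FACE contour.
fun faceOutlinePath(face: FirebaseVisionFace): Path {
    val path = Path()
    face.getContour(FirebaseVisionFaceContour.FACE).points.forEachIndexed { i, point ->
        if (i == 0) path.moveTo(point.x, point.y) else path.lineTo(point.x, point.y)
    }
    path.close()
    return path
}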

Real-time face detection

If you want to use face detection in a real-time application, follow these guidelines to achieve the best frame rates:

  • Configure the face detector to use either face contour detection or classification and landmark detection, but not both:

    Contour detection
    Landmark detection
    Classification
    Landmark detection and classification
    Contour detection and landmark detection
    Contour detection and classification
    Contour detection, landmark detection, and classification

  • Enable FAST mode (enabled by default).

  • Consider capturing images at a lower resolution. However, also keep in mind this API's image dimension requirements.

  • Throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame (see the sketch after this list).
  • If you are using the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and the overlay in a single step. By doing so, you render to the display surface only once for each input frame.
  • If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format.

    If you use the older Camera API, capture images in ImageFormat.NV21 format.
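
Here is a minimal frame-dropping sketch of the throttling guideline above. It assumes a configured detector and a per-frame entry point named onFrame (both hypothetical names) and uses java.util.concurrent.atomic.AtomicBoolean:

Kotlin

// Skip incoming frames while a detection is still in flight so the detector
// never falls behind the camera.
private val isProcessing = AtomicBoolean(false)

fun onFrame(image: FirebaseVisionImage) {
    // Drop this frame if the previous one is still being processed.
    if (!isProcessing.compareAndSet(false, true)) return

    detector.detectInImage(image)
            .addOnSuccessListener { faces ->
                // Render the camera frame and the overlay in a single step here.
            }
            .addOnCompleteListener {
                // Ready for the next frame, whether detection succeeded or failed.
                isProcessing.set(false)
            }
}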