This page describes how to use the Firebase AI Logic SDKs to replace an image's background with Imagen.
Background replacement is a type of mask-based editing (specifically, inpainting). A mask is a digital overlay that defines the specific area you want to edit.
How it works: You provide an original image and a corresponding mask image that defines a mask over the background. You can either use automatic background detection or provide your own background mask. You also provide a text prompt describing what you want to change. The model then generates and applies a new background.
For example, you can change the setting around a subject or object without affecting the foreground (for example, in product images).
Before you begin
This capability is available only when using the Vertex AI Gemini API as your API provider.
If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the backend service for your chosen API provider, and create an ImagenModel instance.
Models that support this capability
Imagen offers image editing through its capability model:
imagen-3.0-capability-001
Note that the global location is not supported for Imagen models.
Replace the background using automatic background detection
Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
The following example shows how to replace an image's background using automatic background detection. You provide the original image and a text prompt, and Imagen automatically detects and creates a background mask to modify the original image.
Swift
Image editing with Imagen models isn't supported for Swift. Check back later this year!
Kotlin
To replace the background using automatic background detection, specify ImagenBackgroundMask. Call editImage() and set the editing configuration to use ImagenEditMode.INPAINT_INSERTION.
// Using this SDK to access Imagen models is a Preview release and requires opt-in
@OptIn(PublicPreviewAPI::class)
suspend fun customizeImage() {
  // Initialize the Vertex AI Gemini API backend service
  // Optionally specify the location to access the model (for example, `us-central1`)
  val ai = Firebase.ai(backend = GenerativeBackend.vertexAI(location = "us-central1"))

  // Create an `ImagenModel` instance with an Imagen "capability" model
  val model = ai.imagenModel("imagen-3.0-capability-001")

  // This example assumes 'originalImage' is a pre-loaded Bitmap.
  // In a real app, this might come from the user's device or a URL.
  val originalImage: Bitmap = TODO("Load your original image Bitmap here")

  // Provide the prompt describing the new background.
  val prompt = "space background"

  // Use the editImage API to replace the background.
  // Pass the original image, the prompt, and an editing configuration.
  val editedImage = model.editImage(
    sources = listOf(
      ImagenRawImage(originalImage),
      ImagenBackgroundMask(), // Use ImagenBackgroundMask() to auto-generate the mask.
    ),
    prompt = prompt,
    // Define the editing configuration for inpainting and background replacement.
    config = ImagenEditingConfig(ImagenEditMode.INPAINT_INSERTION)
  )

  // Process the resulting 'editedImage' Bitmap, for example, by displaying it in an ImageView.
}
Java
To replace the background using automatic background detection, specify ImagenBackgroundMask. Call editImage() and set the editing configuration to use ImagenEditMode.INPAINT_INSERTION.
// Initialize the Vertex AI Gemini API backend service
// Optionally specify the location to access the model (for example, `us-central1`)
// Create an `ImagenModel` instance with an Imagen "capability" model
ImagenModel imagenModel = FirebaseAI.getInstance(GenerativeBackend.vertexAI("us-central1"))
        .imagenModel(
                /* modelName */ "imagen-3.0-capability-001");

ImagenModelFutures model = ImagenModelFutures.from(imagenModel);

// This example assumes 'originalImage' is a pre-loaded Bitmap.
// In a real app, this might come from the user's device or a URL.
Bitmap originalImage = null; // TODO("Load your image Bitmap here");

// Provide the prompt describing the new background.
String prompt = "space background";

// Define the list of sources for the editImage call.
// This includes the original image and the auto-generated mask.
ImagenRawImage rawOriginalImage = new ImagenRawImage(originalImage);
ImagenBackgroundMask rawMaskedImage = new ImagenBackgroundMask(); // Use ImagenBackgroundMask() to auto-generate the mask.

// Define the editing configuration for inpainting and insertion.
ImagenEditingConfig config = new ImagenEditingConfig.Builder()
        .setEditMode(ImagenEditMode.INPAINT_INSERTION)
        .build();

// Use the editImage API to replace the background.
// Pass the original image, the auto-generated masked image, the prompt, and an editing configuration.
Futures.addCallback(model.editImage(Arrays.asList(rawOriginalImage, rawMaskedImage), prompt, config), new FutureCallback<ImagenGenerationResponse>() {
    @Override
    public void onSuccess(ImagenGenerationResponse result) {
        if (result.getImages().isEmpty()) {
            Log.d("ImageEditor", "No images generated");
            return;
        }
        Bitmap editedImage = result.getImages().get(0).asBitmap();
        // Process and use the bitmap to display the image in your UI
    }

    @Override
    public void onFailure(Throwable t) {
        // ...
    }
}, Executors.newSingleThreadExecutor());
Web
Image editing with Imagen models isn't supported for web apps. Check back later this year!
Dart
To replace the background using automatic background detection, specify ImagenBackgroundMask. Call editImage() and set the editing configuration to use ImagenEditMode.inpaintInsertion.
import 'dart:typed_data';
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Vertex AI Gemini API backend service
// Optionally specify a location to access the model (for example, `us-central1`)
final ai = FirebaseAI.vertexAI(location: 'us-central1');

// Create an `ImagenModel` instance with an Imagen "capability" model
final model = ai.imagenModel(model: 'imagen-3.0-capability-001');

// This example assumes 'originalImage' is a pre-loaded Uint8List.
// In a real app, this might come from the user's device or a URL.
final Uint8List originalImage = Uint8List(0); // TODO: Load your original image data here.

// Provide the prompt describing the new background.
final prompt = 'space background';

try {
  // Use the editImage API to replace the background.
  // Pass the original image, the prompt, and an editing configuration.
  final response = await model.editImage(
    sources: [
      ImagenRawImage(originalImage),
      ImagenBackgroundMask(), // Use ImagenBackgroundMask() to auto-generate the mask.
    ],
    prompt: prompt,
    // Define the editing configuration for inpainting and background replacement.
    config: const ImagenEditingConfig(
      editMode: ImagenEditMode.inpaintInsertion,
    ),
  );

  // Process the result.
  if (response.images.isNotEmpty) {
    final editedImage = response.images.first.bytes;
    // Use the editedImage (a Uint8List) to display the image, save it, etc.
    print('Image successfully generated!');
  } else {
    // Handle the case where no images were generated.
    print('Error: No images were generated.');
  }
} catch (e) {
  // Handle any potential errors during the API call.
  print('An error occurred: $e');
}
Unity
Image editing with Imagen models isn't supported for Unity. Check back later this year!
Replace the background using a provided mask
Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
The following example shows how to replace an image's background using a background mask defined in an image that you provide. You supply the original image, a text prompt, and the mask image, as in the samples below.
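The mask image must match the dimensions of the original image and mark which region should be replaced. As a minimal illustrative sketch (not part of the SDK), the hypothetical buildBackgroundMask helper below builds such a mask Bitmap on Android; it assumes the convention that white pixels mark the background area to edit and black pixels mark the area to keep, so check the Imagen documentation for the exact mask format before relying on it.

import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.Rect

// Hypothetical helper: builds a mask Bitmap the same size as the original image.
// Assumption: white pixels mark the background region to replace, black pixels
// mark the area to keep (here, a known foreground bounding box).
fun buildBackgroundMask(original: Bitmap, foreground: Rect): Bitmap {
  val mask = Bitmap.createBitmap(original.width, original.height, Bitmap.Config.ARGB_8888)
  val canvas = Canvas(mask)
  // Start with everything marked as editable background...
  canvas.drawColor(Color.WHITE)
  // ...then black out the foreground region that must stay untouched.
  canvas.drawRect(foreground, Paint().apply { color = Color.BLACK })
  return mask
}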
Swift
Image editing with Imagen models isn't supported for Swift. Check back later this year!
Kotlin
To replace the background using a mask that you provide, specify ImagenRawMask with your mask image. Call editImage() and set the editing configuration to use ImagenEditMode.INPAINT_INSERTION.
// Using this SDK to access Imagen models is a Preview release and requires opt-in
@OptIn(PublicPreviewAPI::class)
suspend fun customizeImage() {
  // Initialize the Vertex AI Gemini API backend service
  // Optionally specify the location to access the model (for example, `us-central1`)
  val ai = Firebase.ai(backend = GenerativeBackend.vertexAI(location = "us-central1"))

  // Create an `ImagenModel` instance with an Imagen "capability" model
  val model = ai.imagenModel("imagen-3.0-capability-001")

  // This example assumes 'originalImage' is a pre-loaded Bitmap.
  // In a real app, this might come from the user's device or a URL.
  val originalImage: Bitmap = TODO("Load your original image Bitmap here")

  // This example assumes 'maskImage' is a pre-loaded Bitmap that contains the masked area.
  // In a real app, this might come from the user's device or a URL.
  val maskImage: Bitmap = TODO("Load your masked image Bitmap here")

  // Provide the prompt describing the new background.
  val prompt = "space background"

  // Use the editImage API to replace the background.
  // Pass the original image, the masked image, the prompt, and an editing configuration.
  val editedImage = model.editImage(
    referenceImages = listOf(
      ImagenRawImage(originalImage.toImagenInlineImage()),
      ImagenRawMask(maskImage.toImagenInlineImage()), // Use ImagenRawMask() to provide your own masked image.
    ),
    prompt = prompt,
    // Define the editing configuration for inpainting and background replacement.
    config = ImagenEditingConfig(ImagenEditMode.INPAINT_INSERTION)
  )

  // Process the resulting 'editedImage' Bitmap, for example, by displaying it in an ImageView.
}
Java
To replace the background using a mask that you provide, specify ImagenRawMask with your mask image. Call editImage() and set the editing configuration to use ImagenEditMode.INPAINT_INSERTION.
// Initialize the Vertex AI Gemini API backend service
// Optionally specify the location to access the model (for example, `us-central1`)
// Create an `ImagenModel` instance with an Imagen "capability" model
ImagenModel imagenModel = FirebaseAI.getInstance(GenerativeBackend.vertexAI("us-central1"))
        .imagenModel(
                /* modelName */ "imagen-3.0-capability-001");

ImagenModelFutures model = ImagenModelFutures.from(imagenModel);

// This example assumes 'originalImage' is a pre-loaded Bitmap.
// In a real app, this might come from the user's device or a URL.
Bitmap originalImage = null; // TODO("Load your original image Bitmap here");

// This example assumes 'maskImage' is a pre-loaded Bitmap that contains the masked area.
// In a real app, this might come from the user's device or a URL.
Bitmap maskImage = null; // TODO("Load your masked image Bitmap here");

// Provide the prompt describing the new background.
String prompt = "space background";

// Define the list of source images for the editImage call.
ImagenRawImage rawOriginalImage = new ImagenRawImage(originalImage);
ImagenRawMask rawMaskedImage = new ImagenRawMask(maskImage); // Use ImagenRawMask() to provide your own masked image.

// Define the editing configuration for inpainting and background replacement.
ImagenEditingConfig config = new ImagenEditingConfig.Builder()
        .setEditMode(ImagenEditMode.INPAINT_INSERTION)
        .build();

// Use the editImage API to replace the background.
// Pass the original image, the masked image, the prompt, and an editing configuration.
Futures.addCallback(model.editImage(Arrays.asList(rawOriginalImage, rawMaskedImage), prompt, config), new FutureCallback<ImagenGenerationResponse>() {
    @Override
    public void onSuccess(ImagenGenerationResponse result) {
        if (result.getImages().isEmpty()) {
            Log.d("ImageEditor", "No images generated");
            return;
        }
        Bitmap editedImage = result.getImages().get(0).asBitmap();
        // Process and use the bitmap to display the image in your UI
    }

    @Override
    public void onFailure(Throwable t) {
        // ...
    }
}, Executors.newSingleThreadExecutor());
Web
Image editing with Imagen models isn't supported for web apps. Check back later this year!
Dart
To replace the background using a mask that you provide, specify ImagenRawMask with your mask image. Call editImage() and set the editing configuration to use ImagenEditMode.inpaintInsertion.
import 'dart:typed_data';
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Vertex AI Gemini API backend service
// Optionally specify a location to access the model (for example, `us-central1`)
final ai = FirebaseAI.vertexAI(location: 'us-central1');

// Create an `ImagenModel` instance with an Imagen "capability" model
final model = ai.imagenModel(model: 'imagen-3.0-capability-001');

// This example assumes 'originalImage' is a pre-loaded Uint8List.
// In a real app, this might come from the user's device or a URL.
final Uint8List originalImage = Uint8List(0); // TODO: Load your original image data here.

// This example assumes 'maskImage' is a pre-loaded Uint8List that contains the masked area.
// In a real app, this might come from the user's device or a URL.
final Uint8List maskImage = Uint8List(0); // TODO: Load your masked image data here.

// Provide the prompt describing the new background.
final prompt = 'space background';

try {
  // Use the editImage API to replace the background.
  // Pass the original image, the prompt, and an editing configuration.
  final response = await model.editImage(
    sources: [
      ImagenRawImage(originalImage),
      ImagenRawMask(maskImage), // Use ImagenRawMask() to provide your own masked image.
    ],
    prompt: prompt,
    // Define the editing configuration for inpainting and background replacement.
    config: const ImagenEditingConfig(
      editMode: ImagenEditMode.inpaintInsertion,
    ),
  );

  // Process the result.
  if (response.images.isNotEmpty) {
    final editedImage = response.images.first.bytes;
    // Use the editedImage (a Uint8List) to display the image, save it, etc.
    print('Image successfully generated!');
  } else {
    // Handle the case where no images were generated.
    print('Error: No images were generated.');
  }
} catch (e) {
  // Handle any potential errors during the API call.
  print('An error occurred: $e');
}
Unity
Image editing with Imagen models isn't supported for Unity. Check back later this year!
Best practices and limitations
We recommend dilating the mask when editing an image. This helps smooth the edges of the edit and makes it look more convincing. In general, a dilation value of 1% or 2% (0.01 or 0.02) is recommended.
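For illustration only, the following Kotlin sketch applies that recommendation to the automatic background detection example above. It assumes that ImagenBackgroundMask accepts an optional dilation parameter expressed as a fraction of the image size; verify the parameter name and availability against the current Firebase AI Logic SDK reference before relying on it.

// Assumption: ImagenBackgroundMask accepts an optional `dilation` value
// (a fraction of the image size); check the SDK reference for the exact parameter.
val editedImage = model.editImage(
  sources = listOf(
    ImagenRawImage(originalImage),
    ImagenBackgroundMask(dilation = 0.01), // dilate the auto-generated mask by about 1%
  ),
  prompt = prompt,
  config = ImagenEditingConfig(ImagenEditMode.INPAINT_INSERTION)
)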
Give feedback about your experience with Firebase AI Logic