This page describes how to use inpainting with Imagen to remove an object from an image using the Firebase AI Logic SDKs.
Inpainting is a type of mask-based editing. A mask is a digital overlay defining the specific area you want to edit.
How it works: You provide an original image and a corresponding masked image — either auto-generated or provided by you — that defines a mask over the object or subject that you want to remove. You can also optionally provide a text prompt describing what you want to remove, or the model can intelligently detect which object to remove. The model then removes the object and fills in the area with new, contextually appropriate content.
For example, you can mask a ball and replace it with a blank wall or a grassy field.
Before you begin
Only available when using the Vertex AI Gemini API as your API provider.
If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the backend service for your chosen API provider, and create an ImagenModel instance.
Models that support this capability
Imagen offers image editing through its capability model:
imagen-3.0-capability-001
Note that for Imagen models, the global location is not supported.
Remove objects using an auto-generated mask
Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
The following sample shows how to use inpainting to remove content from an image — using automatic mask generation. You provide the original image and a text prompt, and Imagen automatically detects and creates a mask area to modify the original image.
Swift
Image editing with Imagen models isn't supported for Swift. Check back later this year!
Kotlin
To remove objects with an auto-generated mask, specify ImagenBackgroundMask. Use editImage() and set the editing config to use ImagenEditMode.INPAINT_REMOVAL.
// Using this SDK to access Imagen models is a Preview release and requires opt-in
@OptIn(PublicPreviewAPI::class)
suspend fun customizeImage() {
  // Initialize the Vertex AI Gemini API backend service
  // Optionally specify the location to access the model (for example, `us-central1`)
  val ai = Firebase.ai(backend = GenerativeBackend.vertexAI(location = "us-central1"))

  // Create an `ImagenModel` instance with an Imagen "capability" model
  val model = ai.imagenModel("imagen-3.0-capability-001")

  // This example assumes 'originalImage' is a pre-loaded Bitmap.
  // In a real app, this might come from the user's device or a URL.
  val originalImage: Bitmap = TODO("Load your original image Bitmap here")

  // Provide the prompt describing the content to be removed.
  val prompt = "ball"

  // Use the editImage API to remove the unwanted content.
  // Pass the original image, the prompt, and an editing configuration.
  val editedImage = model.editImage(
    referenceImages = listOf(
      ImagenRawImage(originalImage.toImagenInlineImage()),
      ImagenBackgroundMask(), // Use ImagenBackgroundMask() to auto-generate the mask.
    ),
    prompt = prompt,
    // Define the editing configuration for inpainting and removal.
    config = ImagenEditingConfig(ImagenEditMode.INPAINT_REMOVAL)
  )

  // Process the resulting 'editedImage' Bitmap, for example, by displaying it in an ImageView.
}
Java
To remove objects with an auto-generated mask, specify ImagenBackgroundMask. Use editImage() and set the editing config to use ImagenEditMode.INPAINT_REMOVAL.
// Initialize the Vertex AI Gemini API backend service
// Optionally specify the location to access the model (for example, `us-central1`)
// Create an `ImagenModel` instance with an Imagen "capability" model
ImagenModel imagenModel = FirebaseAI.getInstance(GenerativeBackend.vertexAI("us-central1"))
    .imagenModel(
        /* modelName */ "imagen-3.0-capability-001");
ImagenModelFutures model = ImagenModelFutures.from(imagenModel);

// This example assumes 'originalImage' is a pre-loaded Bitmap.
// In a real app, this might come from the user's device or a URL.
Bitmap originalImage = null; // TODO("Load your original image Bitmap here");

// Provide the prompt describing the content to be removed.
String prompt = "ball";

// Define the list of sources for the editImage call.
// This includes the original image and the auto-generated mask.
ImagenRawImage rawOriginalImage = new ImagenRawImage(originalImage);
ImagenBackgroundMask rawMaskedImage = new ImagenBackgroundMask(); // Use ImagenBackgroundMask() to auto-generate the mask.

// Define the editing configuration for inpainting and removal.
ImagenEditingConfig config = new ImagenEditingConfig.Builder()
    .setEditMode(ImagenEditMode.INPAINT_REMOVAL)
    .build();

// Use the editImage API to remove the unwanted content.
// Pass the original image, the auto-generated mask, the prompt, and an editing configuration.
Futures.addCallback(model.editImage(Arrays.asList(rawOriginalImage, rawMaskedImage), prompt, config), new FutureCallback<ImagenGenerationResponse>() {
  @Override
  public void onSuccess(ImagenGenerationResponse result) {
    if (result.getImages().isEmpty()) {
      Log.d("ImageEditor", "No images generated");
      return;
    }
    Bitmap editedImage = result.getImages().get(0).asBitmap();
    // Process and use the bitmap to display the image in your UI
  }

  @Override
  public void onFailure(Throwable t) {
    // ...
  }
}, Executors.newSingleThreadExecutor());
Web
Image editing with Imagen models isn't supported for Web apps. Check back later this year!
Dart
To remove objects with an auto-generated mask, specify ImagenBackgroundMask. Use editImage() and set the editing config to use ImagenEditMode.inpaintRemoval.
import 'dart:typed_data';
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Vertex AI Gemini API backend service
// Optionally specify a location to access the model (for example, `us-central1`)
final ai = FirebaseAI.vertexAI(location: 'us-central1');

// Create an `ImagenModel` instance with an Imagen "capability" model
final model = ai.imagenModel(model: 'imagen-3.0-capability-001');

// This example assumes 'originalImage' is a pre-loaded Uint8List.
// In a real app, this might come from the user's device or a URL.
final Uint8List originalImage = Uint8List(0); // TODO: Load your original image data here.

// Provide the prompt describing the content to be removed.
final prompt = 'ball';

try {
  // Use the editImage API to remove the unwanted content.
  // Pass the original image, the prompt, and an editing configuration.
  final response = await model.editImage(
    sources: [
      ImagenRawImage(originalImage),
      ImagenBackgroundMask(), // Use ImagenBackgroundMask() to auto-generate the mask.
    ],
    prompt: prompt,
    // Define the editing configuration for inpainting and removal.
    config: const ImagenEditingConfig(
      editMode: ImagenEditMode.inpaintRemoval,
    ),
  );

  // Process the result.
  if (response.images.isNotEmpty) {
    final editedImage = response.images.first.bytes;
    // Use the editedImage (a Uint8List) to display the image, save it, etc.
    print('Image successfully generated!');
  } else {
    // Handle the case where no images were generated.
    print('Error: No images were generated.');
  }
} catch (e) {
  // Handle any potential errors during the API call.
  print('An error occurred: $e');
}
Unity
Image editing with Imagen models isn't supported for Unity. Check back later this year!
Remove objects using a provided mask
Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
The following sample shows how to use inpainting to remove content from an image — using a mask defined in an image that you provide. You provide the original image, a text prompt, and the masked image.
Providing a text prompt is optional if you provide a masked image. Imagen can intelligently detect an object to remove from the masked area. However, if the object you want to remove isn't obvious or you only want to remove specific objects in the masked area, then provide a text prompt to help the model remove the correct object.
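If you don't already have a masked image, you can build one programmatically. The following is a minimal, illustrative Kotlin sketch (not part of the Firebase AI Logic SDK) that creates a mask Bitmap on Android with the standard android.graphics APIs (Bitmap, Canvas, Paint, Rect, Color). It assumes the common convention that white pixels mark the area to edit and black pixels are preserved; verify the expected mask format for your Imagen model before relying on it.

// Illustrative helper, not an SDK API: create a mask Bitmap the same size as the
// source image, with the rectangular region to remove painted white and the rest black.
// The white-marks-the-edit-area convention is an assumption; confirm it for your model.
fun createRectangularMask(source: Bitmap, regionToRemove: Rect): Bitmap {
  val mask = Bitmap.createBitmap(source.width, source.height, Bitmap.Config.ARGB_8888)
  val canvas = Canvas(mask)
  // Start with an all-black mask (no pixels selected for editing).
  canvas.drawColor(Color.BLACK)
  // Paint the area you want the model to remove and fill in.
  canvas.drawRect(regionToRemove, Paint().apply { color = Color.WHITE })
  return mask
}

You could then pass the returned Bitmap wherever the samples below use maskImage.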
Swift
Image editing with Imagen models isn't supported for Swift. Check back later this year!
Kotlin
To remove objects and provide your own masked image, specify ImagenRawMask with the masked image. Use editImage() and set the editing config to use ImagenEditMode.INPAINT_REMOVAL.
// Using this SDK to access Imagen models is a Preview release and requires opt-in
@OptIn(PublicPreviewAPI::class)
suspend fun customizeImage() {
  // Initialize the Vertex AI Gemini API backend service
  // Optionally specify the location to access the model (for example, `us-central1`)
  val ai = Firebase.ai(backend = GenerativeBackend.vertexAI(location = "us-central1"))

  // Create an `ImagenModel` instance with an Imagen "capability" model
  val model = ai.imagenModel("imagen-3.0-capability-001")

  // This example assumes 'originalImage' is a pre-loaded Bitmap.
  // In a real app, this might come from the user's device or a URL.
  val originalImage: Bitmap = TODO("Load your original image Bitmap here")

  // This example assumes 'maskImage' is a pre-loaded Bitmap that contains the masked area.
  // In a real app, this might come from the user's device or a URL.
  val maskImage: Bitmap = TODO("Load your masked image Bitmap here")

  // Provide the prompt describing the content to be removed.
  val prompt = "ball"

  // Use the editImage API to remove the unwanted content.
  // Pass the original image, the masked image, the prompt, and an editing configuration.
  val editedImage = model.editImage(
    referenceImages = listOf(
      ImagenRawImage(originalImage.toImagenInlineImage()),
      ImagenRawMask(maskImage.toImagenInlineImage()), // Use ImagenRawMask() to provide your own masked image.
    ),
    prompt = prompt,
    // Define the editing configuration for inpainting and removal.
    config = ImagenEditingConfig(ImagenEditMode.INPAINT_REMOVAL)
  )

  // Process the resulting 'editedImage' Bitmap, for example, by displaying it in an ImageView.
}
Java
To remove objects and provide your own masked image, specify ImagenRawMask with the masked image. Use editImage() and set the editing config to use ImagenEditMode.INPAINT_REMOVAL.
// Initialize the Vertex AI Gemini API backend service
// Optionally specify the location to access the model (for example, `us-central1`)
// Create an `ImagenModel` instance with an Imagen "capability" model
ImagenModel imagenModel = FirebaseAI.getInstance(GenerativeBackend.vertexAI("us-central1"))
    .imagenModel(
        /* modelName */ "imagen-3.0-capability-001");
ImagenModelFutures model = ImagenModelFutures.from(imagenModel);

// This example assumes 'originalImage' is a pre-loaded Bitmap.
// In a real app, this might come from the user's device or a URL.
Bitmap originalImage = null; // TODO("Load your original image Bitmap here");

// This example assumes 'maskImage' is a pre-loaded Bitmap that contains the masked area.
// In a real app, this might come from the user's device or a URL.
Bitmap maskImage = null; // TODO("Load your masked image Bitmap here");

// Provide the prompt describing the content to be removed.
String prompt = "ball";

// Define the list of source images for the editImage call.
ImagenRawImage rawOriginalImage = new ImagenRawImage(originalImage);
ImagenRawMask rawMaskedImage = new ImagenRawMask(maskImage); // Use ImagenRawMask() to provide your own masked image.

// Define the editing configuration for inpainting and removal.
ImagenEditingConfig config = new ImagenEditingConfig.Builder()
    .setEditMode(ImagenEditMode.INPAINT_REMOVAL)
    .build();

// Use the editImage API to remove the unwanted content.
// Pass the original image, the masked image, the prompt, and an editing configuration.
Futures.addCallback(model.editImage(Arrays.asList(rawOriginalImage, rawMaskedImage), prompt, config), new FutureCallback<ImagenGenerationResponse>() {
  @Override
  public void onSuccess(ImagenGenerationResponse result) {
    if (result.getImages().isEmpty()) {
      Log.d("ImageEditor", "No images generated");
      return;
    }
    Bitmap editedImage = result.getImages().get(0).asBitmap();
    // Process and use the bitmap to display the image in your UI
  }

  @Override
  public void onFailure(Throwable t) {
    // ...
  }
}, Executors.newSingleThreadExecutor());
Web
Image editing with Imagen models isn't supported for Web apps. Check back later this year!
Dart
To remove objects and provide your own masked image, specify ImagenRawMask with the masked image. Use editImage() and set the editing config to use ImagenEditMode.inpaintRemoval.
import 'dart:typed_data';
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Vertex AI Gemini API backend service
// Optionally specify a location to access the model (for example, `us-central1`)
final ai = FirebaseAI.vertexAI(location: 'us-central1');

// Create an `ImagenModel` instance with an Imagen "capability" model
final model = ai.imagenModel(model: 'imagen-3.0-capability-001');

// This example assumes 'originalImage' is a pre-loaded Uint8List.
// In a real app, this might come from the user's device or a URL.
final Uint8List originalImage = Uint8List(0); // TODO: Load your original image data here.

// This example assumes 'maskImage' is a pre-loaded Uint8List that contains the masked area.
// In a real app, this might come from the user's device or a URL.
final Uint8List maskImage = Uint8List(0); // TODO: Load your masked image data here.

// Provide the prompt describing the content to be removed.
final prompt = 'ball';

try {
  // Use the editImage API to remove the unwanted content.
  // Pass the original image, the masked image, the prompt, and an editing configuration.
  final response = await model.editImage(
    sources: [
      ImagenRawImage(originalImage),
      ImagenRawMask(maskImage), // Use ImagenRawMask() to provide your own masked image.
    ],
    prompt: prompt,
    // Define the editing configuration for inpainting and removal.
    config: const ImagenEditingConfig(
      editMode: ImagenEditMode.inpaintRemoval,
    ),
  );

  // Process the result.
  if (response.images.isNotEmpty) {
    final editedImage = response.images.first.bytes;
    // Use the editedImage (a Uint8List) to display the image, save it, etc.
    print('Image successfully generated!');
  } else {
    // Handle the case where no images were generated.
    print('Error: No images were generated.');
  }
} catch (e) {
  // Handle any potential errors during the API call.
  print('An error occurred: $e');
}
Unity
Image editing with Imagen models isn't supported for Unity. Check back later this year!
Best practices and limitations
We recommend dilating the mask when editing an image. This can help smooth the borders of an edit and make it seem more convincing. Generally, a dilation value of 1% or 2% (0.01 or 0.02) is recommended.