Generate and edit images using Gemini (aka "Nano Banana")


You can ask a Gemini model to generate and edit images using text-only prompts or prompts with both text and images. When you use Firebase AI Logic, you can send this request directly from your app.

With this capability, you can do things like:

  • Iteratively generate images through conversational natural language, adjusting images while maintaining consistency and context.

  • Generate images with high-quality text rendering, including long strings of text.

  • Generate interleaved text and image output. For example, a blog post with text and images in a single turn. Previously, this required chaining together multiple models.

  • Generate images using Gemini's world knowledge and reasoning capabilities.

You can find a complete list of supported modalities and capabilities (along with example prompts) later on this page.

  • Jump to code for text-to-image
  • Jump to code for interleaved text and images
  • Jump to code for image editing
  • Jump to code for iterative image editing

Check out other guides for additional options to work with images:
  • Analyze images
  • Analyze images on-device
  • Generate structured output

Before you begin

Click your Gemini API provider to view provider-specific content and code on this page.

If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the backend service for your chosen Gemini API provider, and create a GenerativeModel instance.

For testing and iterating on your prompts, we recommend using Google AI Studio.

Models that support this capability

  • gemini-3-pro-image-preview (also called "Nano Banana Pro")
  • gemini-3.1-flash-image-preview (also called "Nano Banana 2")
  • gemini-2.5-flash-image (also called "Nano Banana")

Generate and edit images

You can generate and edit images using a Gemini model.

Generate images (text-only input)

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can ask a Gemini model to generate images by prompting with text.

Make sure to create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in your model configuration, and call generateContent.

Swift


import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Provide a text prompt instructing the model to generate an image
let prompt = "Generate an image of the Eiffel tower with fireworks in the background."

// To generate an image, call `generateContent` with the text input
let response = try await model.generateContent(prompt)

// Handle the generated image
guard let inlineDataPart = response.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let uiImage = UIImage(data: inlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide a text prompt instructing the model to generate an image
val prompt = "Generate an image of the Eiffel tower with fireworks in the background."

// To generate image output, call `generateContent` with the text input
val generatedImageAsBitmap = model.generateContent(prompt)
    // Handle the generated image
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide a text prompt instructing the model to generate an image
Content prompt = new Content.Builder()
        .addText("Generate an image of the Eiffel Tower with fireworks in the background.")
        .build();

// To generate an image, call `generateContent` with the text input
ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) { 
        // iterate over all the parts in the first candidate in the result object
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                // The returned image as a bitmap
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Provide a text prompt instructing the model to generate an image
const prompt = 'Generate an image of the Eiffel Tower with fireworks in the background.';

// To generate an image, call `generateContent` with the text input
const result = await model.generateContent(prompt);

// Handle the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.5-flash-image',
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [ResponseModalities.text, ResponseModalities.image]),
);

// Provide a text prompt instructing the model to generate an image
final prompt = [Content.text('Generate an image of the Eiffel Tower with fireworks in the background.')];

// To generate an image, call `generateContent` with the text input
final response = await model.generateContent(prompt);
if (response.inlineDataParts.isNotEmpty) {
  final imageBytes = response.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Provide a text prompt instructing the model to generate an image
var prompt = "Generate an image of the Eiffel Tower with fireworks in the background.";

// To generate an image, call `GenerateContentAsync` with the text input
var response = await model.GenerateContentAsync(prompt);

var text = response.Text;
if (!string.IsNullOrWhiteSpace(text)) {
  // Do something with the text
}

// Handle the generated image
var imageParts = response.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
foreach (var imagePart in imageParts) {
  // Load the Image into a Unity Texture2D object
  UnityEngine.Texture2D texture2D = new(2, 2);
  if (texture2D.LoadImage(imagePart.Data.ToArray())) {
    // Do something with the image
  }
}

Generate interleaved text and images

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can ask a Gemini model to generate images interleaved with its text responses. For example, you can generate images of what each step of a generated recipe might look like along with the step's instructions, without having to make separate requests to the model or to different models.

Make sure to create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in your model configuration, and call generateContent.

Swift


import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Provide a text prompt instructing the model to generate interleaved text and images
let prompt = """
Generate an illustrated recipe for a paella.
Create images to go alongside the text as you generate the recipe
"""

// To generate interleaved text and images, call `generateContent` with the text input
let response = try await model.generateContent(prompt)

// Handle the generated text and image
guard let candidate = response.candidates.first else {
  fatalError("No candidates in response.")
}
for part in candidate.content.parts {
  switch part {
  case let textPart as TextPart:
    // Do something with the generated text
    let text = textPart.text
  case let inlineDataPart as InlineDataPart:
    // Do something with the generated image
    guard let uiImage = UIImage(data: inlineDataPart.data) else {
      fatalError("Failed to convert data to UIImage.")
    }
  default:
    fatalError("Unsupported part type: \(part)")
  }
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide a text prompt instructing the model to generate interleaved text and images
val prompt = """
    Generate an illustrated recipe for a paella.
    Create images to go alongside the text as you generate the recipe
    """.trimIndent()

// To generate interleaved text and images, call `generateContent` with the text input
val responseContent = model.generateContent(prompt).candidates.first().content

// The response will contain image and text parts interleaved
for (part in responseContent.parts) {
    when (part) {
        is ImagePart -> {
            // ImagePart as a bitmap
            val generatedImageAsBitmap: Bitmap? = part.asImageOrNull()
        }
        is TextPart -> {
            // Text content from the TextPart
            val text = part.text
        }
    }
}

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide a text prompt instructing the model to generate interleaved text and images
Content prompt = new Content.Builder()
        .addText("Generate an illustrated recipe for a paella.\n" +
                 "Create images to go alongside the text as you generate the recipe")
        .build();

// To generate interleaved text and images, call `generateContent` with the text input
ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        Content responseContent = result.getCandidates().get(0).getContent();
        // The response will contain image and text parts interleaved
        for (Part part : responseContent.getParts()) {
            if (part instanceof ImagePart) {
                // ImagePart as a bitmap
                Bitmap generatedImageAsBitmap = ((ImagePart) part).getImage();
            } else if (part instanceof TextPart){
                // Text content from the TextPart
                String text = ((TextPart) part).getText();
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        System.err.println(t);
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Provide a text prompt instructing the model to generate interleaved text and images
const prompt = 'Generate an illustrated recipe for a paella.\n' +
  'Create images to go alongside the text as you generate the recipe';

// To generate interleaved text and images, call `generateContent` with the text input
const result = await model.generateContent(prompt);

// Handle the generated text and image
try {
  const response = result.response;
  if (response.candidates?.[0].content?.parts) {
    for (const part of response.candidates?.[0].content?.parts) {
      if (part.text) {
        // Do something with the text
        console.log(part.text)
      }
      if (part.inlineData) {
        // Do something with the image
        const image = part.inlineData;
        console.log(image.mimeType, image.data);
      }
    }
  }

} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.5-flash-image',
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [ResponseModalities.text, ResponseModalities.image]),
);

// Provide a text prompt instructing the model to generate interleaved text and images
final prompt = [Content.text(
  'Generate an illustrated recipe for a paella\n ' +
  'Create images to go alongside the text as you generate the recipe'
)];

// To generate interleaved text and images, call `generateContent` with the text input
final response = await model.generateContent(prompt);

// Handle the generated text and image
final parts = response.candidates.firstOrNull?.content.parts;
if (parts != null && parts.isNotEmpty) {
  for (final part in parts) {
    if (part is TextPart) {
      // Do something with the text part
      final text = part.text;
    }
    if (part is InlineDataPart) {
      // Process the image part
      final imageBytes = part.bytes;
    }
  }
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Provide a text prompt instructing the model to generate interleaved text and images
var prompt = "Generate an illustrated recipe for a paella \n" +
  "Create images to go alongside the text as you generate the recipe";

// To generate interleaved text and images, call `GenerateContentAsync` with the text input
var response = await model.GenerateContentAsync(prompt);

// Handle the generated text and image
foreach (var part in response.Candidates.First().Content.Parts) {
  if (part is ModelContent.TextPart textPart) {
    if (!string.IsNullOrWhiteSpace(textPart.Text)) {
      // Do something with the text
    }
  } else if (part is ModelContent.InlineDataPart dataPart) {
    if (dataPart.MimeType == "image/png") {
      // Load the Image into a Unity Texture2D object
      UnityEngine.Texture2D texture2D = new(2, 2);
      if (texture2D.LoadImage(dataPart.Data.ToArray())) {
        // Do something with the image
      }
    }
  }
}

Edit images (text-and-image input)

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can ask a Gemini model to edit images by prompting with text and one or more images.

Make sure to create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in your model configuration, and call generateContent.

Swift


import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Provide an image for the model to edit
guard let image = UIImage(named: "scones") else { fatalError("Image file not found.") }

// Provide a text prompt instructing the model to edit the image
let prompt = "Edit this image to make it look like a cartoon"

// To edit the image, call `generateContent` with the image and text input
let response = try await model.generateContent(image, prompt)

// Handle the generated image
guard let inlineDataPart = response.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let uiImage = UIImage(data: inlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide an image for the model to edit
val bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.scones)

// Provide a text prompt instructing the model to edit the image
val prompt = content {
    image(bitmap)
    text("Edit this image to make it look like a cartoon")
}

// To edit the image, call `generateContent` with the prompt (image and text input)
val generatedImageAsBitmap = model.generateContent(prompt)
    // Handle the generated text and image
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide an image for the model to edit
Bitmap bitmap = BitmapFactory.decodeResource(resources, R.drawable.scones);

// Provide a text prompt instructing the model to edit the image
Content promptContent = new Content.Builder()
        .addImage(bitmap)
        .addText("Edit this image to make it look like a cartoon")
        .build();

// To edit the image, call `generateContent` with the prompt (image and text input)
ListenableFuture<GenerateContentResponse> response = model.generateContent(promptContent);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        // iterate over all the parts in the first candidate in the result object
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Prepare an image for the model to edit
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

// Provide a text prompt instructing the model to edit the image
const prompt = "Edit this image to make it look like a cartoon";

const fileInputEl = document.querySelector("input[type=file]");
const imagePart = await fileToGenerativePart(fileInputEl.files[0]);

// To edit the image, call `generateContent` with the image and text input
const result = await model.generateContent([prompt, imagePart]);

// Handle the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.5-flash-image',
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [ResponseModalities.text, ResponseModalities.image]),
);

// Prepare an image for the model to edit
final image = await File('scones.jpg').readAsBytes();
final imagePart = InlineDataPart('image/jpeg', image);

// Provide a text prompt instructing the model to edit the image
final prompt = TextPart("Edit this image to make it look like a cartoon");

// To edit the image, call `generateContent` with the image and text input
final response = await model.generateContent([
  Content.multi([prompt,imagePart])
]);

// Handle the generated image
if (response.inlineDataParts.isNotEmpty) {
  final imageBytes = response.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Prepare an image for the model to edit
var imageFile = System.IO.File.ReadAllBytes(System.IO.Path.Combine(
  UnityEngine.Application.streamingAssetsPath, "scones.jpg"));
var image = ModelContent.InlineData("image/jpeg", imageFile);

// Provide a text prompt instructing the model to edit the image
var prompt = ModelContent.Text("Edit this image to make it look like a cartoon.");

// To edit the image, call `GenerateContentAsync` with the image and text input
var response = await model.GenerateContentAsync(new [] { prompt, image });

var text = response.Text;
if (!string.IsNullOrWhiteSpace(text)) {
  // Do something with the text
}

// Handle the generated image
var imageParts = response.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
foreach (var imagePart in imageParts) {
  // Load the Image into a Unity Texture2D object
  Texture2D texture2D = new Texture2D(2, 2);
  if (texture2D.LoadImage(imagePart.Data.ToArray())) {
    // Do something with the image
  }
}

Edit images iteratively using multi-turn chat

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

Using multi-turn chat, you can iterate with a Gemini model on the images that it generates or that you supply.

Make sure to create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in your model configuration, and call startChat() and sendMessage() to send new user messages.

Swift


import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Initialize the chat
let chat = model.startChat()

guard let image = UIImage(named: "scones") else { fatalError("Image file not found.") }

// Provide an initial text prompt instructing the model to edit the image
let prompt = "Edit this image to make it look like a cartoon"

// To generate an initial response, send a user message with the image and text prompt
let response = try await chat.sendMessage(image, prompt)

// Inspect the generated image
guard let inlineDataPart = response.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let uiImage = UIImage(data: inlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

// Follow up requests do not need to specify the image again
let followUpResponse = try await chat.sendMessage("But make it old-school line drawing style")

// Inspect the edited image after the follow up request
guard let followUpInlineDataPart = followUpResponse.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let followUpUIImage = UIImage(data: followUpInlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide an image for the model to edit
val bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.scones)

// Create the initial prompt instructing the model to edit the image
val prompt = content {
    image(bitmap)
    text("Edit this image to make it look like a cartoon")
}

// Initialize the chat
val chat = model.startChat()

// To generate an initial response, send a user message with the image and text prompt
var response = chat.sendMessage(prompt)
// Inspect the returned image
var generatedImageAsBitmap = response
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

// Follow up requests do not need to specify the image again
response = chat.sendMessage("But make it old-school line drawing style")
generatedImageAsBitmap = response
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide an image for the model to edit
Bitmap bitmap = BitmapFactory.decodeResource(resources, R.drawable.scones);

// Initialize the chat
ChatFutures chat = model.startChat();

// Create the initial prompt instructing the model to edit the image
Content prompt = new Content.Builder()
        .setRole("user")
        .addImage(bitmap)
        .addText("Edit this image to make it look like a cartoon")
        .build();

// To generate an initial response, send a user message with the image and text prompt
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(prompt);
// Extract the image from the initial response
ListenableFuture<@Nullable Bitmap> initialRequest = Futures.transform(response, result -> {
    for (Part part : result.getCandidates().get(0).getContent().getParts()) {
        if (part instanceof ImagePart) {
            ImagePart imagePart = (ImagePart) part;
            return imagePart.getImage();
        }
    }
    return null;
}, executor);

// Follow up requests do not need to specify the image again
ListenableFuture<GenerateContentResponse> modelResponseFuture = Futures.transformAsync(
        initialRequest,
        generatedImage -> {
            Content followUpPrompt = new Content.Builder()
                    .addText("But make it old-school line drawing style")
                    .build();
            return chat.sendMessage(followUpPrompt);
        },
        executor);

// Add a final callback to check the reworked image
Futures.addCallback(modelResponseFuture, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Prepare an image for the model to edit
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

const fileInputEl = document.querySelector("input[type=file]");
const imagePart = await fileToGenerativePart(fileInputEl.files[0]);

// Provide an initial text prompt instructing the model to edit the image
const prompt = "Edit this image to make it look like a cartoon";

// Initialize the chat
const chat = model.startChat();

// To generate an initial response, send a user message with the image and text prompt
const result = await chat.sendMessage([prompt, imagePart]);

// Request and inspect the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    // Inspect the generated image
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

// Follow up requests do not need to specify the image again
const followUpResult = await chat.sendMessage("But make it old-school line drawing style");

// Request and inspect the returned image
try {
  const followUpInlineDataParts = followUpResult.response.inlineDataParts();
  if (followUpInlineDataParts?.[0]) {
    // Inspect the generated image
    const followUpImage = followUpInlineDataParts[0].inlineData;
    console.log(followUpImage.mimeType, followUpImage.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.5-flash-image',
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [ResponseModalities.text, ResponseModalities.image]),
);

// Prepare an image for the model to edit
final image = await File('scones.jpg').readAsBytes();
final imagePart = InlineDataPart('image/jpeg', image);

// Provide an initial text prompt instructing the model to edit the image
final prompt = TextPart("Edit this image to make it look like a cartoon");

// Initialize the chat
final chat = model.startChat();

// To generate an initial response, send a user message with the image and text prompt
final response = await chat.sendMessage([
  Content.multi([prompt,imagePart])
]);

// Inspect the returned image
if (response.inlineDataParts.isNotEmpty) {
  final imageBytes = response.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

// Follow up requests do not need to specify the image again
final followUpResponse = await chat.sendMessage([
  Content.text("But make it old-school line drawing style")
]);

// Inspect the returned image
if (followUpResponse.inlineDataParts.isNotEmpty) {
  final followUpImageBytes = followUpResponse.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Prepare an image for the model to edit
var imageFile = System.IO.File.ReadAllBytes(System.IO.Path.Combine(
  UnityEngine.Application.streamingAssetsPath, "scones.jpg"));
var image = ModelContent.InlineData("image/jpeg", imageFile);

// Provide an initial text prompt instructing the model to edit the image
var prompt = ModelContent.Text("Edit this image to make it look like a cartoon.");

// Initialize the chat
var chat = model.StartChat();

// To generate an initial response, send a user message with the image and text prompt
var response = await chat.SendMessageAsync(new [] { prompt, image });

// Inspect the returned image
var imageParts = response.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
// Load the image into a Unity Texture2D object
UnityEngine.Texture2D texture2D = new(2, 2);
if (texture2D.LoadImage(imageParts.First().Data.ToArray())) {
  // Do something with the image
}

// Follow up requests do not need to specify the image again
var followUpResponse = await chat.SendMessageAsync("But make it old-school line drawing style");

// Inspect the returned image
var followUpImageParts = followUpResponse.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
// Load the image into a Unity Texture2D object
UnityEngine.Texture2D followUpTexture2D = new(2, 2);
if (followUpTexture2D.LoadImage(followUpImageParts.First().Data.ToArray())) {
  // Do something with the image
}



Supported capabilities, limitations, and best practices

Supported modalities and capabilities

The following are the supported modalities and capabilities for image output from Gemini models. Each capability includes an example prompt, and each has an example code snippet above.

  • Text → image(s) (text-only to image)

    • Generate an image of the Eiffel Tower with fireworks in the background.
  • Text → image(s) (text rendering within image)

    • Generate a cinematic photo of a large building with this giant text projection mapped onto the front of the building.
  • Text → image(s) and text (interleaved)

    • Generate an illustrated recipe for a paella. Create images alongside the text as you generate the recipe.

    • Generate a story about a dog in a 3D cartoon animation style. For each scene, generate an image.

  • Image(s) and text → image(s) and text (interleaved)

    • [image of a furnished room] + What other color sofas would work in my space? Can you update the image?
  • Image editing (text and image to image)

    • [image of scones] + Edit this image to make it look like a cartoon

    • [image of a cat] + [image of a pillow] + Create a cross stitch of my cat on this pillow.

  • Multi-turn image editing (chat)

    • [image of a blue car] + Turn this car into a convertible, then Now change the color to yellow.

Additionally, the Gemini 3 Pro Image and Gemini 3.1 Flash Image models support Grounding with Google Search.
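Here's a minimal Kotlin sketch of enabling that grounding for image generation. It assumes your firebase-ai SDK version exposes Tool.googleSearch() and the tools parameter for image-output models the same way it does for text-only models; check the grounding guide for the exact API in your SDK.

Kotlin


// Sketch only: `Tool.googleSearch()` and the `tools` parameter are assumed to be
// available in your firebase-ai SDK version, as they are for text generation.
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-3-pro-image-preview",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    },
    // Let the model ground its response in Google Search results
    tools = listOf(Tool.googleSearch())
)

// Fresh world knowledge from Search can then inform the generated image
val response = model.generateContent(
    "Generate an image of the current tallest building in the world at sunset."
)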

Limitations and best practices

Firebase AI Logic doesn't yet support explicitly setting aspect_ratio or image_size for generated images, but you can tell the model which settings you want in your prompt. Support for these options is coming soon.
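For example, here's a minimal Kotlin sketch of that prompt-level workaround. The 16:9 request is illustrative; treat it as a hint that the model usually, but not always, honors.

Kotlin


// Workaround: aspect_ratio and image_size can't be set in GenerationConfig yet,
// so state the desired framing in the prompt itself.
val prompt = """
    Generate an image of the Eiffel Tower with fireworks in the background.
    Render the image in a 16:9 landscape aspect ratio.
    """.trimIndent()

// `model` is an image-capable GenerativeModel, configured as in the snippets above
val response = model.generateContent(prompt)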

The following are limitations and best practices for image output from a Gemini model.

  • Gemini image generation models support the following:

    • Generating PNG images with the following maximum dimensions:
      • gemini-3-pro-image-preview and gemini-3.1-flash-image-preview: 4K
      • gemini-2.5-flash-image: 1024 px
    • Generating and editing images of people.
    • Using safety filters that provide a flexible and less restrictive user experience.
  • Gemini image generation models do not support the following:

    • Including audio or video inputs.
    • Generating only images.
      The models always return both text and images, and you must include responseModalities: ["TEXT", "IMAGE"] in your model configuration.
  • For best performance, use the following languages: en, es-mx, ja-jp, zh-cn, hi-in.

  • Image generation may not always trigger. Here are some known issues:

    • The model may output text only.
      Try asking for image outputs explicitly (for example, "generate an image", "provide images as you go along", "update the image").

    • The model may stop generating partway through.
      Try again or try a different prompt.

    • The model may generate text as an image.
      Try asking for text outputs explicitly (for example, "generate narrative text along with illustrations").

  • When generating text for an image, Gemini works best if you first generate the text and then ask for an image with the text, as in the sketch after this list.
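Here's a hedged Kotlin sketch of that text-first pattern, combined with a single retry for the case where no image comes back. The function name, bakery prompt, and retry wording are illustrative, not part of the SDK.

Kotlin


// Sketch: generate the text in one chat turn, then explicitly request an image
// that renders it. `model` is an image-capable GenerativeModel configured as in
// the snippets above.
suspend fun generateSignImage(model: GenerativeModel): Bitmap? {
    val chat = model.startChat()

    // Turn 1: generate the text on its own.
    val tagline = chat.sendMessage(
        "Write a one-sentence tagline for a bakery called The Rolling Scone."
    ).text

    // Turn 2: explicitly ask for an image that renders that text.
    var response = chat.sendMessage(
        "Generate an image of the bakery's storefront sign with the tagline \"$tagline\" on it."
    )
    var image = response.candidates.first().content.parts
        .filterIsInstance<ImagePart>().firstOrNull()?.image

    // Image generation may not trigger on the first attempt; retry once with an
    // explicit request before giving up.
    if (image == null) {
        response = chat.sendMessage("Please respond with an image this time.")
        image = response.candidates.first().content.parts
            .filterIsInstance<ImagePart>().firstOrNull()?.image
    }
    return image
}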