Generate and edit images using Gemini (also known as "Nano Banana")


You can ask a Gemini model to generate and edit images using text-only and text-and-image prompts. When you use Firebase AI Logic, you can make this request directly from your app.

With this capability, you can do things like the following:

  • Iteratively generate images through natural-language conversation, making adjustments to images while maintaining consistency and context.

  • Generate images with high-quality text rendering, including long strings of text.

  • Generate interleaved text-and-image output. For example, a blog post with text and images in a single turn. Previously, this required stringing together multiple models.

  • Generate images using Gemini's world knowledge and reasoning capabilities.

You can find a complete list of supported modalities and capabilities (along with example prompts) later on this page.


Before you begin

Click your Gemini API provider to view provider-specific content and code on this page.

If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the backend service for your chosen Gemini API provider, and create a GenerativeModel instance.

For testing and iterating on your prompts, we recommend using Google AI Studio.

Models that support this capability

  • gemini-3-pro-image-preview (also known as "Nano Banana Pro")
  • gemini-3.1-flash-image-preview (also known as "Nano Banana 2")
  • gemini-2.5-flash-image (also known as "Nano Banana")

Generate and edit images

You can generate and edit images using a Gemini model.

Generate images (text-only input)

Before trying this sample, make sure that you've completed the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can ask a Gemini model to generate images by prompting with text.

Create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in the model's configuration, and call generateContent.

Swift


import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Provide a text prompt instructing the model to generate an image
let prompt = "Generate an image of the Eiffel tower with fireworks in the background."

// To generate an image, call `generateContent` with the text input
let response = try await model.generateContent(prompt)

// Handle the generated image
guard let inlineDataPart = response.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let uiImage = UIImage(data: inlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide a text prompt instructing the model to generate an image
val prompt = "Generate an image of the Eiffel tower with fireworks in the background."

// To generate image output, call `generateContent` with the text input
val generatedImageAsBitmap = model.generateContent(prompt)
    // Handle the generated image
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide a text prompt instructing the model to generate an image
Content prompt = new Content.Builder()
        .addText("Generate an image of the Eiffel Tower with fireworks in the background.")
        .build();

// To generate an image, call `generateContent` with the text input
ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) { 
        // iterate over all the parts in the first candidate in the result object
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                // The returned image as a bitmap
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Provide a text prompt instructing the model to generate an image
const prompt = 'Generate an image of the Eiffel Tower with fireworks in the background.';

// To generate an image, call `generateContent` with the text input
const result = await model.generateContent(prompt);

// Handle the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.5-flash-image',
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [ResponseModalities.text, ResponseModalities.image]),
);

// Provide a text prompt instructing the model to generate an image
final prompt = [Content.text('Generate an image of the Eiffel Tower with fireworks in the background.')];

// To generate an image, call `generateContent` with the text input
final response = await model.generateContent(prompt);
if (response.inlineDataParts.isNotEmpty) {
  final imageBytes = response.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Provide a text prompt instructing the model to generate an image
var prompt = "Generate an image of the Eiffel Tower with fireworks in the background.";

// To generate an image, call `GenerateContentAsync` with the text input
var response = await model.GenerateContentAsync(prompt);

var text = response.Text;
if (!string.IsNullOrWhiteSpace(text)) {
  // Do something with the text
}

// Handle the generated image
var imageParts = response.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
foreach (var imagePart in imageParts) {
  // Load the Image into a Unity Texture2D object
  UnityEngine.Texture2D texture2D = new(2, 2);
  if (texture2D.LoadImage(imagePart.Data.ToArray())) {
    // Do something with the image
  }
}

Generate interleaved images and text

Before trying this sample, make sure that you've completed the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can ask a Gemini model to generate images interleaved with its text responses. For example, you can generate images of what each step of a generated recipe might look like, alongside the step's instructions, without having to make separate requests to the model or use different models.

Create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in the model's configuration, and call generateContent.

Swift


import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Provide a text prompt instructing the model to generate interleaved text and images
let prompt = """
Generate an illustrated recipe for a paella.
Create images to go alongside the text as you generate the recipe
"""

// To generate interleaved text and images, call `generateContent` with the text input
let response = try await model.generateContent(prompt)

// Handle the generated text and image
guard let candidate = response.candidates.first else {
  fatalError("No candidates in response.")
}
for part in candidate.content.parts {
  switch part {
  case let textPart as TextPart:
    // Do something with the generated text
    let text = textPart.text
  case let inlineDataPart as InlineDataPart:
    // Do something with the generated image
    guard let uiImage = UIImage(data: inlineDataPart.data) else {
      fatalError("Failed to convert data to UIImage.")
    }
  default:
    fatalError("Unsupported part type: \(part)")
  }
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide a text prompt instructing the model to generate interleaved text and images
val prompt = """
    Generate an illustrated recipe for a paella.
    Create images to go alongside the text as you generate the recipe
    """.trimIndent()

// To generate interleaved text and images, call `generateContent` with the text input
val responseContent = model.generateContent(prompt).candidates.first().content

// The response will contain image and text parts interleaved
for (part in responseContent.parts) {
    when (part) {
        is ImagePart -> {
            // ImagePart as a bitmap
            val generatedImageAsBitmap: Bitmap? = part.asImageOrNull()
        }
        is TextPart -> {
            // Text content from the TextPart
            val text = part.text
        }
    }
}

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide a text prompt instructing the model to generate interleaved text and images
Content prompt = new Content.Builder()
        .addText("Generate an illustrated recipe for a paella.\n" +
                 "Create images to go alongside the text as you generate the recipe")
        .build();

// To generate interleaved text and images, call `generateContent` with the text input
ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        Content responseContent = result.getCandidates().get(0).getContent();
        // The response will contain image and text parts interleaved
        for (Part part : responseContent.getParts()) {
            if (part instanceof ImagePart) {
                // ImagePart as a bitmap
                Bitmap generatedImageAsBitmap = ((ImagePart) part).getImage();
            } else if (part instanceof TextPart){
                // Text content from the TextPart
                String text = ((TextPart) part).getText();
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        System.err.println(t);
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Provide a text prompt instructing the model to generate interleaved text and images
const prompt = 'Generate an illustrated recipe for a paella.\n' +
  'Create images to go alongside the text as you generate the recipe';

// To generate interleaved text and images, call `generateContent` with the text input
const result = await model.generateContent(prompt);

// Handle the generated text and image
try {
  const response = result.response;
  if (response.candidates?.[0].content?.parts) {
    for (const part of response.candidates?.[0].content?.parts) {
      if (part.text) {
        // Do something with the text
        console.log(part.text)
      }
      if (part.inlineData) {
        // Do something with the image
        const image = part.inlineData;
        console.log(image.mimeType, image.data);
      }
    }
  }

} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.5-flash-image',
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [ResponseModalities.text, ResponseModalities.image]),
);

// Provide a text prompt instructing the model to generate interleaved text and images
final prompt = [Content.text(
  'Generate an illustrated recipe for a paella\n ' +
  'Create images to go alongside the text as you generate the recipe'
)];

// To generate interleaved text and images, call `generateContent` with the text input
final response = await model.generateContent(prompt);

// Handle the generated text and image
final parts = response.candidates.firstOrNull?.content.parts;
if (parts != null && parts.isNotEmpty) {
  for (final part in parts) {
    if (part is TextPart) {
      // Do something with the text part
      final text = part.text;
    }
    if (part is InlineDataPart) {
      // Process the image
      final imageBytes = part.bytes;
    }
  }
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Provide a text prompt instructing the model to generate interleaved text and images
var prompt = "Generate an illustrated recipe for a paella \n" +
  "Create images to go alongside the text as you generate the recipe";

// To generate interleaved text and images, call `GenerateContentAsync` with the text input
var response = await model.GenerateContentAsync(prompt);

// Handle the generated text and image
foreach (var part in response.Candidates.First().Content.Parts) {
  if (part is ModelContent.TextPart textPart) {
    if (!string.IsNullOrWhiteSpace(textPart.Text)) {
      // Do something with the text
    }
  } else if (part is ModelContent.InlineDataPart dataPart) {
    if (dataPart.MimeType == "image/png") {
      // Load the Image into a Unity Texture2D object
      UnityEngine.Texture2D texture2D = new(2, 2);
      if (texture2D.LoadImage(dataPart.Data.ToArray())) {
        // Do something with the image
      }
    }
  }
}

Edit images (text-and-image input)

Before trying this sample, make sure that you've completed the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can ask a Gemini model to edit images by prompting with text and one or more images.

Create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in the model's configuration, and call generateContent.

Swift


import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Provide an image for the model to edit
guard let image = UIImage(named: "scones") else { fatalError("Image file not found.") }

// Provide a text prompt instructing the model to edit the image
let prompt = "Edit this image to make it look like a cartoon"

// To edit the image, call `generateContent` with the image and text input
let response = try await model.generateContent(image, prompt)

// Handle the generated image
guard let inlineDataPart = response.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let uiImage = UIImage(data: inlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide an image for the model to edit
val bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.scones)

// Provide a text prompt instructing the model to edit the image
val prompt = content {
    image(bitmap)
    text("Edit this image to make it look like a cartoon")
}

// To edit the image, call `generateContent` with the prompt (image and text input)
val generatedImageAsBitmap = model.generateContent(prompt)
    // Handle the generated text and image
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide an image for the model to edit
Bitmap bitmap = BitmapFactory.decodeResource(resources, R.drawable.scones);

// Provide a text prompt instructing the model to edit the image
Content promptContent = new Content.Builder()
        .addImage(bitmap)
        .addText("Edit this image to make it look like a cartoon")
        .build();

// To edit the image, call `generateContent` with the prompt (image and text input)
ListenableFuture<GenerateContentResponse> response = model.generateContent(promptContent);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        // iterate over all the parts in the first candidate in the result object
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Prepare an image for the model to edit
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

// Provide a text prompt instructing the model to edit the image
const prompt = "Edit this image to make it look like a cartoon";

const fileInputEl = document.querySelector("input[type=file]");
const imagePart = await fileToGenerativePart(fileInputEl.files[0]);

// To edit the image, call `generateContent` with the image and text input
const result = await model.generateContent([prompt, imagePart]);

// Handle the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.5-flash-image',
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [ResponseModalities.text, ResponseModalities.image]),
);

// Prepare an image for the model to edit
final image = await File('scones.jpg').readAsBytes();
final imagePart = InlineDataPart('image/jpeg', image);

// Provide a text prompt instructing the model to edit the image
final prompt = TextPart("Edit this image to make it look like a cartoon");

// To edit the image, call `generateContent` with the image and text input
final response = await model.generateContent([
  Content.multi([prompt,imagePart])
]);

// Handle the generated image
if (response.inlineDataParts.isNotEmpty) {
  final imageBytes = response.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Prepare an image for the model to edit
var imageFile = System.IO.File.ReadAllBytes(System.IO.Path.Combine(
  UnityEngine.Application.streamingAssetsPath, "scones.jpg"));
var image = ModelContent.InlineData("image/jpeg", imageFile);

// Provide a text prompt instructing the model to edit the image
var prompt = ModelContent.Text("Edit this image to make it look like a cartoon.");

// To edit the image, call `GenerateContentAsync` with the image and text input
var response = await model.GenerateContentAsync(new [] { prompt, image });

var text = response.Text;
if (!string.IsNullOrWhiteSpace(text)) {
  // Do something with the text
}

// Handle the generated image
var imageParts = response.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
foreach (var imagePart in imageParts) {
  // Load the Image into a Unity Texture2D object
  Texture2D texture2D = new Texture2D(2, 2);
  if (texture2D.LoadImage(imagePart.Data.ToArray())) {
    // Do something with the image
  }
}

Iterate on and edit images using multi-turn chat

Before trying this sample, make sure that you've completed the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

Using multi-turn chat, you can iterate with a Gemini model on the images that it generates or that you supply.

Create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in the model's configuration, and call startChat() and sendMessage() to send new user messages.

Swift


import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Initialize the chat
let chat = model.startChat()

guard let image = UIImage(named: "scones") else { fatalError("Image file not found.") }

// Provide an initial text prompt instructing the model to edit the image
let prompt = "Edit this image to make it look like a cartoon"

// To generate an initial response, send a user message with the image and text prompt
let response = try await chat.sendMessage(image, prompt)

// Inspect the generated image
guard let inlineDataPart = response.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let uiImage = UIImage(data: inlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

// Follow up requests do not need to specify the image again
let followUpResponse = try await chat.sendMessage("But make it old-school line drawing style")

// Inspect the edited image after the follow up request
guard let followUpInlineDataPart = followUpResponse.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let followUpUIImage = UIImage(data: followUpInlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide an image for the model to edit
val bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.scones)

// Create the initial prompt instructing the model to edit the image
val prompt = content {
    image(bitmap)
    text("Edit this image to make it look like a cartoon")
}

// Initialize the chat
val chat = model.startChat()

// To generate an initial response, send a user message with the image and text prompt
var response = chat.sendMessage(prompt)
// Inspect the returned image
var generatedImageAsBitmap = response
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

// Follow up requests do not need to specify the image again
response = chat.sendMessage("But make it old-school line drawing style")
generatedImageAsBitmap = response
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide an image for the model to edit
Bitmap bitmap = BitmapFactory.decodeResource(resources, R.drawable.scones);

// Initialize the chat
ChatFutures chat = model.startChat();

// Create the initial prompt instructing the model to edit the image
Content prompt = new Content.Builder()
        .setRole("user")
        .addImage(bitmap)
        .addText("Edit this image to make it look like a cartoon")
        .build();

// To generate an initial response, send a user message with the image and text prompt
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(prompt);
// Extract the image from the initial response
ListenableFuture<@Nullable Bitmap> initialRequest = Futures.transform(response, result -> {
    for (Part part : result.getCandidates().get(0).getContent().getParts()) {
        if (part instanceof ImagePart) {
            ImagePart imagePart = (ImagePart) part;
            return imagePart.getImage();
        }
    }
    return null;
}, executor);

// Follow up requests do not need to specify the image again
ListenableFuture<GenerateContentResponse> modelResponseFuture = Futures.transformAsync(
        initialRequest,
        generatedImage -> {
            Content followUpPrompt = new Content.Builder()
                    .addText("But make it old-school line drawing style")
                    .build();
            return chat.sendMessage(followUpPrompt);
        },
        executor);

// Add a final callback to check the reworked image
Futures.addCallback(modelResponseFuture, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Prepare an image for the model to edit
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

const fileInputEl = document.querySelector("input[type=file]");
const imagePart = await fileToGenerativePart(fileInputEl.files[0]);

// Provide an initial text prompt instructing the model to edit the image
const prompt = "Edit this image to make it look like a cartoon";

// Initialize the chat
const chat = model.startChat();

// To generate an initial response, send a user message with the image and text prompt
const result = await chat.sendMessage([prompt, imagePart]);

// Request and inspect the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    // Inspect the generated image
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

// Follow up requests do not need to specify the image again
const followUpResult = await chat.sendMessage("But make it old-school line drawing style");

// Request and inspect the returned image
try {
  const followUpInlineDataParts = followUpResult.response.inlineDataParts();
  if (followUpInlineDataParts?.[0]) {
    // Inspect the generated image
    const followUpImage = followUpInlineDataParts[0].inlineData;
    console.log(followUpImage.mimeType, followUpImage.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.5-flash-image',
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [ResponseModalities.text, ResponseModalities.image]),
);

// Prepare an image for the model to edit
final image = await File('scones.jpg').readAsBytes();
final imagePart = InlineDataPart('image/jpeg', image);

// Provide an initial text prompt instructing the model to edit the image
final prompt = TextPart("Edit this image to make it look like a cartoon");

// Initialize the chat
final chat = model.startChat();

// To generate an initial response, send a user message with the image and text prompt
final response = await chat.sendMessage([
  Content.multi([prompt,imagePart])
]);

// Inspect the returned image
if (response.inlineDataParts.isNotEmpty) {
  final imageBytes = response.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

// Follow up requests do not need to specify the image again
final followUpResponse = await chat.sendMessage([
  Content.text("But make it old-school line drawing style")
]);

// Inspect the returned image
if (followUpResponse.inlineDataParts.isNotEmpty) {
  final followUpImageBytes = followUpResponse.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Prepare an image for the model to edit
var imageFile = System.IO.File.ReadAllBytes(System.IO.Path.Combine(
  UnityEngine.Application.streamingAssetsPath, "scones.jpg"));
var image = ModelContent.InlineData("image/jpeg", imageFile);

// Provide an initial text prompt instructing the model to edit the image
var prompt = ModelContent.Text("Edit this image to make it look like a cartoon.");

// Initialize the chat
var chat = model.StartChat();

// To generate an initial response, send a user message with the image and text prompt
var response = await chat.SendMessageAsync(new [] { prompt, image });

// Inspect the returned image
var imageParts = response.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
// Load the image into a Unity Texture2D object
UnityEngine.Texture2D texture2D = new(2, 2);
if (texture2D.LoadImage(imageParts.First().Data.ToArray())) {
  // Do something with the image
}

// Follow up requests do not need to specify the image again
var followUpResponse = await chat.SendMessageAsync("But make it old-school line drawing style");

// Inspect the returned image
var followUpImageParts = followUpResponse.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
// Load the image into a Unity Texture2D object
UnityEngine.Texture2D followUpTexture2D = new(2, 2);
if (followUpTexture2D.LoadImage(followUpImageParts.First().Data.ToArray())) {
  // Do something with the image
}



Capabilities, limitations, and best practices

Supported modalities and capabilities

The following are the supported modalities and capabilities for image generation with Gemini models. Each capability includes an example prompt and has an example code sample above.

  • Text → image(s) (text-to-image)

    • Generate an image of the Eiffel Tower with fireworks in the background.
  • Text → image(s) (text rendering within the image)

    • Generate a cinematic photo of a large building with this giant text projection mapped on the front of the building.
  • Text → image(s) and text (interleaved)

    • Generate an illustrated recipe for a paella. Create images alongside the text as you generate the recipe.

    • Generate a story about a dog in a 3D cartoon animation style. For each scene, generate an image.

  • Image(s) and text → image(s) and text (interleaved)

    • [image of a furnished room] + What other color sofas would work in my space? Can you update the image?
  • Image editing (text-and-image-to-image)

    • [image of scones] + Edit this image to make it look like a cartoon

    • [image of a cat] + [image of a pillow] + Create a cross stitch of my cat on this pillow. (For multi-image input, see the sketch after this list.)

  • Multi-turn image editing (chat)

    • [image of a blue car] + Turn this car into a convertible., and then Now change the color to yellow.
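
For multi-image input like the cross-stitch example, you can pass several images in a single prompt. The following is a minimal Kotlin sketch using the same content builder shown in the editing examples above; `catBitmap` and `pillowBitmap` are placeholder Bitmaps that you load yourself.

Kotlin


// Sketch: editing with multiple input images in one prompt.
// Assumes `model` is an image-capable `GenerativeModel` (see earlier examples)
// and that `catBitmap` / `pillowBitmap` are Bitmaps you've already loaded.
val prompt = content {
    image(catBitmap)
    image(pillowBitmap)
    text("Create a cross stitch of my cat on this pillow.")
}

// The first returned image part, as a bitmap
val generatedImageAsBitmap = model.generateContent(prompt)
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image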

Additionally, the Gemini 3 Pro Image and Gemini 3.1 Flash Image models support Grounding with Google Search.
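
The following is a minimal Kotlin sketch of enabling grounding, assuming that the Grounding with Google Search tool is exposed to image-output models through `Tool.googleSearch()`, as it is for text generation in Firebase AI Logic.

Kotlin


// Sketch: Grounding with Google Search on an image-capable Gemini 3 model.
// Assumption: `Tool.googleSearch()` applies to image-output models the same
// way it does for text generation.
val groundedModel = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-3-pro-image-preview",
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    },
    tools = listOf(Tool.googleSearch())
)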

Limitations and best practices

Firebase AI Logic doesn't yet support explicitly setting an aspect_ratio or image_size for generated images, but you can tell the model in your prompt which settings you want, as shown in the sketch below. These capabilities are coming soon.
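
For example, the following minimal Kotlin sketch asks for an aspect ratio directly in the prompt. The wording is illustrative, and prompt-based hints aren't guaranteed the way an explicit setting would be.

Kotlin


// Sketch: requesting framing in the prompt itself, since explicit
// aspect_ratio / image_size settings aren't supported yet.
// Assumes `model` is an image-capable `GenerativeModel` (see earlier examples).
val prompt = """
    Generate an image of the Eiffel Tower with fireworks in the background.
    Render the image in a wide 16:9 landscape aspect ratio.
    """.trimIndent()

val response = model.generateContent(prompt)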

The following are limitations and best practices for image output from a Gemini model.

  • Gemini image-generation models support the following:

    • Generating PNG images with these maximum dimensions:
      • gemini-3-pro-image-preview and gemini-3.1-flash-image-preview: 4K
      • gemini-2.5-flash-image: 1024 px
    • Generating and editing images of people.
    • Using safety filters that provide a flexible and less restrictive user experience.
  • Gemini image-generation models do not support the following:

    • Including audio or video inputs.
    • Generating images only.
      The models always return both text and images, so you must include responseModalities: ["TEXT", "IMAGE"] in the model's configuration.
  • For best performance, use the following languages: en, es-mx, ja-jp, zh-cn, hi-in.

  • Image generation may not always trigger. Here are some known issues:

    • The model may output text only.
      Try explicitly asking for image outputs (for example, "generate an image", "provide images as you go along", "update the image").

    • The model may stop generating partway through.
      Try again or try a different prompt.

    • The model may generate text as an image.
      Try explicitly asking for text outputs. For example, "generate narrative text along with illustrations".

  • When generating text for an image, Gemini works best if you first generate the text and then ask for an image that contains the text, as in the sketch below.
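
The following is a minimal Kotlin sketch of this two-step pattern using multi-turn chat. The prompts are illustrative, and it assumes an image-capable `model` instance configured as shown earlier on this page.

Kotlin


// Sketch: generate the text first, then ask for an image that renders it.
val chat = model.startChat()

// Step 1: generate the text you want rendered in the image.
val slogan = chat.sendMessage("Write a short, punchy slogan for a coffee shop.").text

// Step 2: ask for an image that includes the generated text.
val response = chat.sendMessage(
    "Generate a poster of a cozy coffee shop storefront with the slogan \"$slogan\" on the sign."
)
val posterAsBitmap = response
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image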