[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["缺少我需要的資訊","missingTheInformationINeed","thumb-down"],["過於複雜/步驟過多","tooComplicatedTooManySteps","thumb-down"],["過時","outOfDate","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["示例/程式碼問題","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-10 (世界標準時間)。"],[],[],null,["\u003cbr /\u003e\n\n\n| **Preview** : Using the Firebase AI Logic SDKs to access Imagen models is a feature that's in Preview, which means that it isn't subject to any SLA or deprecation policy and could change in backwards-incompatible ways.\n|\n| Editing with Imagen is only supported if you're using the\n| Vertex AI Gemini API. It's also currently only supported for\n| Android and Flutter apps. Support for other platforms is coming later in the\n| year.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\n|----------------------------------------------------------------------------|\n| *Only available when using the Vertex AI Gemini API as your API provider.* |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\nThe Firebase AI Logic SDKs give you access to the\nImagen models (via the\n[Imagen API](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/imagen-api))\nso that you can edit images using either:\n\n- [**Mask-based editing**](#mask-based-editing), like inserting and removing\n objects, expanding image content beyond original borders, and replacing\n backgrounds\n\n- [**Customization**](#customization) options based on ***style*** (like\n pattern, texture, or artist style), ***subject*** (like product, person, or\n animal), or ***control*** (like a hand-drawn sketch).\n\nThis page describes each editing option at a high level. Each option has its\nown separate page with more details and code samples.\n\n\nModels that support this capability\n\n\nImagen offers image editing through its `capability`\nmodel:\n\n- `imagen-3.0-capability-001`\n\n\nNote that for Imagen models, the `global` location is\n***not*** supported.\n\n\u003cbr /\u003e\n\nMask-based editing\n\n**Mask-based editing** lets you make localized, precise changes to an image. The\nmodel makes changes exclusively within a defined *masked area* of the image. A\n*mask* is a digital overlay defining the specific area you want to edit. The\nmasked area can either be auto-detected and created by the model or be defined\nin a masked image that you provide. Depending on the use case, the model may\nrequire a text prompt to know what changes to make.\n\nHere are the common use cases for mask-based editing:\n\n- [Insert new objects into an image](#insert-objects)\n- [Remove unwanted objects from an image](#remove-objects)\n- [Expand an image's content beyond its original borders](#expand-images)\n- [Replace the background of an image](#replace-background)\n\nInsert objects (inpainting)\n\nYou can use inpainting to\n[insert objects](/docs/ai-logic/edit-images-imagen-insert-objects)\ninto an image.\n\n\n**How it works**: You provide an original image and a\ncorresponding masked image --- either auto-generated or provided by you --- that\ndefines a mask over an area where you want to add new content. You also\nprovide a text prompt describing what you want to add. 
### Remove objects (inpainting)

You can use *inpainting* to [remove objects](/docs/ai-logic/edit-images-imagen-remove-objects) from an image.

**How it works**: You provide an original image and a corresponding mask image (either auto-generated or provided by you) that covers the object or subject you want to remove. You can optionally provide a text prompt describing what to remove, or the model can intelligently detect which object to remove. The model then removes the object and fills in the area with new, contextually appropriate content.

For example, you can mask a ball and replace it with a blank wall or a grassy field.

### Expand an image beyond its original borders (outpainting)

You can use *outpainting* to [expand an image beyond its original borders](/docs/ai-logic/edit-images-imagen-expand-images).

**How it works**: You provide an original image and a corresponding mask image (either auto-generated or provided by you) that defines the new, expanded area. You can optionally provide a text prompt describing what you want in the expanded area, or the model can intelligently decide what will logically continue the existing scene. The model generates the new content and fills in the masked area.

For example, you can change an image's aspect ratio or add more background context.

### Replace the background

You can [replace the background](/docs/ai-logic/edit-images-imagen-replace-background) of an image.

**How it works**: You provide an original image and a corresponding mask image that covers the background, either generated through automatic background detection or provided by you. You also provide a text prompt describing what you want to change. The model then generates and applies a new background.

For example, you can change the setting around a subject or object without affecting the foreground (for example, in a product image).

## Customization

**Customization** lets you edit or generate images using text prompts and reference images that guide the model to generate a new image based on a specified [style](#style-customization), [subject](#subject-customization) (like a product, person, or animal), or [control](#controlled-customization) image.

### Customize based on a style

You can [edit or generate images based on a specified *style*](/docs/ai-logic/edit-images-imagen-style-customization).

**How it works**: You provide a text prompt and at least one reference image that shows a specific style (like a pattern, texture, or design style). The model uses these inputs to generate a new image based on the style of the reference images.

For example, you can generate a new image of a kitchen based on an image from a popular retail catalog that you provide.
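As a rough illustration, a style customization call might look like the following Kotlin sketch. The `ImagenStyleReference` type and the `[1]` prompt convention for pointing at a reference ID are assumptions based on the preview Imagen customization surface; check the style-customization page above for the authoritative sample.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend
import com.google.firebase.ai.type.ImagenStyleReference
import com.google.firebase.ai.type.PublicPreviewAPI
import com.google.firebase.ai.type.toImagenInlineImage

// Minimal sketch of style customization (preview API; names are assumptions).
@OptIn(PublicPreviewAPI::class)
suspend fun generateKitchenInCatalogStyle(catalogImage: Bitmap): Bitmap? {
    val model = Firebase.ai(backend = GenerativeBackend.vertexAI())
        .imagenModel("imagen-3.0-capability-001")

    val response = model.editImage(
        referenceImages = listOf(
            // referenceId = 1 lets the prompt refer to this image as [1].
            ImagenStyleReference(
                image = catalogImage.toImagenInlineImage(),
                referenceId = 1,
                description = "retail catalog style",
            ),
        ),
        prompt = "A kitchen in the style of [1]",
    )
    return response.images.firstOrNull()?.asBitmap()
}
```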
### Customize based on a subject

You can [edit or generate images based on a specified *subject*](/docs/ai-logic/edit-images-imagen-subject-customization).

**How it works**: You provide a text prompt and at least one reference image that shows a specific subject (like a product, person, or animal companion). The model uses these inputs to generate a new image based on the specified subject in the reference images.

For example, you can ask the model to apply a cartoon style to a photo of a child or change the color of a bicycle in a picture.

### Customize based on a control

You can [edit or generate images based on a specified *control*](/docs/ai-logic/edit-images-imagen-controlled-customization).

**How it works**: You provide a text prompt and at least one *control* reference image (like a drawing or a Canny edge image). The model uses these inputs to generate a new image based on the control images.

For example, you can provide the model with a drawing of a rocket ship and the moon, along with a text prompt, to create a watercolor painting based on the drawing.
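Subject and control customization follow the same call shape as style customization, with different reference types. Below is a hedged Kotlin sketch of controlled customization from a hand-drawn sketch; `ImagenControlReference` and `ImagenControlType.SCRIBBLE` are assumptions based on the preview surface and on Imagen's documented control types (scribble and Canny edge), so confirm against the controlled-customization page.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend
import com.google.firebase.ai.type.ImagenControlReference
import com.google.firebase.ai.type.ImagenControlType
import com.google.firebase.ai.type.PublicPreviewAPI
import com.google.firebase.ai.type.toImagenInlineImage

// Minimal sketch of controlled customization (preview API; names are assumptions).
@OptIn(PublicPreviewAPI::class)
suspend fun watercolorFromDrawing(drawing: Bitmap): Bitmap? {
    val model = Firebase.ai(backend = GenerativeBackend.vertexAI())
        .imagenModel("imagen-3.0-capability-001")

    val response = model.editImage(
        referenceImages = listOf(
            ImagenControlReference(
                image = drawing.toImagenInlineImage(),
                referenceId = 1,
                // SCRIBBLE for hand-drawn sketches; CANNY for edge maps.
                controlType = ImagenControlType.SCRIBBLE,
            ),
        ),
        prompt = "A watercolor painting of a rocket ship and the moon, " +
            "following the composition of [1]",
    )
    return response.images.firstOrNull()?.asBitmap()
}
```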