Understand prompt design

Every request that you send to a generative model includes a prompt. By carefully crafting that prompt, you can influence the model to generate output specific to your needs.

Prompting for Gemini models

Prompts for Gemini models can contain questions, instructions, contextual information, few-shot examples, and partial input for the model to complete or continue.

Learn about prompt design in the Gemini Developer API documentation.
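
For example, a few-shot prompt pairs a short instruction with a handful of worked examples and a partial input for the model to complete. Below is a minimal sketch using the Google Gen AI SDK for Python (`google-genai`); the model name and the sentiment-classification task are illustrative assumptions, not part of this page:

```python
# A few-shot prompt: an instruction, a few worked examples, and a partial
# input for the model to complete. Task and model name are illustrative.
from google import genai

client = genai.Client()  # reads the API key from the environment (e.g. GOOGLE_API_KEY)

prompt = """Classify the sentiment of each review as POSITIVE or NEGATIVE.

Review: "The battery lasts all day." Sentiment: POSITIVE
Review: "It broke after a week." Sentiment: NEGATIVE
Review: "Setup was quick and painless." Sentiment:"""

response = client.models.generate_content(
    model="gemini-2.0-flash",  # assumed model name; substitute any available Gemini model
    contents=prompt,
)
print(response.text)  # expected to continue the pattern, e.g. "POSITIVE"
```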

Prompting for Imagen models

For Imagen models, learn about image-specific prompting strategies and options in the Imagen documentation.
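
Effective Imagen prompts typically describe the subject, style, and composition directly in the prompt string, with output options passed as configuration. Here is a hedged sketch using the Google Gen AI SDK for Python; the model name, prompt wording, and config values are illustrative assumptions:

```python
# An Imagen request: the prompt carries subject and style; output options
# such as aspect ratio go in the config. Values here are illustrative.
from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_images(
    model="imagen-3.0-generate-002",  # assumed model name
    prompt="A watercolor painting of a lighthouse at dawn, soft pastel palette",
    config=types.GenerateImagesConfig(
        number_of_images=1,
        aspect_ratio="16:9",
    ),
)

response.generated_images[0].image.save("lighthouse.png")  # save the first image
```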

Other options to control content generation

  • Configure model parameters to control how the model generates a response. For Gemini models, these parameters include max output tokens, temperature, topK, and topP. For Imagen models, they include aspect ratio, person generation, and watermarking. (All four options in this list are combined in the sketch after the list.)
  • Use safety settings to adjust the likelihood of getting responses that may be considered harmful, including hate speech and sexually explicit content.
  • Set system instructions to steer the behavior of the model. System instructions act like a preamble that the model receives before any further instructions from the end user.
  • Pass a response schema along with the prompt to constrain the model's output to a specific structure. This feature is most commonly used when generating JSON output, but it can also be used for classification tasks (for example, when you want the model to use specific labels or tags).
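
The sketch below combines all four options in a single request using the Google Gen AI SDK for Python (`google-genai`). The model name, parameter values, system instruction, and one-field schema are illustrative assumptions, not values from this page:

```python
# One request exercising all four controls: model parameters, safety
# settings, system instructions, and a response schema. Values are illustrative.
from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.0-flash",  # assumed model name
    contents='Classify this review: "The battery died after one day."',
    config=types.GenerateContentConfig(
        # 1. Model parameters
        max_output_tokens=64,
        temperature=0.2,
        top_k=40,
        top_p=0.95,
        # 2. Safety settings
        safety_settings=[
            types.SafetySetting(
                category="HARM_CATEGORY_HATE_SPEECH",
                threshold="BLOCK_MEDIUM_AND_ABOVE",
            ),
        ],
        # 3. System instructions: a preamble the model sees before user input
        system_instruction="You are a terse product-review classifier.",
        # 4. Response schema: force JSON output with a fixed set of labels
        response_mime_type="application/json",
        response_schema={
            "type": "OBJECT",
            "properties": {
                "sentiment": {"type": "STRING", "enum": ["POSITIVE", "NEGATIVE"]},
            },
            "required": ["sentiment"],
        },
    ),
)
print(response.text)  # e.g. {"sentiment": "NEGATIVE"}
```

Constraining the output with an enum-valued schema like this is what makes classification labels reliable enough to parse programmatically.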