Understand pricing

Using Firebase AI Logic is free of charge. However, the cost of your Gemini API usage, and whether your Firebase project needs to be on the pay-as-you-go Blaze pricing plan, depend on your chosen Gemini API provider and the Firebase AI Logic features that you use.

For the Vertex AI Gemini API:

  • Pricing is largely based on the model and features that you use.

This page provides a high-level overview of pricing and billing account requirements. For more details, see the Vertex AI Gemini API pricing documentation.

Billing account requirements

To use the Vertex AI Gemini API, you must link your project to a Cloud Billing account, which requires upgrading your Firebase project to the pay-as-you-go Blaze pricing plan.

Other considerations for costs

If you're on the Blaze pricing plan, you might incur costs when you use other products in conjunction with Firebase AI Logic:

  • Other Firebase products may incur costs. For details, see the Firebase pricing page. Examples include:

    • Using some of the attestation providers supported by Firebase App Check.
    • Using Cloud Storage for Firebase to send files in your multimodal requests beyond the no-cost usage levels.
    • Using Firebase Authentication beyond the no-cost usage levels.
    • Using any of Firebase's database products beyond their no-cost usage levels.

  • AI monitoring in the Firebase console may incur costs.
    While AI monitoring in the Firebase console is itself free of charge, you may incur costs if you exceed the no-cost usage levels of the underlying Google Cloud Observability Suite products. Learn more in the Google Cloud Observability Suite pricing documentation.

Recommendations to manage costs

We recommend doing the following to help manage your costs:

  • Avoid surprise bills by monitoring your costs and usage, and by setting up budget alerts.

  • When using Gemini models (except Live API models), estimate the token count of a request with the countTokens API before sending it, and check the usageMetadata attribute on responses to see the tokens you were actually billed for.

  • Set the thinking budget (Gemini 3 and Gemini 2.5 models only) and maxOutputTokens (all Gemini models) in the model's generation configuration.

  • Enable AI monitoring to view dashboards in the Firebase console with information about your requests, including token counts.
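As a minimal sketch of the token-related recommendations above: the generationConfig field names below follow the Firebase AI Logic Web SDK's configuration shape (maxOutputTokens, thinkingConfig.thinkingBudget), and exceedsBudget is a hypothetical helper built around the usageMetadata shape reported on Gemini responses. Treat both as assumptions to verify against the SDK reference for your version.

```javascript
// Sketch only: field names assume the Firebase AI Logic Web SDK's
// GenerationConfig shape; check the SDK reference before relying on them.
const generationConfig = {
  maxOutputTokens: 1024, // hard cap on billed output tokens (all Gemini models)
  thinkingConfig: {
    thinkingBudget: 512, // cap on thinking tokens (Gemini 3 / 2.5 models only)
  },
};

// Hypothetical helper: flag responses whose reported usage exceeds a budget.
// `usageMetadata` mirrors the fields reported on Gemini responses
// (promptTokenCount, candidatesTokenCount, totalTokenCount).
function exceedsBudget(usageMetadata, maxTotalTokens) {
  return (usageMetadata?.totalTokenCount ?? 0) > maxTotalTokens;
}

// Example with a mock response's usage metadata:
const usage = {
  promptTokenCount: 200,
  candidatesTokenCount: 900,
  totalTokenCount: 1100,
};
console.log(exceedsBudget(usage, 1000)); // true: over the 1,000-token budget
```

A helper like this could feed your own logging or alerting alongside AI monitoring; the generationConfig caps are the only part enforced server-side.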