Understand pricing

Pricing, as well as whether your Firebase project needs to be on the pay-as-you-go Blaze pricing plan, depends on your chosen Gemini API provider and the Firebase AI Logic features that you use.

Using Firebase AI Logic itself is free of charge.

However, if you're on the Blaze pricing plan, you might incur costs when you use other products in conjunction with Firebase AI Logic.

  • Other Firebase products may incur costs. For details, see the Pricing page. Examples include:

    • Using some of the attestation providers supported by Firebase App Check.
    • Using Cloud Storage for Firebase to send files in your multimodal requests beyond the no-cost usage levels.
    • Using Firebase Authentication beyond the no-cost usage levels.
    • Using any of Firebase's database products beyond their no-cost usage levels.

  • AI monitoring in the Firebase console may incur costs.
    While AI monitoring in the Firebase console is itself free of charge, you may incur costs if you exceed the no-cost usage levels of the underlying Google Cloud Observability Suite products. Learn more in the Google Cloud Observability Suite pricing documentation.

  • Using your chosen Gemini API provider may incur costs. For details, see Vertex AI Gemini API pricing.

    • Pricing is largely based on the model and features that you use.

Firebase pricing plan requirements for your chosen API provider

Using the Vertex AI Gemini API requires that your project be linked to a Cloud Billing account, which means your Firebase project must be on the pay-as-you-go Blaze pricing plan.

Learn about Vertex AI Gemini API pricing in its documentation.

Recommendations to manage costs

We recommend doing the following to help manage your costs:

  • Avoid surprise bills by monitoring your costs and usage and setting up budget alerts.

  • When using Gemini models (except Live API models), estimate the token size of a request with the countTokens API before sending it, and check the usageMetadata attribute in responses to see how many tokens a request actually consumed.

  • Set the thinking budget (Gemini 3 and Gemini 2.5 models only) and maxOutputTokens (all Gemini models) in the model's configuration.

  • Enable AI monitoring to view dashboards in the Firebase console with information about your requests, including token counts.
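The token-counting and configuration recommendations above can be sketched together in JavaScript. This is an illustrative sketch, not official sample code: the `model` parameter is assumed to be a `GenerativeModel` instance obtained from the Firebase AI Logic SDK's `getGenerativeModel`, and the budget and config values (`MAX_PROMPT_TOKENS`, `maxOutputTokens: 512`, `thinkingBudget: 0`) are arbitrary example choices.

```javascript
// Sketch of a client-side cost guard. Assumptions: `model` is a GenerativeModel
// from the Firebase AI Logic SDK (e.g. getGenerativeModel(ai, {...})); all
// numeric values below are illustrative, not recommendations.

// Example generation config: maxOutputTokens caps billable output tokens on
// all Gemini models; thinkingBudget applies to Gemini 2.5 and Gemini 3 models.
const exampleGenerationConfig = {
  maxOutputTokens: 512,
  thinkingConfig: { thinkingBudget: 0 }, // 0 disables thinking where supported
};

const MAX_PROMPT_TOKENS = 1000; // illustrative per-request prompt budget

async function generateWithinBudget(model, prompt) {
  // Estimate the prompt's token size before paying for a full request.
  const { totalTokens } = await model.countTokens(prompt);
  if (totalTokens > MAX_PROMPT_TOKENS) {
    throw new Error(
      `Prompt is ${totalTokens} tokens; budget is ${MAX_PROMPT_TOKENS}`
    );
  }

  const result = await model.generateContent(prompt);

  // usageMetadata reports the tokens the request actually consumed.
  const usage = result.response.usageMetadata;
  console.log(
    `prompt tokens: ${usage.promptTokenCount}, ` +
      `output tokens: ${usage.candidatesTokenCount}`
  );
  return result.response.text();
}
```

In a real app, `exampleGenerationConfig` would be passed as the `generationConfig` option when creating the model, so every request is capped by the same limits.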