CountTokensResponse interface

Response from calling GenerativeModel.countTokens().

Signature:
export interface CountTokensResponse

Properties

| Property | Type | Description |
| --- | --- | --- |
| promptTokensDetails | ModalityTokenCount[] | The breakdown, by modality, of how many tokens are consumed by the prompt. |
| totalBillableCharacters | number | The total number of billable characters counted across all instances from the request. Only supported when using the Vertex AI Gemini API (VertexAIBackend); with the Gemini Developer API (GoogleAIBackend), this property is not supported and defaults to 0. |
| totalTokens | number | The total number of tokens counted across all instances from the request. |
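For context, a minimal sketch of how a CountTokensResponse is typically obtained with the Firebase JS SDK's firebase/vertexai entry point is shown below; the configuration object, the model name, and the prompt string are illustrative placeholders rather than part of this reference.

    import { initializeApp } from "firebase/app";
    import { getVertexAI, getGenerativeModel } from "firebase/vertexai";

    // Placeholder config; substitute your own project's values.
    const app = initializeApp({ /* firebaseConfig */ });
    const vertexAI = getVertexAI(app);
    const model = getGenerativeModel(vertexAI, { model: "gemini-1.5-flash" });

    async function countPromptTokens(): Promise<void> {
      // countTokens() resolves to a CountTokensResponse.
      const response = await model.countTokens("Why is the sky blue?");
      console.log(`Total tokens: ${response.totalTokens}`);
    }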
CountTokensResponse.promptTokensDetails
The breakdown, by modality, of how many tokens are consumed by the prompt.
Signature:
promptTokensDetails?: ModalityTokenCount[];
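As a hedged illustration (reusing the response object from the earlier sketch, and assuming each ModalityTokenCount entry exposes modality and tokenCount fields as documented for that interface), the breakdown can be inspected like this:

    // promptTokensDetails is optional, so fall back to an empty array.
    for (const detail of response.promptTokensDetails ?? []) {
      // Each entry pairs a modality (e.g. text or image) with its token count.
      console.log(`${detail.modality}: ${detail.tokenCount} tokens`);
    }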
CountTokensResponse.totalBillableCharacters
The total number of billable characters counted across all instances from the request.
This property is only supported when using the Vertex AI Gemini API (VertexAIBackend). When using the Gemini Developer API (GoogleAIBackend), this property is not supported and will default to 0.
Signature:
totalBillableCharacters?: number;
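Because the field is optional and only populated by the Vertex AI backend, callers should guard before relying on it. A minimal sketch, again reusing the response object from the earlier example:

    // Only meaningful when the model was created against VertexAIBackend;
    // with GoogleAIBackend the value is absent or 0.
    if (response.totalBillableCharacters) {
      console.log(`Billable characters: ${response.totalBillableCharacters}`);
    } else {
      console.log("Billable character count is not available for this backend.");
    }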
CountTokensResponse.totalTokens
The total number of tokens counted across all instances from the request.
Signature:
totalTokens: number;
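A common use of totalTokens is to check a prompt against a token budget before sending it. The sketch below reuses the model from the earlier example; the budget value is an illustrative placeholder, not a documented limit.

    const TOKEN_BUDGET = 8192; // hypothetical limit, for illustration only

    async function fitsBudget(prompt: string): Promise<boolean> {
      const { totalTokens } = await model.countTokens(prompt);
      if (totalTokens > TOKEN_BUDGET) {
        console.warn(`Prompt uses ${totalTokens} tokens, over the ${TOKEN_BUDGET}-token budget.`);
        return false;
      }
      return true;
    }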
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-05-20 UTC."],[],[],null,["# CountTokensResponse interface\n\nResponse from calling [GenerativeModel.countTokens()](./vertexai.generativemodel.md#generativemodelcounttokens).\n\n**Signature:** \n\n export interface CountTokensResponse \n\nProperties\n----------\n\n| Property | Type | Description |\n|---------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [promptTokensDetails](./vertexai.counttokensresponse.md#counttokensresponseprompttokensdetails) | [ModalityTokenCount](./vertexai.modalitytokencount.md#modalitytokencount_interface)\\[\\] | The breakdown, by modality, of how many tokens are consumed by the prompt. |\n| [totalBillableCharacters](./vertexai.counttokensresponse.md#counttokensresponsetotalbillablecharacters) | number | The total number of billable characters counted across all instances from the request.This property is only supported when using the Vertex AI Gemini API ([VertexAIBackend](./vertexai.vertexaibackend.md#vertexaibackend_class)). When using the Gemini Developer API ([GoogleAIBackend](./vertexai.googleaibackend.md#googleaibackend_class)), this property is not supported and will default to 0. |\n| [totalTokens](./vertexai.counttokensresponse.md#counttokensresponsetotaltokens) | number | The total number of tokens counted across all instances from the request. |\n\nCountTokensResponse.promptTokensDetails\n---------------------------------------\n\nThe breakdown, by modality, of how many tokens are consumed by the prompt.\n\n**Signature:** \n\n promptTokensDetails?: ModalityTokenCount[];\n\nCountTokensResponse.totalBillableCharacters\n-------------------------------------------\n\nThe total number of billable characters counted across all instances from the request.\n\nThis property is only supported when using the Vertex AI Gemini API ([VertexAIBackend](./vertexai.vertexaibackend.md#vertexaibackend_class)). When using the Gemini Developer API ([GoogleAIBackend](./vertexai.googleaibackend.md#googleaibackend_class)), this property is not supported and will default to 0.\n\n**Signature:** \n\n totalBillableCharacters?: number;\n\nCountTokensResponse.totalTokens\n-------------------------------\n\nThe total number of tokens counted across all instances from the request.\n\n**Signature:** \n\n totalTokens: number;"]]