URL context

With the URL context tool, you can provide additional context to the model in the form of URLs. The model can access the content at those URLs, which it uses to inform and improve its responses.

URL context offers the following benefits:

  • Extract data: Provide specific information, such as prices, names, or key findings, from articles or multiple URLs.

  • Compare information: Analyze multiple reports, articles, or PDFs to identify differences and track trends.

  • Synthesize and create content: Combine information from multiple source URLs to generate accurate summaries, blog posts, reports, or test questions.

  • Analyze code and technical content: Provide a URL to a GitHub repository or technical documentation to explain code, generate setup instructions, or answer questions.

When using the URL context tool, make sure to review the best practices and limitations.

Supported models

  • gemini-3-pro-preview
  • gemini-3-flash-preview
  • gemini-2.5-pro
  • gemini-2.5-flash
  • gemini-2.5-flash-lite

Supported languages

For information about the languages supported by Gemini models, see Supported languages.

Use the URL context tool

There are two main ways to use the URL context tool:

URL context tool only

Click your Gemini API provider to view provider-specific content and code on this page.

When you create the GenerativeModel instance, provide UrlContext as a tool. Then, include the specific URLs that you want the model to access and analyze directly in your prompt.

The following example shows how to compare two recipes from different websites:

Swift


import FirebaseAILogic

// Initialize the Gemini Developer API backend service
let ai = FirebaseAI.firebaseAI(backend: .googleAI())

// Create a `GenerativeModel` instance with a model that supports your use case
let model = ai.generativeModel(
    modelName: "GEMINI_MODEL_NAME",
    // Enable the URL context tool.
    tools: [Tool.urlContext()]
)

// Specify one or more URLs for the tool to access.
let url1 = "FIRST_RECIPE_URL"
let url2 = "SECOND_RECIPE_URL"

// Provide the URLs in the prompt sent in the request.
let prompt = "Compare the ingredients and cooking times from the recipes at \(url1) and \(url2)"

// Get and handle the model's response.
let response = try await model.generateContent(prompt)
print(response.text ?? "No text in response.")

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "GEMINI_MODEL_NAME",
    // Enable the URL context tool.
    tools = listOf(Tool.urlContext())
)

// Specify one or more URLs for the tool to access.
val url1 = "FIRST_RECIPE_URL"
val url2 = "SECOND_RECIPE_URL"

// Provide the URLs in the prompt sent in the request.
val prompt = "Compare the ingredients and cooking times from the recipes at $url1 and $url2"

// Get and handle the model's response.
val response = model.generateContent(prompt)
print(response.text)

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI())
                .generativeModel("GEMINI_MODEL_NAME",
                        null,
                        null,
                        // Enable the URL context tool.
                        List.of(Tool.urlContext(new UrlContext())));

// Use the GenerativeModelFutures Java compatibility layer which offers
// support for ListenableFuture and Publisher APIs
GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Specify one or more URLs for the tool to access.
String url1 = "FIRST_RECIPE_URL";
String url2 = "SECOND_RECIPE_URL";

// Provide the URLs in the prompt sent in the request.
String prompt = "Compare the ingredients and cooking times from the recipes at " + url1 + " and " + url2 + "";

ListenableFuture response = model.generateContent(prompt);
  Futures.addCallback(response, new FutureCallback() {
      @Override
      public void onSuccess(GenerateContentResponse result) {
          String resultText = result.getText();
          System.out.println(resultText);
      }

      @Override
      public void onFailure(Throwable t) {
          t.printStackTrace();
      }
  }, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(
  ai,
  {
    model: "GEMINI_MODEL_NAME",
    // Enable the URL context tool.
    tools: [{ urlContext: {} }]
  }
);

// Specify one or more URLs for the tool to access.
const url1 = "FIRST_RECIPE_URL"
const url2 = "SECOND_RECIPE_URL"

// Provide the URLs in the prompt sent in the request.
const prompt = `Compare the ingredients and cooking times from the recipes at ${url1} and ${url2}`

// Get and handle the model's response.
const result = await model.generateContent(prompt);
console.log(result.response.text());

Dart


import 'package:firebase_core/firebase_core.dart';
import 'package:firebase_ai/firebase_ai.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
final model = FirebaseAI.googleAI().generativeModel(
  model: 'GEMINI_MODEL_NAME',
  // Enable the URL context tool.
  tools: [
    Tool.urlContext(),
  ],
);

// Specify one or more URLs for the tool to access.
final url1 = "FIRST_RECIPE_URL";
final url2 = "SECOND_RECIPE_URL";

// Provide the URLs in the prompt sent in the request.
final prompt = "Compare the ingredients and cooking times from the recipes at $url1 and $url2";

// Get and handle the model's response.
final response = await model.generateContent([Content.text(prompt)]);
print(response.text);

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());

// Create a `GenerativeModel` instance with a model that supports your use case
var model = ai.GetGenerativeModel(
  modelName: "GEMINI_MODEL_NAME",
  // Enable the URL context tool.
  tools: new[] { new Tool(new UrlContext()) }
);

// Specify one or more URLs for the tool to access.
var url1 = "FIRST_RECIPE_URL";
var url2 = "SECOND_RECIPE_URL";

// Provide the URLs in the prompt sent in the request.
var prompt = $"Compare the ingredients and cooking times from the recipes at {url1} and {url2}";

// Get and handle the model's response.
var response = await model.GenerateContentAsync(prompt);
UnityEngine.Debug.Log(response.Text ?? "No text in response.");

Learn how to choose a model that's appropriate for your use case and app.

URL context tool combined with Grounding with Google Search

Click your Gemini API provider to view provider-specific content and code on this page.

You can enable both URL context and Grounding with Google Search at the same time. With this configuration, you can write prompts that may or may not include specific URLs.

If Grounding with Google Search is also enabled, the model may first use Google Search to find relevant information and then use the URL context tool to read the content of the search results to gain a deeper understanding. This approach works well for prompts that require both a broad search and in-depth analysis of specific pages.

Here are some example use cases:

  • You provide a URL in your prompt that helps generate part of the response. However, to generate an appropriate response, the model still needs more information about other topics, so it uses the Grounding with Google Search tool.

    Example prompt:
    Give me a three day event schedule based on YOUR_URL. Also what do I need to pack according to the weather?

  • You don't provide any URLs in your prompt at all. To generate an appropriate response, the model uses the Grounding with Google Search tool to find relevant URLs and then uses the URL context tool to analyze the content of those URLs.

    Example prompt:
    Recommend 3 beginner-level books to learn about the latest YOUR_SUBJECT.

The following example shows how to enable and use both tools, URL context and Grounding with Google Search:


Swift

import FirebaseAILogic

// Initialize the Gemini Developer API backend service
let ai = FirebaseAI.firebaseAI(backend: .googleAI())

// Create a `GenerativeModel` instance with a model that supports your use case
let model = ai.generativeModel(
    modelName: "GEMINI_MODEL_NAME",
    // Enable both the URL context tool and Google Search tool.
    tools: [
      Tool.urlContext(),
      Tool.googleSearch()
    ]
)

// Specify one or more URLs for the tool to access.
let url = "YOUR_URL"

// Provide the URLs in the prompt sent in the request.
// If the model can't generate a response using its own knowledge or the content in the specified URL,
// then the model will use the grounding with Google Search tool.
let prompt = "Give me a three day event schedule based on \(url). Also what do I need to pack according to the weather?"

// Get and handle the model's response.
let response = try await model.generateContent(prompt)
print(response.text ?? "No text in response.")

// Make sure to comply with the "Grounding with Google Search" usage requirements,
// which includes how you use and display the grounded result


Kotlin

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "GEMINI_MODEL_NAME",
    // Enable both the URL context tool and Google Search tool.
    tools = listOf(Tool.urlContext(), Tool.googleSearch())
)

// Specify one or more URLs for the tool to access.
val url = "YOUR_URL"

// Provide the URLs in the prompt sent in the request.
// If the model can't generate a response using its own knowledge or the content in the specified URL,
// then the model will use the grounding with Google Search tool.
val prompt = "Give me a three day event schedule based on $url. Also what do I need to pack according to the weather?"

// Get and handle the model's response.
val response = model.generateContent(prompt)
print(response.text)

// Make sure to comply with the "Grounding with Google Search" usage requirements,
// which includes how you use and display the grounded result


Java

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI())
                .generativeModel("GEMINI_MODEL_NAME",
                        null,
                        null,
                        // Enable both the URL context tool and Google Search tool.
                        List.of(Tool.urlContext(new UrlContext()), Tool.googleSearch(new GoogleSearch())));

// Use the GenerativeModelFutures Java compatibility layer which offers
// support for ListenableFuture and Publisher APIs
GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Specify one or more URLs for the tool to access.
String url = "YOUR_URL";

// Provide the URLs in the prompt sent in the request.
// If the model can't generate a response using its own knowledge or the content in the specified URL,
// then the model will use the grounding with Google Search tool.
String prompt = "Give me a three day event schedule based on " + url + ". Also what do I need to pack according to the weather?";

ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

// Make sure to comply with the "Grounding with Google Search" usage requirements,
// which includes how you use and display the grounded result


Web

import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(
  ai,
  {
    model: "GEMINI_MODEL_NAME",
    // Enable both the URL context tool and Google Search tool.
    tools: [{ urlContext: {} }, { googleSearch: {} }],
  }
);

// Specify one or more URLs for the tool to access.
const url = "YOUR_URL"

// Provide the URLs in the prompt sent in the request.
// If the model can't generate a response using its own knowledge or the content in the specified URL,
// then the model will use the grounding with Google Search tool.
const prompt = `Give me a three day event schedule based on ${url}. Also what do I need to pack according to the weather?`

// Get and handle the model's response.
const result = await model.generateContent(prompt);
console.log(result.response.text());

// Make sure to comply with the "Grounding with Google Search" usage requirements,
// which includes how you use and display the grounded result


Dart

import 'package:firebase_core/firebase_core.dart';
import 'package:firebase_ai/firebase_ai.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
final model = FirebaseAI.googleAI().generativeModel(
  model: 'GEMINI_MODEL_NAME',
  // Enable both the URL context tool and Google Search tool.
  tools: [
    Tool.urlContext(),
    Tool.googleSearch(),
  ],
);

// Specify one or more URLs for the tool to access.
final url = "YOUR_URL";

// Provide the URLs in the prompt sent in the request.
// If the model can't generate a response using its own knowledge or the content in the specified URL,
// then the model will use the grounding with Google Search tool.
final prompt = "Give me a three day event schedule based on $url. Also what do I need to pack according to the weather?";

final response = await model.generateContent([Content.text(prompt)]);
print(response.text);

// Make sure to comply with the "Grounding with Google Search" usage requirements,
// which includes how you use and display the grounded result


Unity

using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());

// Create a `GenerativeModel` instance with a model that supports your use case
var model = ai.GetGenerativeModel(
  modelName: "GEMINI_MODEL_NAME",
  // Enable both the URL context tool and Google Search tool.
  tools: new[] { new Tool(new GoogleSearch()), new Tool(new UrlContext()) }
);

// Specify one or more URLs for the tool to access.
var url = "YOUR_URL";

// Provide the URLs in the prompt sent in the request.
// If the model can't generate a response using its own knowledge or the content in the specified URL,
// then the model will use the grounding with Google Search tool.
var prompt = $"Give me a three day event schedule based on {url}. Also what do I need to pack according to the weather?";

// Get and handle the model's response.
var response = await model.GenerateContentAsync(prompt);
UnityEngine.Debug.Log(response.Text ?? "No text in response.");

// Make sure to comply with the "Grounding with Google Search" usage requirements,
// which includes how you use and display the grounded result

Learn how to choose a model that's appropriate for your use case and app.

How the URL context tool works

The URL context tool uses a two-step retrieval process that balances speed, cost, and access to fresh data.

Step 1: When you provide a specific URL, the tool first tries to fetch its content from an internal index, which acts as a highly optimized cache.

Step 2: If the URL isn't indexed (for example, if it's a newly published page), the tool automatically falls back to a live fetch. It accesses the URL directly to retrieve its content in real time.
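
How this plays out can be pictured with a short, purely illustrative sketch (not the service's actual implementation) in which a dictionary stands in for the internal index:

Swift


import Foundation

// Purely illustrative sketch of the two-step retrieval flow described above.
// This is NOT the service's actual implementation; the dictionary parameter
// stands in for the internal index cache.
func retrieveContent(for url: URL, indexCache: [URL: String]) async throws -> String {
    // Step 1: Try the internal index first (a highly optimized cache).
    if let cached = indexCache[url] {
        return cached
    }
    // Step 2: The URL isn't indexed (for example, a newly published page),
    // so fall back to fetching the live page directly in real time.
    let (data, _) = try await URLSession.shared.data(from: url)
    return String(decoding: data, as: UTF8.self)
}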

Best practices

  • Provide specific URLs: For best results, provide direct URLs to the content you want the model to analyze. The model only retrieves content from the URLs you provide; it doesn't retrieve anything from nested links.

  • Check accessibility: Verify that the URLs you provide don't point to pages that require a login or sit behind a paywall.

  • Use complete URLs: Provide the full URL, including the protocol (for example, https://www.example.com rather than just example.com).

Understand the response

The model's response is based on the content it retrieved from the URLs.

If the model retrieved content from the URLs, the response includes url_context_metadata. Such a response might look like the following (parts of the response have been omitted for brevity):

{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": "... \n"
          }
        ],
        "role": "model"
      },
      ...
      "url_context_metadata":
      {
          "url_metadata":
          [
            {
              "retrieved_url": "https://www.example.com",
              "url_retrieval_status": "URL_RETRIEVAL_STATUS_SUCCESS"
            },
            {
              "retrieved_url": "https://www.example.org",
              "url_retrieval_status": "URL_RETRIEVAL_STATUS_SUCCESS"
            },
          ]
        }
    }
  ]
}
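
If you want to check programmatically which URLs were retrieved, one option is to inspect this metadata directly. The following is a minimal sketch that decodes the url_context_metadata JSON shown above; the struct names are illustrative only and are not types from the Firebase AI Logic SDK.

Swift


import Foundation

// Minimal sketch for inspecting the `url_context_metadata` object shown above.
// These struct names are illustrative; they mirror the JSON fields in the example
// response and are not types from the Firebase AI Logic SDK.
struct URLMetadataEntry: Decodable {
    let retrievedUrl: String
    let urlRetrievalStatus: String

    enum CodingKeys: String, CodingKey {
        case retrievedUrl = "retrieved_url"
        case urlRetrievalStatus = "url_retrieval_status"
    }
}

struct URLContextMetadata: Decodable {
    let urlMetadata: [URLMetadataEntry]

    enum CodingKeys: String, CodingKey {
        case urlMetadata = "url_metadata"
    }
}

// Given the `url_context_metadata` object as JSON data, report which URLs
// were retrieved successfully.
func summarizeRetrievals(from jsonData: Data) throws {
    let metadata = try JSONDecoder().decode(URLContextMetadata.self, from: jsonData)
    for entry in metadata.urlMetadata {
        if entry.urlRetrievalStatus == "URL_RETRIEVAL_STATUS_SUCCESS" {
            print("Retrieved: \(entry.retrievedUrl)")
        } else {
            print("Not retrieved (\(entry.urlRetrievalStatus)): \(entry.retrievedUrl)")
        }
    }
}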

Safety checks

URLs go through a content moderation check to confirm that they meet safety standards. If a URL you provide fails this check, you'll get a url_retrieval_status of URL_RETRIEVAL_STATUS_UNSAFE.

Limitations

The URL context tool has the following limitations:

  • Combining with function calling: The URL context tool can't be used in requests that also use function calling.

  • Maximum number of URLs per request: You can include a maximum of 20 URLs per request.

  • URL content size limit: The maximum size of content retrieved from a single URL is 34 MB.

  • Freshness: The tool may not fetch the live version of a page, so the retrieved information can be stale or outdated.

  • Public accessibility of URLs: The URLs you provide must be publicly accessible on the web. The following aren't supported: paywalled content, content that requires a user login, private networks, localhost addresses (such as localhost and 127.0.0.1), and tunneling services (such as ngrok or pinggy).

Supported and unsupported content types

Supported: The tool can retrieve content from URLs with the following content types:

  • Text (text/html, application/json, text/plain, text/xml, text/css, text/javascript, text/csv, text/rtf)

  • Images (image/png, image/jpeg, image/bmp, image/webp)

  • PDF (application/pdf)

Unsupported: The tool doesn't support the following content types:

  • YouTube videos (instead, see Analyze video)

  • Video and audio files (instead, see Analyze video and Analyze audio)

  • Google Workspace files, such as Google Docs or Sheets

  • Cloud Storage URLs (if using the Vertex AI Gemini API)
    These types of URLs aren't supported at all by the Gemini Developer API.

  • Content that isn't publicly accessible, including paywalled content, content that requires a user login, private networks, localhost addresses (such as localhost and 127.0.0.1), and tunneling services (such as ngrok or pinggy)

Pricing and counting tokens for the tool

Content retrieved from URLs counts as input tokens.

You can view the prompt's token counts and tool usage in the usage_metadata object of the model output. Here's an example of that output:

'usage_metadata': {
  'candidates_token_count': 45,
  'prompt_token_count': 27,
  'prompt_tokens_details': [{'modality': <MediaModality.TEXT: 'TEXT'>,
    'token_count': 27}],
  'thoughts_token_count': 31,
  'tool_use_prompt_token_count': 10309,
  'tool_use_prompt_tokens_details': [{'modality': <MediaModality.TEXT: 'TEXT'>,
    'token_count': 10309}],
  'total_token_count': 10412
  }
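
In this example, the tool_use_prompt_token_count of 10,309 is the content retrieved from the URLs, billed as input tokens. The total_token_count is the sum of the prompt, candidates, thoughts, and tool-use prompt tokens: 27 + 45 + 31 + 10,309 = 10,412.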

Rate limits and pricing depend on the model you use. For details about pricing for the URL context tool with your chosen Gemini API provider, see the following documentation: Gemini Developer API | Vertex AI Gemini API