This page describes the capabilities of the Live API when you use it through Firebase AI Logic, including:
- Supported input modalities: streaming audio, text + audio, and video + audio
- Advanced capabilities, including mid-session updates
- A list of unsupported features, most of which are coming soon
You can also customize your implementation using various configuration options, such as adding transcription or setting the response voice.
Input modalities
This section describes how to send the different types of input to the Live API model. Native-audio models always require audio input (optionally accompanied by text or video input), and they always respond with audio output.
Streaming audio input
The most common Live API capability is bidirectional audio streaming, which means real-time streaming of both audio input and audio output.
The Live API supports the following audio formats:
- Input audio format: raw 16-bit PCM audio at 16 kHz, little-endian
- Output audio format: raw 16-bit PCM audio at 24 kHz, little-endian
- Supported MIME types: audio/x-aac, audio/flac, audio/mp3, audio/m4a, audio/mpeg, audio/mpga, audio/mp4, audio/ogg, audio/pcm, audio/wav, audio/webm
To convey the input audio sample rate, set the MIME type of each Blob that contains audio to a value like audio/pcm;rate=16000.
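For example, in the Dart streaming snippet later on this page, the sample rate can be carried in the MIME type of each InlineDataPart. The following is a minimal sketch only; the helper name is hypothetical, and it assumes your recorder emits 16-bit PCM at 16 kHz, so adjust the rate to match your source.
Dart
import 'dart:typed_data';
import 'package:firebase_ai/firebase_ai.dart';

/// Wraps raw PCM chunks so each one carries its sample rate in the MIME type.
/// Assumes the source stream emits 16-bit PCM recorded at 16 kHz.
Stream<InlineDataPart> tagWithSampleRate(Stream<Uint8List> pcmChunks) {
  return pcmChunks.map((data) => InlineDataPart('audio/pcm;rate=16000', data));
}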
Swift
To use the Live API, create a LiveModel instance and set the response modality to audio.
import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `liveModel` instance with a model that supports the Live API
let liveModel = FirebaseAI.firebaseAI(backend: .googleAI()).liveModel(
  modelName: "gemini-2.5-flash-native-audio-preview-12-2025",
  // Configure the model to respond with audio
  generationConfig: LiveGenerationConfig(
    responseModalities: [.audio]
  )
)

do {
  let session = try await liveModel.connect()

  // Load the audio file, or tap a microphone
  guard let audioFile = NSDataAsset(name: "audio.pcm") else {
    fatalError("Failed to load audio file")
  }

  // Provide the audio data
  await session.sendAudioRealtime(audioFile.data)

  for try await message in session.responses {
    if case let .content(content) = message.payload {
      content.modelTurn?.parts.forEach { part in
        if let part = part as? InlineDataPart, part.mimeType.starts(with: "audio/pcm") {
          // Handle 16-bit PCM audio data at 24 kHz
          playAudio(part.data)
        }
      }
      // Optional: close the session if you don't need to send more requests.
      if content.isTurnComplete {
        await session.close()
      }
    }
  }
} catch {
  fatalError(error.localizedDescription)
}
Kotlin
To use the Live API, create a LiveModel instance and set the response modality to AUDIO.
// Initialize the Gemini Developer API backend service
// Create a `liveModel` instance with a model that supports the Live API
val liveModel = Firebase.ai(backend = GenerativeBackend.googleAI()).liveModel(
    modelName = "gemini-2.5-flash-native-audio-preview-12-2025",
    // Configure the model to respond with audio
    generationConfig = liveGenerationConfig {
        responseModality = ResponseModality.AUDIO
    }
)

val session = liveModel.connect()

// This is the recommended approach.
// However, you can create your own recorder and handle the stream.
session.startAudioConversation()
Java
To use the Live API, create a LiveModel instance and set the response modality to AUDIO.
ExecutorService executor = Executors.newFixedThreadPool(1);

// Initialize the Gemini Developer API backend service
// Create a `liveModel` instance with a model that supports the Live API
LiveGenerativeModel lm = FirebaseAI.getInstance(GenerativeBackend.googleAI()).liveModel(
    "gemini-2.5-flash-native-audio-preview-12-2025",
    // Configure the model to respond with audio
    new LiveGenerationConfig.Builder()
        .setResponseModality(ResponseModality.AUDIO)
        .build()
);
LiveModelFutures liveModel = LiveModelFutures.from(lm);

ListenableFuture<LiveSession> sessionFuture = liveModel.connect();

Futures.addCallback(sessionFuture, new FutureCallback<LiveSession>() {
    @Override
    public void onSuccess(LiveSession ses) {
        LiveSessionFutures session = LiveSessionFutures.from(ses);
        session.startAudioConversation();
    }

    @Override
    public void onFailure(Throwable t) {
        // Handle exceptions
    }
}, executor);
Web
To use the Live API, create a LiveGenerativeModel instance and set the response modality to AUDIO.
import { initializeApp } from "firebase/app";
import { getAI, getLiveGenerativeModel, startAudioConversation, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `LiveGenerativeModel` instance with a model that supports the Live API
const liveModel = getLiveGenerativeModel(ai, {
  model: "gemini-2.5-flash-native-audio-preview-12-2025",
  // Configure the model to respond with audio
  generationConfig: {
    responseModalities: [ResponseModality.AUDIO],
  },
});

const session = await liveModel.connect();

// Start the audio conversation
const audioConversationController = await startAudioConversation(session);

// ... Later, to stop the audio conversation
// await audioConversationController.stop()
Dart
To use the Live API, create a LiveGenerativeModel instance and set the response modality to audio.
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';
import 'package:your_audio_recorder_package/your_audio_recorder_package.dart';

late LiveModelSession _session;
final _audioRecorder = YourAudioRecorder();

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `liveGenerativeModel` instance with a model that supports the Live API
final liveModel = FirebaseAI.googleAI().liveGenerativeModel(
  model: 'gemini-2.5-flash-native-audio-preview-12-2025',
  // Configure the model to respond with audio
  liveGenerationConfig: LiveGenerationConfig(
    responseModalities: [ResponseModalities.audio],
  ),
);

_session = await liveModel.connect();

final audioRecordStream = _audioRecorder.startRecordingStream();
// Map the Uint8List stream to InlineDataPart stream
final mediaChunkStream = audioRecordStream.map((data) {
  return InlineDataPart('audio/pcm', data);
});
await _session.startMediaStream(mediaChunkStream);

// In a separate thread, receive the audio response from the model
await for (final message in _session.receive()) {
  // Process the received message
}
Unity
To use the Live API, create a LiveModel instance and set the response modality to Audio.
using System.Collections;
using System.Collections.Generic;
using System.Threading.Tasks;
using UnityEngine;
using Firebase;
using Firebase.AI;

async Task SendAudioReceiveAudio() {
  // Initialize the Gemini Developer API backend service
  // Create a `LiveModel` instance with a model that supports the Live API
  var liveModel = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetLiveModel(
    modelName: "gemini-2.5-flash-native-audio-preview-12-2025",
    // Configure the model to respond with audio
    liveGenerationConfig: new LiveGenerationConfig(
      responseModalities: new[] { ResponseModality.Audio })
  );

  LiveSession session = await liveModel.ConnectAsync();

  // Start a coroutine to send audio from the Microphone
  var recordingCoroutine = StartCoroutine(SendAudio(session));

  // Start receiving the response
  await ReceiveAudio(session);
}

IEnumerator SendAudio(LiveSession liveSession) {
  string microphoneDeviceName = null;
  int recordingFrequency = 16000;
  int recordingBufferSeconds = 2;

  var recordingClip = Microphone.Start(microphoneDeviceName, true,
      recordingBufferSeconds, recordingFrequency);

  int lastSamplePosition = 0;
  while (true) {
    if (!Microphone.IsRecording(microphoneDeviceName)) {
      yield break;
    }

    int currentSamplePosition = Microphone.GetPosition(microphoneDeviceName);
    if (currentSamplePosition != lastSamplePosition) {
      // The Microphone uses a circular buffer, so we need to check if the
      // current position wrapped around to the beginning, and handle it
      // accordingly.
      int sampleCount;
      if (currentSamplePosition > lastSamplePosition) {
        sampleCount = currentSamplePosition - lastSamplePosition;
      } else {
        sampleCount = recordingClip.samples - lastSamplePosition + currentSamplePosition;
      }

      if (sampleCount > 0) {
        // Get the audio chunk
        float[] samples = new float[sampleCount];
        recordingClip.GetData(samples, lastSamplePosition);

        // Send the data, discarding the resulting Task to avoid the warning
        _ = liveSession.SendAudioAsync(samples);

        lastSamplePosition = currentSamplePosition;
      }
    }

    // Wait for a short delay before reading the next sample from the Microphone
    const float MicrophoneReadDelay = 0.5f;
    yield return new WaitForSeconds(MicrophoneReadDelay);
  }
}

Queue<float> audioBuffer = new();

async Task ReceiveAudio(LiveSession liveSession) {
  int sampleRate = 24000;
  int channelCount = 1;

  // Create a looping AudioClip to fill with the received audio data
  int bufferSamples = (int)(sampleRate * channelCount);
  AudioClip clip = AudioClip.Create("StreamingPCM", bufferSamples, channelCount,
      sampleRate, true, OnAudioRead);

  // Attach the clip to an AudioSource and start playing it
  AudioSource audioSource = GetComponent<AudioSource>();
  audioSource.clip = clip;
  audioSource.loop = true;
  audioSource.Play();

  // Start receiving the response
  await foreach (var message in liveSession.ReceiveAsync()) {
    // Process the received message
    foreach (float[] pcmData in message.AudioAsFloat) {
      lock (audioBuffer) {
        foreach (float sample in pcmData) {
          audioBuffer.Enqueue(sample);
        }
      }
    }
  }
}

// This method is called by the AudioClip to load audio data.
private void OnAudioRead(float[] data) {
  int samplesToProvide = data.Length;
  int samplesProvided = 0;

  lock (audioBuffer) {
    while (samplesProvided < samplesToProvide && audioBuffer.Count > 0) {
      data[samplesProvided] = audioBuffer.Dequeue();
      samplesProvided++;
    }
  }

  while (samplesProvided < samplesToProvide) {
    data[samplesProvided] = 0.0f;
    samplesProvided++;
  }
}
Streaming text + audio input
If needed, you can send text input along with audio input and receive streamed audio output.
Swift
To use the Live API, create a LiveModel instance and set the response modality to audio.
import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `liveModel` instance with a model that supports the Live API
let liveModel = FirebaseAI.firebaseAI(backend: .googleAI()).liveModel(
  modelName: "gemini-2.5-flash-native-audio-preview-12-2025",
  // Configure the model to respond with audio
  generationConfig: LiveGenerationConfig(
    responseModalities: [.audio]
  )
)

do {
  let session = try await liveModel.connect()

  // Provide a text prompt
  let text = "tell a short story"
  await session.sendTextRealtime(text)

  for try await message in session.responses {
    if case let .content(content) = message.payload {
      content.modelTurn?.parts.forEach { part in
        if let part = part as? InlineDataPart, part.mimeType.starts(with: "audio/pcm") {
          // Handle 16-bit PCM audio data at 24 kHz
          playAudio(part.data)
        }
      }
      // Optional: close the session if you don't need to send more requests.
      if content.isTurnComplete {
        await session.close()
      }
    }
  }
} catch {
  fatalError(error.localizedDescription)
}
Kotlin
To use the Live API, create a LiveModel instance and set the response modality to AUDIO.
// Initialize the Gemini Developer API backend service
// Create a `liveModel` instance with a model that supports the Live API
val liveModel = Firebase.ai(backend = GenerativeBackend.googleAI()).liveModel(
    modelName = "gemini-2.5-flash-native-audio-preview-12-2025",
    // Configure the model to respond with audio
    generationConfig = liveGenerationConfig {
        responseModality = ResponseModality.AUDIO
    }
)

val session = liveModel.connect()

// Provide a text prompt
val text = "tell a short story"
session.send(text)

session.receive().collect {
    if (it.turnComplete) {
        // Optional: stop receiving if you don't need to send more requests.
        session.stopReceiving()
    }
    // Handle 16-bit PCM audio data at 24 kHz
    playAudio(it.data)
}
Java
To use the Live API, create a LiveModel instance and set the response modality to AUDIO.
ExecutorService executor = Executors.newFixedThreadPool(1);

// Initialize the Gemini Developer API backend service
// Create a `liveModel` instance with a model that supports the Live API
LiveGenerativeModel lm = FirebaseAI.getInstance(GenerativeBackend.googleAI()).liveModel(
    "gemini-2.5-flash-native-audio-preview-12-2025",
    // Configure the model to respond with audio
    new LiveGenerationConfig.Builder()
        .setResponseModality(ResponseModality.AUDIO)
        .build()
);
LiveModelFutures model = LiveModelFutures.from(lm);

ListenableFuture<LiveSession> sessionFuture = model.connect();

class LiveContentResponseSubscriber implements Subscriber<LiveContentResponse> {
    @Override
    public void onSubscribe(Subscription s) {
        s.request(Long.MAX_VALUE); // Request an unlimited number of items
    }

    @Override
    public void onNext(LiveContentResponse liveContentResponse) {
        // Handle 16-bit PCM audio data at 24 kHz
        liveContentResponse.getData();
    }

    @Override
    public void onError(Throwable t) {
        System.err.println("Error: " + t.getMessage());
    }

    @Override
    public void onComplete() {
        System.out.println("Done receiving messages!");
    }
}

Futures.addCallback(sessionFuture, new FutureCallback<LiveSession>() {
    @Override
    public void onSuccess(LiveSession ses) {
        LiveSessionFutures session = LiveSessionFutures.from(ses);

        // Provide a text prompt
        String text = "tell me a short story?";
        session.send(text);

        Publisher<LiveContentResponse> publisher = session.receive();
        publisher.subscribe(new LiveContentResponseSubscriber());
    }

    @Override
    public void onFailure(Throwable t) {
        // Handle exceptions
    }
}, executor);
Web
To use the Live API, create a LiveGenerativeModel instance and set the response modality to AUDIO.
import { initializeApp } from "firebase/app";
import { getAI, getLiveGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `LiveGenerativeModel` instance with a model that supports the Live API
const liveModel = getLiveGenerativeModel(ai, {
  model: "gemini-2.5-flash-native-audio-preview-12-2025",
  // Configure the model to respond with audio
  generationConfig: {
    responseModalities: [ResponseModality.AUDIO],
  },
});

const session = await liveModel.connect();

// Provide a text prompt
const prompt = "tell a short story";
session.send(prompt);

// Handle the model's audio output
const messages = session.receive();
for await (const message of messages) {
  switch (message.type) {
    case "serverContent":
      if (message.turnComplete) {
        // TODO(developer): Handle turn completion
      } else if (message.interrupted) {
        // TODO(developer): Handle the interruption
        break;
      } else if (message.modelTurn) {
        const parts = message.modelTurn?.parts;
        parts?.forEach((part) => {
          if (part.inlineData) {
            // TODO(developer): Play the audio chunk
          }
        });
      }
      break;
    case "toolCall":
      // Ignore
    case "toolCallCancellation":
      // Ignore
  }
}
Dart
To use the Live API, create a LiveGenerativeModel instance and set the response modality to audio.
import 'dart:async';
import 'dart:typed_data';

import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'package:flutter/widgets.dart';

import 'firebase_options.dart';

late LiveModelSession _session;

Future<Stream<Uint8List>> textToAudio(String textPrompt) async {
  WidgetsFlutterBinding.ensureInitialized();
  await Firebase.initializeApp(
    options: DefaultFirebaseOptions.currentPlatform,
  );

  // Initialize the Gemini Developer API backend service
  // Create a `liveGenerativeModel` instance with a model that supports the Live API
  final liveModel = FirebaseAI.googleAI().liveGenerativeModel(
    model: 'gemini-2.5-flash-native-audio-preview-12-2025',
    // Configure the model to respond with audio
    liveGenerationConfig: LiveGenerationConfig(
      responseModalities: [ResponseModalities.audio],
    ),
  );

  _session = await liveModel.connect();

  final prompt = Content.text(textPrompt);
  await _session.send(input: prompt);

  return _session.receive().asyncMap((response) async {
    if (response is LiveServerContent && response.modelTurn?.parts != null) {
      for (final part in response.modelTurn!.parts) {
        if (part is InlineDataPart) {
          return part.bytes;
        }
      }
    }
    throw Exception('Audio data not found');
  });
}

Future<void> main() async {
  try {
    final audioStream = await textToAudio('Convert this text to audio.');
    await for (final audioData in audioStream) {
      // Process the audio data (e.g., play it using an audio player package)
      print('Received audio data: ${audioData.length} bytes');
      // Example using flutter_sound (replace with your chosen package):
      // await _flutterSoundPlayer.startPlayer(fromDataBuffer: audioData);
    }
  } catch (e) {
    print('Error: $e');
  }
}
Unity
To use the Live API, create a LiveModel instance and set the response modality to Audio.
using System.Collections.Generic;
using System.Threading.Tasks;
using UnityEngine;
using Firebase;
using Firebase.AI;

async Task SendTextReceiveAudio() {
  // Initialize the Gemini Developer API backend service
  // Create a `LiveModel` instance with a model that supports the Live API
  var liveModel = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetLiveModel(
    modelName: "gemini-2.5-flash-native-audio-preview-12-2025",
    // Configure the model to respond with audio
    liveGenerationConfig: new LiveGenerationConfig(
      responseModalities: new[] { ResponseModality.Audio })
  );

  LiveSession session = await liveModel.ConnectAsync();

  // Provide a text prompt
  var prompt = ModelContent.Text("Convert this text to audio.");
  await session.SendAsync(content: prompt, turnComplete: true);

  // Start receiving the response
  await ReceiveAudio(session);
}

Queue<float> audioBuffer = new();

async Task ReceiveAudio(LiveSession session) {
  int sampleRate = 24000;
  int channelCount = 1;

  // Create a looping AudioClip to fill with the received audio data
  int bufferSamples = (int)(sampleRate * channelCount);
  AudioClip clip = AudioClip.Create("StreamingPCM", bufferSamples, channelCount,
      sampleRate, true, OnAudioRead);

  // Attach the clip to an AudioSource and start playing it
  AudioSource audioSource = GetComponent<AudioSource>();
  audioSource.clip = clip;
  audioSource.loop = true;
  audioSource.Play();

  // Start receiving the response
  await foreach (var message in session.ReceiveAsync()) {
    // Process the received message
    foreach (float[] pcmData in message.AudioAsFloat) {
      lock (audioBuffer) {
        foreach (float sample in pcmData) {
          audioBuffer.Enqueue(sample);
        }
      }
    }
  }
}

// This method is called by the AudioClip to load audio data.
private void OnAudioRead(float[] data) {
  int samplesToProvide = data.Length;
  int samplesProvided = 0;

  lock (audioBuffer) {
    while (samplesProvided < samplesToProvide && audioBuffer.Count > 0) {
      data[samplesProvided] = audioBuffer.Dequeue();
      samplesProvided++;
    }
  }

  while (samplesProvided < samplesToProvide) {
    data[samplesProvided] = 0.0f;
    samplesProvided++;
  }
}
Note that you can also send text as an incremental content update during an active session.
Streaming video + audio input
Providing input video content gives visual context for the input audio.
The Live API expects a sequence of discrete image frames and supports video frame input at 1 frame per second (FPS).
- Recommended input: native 768x768 resolution at 1 FPS
- Supported MIME types: video/x-flv, video/quicktime, video/mpeg, video/mpegs, video/mpg, video/mp4, video/webm, video/wmv, video/3gpp
Streaming video + audio input is a more advanced implementation, so check out the sample apps to learn how to implement this capability: Swift - coming soon! | Android - sample app | Web - coming soon! | Flutter - sample app | Unity - coming soon!
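To get a feel for the 1 FPS requirement, here is a minimal Dart sketch that paces frames at one per second. The captureJpegFrame() helper is hypothetical, and sending image/jpeg parts through the same startMediaStream pattern as the audio example above is an assumption here, so treat the sample apps as the reference implementation.
Dart
import 'dart:async';
import 'dart:typed_data';

import 'package:firebase_ai/firebase_ai.dart';

// Hypothetical frame source: returns one JPEG-encoded camera frame,
// ideally downscaled to around 768x768, each time it is called.
Future<Uint8List> captureJpegFrame() async => throw UnimplementedError();

/// Emits one frame per second as an InlineDataPart, matching the 1 FPS
/// input rate described above. Feeding this stream to
/// `session.startMediaStream(...)` mirrors the Dart audio example earlier
/// on this page, but is an assumption for video input.
Stream<InlineDataPart> videoFramesAtOneFps() async* {
  while (true) {
    final frame = await captureJpegFrame();
    yield InlineDataPart('image/jpeg', frame);
    await Future<void>.delayed(const Duration(seconds: 1));
  }
}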
Advanced capabilities
The Live API models support the following advanced capabilities for mid-session updates:
- Appending incremental content updates
- Updating system instructions (only available with the Vertex AI Gemini API)
Appending incremental content updates
You can append incremental updates during an active session. Use this to send text input, establish session context, or restore session context.
For longer contexts, we recommend providing a single summary message to free up the context window for subsequent interactions (see the sketch after the per-language snippets below).
For short contexts, you can send turn-by-turn interactions to represent the exact sequence of events, as in the snippets below.
Swift
// Define initial turns (history/context).
let turns: [ModelContent] = [
  ModelContent(role: "user", parts: [TextPart("What is the capital of France?")]),
  ModelContent(role: "model", parts: [TextPart("Paris")]),
]

// Send history, keeping the conversational turn OPEN (false).
await session.sendContent(turns, turnComplete: false)

// Define the new user query.
let newTurn: [ModelContent] = [
  ModelContent(role: "user", parts: [TextPart("What is the capital of Germany?")]),
]

// Send the final query, CLOSING the turn (true) to trigger the model response.
await session.sendContent(newTurn, turnComplete: true)
Kotlin
Not yet supported for Android apps - check back soon!
Java
Not yet supported for Android apps - check back soon!
Web
const turns = [{ text: "Hello from the user!" }];

// Send history, keeping the conversational turn OPEN (false).
await session.send(
  turns,
  false // turnComplete: false
);
console.log("Sent history. Waiting for next input...");

// Define the new user query.
const newTurn = [{ text: "And what is the capital of Germany?" }];

// Send the final query, CLOSING the turn (true) to trigger the model response.
await session.send(
  newTurn,
  true // turnComplete: true
);
console.log("Sent final query. Model response expected now.");
Dart
// Define initial turns (history/context).
final List<Content> turns = [
  Content(
    "user",
    [Part.text("What is the capital of France?")],
  ),
  Content(
    "model",
    [Part.text("Paris")],
  ),
];

// Send history, keeping the conversational turn OPEN (false).
await session.send(
  input: turns,
  turnComplete: false,
);

// Define the new user query.
final List<Content> newTurn = [
  Content(
    "user",
    [Part.text("What is the capital of Germany?")],
  ),
];

// Send the final query, CLOSING the turn (true) to trigger the model response.
await session.send(
  input: newTurn,
  turnComplete: true,
);
Unity
// Define initial turns (history/context).
List<ModelContent> turns = new List<ModelContent> {
  new ModelContent("user", new ModelContent.TextPart("What is the capital of France?")),
  new ModelContent("model", new ModelContent.TextPart("Paris")),
};

// Send history, keeping the conversational turn OPEN (false).
foreach (ModelContent turn in turns) {
  await session.SendAsync(
    content: turn,
    turnComplete: false
  );
}

// Define the new user query.
ModelContent newTurn = ModelContent.Text("What is the capital of Germany?");

// Send the final query, CLOSING the turn (true) to trigger the model response.
await session.SendAsync(
  content: newTurn,
  turnComplete: true
);
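To complement the turn-by-turn snippets above, the following Dart sketch shows the single-summary approach recommended for longer contexts. The summary text is a placeholder, and the calls reuse `session` and the same send/Content pattern as the Dart snippet above.
Dart
// Minimal sketch: restore a long conversation as one summary message instead
// of replaying every turn, keeping the context window free for what follows.
// The summary text is a placeholder you would generate or store yourself.
final summary = Content(
  "user",
  [Part.text("Summary of the conversation so far: ...")],
);

// Keep the turn OPEN so the model waits for the next live input.
await session.send(
  input: [summary],
  turnComplete: false,
);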
Updating system instructions mid-session
Only available when using the Vertex AI Gemini API as your API provider.
You can update the system instructions during an active session. Use this to adapt the model's responses, for example to change the response language or adjust the tone.
To update the system instructions mid-session, send text content with the system role. The updated system instructions remain in effect for the rest of the session.
Swift
await session.sendContent(
  [ModelContent(
    role: "system",
    parts: [TextPart("new system instruction")]
  )],
  turnComplete: false
)
Kotlin
Not yet supported for Android apps - check back soon!
Java
Not yet supported for Android apps - check back soon!
Web
Not yet supported for Web apps - check back soon!
Dart
try {
  await _session.send(
    input: Content(
      'system',
      [Part.text('new system instruction')],
    ),
    turnComplete: false,
  );
} catch (e) {
  print('Failed to update system instructions: $e');
}
Unity
try {
  await session.SendAsync(
    content: new ModelContent(
      "system",
      new ModelContent.TextPart("new system instruction")
    ),
    turnComplete: false
  );
} catch (Exception e) {
  Debug.LogError($"Failed to update system instructions: {e.Message}");
}
Unsupported features
The following features aren't yet supported by Firebase AI Logic when using the Live API, but they're coming soon:
- Handling interruptions
- Session management, including resuming a session across multiple connections, extending the session duration, or compressing the context window
- Disabling and configuring voice activity detection (VAD)
- Setting the input media resolution
- Adding a thinking configuration
- Enabling affective dialog or proactive audio
- Receiving UsageMetadata in responses
The following features aren't supported by Firebase AI Logic when using the Live API, and they aren't currently planned:
- Server prompt templates
- Hybrid or on-device inference
- AI monitoring in the Firebase console
What else can you do?
- Customize your implementation using various configuration options, such as adding transcription or setting the response voice.
- Enhance your implementation by giving the model access to tools, such as function calling and grounding with Google Search. Official documentation for using tools with the Live API is coming soon.
- Learn about the limits and specifications for using the Live API, such as session duration, rate limits, and supported languages.