Even with just a basic implementation of the Live API, you can build engaging and effective interactions for your users. Optionally, you can further customize the experience with the following configuration options:
Voice and language of the response
You can have the model respond with a specific voice, and you can influence the model to respond in different languages.
Specify the response voice
The Live API uses Chirp 3 to support synthesized speech responses in HD voices. If you don't specify a response voice, the default is Puck.
To specify a response voice, set the voice name within the speechConfig object as part of the model configuration.
Swift
// ...
let liveModel = FirebaseAI.firebaseAI(backend: .googleAI()).liveModel(
  modelName: "gemini-2.5-flash-native-audio-preview-09-2025",
  // Configure the model to use a specific voice for its audio response
  generationConfig: LiveGenerationConfig(
    responseModalities: [.audio],
    speech: SpeechConfig(voiceName: "VOICE_NAME")
  )
)
// ...
Kotlin
// ...
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).liveModel(
  modelName = "gemini-2.5-flash-native-audio-preview-09-2025",
  // Configure the model to use a specific voice for its audio response
  generationConfig = liveGenerationConfig {
    responseModality = ResponseModality.AUDIO
    speechConfig = SpeechConfig(voice = Voice("VOICE_NAME"))
  }
)
// ...
Java
// ...
LiveGenerativeModel lm = FirebaseAI.getInstance(GenerativeBackend.googleAI()).liveModel(
    "gemini-2.5-flash-native-audio-preview-09-2025",
    // Configure the model to use a specific voice for its audio response
    new LiveGenerationConfig.Builder()
        .setResponseModality(ResponseModality.AUDIO)
        .setSpeechConfig(new SpeechConfig(new Voice("VOICE_NAME")))
        .build()
);
// ...
Web
// ...
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });
const liveModel = getLiveGenerativeModel(ai, {
  model: "gemini-2.5-flash-native-audio-preview-09-2025",
  // Configure the model to use a specific voice for its audio response
  generationConfig: {
    responseModalities: [ResponseModality.AUDIO],
    speechConfig: {
      voiceConfig: {
        prebuiltVoiceConfig: { voiceName: "VOICE_NAME" },
      },
    },
  },
});
// ...
Dart
// ...
final _liveModel = FirebaseAI.googleAI().liveGenerativeModel(
  model: 'gemini-2.5-flash-native-audio-preview-09-2025',
  // Configure the model to use a specific voice for its audio response
  liveGenerationConfig: LiveGenerationConfig(
    responseModalities: [ResponseModalities.audio],
    speechConfig: SpeechConfig(voiceName: 'VOICE_NAME'),
  ),
);
// ...
Unity
// ...
var liveModel = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetLiveModel(
  modelName: "gemini-2.5-flash-native-audio-preview-09-2025",
  // Configure the model to use a specific voice for its audio response
  liveGenerationConfig: new LiveGenerationConfig(
    responseModalities: new[] { ResponseModality.Audio },
    speechConfig: SpeechConfig.UsePrebuiltVoice("VOICE_NAME")
  )
);
// ...
Influence the response language
The Live API model automatically chooses an appropriate language for its responses.
If you want the model to respond in non-English languages, or always in a specific language, you can influence the model's responses using system instructions like the following examples:
Reinforce to the model that non-English languages may be appropriate:
Listen to the speaker carefully. If you detect a non-English language, respond in the language you hear from the speaker. You must respond unmistakably in the speaker's language.
Tell the model to always respond in a specific language:
RESPOND IN LANGUAGE. YOU MUST RESPOND UNMISTAKABLY IN LANGUAGE.
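For example, here's a minimal Kotlin sketch of passing the second instruction when creating the Live model. It assumes the liveModel builder accepts a systemInstruction parameter with the content { text(...) } builder, as other Firebase AI Logic model builders do; LANGUAGE is a placeholder for your target language:
Kotlin
// A sketch, not the documented API surface: verify that your SDK version
// exposes `systemInstruction` on `liveModel`.
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).liveModel(
  modelName = "gemini-2.5-flash-native-audio-preview-09-2025",
  generationConfig = liveGenerationConfig {
    responseModality = ResponseModality.AUDIO
  },
  // Replace LANGUAGE with the language you want, for example "Spanish"
  systemInstruction = content { text("RESPOND IN LANGUAGE. YOU MUST RESPOND UNMISTAKABLY IN LANGUAGE.") }
)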
Transcription for audio input and output
As part of the model's response, you can receive transcriptions of the audio input and of the model's audio response. You set these options as part of the model configuration:
- For transcription of the audio input, add inputAudioTranscription.
- For transcription of the model's audio response, add outputAudioTranscription.
Note the following:
- You can configure the model to return transcriptions of both the input and the output (as shown in the following examples), or you can configure it to return only one of them.
- Transcriptions are streamed along with the audio, so we recommend accumulating the transcription the same way you accumulate the text parts of each turn.
- The transcription language is inferred from the audio input and from the model's audio response.
Swift
// ...
let liveModel = FirebaseAI.firebaseAI(backend: .googleAI()).liveModel(
  modelName: "gemini-2.5-flash-native-audio-preview-09-2025",
  // Configure the model to return transcriptions of the audio input and output
  generationConfig: LiveGenerationConfig(
    responseModalities: [.audio],
    inputAudioTranscription: AudioTranscriptionConfig(),
    outputAudioTranscription: AudioTranscriptionConfig()
  )
)

var inputTranscript: String = ""
var outputTranscript: String = ""

do {
  let session = try await liveModel.connect()
  for try await response in session.responses {
    if case let .content(content) = response.payload {
      if let inputText = content.inputAudioTranscription?.text {
        // Handle transcription text of the audio input
        inputTranscript += inputText
      }
      if let outputText = content.outputAudioTranscription?.text {
        // Handle transcription text of the audio output
        outputTranscript += outputText
      }
      if content.isTurnComplete {
        // Log the transcripts after the current turn is complete
        print("Input audio: \(inputTranscript)")
        print("Output audio: \(outputTranscript)")
        // Reset the transcripts for the next turn
        inputTranscript = ""
        outputTranscript = ""
      }
    }
  }
} catch {
  // Handle error
}
// ...
Kotlin
// ...
val liveModel = Firebase.ai(backend = GenerativeBackend.googleAI()).liveModel(
  modelName = "gemini-2.5-flash-native-audio-preview-09-2025",
  // Configure the model to return transcriptions of the audio input and output
  generationConfig = liveGenerationConfig {
    responseModality = ResponseModality.AUDIO
    inputAudioTranscription = AudioTranscriptionConfig()
    outputAudioTranscription = AudioTranscriptionConfig()
  }
)

val liveSession = liveModel.connect()

fun handleTranscription(input: Transcription?, output: Transcription?) {
  input?.text?.let { text ->
    // Handle transcription text of the audio input
    println("Input Transcription: $text")
  }
  output?.text?.let { text ->
    // Handle transcription text of the audio output
    println("Output Transcription: $text")
  }
}

liveSession.startAudioConversation(null, ::handleTranscription)
// ...
Java
// ...
ExecutorService executor = Executors.newFixedThreadPool(1);
LiveGenerativeModel lm = FirebaseAI.getInstance(GenerativeBackend.googleAI()).liveModel(
    "gemini-2.5-flash-native-audio-preview-09-2025",
    // Configure the model to return transcriptions of the audio input and output
    new LiveGenerationConfig.Builder()
        .setResponseModality(ResponseModality.AUDIO)
        .setInputAudioTranscription(new AudioTranscriptionConfig())
        .setOutputAudioTranscription(new AudioTranscriptionConfig())
        .build()
);
LiveModelFutures liveModel = LiveModelFutures.from(lm);

ListenableFuture<LiveSessionFutures> sessionFuture = liveModel.connect();
Futures.addCallback(sessionFuture, new FutureCallback<LiveSessionFutures>() {
  @Override
  public void onSuccess(LiveSessionFutures session) {
    session.startAudioConversation((Transcription input, Transcription output) -> {
      if (input != null) {
        // Handle transcription text of the audio input
        System.out.println("Input Transcription: " + input.getText());
      }
      if (output != null) {
        // Handle transcription text of the audio output
        System.out.println("Output Transcription: " + output.getText());
      }
      return null;
    });
  }

  @Override
  public void onFailure(Throwable t) {
    // Handle exceptions
    t.printStackTrace();
  }
}, executor);
// ...
Web
// ...
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });
const liveModel = getLiveGenerativeModel(ai, {
  model: 'gemini-2.5-flash-native-audio-preview-09-2025',
  // Configure the model to return transcriptions of the audio input and output
  generationConfig: {
    responseModalities: [ResponseModality.AUDIO],
    inputAudioTranscription: {},
    outputAudioTranscription: {},
  },
});

const liveSession = await liveModel.connect();

// `data` contains the PCM audio bytes to send
liveSession.sendAudioRealtime({ data, mimeType: "audio/pcm" });

const messages = liveSession.receive();
for await (const message of messages) {
  switch (message.type) {
    case 'serverContent':
      if (message.inputTranscription) {
        // Handle transcription text of the audio input
        console.log(`Input transcription: ${message.inputTranscription.text}`);
      }
      if (message.outputTranscription) {
        // Handle transcription text of the audio output
        console.log(`Output transcription: ${message.outputTranscription.text}`);
      } else {
        // Handle other server content (modelTurn, turnComplete, interruption)
      }
      break;
    default:
      // Handle other message types (toolCall, toolCallCancellation)
  }
}
// ...
Dart
// ...
final _liveModel = FirebaseAI.googleAI().liveGenerativeModel(
  model: 'gemini-2.5-flash-native-audio-preview-09-2025',
  // Configure the model to return transcriptions of the audio input and output
  liveGenerationConfig: LiveGenerationConfig(
    responseModalities: [ResponseModalities.audio],
    inputAudioTranscription: AudioTranscriptionConfig(),
    outputAudioTranscription: AudioTranscriptionConfig(),
  ),
);

final LiveSession _session = await _liveModel.connect();

await for (final response in _session.receive()) {
  LiveServerContent message = response.message;
  if (message.inputTranscription?.text case final inputText?) {
    // Handle transcription text of the audio input
    print('Input: $inputText');
  }
  if (message.outputTranscription?.text case final outputText?) {
    // Handle transcription text of the audio output
    print('Output: $outputText');
  }
}
// ...
Unity
// ...
var liveModel = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetLiveModel(
  modelName: "gemini-2.5-flash-native-audio-preview-09-2025",
  // Configure the model to return transcriptions of the audio input and output
  liveGenerationConfig: new LiveGenerationConfig(
    responseModalities: new[] { ResponseModality.Audio },
    inputAudioTranscription: new AudioTranscriptionConfig(),
    outputAudioTranscription: new AudioTranscriptionConfig()
  )
);

try
{
  var session = await liveModel.ConnectAsync();
  var stream = session.ReceiveAsync();
  await foreach (var response in stream)
  {
    if (response.Message is LiveSessionContent sessionContent)
    {
      if (!string.IsNullOrEmpty(sessionContent.InputTranscription?.Text))
      {
        // Handle transcription text of the audio input
      }
      if (!string.IsNullOrEmpty(sessionContent.OutputTranscription?.Text))
      {
        // Handle transcription text of the audio output
      }
    }
  }
}
catch (Exception e)
{
  // Handle error
}
// ...
Voice activity detection (VAD)
The model automatically performs voice activity detection (VAD) on the continuous audio input stream. VAD is enabled by default.
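Because VAD is on by default, streaming a continuous conversation needs no extra configuration. Here's a minimal Kotlin sketch, assuming a no-argument overload of the startAudioConversation() call shown with arguments earlier on this page:
Kotlin
// ...
// With VAD enabled by default, the model detects when the speaker starts
// and stops talking in the continuous input stream; no turn-taking setup
// is needed here.
val session = liveModel.connect()
session.startAudioConversation()
// ...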
Session management
Learn about the following session-related topics:
- Advanced capabilities
- Limits related to sessions, including connection and session duration limits, session context window limits, and rate limits
Firebase AI Logic doesn't yet support the following session management features. Check back later!
- Handling interruptions
- Extending the session duration
- Resuming a session
- Maintaining context across sessions and requests
- Compressing the context window