LiveSession

@PublicPreviewAPI
class LiveSession


Represents a live WebSocket session capable of streaming content to and from the server.

Summary

Public functions

suspend Unit
close()

Closes the client session.

Boolean
isAudioConversationActive()

Indicates whether an audio conversation is currently active for this session.

Boolean
isClosed()

Indicates whether the underlying WebSocket connection is closed.

Flow<LiveServerMessage>
receive()

Receives responses from the model for both streaming and standard requests.

suspend Unit
send(content: Content)

Sends data to the model.

suspend Unit
send(text: String)

Sends text to the model.

suspend Unit
sendAudioRealtime(audio: InlineData)

Sends an audio input stream to the model, using the realtime API.

suspend Unit
sendFunctionResponse(functionList: List<FunctionResponsePart>)

Sends function calling responses to the model.

suspend Unit
sendMediaStream(mediaChunks: List<MediaData>)

This function is deprecated. Use sendAudioRealtime, sendVideoRealtime, or sendTextRealtime instead.

suspend Unit
sendTextRealtime(text: String)

Sends a text input stream to the model, using the realtime API.

suspend Unit
sendVideoRealtime(video: InlineData)

Sends a video frame to the model, using the realtime API.

suspend Unit
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

suspend Unit
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    liveAudioConversationConfig: LiveAudioConversationConfig
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

suspend Unit
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?,
    enableInterruptions: Boolean
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

suspend Unit
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?,
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?,
    enableInterruptions: Boolean
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

suspend Unit
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?,
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?,
    goAwayHandler: ((LiveServerGoAway) -> Unit)?,
    enableInterruptions: Boolean
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Unit
stopAudioConversation()

Stops the audio conversation with the model.

Unit
stopReceiving()

Stops receiving from the model.

Public functions

close

suspend fun close(): Unit

Closes the client session.

Once a LiveSession is closed, it cannot be reopened; you'll need to start a new LiveSession.

See also
stopReceiving

isAudioConversationActive

fun isAudioConversationActive(): Boolean

Indicates whether an audio conversation is currently active for this session.

isClosed

fun isClosed(): Boolean

Indicates whether the underlying WebSocket connection is closed.

receive

fun receive(): Flow<LiveServerMessage>

Receives responses from the model for both streaming and standard requests.

Call close to stop receiving responses from the model.

Returns
Flow<LiveServerMessage>

A Flow which will emit LiveServerMessage from the model.

Throws
com.google.firebase.ai.type.SessionAlreadyReceivingException

when the session is already receiving.

See also
stopReceiving
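A minimal sketch of consuming the returned flow (the surrounding function is an assumption; only receive and LiveServerMessage come from this API):

```kotlin
import com.google.firebase.ai.type.LiveServerMessage
import com.google.firebase.ai.type.LiveSession

// Sketch: collect messages from an already-connected LiveSession.
suspend fun collectResponses(session: LiveSession) {
    session.receive().collect { message: LiveServerMessage ->
        // React to each message as it arrives; a real app would
        // dispatch on the message's contents instead of printing it.
        println(message)
    }
}
```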

send

suspend fun send(content: Content): Unit

Sends data to the model.

Calling this after startAudioConversation will play the response audio immediately.

Parameters
content: Content

Client Content to be sent to the model.

send

suspend fun send(text: String): Unit

Sends text to the model.

Calling this after startAudioConversation will play the response audio immediately.

Parameters
text: String

Text to be sent to the model.
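As a sketch, the two send overloads might be used like this (the `content` builder is assumed to be available from the same package; verify against your SDK version):

```kotlin
import com.google.firebase.ai.type.LiveSession
import com.google.firebase.ai.type.content

// Sketch: send a plain text turn, then an equivalent structured Content turn.
suspend fun sendTurns(session: LiveSession) {
    session.send("What's the weather like today?")
    session.send(content { text("And what about tomorrow?") })
}
```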

sendAudioRealtime

suspend fun sendAudioRealtime(audio: InlineData): Unit

Sends an audio input stream to the model, using the realtime API.

To learn more about audio formats and the form in which they should be provided, see the docs on Supported audio formats.

Parameters
audio: InlineData

Raw audio data used to update the model on the client's conversation. For best results, send 16-bit PCM audio at 24kHz.
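A hedged sketch of feeding captured PCM chunks to the session; the `InlineData(bytes, mimeType)` constructor shape and the `"audio/pcm"` MIME string are assumptions, so check the InlineData reference before relying on them:

```kotlin
import com.google.firebase.ai.type.InlineData
import com.google.firebase.ai.type.LiveSession

// Sketch: forward one chunk of raw PCM audio to the model.
// The InlineData(bytes, mimeType) shape is an assumption.
suspend fun streamAudioChunk(session: LiveSession, pcmChunk: ByteArray) {
    session.sendAudioRealtime(InlineData(pcmChunk, "audio/pcm"))
}
```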

sendFunctionResponse

suspend fun sendFunctionResponse(functionList: List<FunctionResponsePart>): Unit

Sends function calling responses to the model.

NOTE: If you're using startAudioConversation, the method will handle sending function responses to the model for you. You do not need to call this method in that case.

Parameters
functionList: List<FunctionResponsePart>

The list of FunctionResponsePart instances indicating the function response from the client.
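A sketch of replying to a function call received via receive(); the `FunctionResponsePart(name, response)` shape and the use of a kotlinx.serialization JsonObject are assumptions, so check the FunctionResponsePart reference:

```kotlin
import com.google.firebase.ai.type.FunctionCallPart
import com.google.firebase.ai.type.FunctionResponsePart
import com.google.firebase.ai.type.LiveSession
import kotlinx.serialization.json.buildJsonObject
import kotlinx.serialization.json.put

// Sketch: answer a single function call from the model.
// A real handler would dispatch on call.name and compute a real result.
suspend fun answerFunctionCall(session: LiveSession, call: FunctionCallPart) {
    val response = buildJsonObject { put("result", "sunny") }
    session.sendFunctionResponse(listOf(FunctionResponsePart(call.name, response)))
}
```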

sendMediaStream

suspend fun sendMediaStream(mediaChunks: List<MediaData>): Unit

Streams client data to the model.

This function is deprecated. Use sendAudioRealtime, sendVideoRealtime, or sendTextRealtime instead.

Calling this after startAudioConversation will play the response audio immediately.

Parameters
mediaChunks: List<MediaData>

The list of MediaData instances representing the media data to be sent.

sendTextRealtime

suspend fun sendTextRealtime(text: String): Unit

Sends a text input stream to the model, using the realtime API.

Parameters
text: String

Text content to append to the current client's conversation.

sendVideoRealtime

suspend fun sendVideoRealtime(video: InlineData): Unit

Sends a video frame to the model, using the realtime API.

Instead of raw video data, the model expects individual frames of the video, sent as images.

If your video has audio, send it separately through sendAudioRealtime.

For better performance, frames can be sent at a lower rate than the video's native frame rate, even as low as 1 frame per second.

Parameters
video: InlineData

Encoded image data for a single frame of the video, used to update the model on the client's conversation. It must carry the corresponding IANA standard MIME type of the frame data (for example, image/png or image/jpeg).
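A sketch of sending one already-encoded camera frame; as above, the `InlineData(bytes, mimeType)` constructor shape is an assumption:

```kotlin
import com.google.firebase.ai.type.InlineData
import com.google.firebase.ai.type.LiveSession

// Sketch: send a single camera frame, already encoded as JPEG.
// The InlineData(bytes, mimeType) shape is an assumption.
suspend fun sendFrame(session: LiveSession, jpegBytes: ByteArray) {
    session.sendVideoRealtime(InlineData(jpegBytes, "image/jpeg"))
}
```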

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
suspend fun startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)? = null
): Unit

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)? = null

A callback function that is invoked whenever the model receives a function call. The FunctionResponsePart that the callback function returns will be automatically sent to the model.
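A sketch of starting a hands-free conversation with automatic function-call handling; the `FunctionResponsePart(name, response)` shape and the JsonObject payload are assumptions:

```kotlin
import com.google.firebase.ai.type.FunctionResponsePart
import com.google.firebase.ai.type.LiveSession
import kotlinx.serialization.json.buildJsonObject
import kotlinx.serialization.json.put

// Sketch: the returned FunctionResponsePart is sent back automatically.
// A real handler would dispatch on call.name instead of hardcoding a value.
suspend fun startConversation(session: LiveSession) {
    session.startAudioConversation(functionCallHandler = { call ->
        FunctionResponsePart(call.name, buildJsonObject { put("weather", "sunny") })
    })
}
```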

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
suspend fun startAudioConversation(
    liveAudioConversationConfig: LiveAudioConversationConfig
): Unit

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
liveAudioConversationConfig: LiveAudioConversationConfig

A LiveAudioConversationConfig provided by the user to control the various aspects of the conversation.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
suspend fun startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)? = null,
    enableInterruptions: Boolean = false
): Unit

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)? = null

A callback function that is invoked whenever the model receives a function call. The FunctionResponsePart that the callback function returns will be automatically sent to the model.

enableInterruptions: Boolean = false

If enabled, allows the user to speak over or interrupt the model's ongoing reply.

WARNING: The user interruption feature relies on device-specific support, and may not be consistently available.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
suspend fun startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)? = null,
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)? = null,
    enableInterruptions: Boolean = false
): Unit

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)? = null

A callback function that is invoked whenever the model receives a function call. The FunctionResponsePart that the callback function returns will be automatically sent to the model.

transcriptHandler: ((Transcription?, Transcription?) -> Unit)? = null

A callback function that is invoked whenever the model receives a transcript. The first Transcription object is the input transcription, and the second is the output transcription.

enableInterruptions: Boolean = false

If enabled, allows the user to speak over or interrupt the model's ongoing reply.

WARNING: The user interruption feature relies on device-specific support, and may not be consistently available.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
suspend fun startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)? = null,
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)? = null,
    goAwayHandler: ((LiveServerGoAway) -> Unit)? = null,
    enableInterruptions: Boolean = false
): Unit

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)? = null

A callback function that is invoked whenever the model receives a function call. The FunctionResponsePart that the callback function returns will be automatically sent to the model.

transcriptHandler: ((Transcription?, Transcription?) -> Unit)? = null

A callback function that is invoked whenever the model receives a transcript. The first Transcription object is the input transcription, and the second is the output transcription.

goAwayHandler: ((LiveServerGoAway) -> Unit)? = null

A callback function that is invoked when the server initiates a disconnect via a LiveServerGoAway message. This allows the application to handle server-initiated session termination gracefully.

enableInterruptions: Boolean = false

If enabled, allows the user to speak over or interrupt the model's ongoing reply.

WARNING: The user interruption feature relies on device-specific support, and may not be consistently available.
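A sketch of wiring up the transcript and go-away handlers from this overload; the `Transcription.text` property used for logging is an assumption, so check the Transcription reference:

```kotlin
import android.util.Log
import com.google.firebase.ai.type.LiveSession

// Sketch: log transcripts and wind down when the server announces a disconnect.
suspend fun startWithHandlers(session: LiveSession) {
    session.startAudioConversation(
        transcriptHandler = { input, output ->
            // input is the user's transcription, output is the model's.
            Log.d("Live", "user=${input?.text} model=${output?.text}")
        },
        goAwayHandler = { goAway ->
            // Server-initiated disconnect is coming; stop cleanly.
            session.stopAudioConversation()
        },
        enableInterruptions = true
    )
}
```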

stopAudioConversation

fun stopAudioConversation(): Unit

Stops the audio conversation with the model.

This only needs to be called after a previous call to startAudioConversation.

If there is no audio conversation currently active, this function does nothing.

stopReceiving

fun stopReceiving(): Unit

Stops receiving from the model.

If this function is called during an ongoing audio conversation, the model's response will not be received, and no audio will be played; the live session object will no longer receive data from the server.

To resume receiving data, you must either handle it directly using receive, or indirectly by using startAudioConversation.

See also
close