LiveSessionFutures

@PublicPreviewAPI
abstract class LiveSessionFutures


Wrapper class providing Java-compatible methods for LiveSession.

See also
LiveSession

Summary

Public companion functions

LiveSessionFutures
from(session: LiveSession)

Public functions

abstract ListenableFuture<Unit>
close()

Closes the client session.

abstract Publisher<LiveServerMessage>
receive()

Receives responses from the model for both streaming and standard requests.

abstract ListenableFuture<Unit>
send(content: Content)

Sends data to the model.

abstract ListenableFuture<Unit>
send(text: String)

Sends text to the model.

abstract ListenableFuture<Unit>
sendAudioRealtime(audio: InlineData)

Sends an audio input stream to the model, using the realtime API.

abstract ListenableFuture<Unit>
sendFunctionResponse(functionList: List<FunctionResponsePart>)

Sends function calling responses to the model.

abstract ListenableFuture<Unit>
sendMediaStream(mediaChunks: List<MediaData>)

This function is deprecated. Use sendAudioRealtime, sendVideoRealtime, or sendTextRealtime instead.

abstract ListenableFuture<Unit>
sendTextRealtime(text: String)

Sends text data to the server in realtime.

abstract ListenableFuture<Unit>
sendVideoRealtime(video: InlineData)

Sends a video input stream to the model, using the realtime API.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation()

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(enableInterruptions: Boolean)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?,
    enableInterruptions: Boolean
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?,
    enableInterruptions: Boolean
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?,
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?,
    enableInterruptions: Boolean
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
stopAudioConversation()

Stops the audio conversation with the Gemini Server.

abstract Unit
stopReceiving()

Stops receiving from the model.

Public companion functions

from

fun from(session: LiveSession): LiveSessionFutures
Returns
LiveSessionFutures

a LiveSessionFutures created around the provided LiveSession
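For Java callers, the wrapper is typically created once, right after a session is connected. A minimal sketch, assuming a LiveSession named session has already been obtained from the Kotlin API:

```java
// Wrap an existing LiveSession so Java code can use the
// ListenableFuture- and Publisher-based methods below.
LiveSessionFutures sessionFutures = LiveSessionFutures.from(session);
```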

Public functions

close

abstract fun close(): ListenableFuture<Unit>

Closes the client session.

Once a LiveSession is closed, it cannot be reopened; you'll need to start a new LiveSession.

See also
stopReceiving

receive

abstract fun receive(): Publisher<LiveServerMessage>

Receives responses from the model for both streaming and standard requests.

Call close to stop receiving responses from the model.

Returns
Publisher<LiveServerMessage>

A Publisher which will emit LiveServerMessage from the model.

Throws
com.google.firebase.ai.type.SessionAlreadyReceivingException

when the session is already receiving.

See also
stopReceiving
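Because receive returns a Reactive Streams Publisher, Java callers consume it with a Subscriber. A sketch; the handler bodies are placeholders for your own logic:

```java
sessionFutures.receive().subscribe(new Subscriber<LiveServerMessage>() {
    @Override
    public void onSubscribe(Subscription s) {
        // Request messages; without this call, nothing is emitted.
        s.request(Long.MAX_VALUE);
    }

    @Override
    public void onNext(LiveServerMessage message) {
        // Handle each message from the model as it arrives.
    }

    @Override
    public void onError(Throwable t) {
        // Handle a session error.
    }

    @Override
    public void onComplete() {
        // The session has stopped producing messages.
    }
});
```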

send

abstract fun send(content: Content): ListenableFuture<Unit>

Sends data to the model.

Calling this after startAudioConversation will play the response audio immediately.

Parameters
content: Content

Client Content to be sent to the model.
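A sketch of sending structured content, assuming Content exposes a Java-style Builder as elsewhere in the Firebase AI API:

```java
// Build a client Content payload and send it to the model.
Content content = new Content.Builder()
        .addText("What is the capital of France?")
        .build();
ListenableFuture<Unit> sendFuture = sessionFutures.send(content);
```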

send

abstract fun send(text: String): ListenableFuture<Unit>

Sends text to the model.

Calling this after startAudioConversation will play the response audio immediately.

Parameters
text: String

Text to be sent to the model.
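Since the method returns a ListenableFuture, Java callers can attach a completion callback. A sketch using Guava's Futures helper; executor is an Executor you supply:

```java
ListenableFuture<Unit> future = sessionFutures.send("Tell me a short joke.");
Futures.addCallback(future, new FutureCallback<Unit>() {
    @Override
    public void onSuccess(Unit result) {
        // The text was handed off to the session.
    }

    @Override
    public void onFailure(Throwable t) {
        // Sending failed; inspect t for the cause.
    }
}, executor);
```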

sendAudioRealtime

abstract fun sendAudioRealtime(audio: InlineData): ListenableFuture<Unit>

Sends an audio input stream to the model, using the realtime API.

Parameters
audio: InlineData

The audio data to send.
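A sketch of streaming one chunk of microphone audio. readAudioChunk is a hypothetical helper returning raw PCM bytes, and the "audio/pcm" MIME type is an assumption about the realtime API's expected audio format:

```java
byte[] pcmChunk = readAudioChunk(); // hypothetical: one buffer of raw PCM audio
ListenableFuture<Unit> audioFuture =
        sessionFutures.sendAudioRealtime(new InlineData(pcmChunk, "audio/pcm"));
```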

sendFunctionResponse

abstract fun sendFunctionResponse(functionList: List<FunctionResponsePart>): ListenableFuture<Unit>

Sends function calling responses to the model.

Parameters
functionList: List<FunctionResponsePart>

The list of FunctionResponsePart instances indicating the function response from the client.
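A sketch of answering a previously received function call. The "getWeather" name and the resultJson payload are hypothetical, and FunctionResponsePart is assumed to take a function name plus a JSON result as in the non-live API:

```java
// resultJson: the JSON payload produced by executing the function locally.
FunctionResponsePart response = new FunctionResponsePart("getWeather", resultJson);
sessionFutures.sendFunctionResponse(Collections.singletonList(response));
```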

sendMediaStream

abstract fun sendMediaStream(mediaChunks: List<MediaData>): ListenableFuture<Unit>

Streams client data to the model.

Calling this after startAudioConversation will play the response audio immediately.

Parameters
mediaChunks: List<MediaData>

The list of MediaData instances representing the media data to be sent.

sendTextRealtime

abstract fun sendTextRealtime(text: String): ListenableFuture<Unit>

Sends text data to the server in realtime. Check https://ai.google.dev/api/live#bidigeneratecontentrealtimeinput for details about the realtime input usage.

Parameters
text: String

The text data to send.
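A sketch of pushing realtime text to the server:

```java
// Realtime input is streamed context rather than a turn-based message.
sessionFutures.sendTextRealtime("The user has switched to the settings screen.");
```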

sendVideoRealtime

abstract fun sendVideoRealtime(video: InlineData): ListenableFuture<Unit>

Sends a video input stream to the model, using the realtime API.

Parameters
video: InlineData

The video data to send. The MIME type can be either a video or an image type.
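A sketch of streaming a single video frame. captureFrameJpeg is a hypothetical helper returning a JPEG-encoded frame; the "image/jpeg" MIME type reflects that individual frames may be sent as images:

```java
byte[] frame = captureFrameJpeg(); // hypothetical: one JPEG-encoded camera frame
ListenableFuture<Unit> videoFuture =
        sessionFutures.sendVideoRealtime(new InlineData(frame, "image/jpeg"));
```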

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation.
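A minimal sketch of the conversation lifecycle, assuming the RECORD_AUDIO permission has already been granted:

```java
// Starts capturing microphone audio and playing model responses.
sessionFutures.startAudioConversation();

// ... later, when the conversation should end:
sessionFutures.stopAudioConversation();
```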

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(enableInterruptions: Boolean): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
enableInterruptions: Boolean

If enabled, allows the user to speak over or interrupt the model's ongoing reply.

WARNING: The user interruption feature relies on device-specific support, and may not be consistently available.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?
): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?

A callback function that is invoked whenever the model makes a function call.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?
): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation.

Parameters
transcriptHandler: ((Transcription?, Transcription?) -> Unit)?

A callback function that is invoked whenever a transcription is received. The first Transcription object is the input transcription, and the second is the output transcription.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?,
    enableInterruptions: Boolean
): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?

A callback function that is invoked whenever the model makes a function call.

enableInterruptions: Boolean

If enabled, allows the user to speak over or interrupt the model's ongoing reply.

WARNING: The user interruption feature relies on device-specific support, and may not be consistently available.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?,
    enableInterruptions: Boolean
): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
transcriptHandler: ((Transcription?, Transcription?) -> Unit)?

A callback function that is invoked whenever a transcription is received. The first Transcription object is the input transcription, and the second is the output transcription.

enableInterruptions: Boolean

If enabled, allows the user to speak over or interrupt the model's ongoing reply.

WARNING: The user interruption feature relies on device-specific support, and may not be consistently available.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?,
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?,
    enableInterruptions: Boolean
): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?

A callback function that is invoked whenever the model makes a function call.

transcriptHandler: ((Transcription?, Transcription?) -> Unit)?

A callback function that is invoked whenever a transcription is received. The first Transcription object is the input transcription, and the second is the output transcription.

enableInterruptions: Boolean

If enabled, allows the user to speak over or interrupt the model's ongoing reply.

WARNING: The user interruption feature relies on device-specific support, and may not be consistently available.
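Because the handler parameters are Kotlin function types (Function1/Function2 are single-method interfaces), Java callers can pass lambdas. A sketch in which handleFunctionCall and the caption-update logic are hypothetical; note that the transcript handler must return Unit.INSTANCE from Java:

```java
sessionFutures.startAudioConversation(
        // hypothetical helper: executes the call and returns a FunctionResponsePart
        functionCall -> handleFunctionCall(functionCall),
        (inputTranscript, outputTranscript) -> {
            // Either transcription may be null; update live captions if present.
            return Unit.INSTANCE;
        },
        /* enableInterruptions = */ true);
```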

stopAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun stopAudioConversation(): ListenableFuture<Unit>

Stops the audio conversation with the Gemini Server.

This only needs to be called after a previous call to startAudioConversation.

If there is no audio conversation currently active, this function does nothing.

stopReceiving

abstract fun stopReceiving(): Unit

Stops receiving from the model.

If this function is called during an ongoing audio conversation, the model's response will not be received, and no audio will be played; the live session object will no longer receive data from the server.

To resume receiving data, you must either handle it directly using receive, or indirectly by using startAudioConversation.

See also
close