FirebaseVertexAI Framework Reference

GenerativeModel

@available(iOS 15.0, macOS 11.0, *)
public final class GenerativeModel

A type that represents a remote multimodal model (like Gemini), with the ability to generate content based on various input types.

  • Generates content from String and/or image inputs, given to the model as a prompt, that are representable as one or more Parts.

    Since Parts do not specify a role, this method is intended for generating content from zero-shot or “direct” prompts. For few-shot prompts, see generateContent(_ content: @autoclosure () throws -> [ModelContent]).

    Throws

    A GenerateContentError if the request failed.

    Declaration

    Swift

    public func generateContent(_ parts: any ThrowingPartsRepresentable...)
      async throws -> GenerateContentResponse

    Parameters

parts

    The input(s) given to the model as a prompt (see ThrowingPartsRepresentable for conforming types).

    Return Value

    The content generated by the model.
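A minimal usage sketch, assuming Firebase has already been configured (`FirebaseApp.configure()`) and that `"gemini-1.5-flash"` names an available model; the model name and helper function are illustrative, not part of this API:

```swift
import FirebaseVertexAI
import UIKit

let model = VertexAI.vertexAI().generativeModel(modelName: "gemini-1.5-flash")

func describeImage(_ image: UIImage) async {
  do {
    // Text and image inputs are passed variadically as the parts of one prompt.
    let response = try await model.generateContent("Describe this image:", image)
    print(response.text ?? "No text in response")
  } catch {
    // Failures surface as GenerateContentError.
    print("Generation failed: \(error)")
  }
}
```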

  • Generates new content from input content given to the model as a prompt.

    Throws

    A GenerateContentError if the request failed.

    Declaration

    Swift

    public func generateContent(_ content: @autoclosure () throws -> [ModelContent]) async throws
      -> GenerateContentResponse

    Parameters

    content

    The input(s) given to the model as a prompt.

    Return Value

    The generated content response from the model.
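A sketch of a few-shot prompt built from `ModelContent` values, which carry an explicit role; the model name and example turns are assumptions:

```swift
import FirebaseVertexAI

let model = VertexAI.vertexAI().generativeModel(modelName: "gemini-1.5-flash")

func fewShotTranslate() async {
  // Prior user/model turns plus the new request, in order.
  let prompt: [ModelContent] = [
    ModelContent(role: "user", parts: "Translate to French: Hello"),
    ModelContent(role: "model", parts: "Bonjour"),
    ModelContent(role: "user", parts: "Translate to French: Goodbye"),
  ]
  do {
    let response = try await model.generateContent(prompt)
    print(response.text ?? "No text in response")
  } catch {
    print("Generation failed: \(error)")
  }
}
```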

  • Generates content from String and/or image inputs, given to the model as a prompt, that are representable as one or more Parts.

    Since Parts do not specify a role, this method is intended for generating content from zero-shot or “direct” prompts. For few-shot prompts, see generateContentStream(_ content: @autoclosure () throws -> [ModelContent]).

    Declaration

    Swift

    @available(macOS 12.0, *)
    public func generateContentStream(_ parts: any ThrowingPartsRepresentable...)
      -> AsyncThrowingStream<GenerateContentResponse, Error>

    Parameters

parts

    The input(s) given to the model as a prompt (see ThrowingPartsRepresentable for conforming types).

    Return Value

    A stream wrapping content generated by the model or a GenerateContentError error if an error occurred.
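A streaming sketch under the same assumptions (configured Firebase app, illustrative model name); note the call itself does not throw, while iterating the stream can:

```swift
import FirebaseVertexAI

let model = VertexAI.vertexAI().generativeModel(modelName: "gemini-1.5-flash")

func streamStory() async {
  let stream = model.generateContentStream("Write a short story about a robot.")
  do {
    // Each element is a partial GenerateContentResponse.
    for try await chunk in stream {
      if let text = chunk.text {
        print(text, terminator: "")
      }
    }
  } catch {
    // Errors thrown while iterating are GenerateContentError values.
    print("Streaming failed: \(error)")
  }
}
```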

  • Generates new content from input content given to the model as a prompt.

    Declaration

    Swift

    @available(macOS 12.0, *)
    public func generateContentStream(_ content: @autoclosure () throws -> [ModelContent])
      -> AsyncThrowingStream<GenerateContentResponse, Error>

    Parameters

    content

    The input(s) given to the model as a prompt.

    Return Value

    A stream wrapping content generated by the model or a GenerateContentError error if an error occurred.

  • Creates a new chat conversation using this model with the provided history.

    Declaration

    Swift

    public func startChat(history: [ModelContent] = []) -> Chat
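A sketch of a multi-turn conversation seeded with history; the model name and chat content are assumptions:

```swift
import FirebaseVertexAI

let model = VertexAI.vertexAI().generativeModel(modelName: "gemini-1.5-flash")

func chatExample() async {
  // Seed the Chat with prior turns; it maintains history across sendMessage calls.
  let chat = model.startChat(history: [
    ModelContent(role: "user", parts: "Hello, I'm planning a trip to Japan."),
    ModelContent(role: "model", parts: "Great! What would you like to know?"),
  ])
  do {
    let response = try await chat.sendMessage("What should I pack for spring?")
    print(response.text ?? "No text in response")
  } catch {
    print("Chat failed: \(error)")
  }
}
```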
  • Runs the model’s tokenizer on String and/or image inputs that are representable as one or more Parts.

    Since Parts do not specify a role, this method is intended for tokenizing zero-shot or “direct” prompts. For few-shot input, see countTokens(_ content: @autoclosure () throws -> [ModelContent]).

    Throws

    A CountTokensError if the tokenization request failed.

    Declaration

    Swift

    public func countTokens(_ parts: any ThrowingPartsRepresentable...) async throws
      -> CountTokensResponse

    Parameters

parts

    The input(s) given to the model as a prompt (see ThrowingPartsRepresentable for conforming types).

    Return Value

    The results of running the model’s tokenizer on the input; contains totalTokens.
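A sketch of checking a prompt's token count before sending it; the model name is an assumption:

```swift
import FirebaseVertexAI

let model = VertexAI.vertexAI().generativeModel(modelName: "gemini-1.5-flash")

func checkTokenBudget(prompt: String) async {
  do {
    let response = try await model.countTokens(prompt)
    // totalTokens is the tokenizer's count for the given input.
    print("Prompt uses \(response.totalTokens) tokens")
  } catch {
    print("Token counting failed: \(error)")
  }
}
```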

  • Runs the model’s tokenizer on the input content and returns the token count.

    Throws

    A CountTokensError if the tokenization request failed or the input content was invalid.

    Declaration

    Swift

    public func countTokens(_ content: @autoclosure () throws -> [ModelContent]) async throws
      -> CountTokensResponse

    Parameters

    content

    The input given to the model as a prompt.

    Return Value

    The results of running the model’s tokenizer on the input; contains totalTokens.