Firebase.AI.ImagenSafetySettings

Settings for controlling the aggressiveness of filtering out sensitive content.

Summary

See the Responsible AI and usage guidelines for more details.

Constructors and Destructors

ImagenSafetySettings(SafetyFilterLevel? safetyFilterLevel, PersonFilterLevel? personFilterLevel)
Initializes safety settings for the Imagen model.

Public types

PersonFilterLevel{
  BlockAll,
  AllowAdult,
  AllowAll
}
enum
A filter level controlling whether generation of images containing people or faces is allowed.
SafetyFilterLevel{
  BlockLowAndAbove,
  BlockMediumAndAbove,
  BlockOnlyHigh,
  BlockNone
}
enum
A filter level controlling how aggressively to filter sensitive content.

Public types

PersonFilterLevel

 Firebase::AI::ImagenSafetySettings::PersonFilterLevel

A filter level controlling whether generation of images containing people or faces is allowed.

See the `personGeneration` documentation for more details.

Properties
AllowAdult

Allow generation of images containing adults only; images of children are filtered out.

Important: Generation of images containing people or faces may require your use case to be reviewed and approved by Cloud support; see the Responsible AI and usage guidelines for more details.

AllowAll

Allow generation of images containing people of all ages.

Important: Generation of images containing people or faces may require your use case to be reviewed and approved; see the Responsible AI and usage guidelines for more details.

BlockAll

Disallow generation of images containing people or faces; images of people are filtered out.
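
As a short illustration, the sketch below selects a person filter level and leaves the content filter unset. Only the constructor and enum values documented on this page are used; the policy flag and the behavior when `safetyFilterLevel` is null are hypothetical assumptions.

```csharp
using Firebase.AI;

// Hypothetical app policy flag: whether images of people are allowed at all.
bool appAllowsPeople = false;

// BlockAll filters out every image containing people or faces; AllowAdult
// permits adults but filters out images of children.
ImagenSafetySettings.PersonFilterLevel personLevel = appAllowsPeople
    ? ImagenSafetySettings.PersonFilterLevel.AllowAdult
    : ImagenSafetySettings.PersonFilterLevel.BlockAll;

// Assumption: passing null for safetyFilterLevel leaves the content filter
// at the service default; see the safetySetting documentation.
var settings = new ImagenSafetySettings(
    safetyFilterLevel: null,
    personFilterLevel: personLevel);
```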

SafetyFilterLevel

 Firebase::AI::ImagenSafetySettings::SafetyFilterLevel

A filter level controlling how aggressively to filter sensitive content.

Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, violence, sexual, derogatory, and toxic). This filter level controls how aggressively to filter out potentially harmful content from responses. See the `safetySetting` documentation and the Responsible AI and usage guidelines for more details.

Properties
BlockLowAndAbove

The most aggressive filtering level; applies the strictest blocking.

BlockMediumAndAbove

Blocks some problematic prompts and responses.

BlockNone

The least aggressive filtering level; blocks very few problematic prompts and responses.

Important: Access to this feature is restricted and may require your use case to be reviewed and approved by Cloud support.

BlockOnlyHigh

Blocks only the most problematic prompts and responses, reducing the number of requests blocked by safety filters.

Important: This may increase objectionable content generated by Imagen.
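
For illustration, the sketch below contrasts the strictest and most permissive combinations; only the constructor and enum values documented on this page are used.

```csharp
using Firebase.AI;

// Strictest filtering: blocks low-severity content and above, and
// disallows any images of people.
var strict = new ImagenSafetySettings(
    ImagenSafetySettings.SafetyFilterLevel.BlockLowAndAbove,
    ImagenSafetySettings.PersonFilterLevel.BlockAll);

// Most permissive filtering: BlockNone is restricted and, like AllowAll,
// may require your use case to be reviewed and approved by Cloud support.
var permissive = new ImagenSafetySettings(
    ImagenSafetySettings.SafetyFilterLevel.BlockNone,
    ImagenSafetySettings.PersonFilterLevel.AllowAll);
```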

Public functions

ImagenSafetySettings

 Firebase::AI::ImagenSafetySettings::ImagenSafetySettings(
  SafetyFilterLevel? safetyFilterLevel,
  PersonFilterLevel? personFilterLevel
)

Initializes safety settings for the Imagen model.

Details
Parameters
  safetyFilterLevel
    A filter level controlling how aggressively to filter out sensitive content from generated images.
  personFilterLevel
    A filter level controlling whether generation of images containing people or faces is allowed.
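
A usage sketch follows. The `ImagenSafetySettings` constructor matches this page; the `FirebaseAI.DefaultInstance.GetImagenModel(...)` call, its parameter names, and the model name are assumptions based on the wider Firebase AI Logic SDKs and may differ — verify them against the `FirebaseAI` class reference.

```csharp
using Firebase.AI;

// Moderate content filtering; allow adults in generated images, but
// filter out images of children.
var safetySettings = new ImagenSafetySettings(
    ImagenSafetySettings.SafetyFilterLevel.BlockMediumAndAbove,
    ImagenSafetySettings.PersonFilterLevel.AllowAdult);

// Assumption: the SDK exposes a GetImagenModel method that accepts these
// settings, mirroring the other Firebase AI Logic SDKs. The model name is
// also illustrative. Check the FirebaseAI reference for the exact API.
var model = FirebaseAI.DefaultInstance.GetImagenModel(
    modelName: "imagen-3.0-generate-002",
    safetySettings: safetySettings);
```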