[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-05-20 UTC."],[],[],null,["# Firebase.AI.SafetySetting Struct Reference\n\nFirebase.AI.SafetySetting\n=========================\n\nA type used to specify a threshold for harmful content, beyond which the model will return a fallback response instead of generated content.\n\nSummary\n-------\n\n| ### Constructors and Destructors ||\n|---|---|\n| [SafetySetting](#struct_firebase_1_1_a_i_1_1_safety_setting_1aa814818e842cf7c9223e515dc4f88abb)`(`[HarmCategory](/docs/reference/unity/namespace/firebase/a-i#namespace_firebase_1_1_a_i_1ae7e954295da056c823c0963d6b457382)` category, `[HarmBlockThreshold](/docs/reference/unity/struct/firebase/a-i/safety-setting#struct_firebase_1_1_a_i_1_1_safety_setting_1a85fe9bacee67c2a14b20c0b12493a488)` threshold, `[HarmBlockMethod](/docs/reference/unity/struct/firebase/a-i/safety-setting#struct_firebase_1_1_a_i_1_1_safety_setting_1a236d10d84d894ee3cd989a39ceca83fc)`? method)` Initializes a new safety setting with the given category and threshold. ||\n\n| ### Public types ||\n|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------|\n| [HarmBlockMethod](#struct_firebase_1_1_a_i_1_1_safety_setting_1a236d10d84d894ee3cd989a39ceca83fc)`{` ` `[Probability](#struct_firebase_1_1_a_i_1_1_safety_setting_1a236d10d84d894ee3cd989a39ceca83fca0d2765b30694ee9f4fb7be2ae3b676dc)`,` ` `[Severity](#struct_firebase_1_1_a_i_1_1_safety_setting_1a236d10d84d894ee3cd989a39ceca83fca007cc9547ae8884ad597cd92ba505422) `}` | enum The method of computing whether the threshold has been exceeded. |\n| [HarmBlockThreshold](#struct_firebase_1_1_a_i_1_1_safety_setting_1a85fe9bacee67c2a14b20c0b12493a488)`{` ` `[LowAndAbove](#struct_firebase_1_1_a_i_1_1_safety_setting_1a85fe9bacee67c2a14b20c0b12493a488ab38533abc7d7d3bf2661d78df74e0ba7)`,` ` `[MediumAndAbove](#struct_firebase_1_1_a_i_1_1_safety_setting_1a85fe9bacee67c2a14b20c0b12493a488a4115c8b233f3f48c8716473bf12f7ceb)`,` ` `[OnlyHigh](#struct_firebase_1_1_a_i_1_1_safety_setting_1a85fe9bacee67c2a14b20c0b12493a488a0ffb341e3112a1c2b1b07867af5d09bb)`,` ` `[None](#struct_firebase_1_1_a_i_1_1_safety_setting_1a85fe9bacee67c2a14b20c0b12493a488a6adf97f83acf6453d4a6a4b1070f3754)`,` ` `[Off](#struct_firebase_1_1_a_i_1_1_safety_setting_1a85fe9bacee67c2a14b20c0b12493a488ad15305d7a4e34e02489c74a5ef542f36) `}` | enum Block at and beyond a specified threshold. 
|\n\nPublic types\n------------\n\n### HarmBlockMethod\n\n```c#\n Firebase::AI::SafetySetting::HarmBlockMethod\n``` \nThe method of computing whether the threshold has been exceeded.\n\n| Properties ||\n|---------------|-------------------------------------------|\n| `Probability` | Use only the probability score. |\n| `Severity` | Use both probability and severity scores. |\n\n### HarmBlockThreshold\n\n```c#\n Firebase::AI::SafetySetting::HarmBlockThreshold\n``` \nBlock at and beyond a specified threshold.\n\n| Properties ||\n|------------------|-----------------------------------------------------------------------------------------------|\n| `LowAndAbove` | Content with negligible harm is allowed. |\n| `MediumAndAbove` | Content with negligible to low harm is allowed. |\n| `None` | All content is allowed regardless of harm. |\n| `Off` | All content is allowed regardless of harm, and metadata will not be included in the response. |\n| `OnlyHigh` | Content with negligible to medium harm is allowed. |\n\nPublic functions\n----------------\n\n### SafetySetting\n\n```c#\n Firebase::AI::SafetySetting::SafetySetting(\n HarmCategory category,\n HarmBlockThreshold threshold,\n HarmBlockMethod? method\n)\n``` \nInitializes a new safety setting with the given category and threshold.\n\n\u003cbr /\u003e\n\n| Details ||\n|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| Parameters | |-------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `category` | The category this safety setting should be applied to. | | `threshold` | The threshold describing what content should be blocked. | | `method` | The method of computing whether the threshold has been exceeded; if not specified, the default method is `Severity` for most models. This parameter is unused in the GoogleAI backend. | |"]]
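For illustration, here is a minimal sketch of constructing settings with the types above. The `HarmCategory` values (`Harassment`, `DangerousContent`) are assumed members of the `Firebase.AI` namespace's `HarmCategory` enum and are not defined on this page:

```c#
using Firebase.AI;

// Block harassment content rated medium harm or above, judged by
// probability score alone. HarmCategory.Harassment is an assumed
// member of the Firebase.AI.HarmCategory enum.
var strict = new SafetySetting(
    HarmCategory.Harassment,
    SafetySetting.HarmBlockThreshold.MediumAndAbove,
    SafetySetting.HarmBlockMethod.Probability);

// Pass null for method to use the default (Severity for most models);
// the method is ignored on the GoogleAI backend in any case.
var lenient = new SafetySetting(
    HarmCategory.DangerousContent,
    SafetySetting.HarmBlockThreshold.OnlyHigh,
    null);
```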
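And a hedged sketch of where these settings are typically consumed, namely when creating a generative model. `FirebaseAI.DefaultInstance`, `GetGenerativeModel`, the `safetySettings` parameter name, the model name, and the `HarmCategory` values are assumptions about the surrounding SDK, not taken from this page:

```c#
using Firebase.AI;

// Hypothetical wiring; the model-creation API shown here is assumed
// and may differ between SDK versions.
var model = FirebaseAI.DefaultInstance.GetGenerativeModel(
    modelName: "gemini-2.0-flash",  // assumed model name
    safetySettings: new[] {
        new SafetySetting(HarmCategory.HateSpeech,        // assumed category
                          SafetySetting.HarmBlockThreshold.LowAndAbove,
                          null),
        new SafetySetting(HarmCategory.SexuallyExplicit,  // assumed category
                          SafetySetting.HarmBlockThreshold.Off,
                          null),
    });
```

Per the table above, `Off` differs from `None` only in that safety metadata is also omitted from responses, so `Off` is the choice when the metadata is not wanted either.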