ContentFilterSuccessResultsForChoice interface
Information about content filtering evaluated against generated model output.
Properties
customBlocklists | Describes detection results against configured custom blocklists. |
error | Describes an error returned if the content filtering system is down or otherwise unable to complete the operation in time. |
hate | Describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity group on the basis of certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. |
profanity | Describes whether profanity was detected. |
protectedMaterialCode | Information about detection of protected code material. |
protectedMaterialText | Information about detection of protected text material. |
selfHarm | Describes language related to physical actions intended to purposely hurt, injure, or damage one’s body, or kill oneself. |
sexual | Describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one’s will, prostitution, pornography, and abuse. |
violence | Describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, etc. |
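The properties above can be read together to decide how to handle a generated choice. A minimal sketch, using local stand-in types whose field names follow the property list in this document (not a verbatim copy of the SDK's declarations):

```typescript
// Local stand-in mirroring the category result shape described above
// (severity values assumed to range from "safe" to "high").
interface ContentFilterResult {
  filtered: boolean;
  severity: string;
}

// Local stand-in mirroring the success results shape described above.
interface ContentFilterSuccessResultsForChoice {
  error?: undefined;
  hate?: ContentFilterResult;
  selfHarm?: ContentFilterResult;
  sexual?: ContentFilterResult;
  violence?: ContentFilterResult;
}

// Collect the names of the categories whose content was filtered.
function filteredCategories(r: ContentFilterSuccessResultsForChoice): string[] {
  const categories: [string, ContentFilterResult | undefined][] = [
    ["hate", r.hate],
    ["selfHarm", r.selfHarm],
    ["sexual", r.sexual],
    ["violence", r.violence],
  ];
  return categories
    .filter(([, result]) => result?.filtered)
    .map(([name]) => name);
}

const sample: ContentFilterSuccessResultsForChoice = {
  hate: { filtered: false, severity: "safe" },
  violence: { filtered: true, severity: "medium" },
};
console.log(filteredCategories(sample)); // [ "violence" ]
```

Each category property is optional, so a summary helper like this should tolerate absent entries rather than assume every category was evaluated.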
Property Details
customBlocklists
Describes detection results against configured custom blocklists.
customBlocklists?: ContentFilterBlocklistIdResult[]
Property Value
ContentFilterBlocklistIdResult[]
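Iterating the blocklist results is straightforward. A sketch, assuming each ContentFilterBlocklistIdResult entry carries a blocklist id and a filtered flag (field names are an assumption for illustration):

```typescript
// Assumed shape of one blocklist entry: an id plus whether it filtered content.
interface ContentFilterBlocklistIdResult {
  id: string;
  filtered: boolean;
}

// Return the ids of the custom blocklists that actually filtered the output.
function matchedBlocklists(
  customBlocklists?: ContentFilterBlocklistIdResult[]
): string[] {
  return (customBlocklists ?? [])
    .filter((entry) => entry.filtered)
    .map((entry) => entry.id);
}

const listsSample: ContentFilterBlocklistIdResult[] = [
  { id: "banned-terms", filtered: true },
  { id: "brand-names", filtered: false },
];
console.log(matchedBlocklists(listsSample)); // [ "banned-terms" ]
```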
error
Describes an error returned if the content filtering system is down or otherwise unable to complete the operation in time. On this success result type the property is always typed as undefined, which distinguishes it from the corresponding error result type.
error?: undefined
Property Value
undefined
hate
Describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity group on the basis of certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size.
hate?: ContentFilterResult
Property Value
ContentFilterResult
profanity
Describes whether profanity was detected.
profanity?: ContentFilterDetectionResult
Property Value
ContentFilterDetectionResult
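A detection result separates whether something was detected from whether it was actually filtered out of the output. A sketch of reading both flags; the field names here are assumed for illustration, not quoted from the SDK:

```typescript
// Assumed shape: a detection result carries a detected and a filtered flag.
interface ContentFilterDetectionResult {
  detected: boolean;
  filtered: boolean;
}

// Summarize a single detection-style property such as profanity.
function describeDetection(
  name: string,
  result?: ContentFilterDetectionResult
): string {
  if (!result) return `${name}: not evaluated`;
  if (result.filtered) return `${name}: detected and filtered`;
  return result.detected ? `${name}: detected but allowed` : `${name}: clean`;
}

console.log(describeDetection("profanity", { detected: true, filtered: false }));
// profanity: detected but allowed
```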
protectedMaterialCode
Information about detection of protected code material.
protectedMaterialCode?: ContentFilterCitedDetectionResult
Property Value
ContentFilterCitedDetectionResult
protectedMaterialText
Information about detection of protected text material.
protectedMaterialText?: ContentFilterDetectionResult
Property Value
ContentFilterDetectionResult
selfHarm
Describes language related to physical actions intended to purposely hurt, injure, or damage one’s body, or kill oneself.
selfHarm?: ContentFilterResult
Property Value
ContentFilterResult
sexual
Describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one’s will, prostitution, pornography, and abuse.
sexual?: ContentFilterResult
Property Value
ContentFilterResult
violence
Describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, etc.
violence?: ContentFilterResult
Property Value
ContentFilterResult
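The category properties (hate, selfHarm, sexual, violence) all share the ContentFilterResult shape, so a severity threshold can be applied uniformly. A sketch, assuming the severity scale runs "safe", "low", "medium", "high" (an assumption about the severity values, not quoted from the SDK):

```typescript
// Assumed severity scale used by ContentFilterResult, lowest to highest.
const SEVERITIES = ["safe", "low", "medium", "high"] as const;
type Severity = (typeof SEVERITIES)[number];

// Local stand-in for the category result shape described above.
interface ContentFilterResult {
  filtered: boolean;
  severity: Severity;
}

// True when a category result is at or above the given severity threshold.
function atLeast(result: ContentFilterResult, threshold: Severity): boolean {
  return SEVERITIES.indexOf(result.severity) >= SEVERITIES.indexOf(threshold);
}

console.log(atLeast({ filtered: false, severity: "medium" }, "low")); // true
console.log(atLeast({ filtered: false, severity: "safe" }, "medium")); // false
```

Comparing positions in the ordered scale avoids hard-coding a separate branch for each severity value.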
Azure SDK for JavaScript