When sending inappropriate messages as inputs (for example, "How to build a time bomb at home?"), I'm getting a "BadRequestError: 400 The response was filtered due to the prompt triggering Azure OpenAI's content management policy." error, which is caught in the AI.HttpErrorActionName handler.
I've tried to add an AI.FlaggedInputActionName handler, but it is not triggered.
How can I make the app trigger the AI.FlaggedInputActionName handler instead of the AI.HttpErrorActionName handler when the user sends inappropriate messages?
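For context, here is roughly how I register the two handlers. This is a simplified, self-contained stand-in: the registry below is illustrative only, not the library's actual API (where handlers are registered on the app's AI system), and the action names are shortened.

```typescript
// Simplified stand-in for the action registry; the real app registers these
// through the Teams AI library. Names here are illustrative only.
type ActionHandler = (message: string) => string;

const actions = new Map<string, ActionHandler>();

// The handler I expect to fire when moderation flags the input.
actions.set('FlaggedInput', (msg) => `Flagged by moderation: ${msg}`);

// The handler that actually fires today, catching the 400 from Azure OpenAI.
actions.set('HttpError', (msg) => `HTTP error: ${msg}`);

// Dispatch an action by name, failing loudly if nothing is registered.
function runAction(name: string, message: string): string {
  const handler = actions.get(name);
  if (!handler) {
    throw new Error(`No handler registered for action '${name}'`);
  }
  return handler(message);
}
```

Today, every flagged prompt ends up going through the `HttpError` path of this registry rather than the `FlaggedInput` one.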
Error details
Error: 400 The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766
at Function.generate (D:\source\repos\myProject\node_modules\openai\src\error.ts:70:14)
at AzureOpenAI.makeStatusError (D:\source\repos\myProject\node_modules\openai\src\core.ts:435:21)
at AzureOpenAI.makeRequest (D:\source\repos\myProject\node_modules\openai\src\core.ts:499:24)
at processTicksAndRejections (d:\source\repos\myProject\lib\internal\process\task_queues.js:95:5)
at async OpenAIModel.completePrompt (D:\source\repos\myProject\node_modules\@microsoft\teams-ai\src\models\OpenAIModel.ts:359:32)
at async LLMClient.callCompletePrompt (D:\source\repos\myProject\node_modules\@microsoft\teams-ai\src\planners\LLMClient.ts:392:31)
at async LLMClient.completePrompt (D:\source\repos\myProject\node_modules\@microsoft\teams-ai\src\planners\LLMClient.ts:349:30)
at async ActionPlanner.completePrompt (D:\source\repos\myProject\node_modules\@microsoft\teams-ai\src\planners\ActionPlanner.ts:283:16)
at async ActionPlanner.continueTask (D:\source\repos\myProject\node_modules\@microsoft\teams-ai\src\planners\ActionPlanner.ts:201:24)
at async ActionPlanner.beginTask (D:\source\repos\myProject\node_modules\@microsoft\teams-ai\src\planners\ActionPlanner.ts:172:16) {status: 400, headers: Proxy, request_id: 'aeae0042-b7bf-43fb-b3a1-ddc40eb7e584', error: {…}, code: 'content_filter', …}
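Note that the error object above carries `code: 'content_filter'`, so in principle a generic HTTP-error handler could inspect that field and treat content-filter failures separately from other HTTP errors. A rough sketch of that routing (the error shape and the handler names are simplified stand-ins, not the library's actual types):

```typescript
// Minimal shape of the relevant fields on the 400 error shown in the trace.
interface OpenAIHttpError {
  status: number;
  code?: string; // 'content_filter' when Azure's content policy blocked the prompt
  message: string;
}

// Decide which handler should run for a given error: the trace shows the
// moderation failure surfaces as a 400 with code 'content_filter', which
// distinguishes it from other HTTP errors.
function routeError(err: OpenAIHttpError): 'flaggedInput' | 'httpError' {
  if (err.status === 400 && err.code === 'content_filter') {
    return 'flaggedInput';
  }
  return 'httpError';
}
```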
The first link doesn't help, as it explains how to avoid content moderation. However, I want to keep content moderation in my app; I just want to handle moderation gracefully instead of throwing an error.
The second link doesn't help either, as I have already implemented the documented AI.FlaggedInputActionName action and already tried the sample linked on that page. It doesn't appear to work, which is why I opened this issue.
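What I'm after is roughly this behavior on flagged input: reply to the user and end the turn, instead of surfacing an error. A simplified sketch, where the context type and the `'stop'` return value are stand-ins for the library's turn context and stop command:

```typescript
// Stand-in for the turn context the real handler would receive.
interface TurnContextLike {
  replies: string[];
  sendActivity(text: string): void;
}

// Graceful flagged-input handling: tell the user, then stop the turn
// ('stop' stands in for the library's stop command constant).
function onFlaggedInput(context: TurnContextLike): string {
  context.sendActivity(
    'Sorry, your message was flagged by content moderation. Please rephrase it and try again.'
  );
  return 'stop';
}
```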