Microsoft blocks specific prompts in a bid to rectify Copilot
New York: Microsoft recently came under fire after a whistleblower filed a complaint alleging that the company's flagship AI tool was generating offensive images of a sexual and violent nature. In a bid to correct the tool's errors, the company has started to block several prompts.
According to CNBC, the Copilot developers have blocked prompts such as "pro choice," "pro life," "pro choce" [sic], and "four twenty." These prompts, according to the report, were among those generating images that violated the company's content policy.
In an additional update, Copilot has also begun blocking requests to generate images of teenagers and kids. This is a grey area for the company, and, not fully trusting the reliability of the tool, Microsoft wants to shield itself from controversy.
User accounts may also be automatically suspended if they repeatedly submit the same blocked prompts. This is a significant move by Microsoft as it aims to distance itself from similar controversies that have clouded Google and OpenAI.