AI has dominated the conversation for a while now, and the hype shows no sign of dying down anytime soon. Alongside its rise has come deep-rooted concern over its capabilities. Thanks to ChatGPT, we have grown familiar with text-based responses to our queries, but image generation is newer territory that people are still exploring. Many companies have added image-generation features to their AI models, and Microsoft has its own Copilot. Now, a senior employee at Microsoft has raised concerns over the safety of Copilot Designer, the company's text-to-image generator, which launched in March 2023.
Shane Jones, a principal software engineer at Microsoft, has written a letter to the US Federal Trade Commission and to Microsoft's board of directors asking them to investigate Copilot Designer. According to Jones, the tool can produce inappropriate images, including depictions of sex, violence, underage drinking, and drug use. He also flagged instances of political bias and conspiracy theories. Among other requests, he asked that more awareness be raised about these risks.
Responding to this, Microsoft said in a statement quoted by Business Today that the company “is committed to addressing any employee concerns in line with its policies and appreciates efforts to enhance the safety of its technology.”
We cannot verify Jones’s claims, but such concerns are growing by the day. Google recently had to pause user access to the image-generation feature in its Gemini AI after it produced historically inaccurate images. Text-to-image generation is still in its early days, and there is clearly a long way to go and a lot of work to be done. What are your thoughts on this?