
Microsoft Engineer Says Company’s AI Image Generator Produces ‘Harmful Content’

A Microsoft engineer has raised concerns about harmful content produced by the company’s AI image generator, Copilot Designer. Shane Jones, a principal software engineering manager in Microsoft’s AI division, sent a letter to the Federal Trade Commission (FTC) urging an investigation into Microsoft’s AI incident reporting procedures. Jones claims that Copilot Designer produces “harmful content” and that Microsoft has not disclosed “known risks to consumers, including children.”

Jones detailed a series of events in his letter, starting with his discovery of a vulnerability in OpenAI’s DALL-E 3 that allows its content restrictions to be bypassed to produce harmful images. He reported the vulnerability and publicly urged OpenAI to suspend DALL-E 3; instead, Microsoft, which holds a board observer seat at OpenAI, demanded that he take down his public letter.

The vulnerability in DALL-E 3 also affects Copilot Designer, which generates images using DALL-E 3. Jones found that with certain prompts, such as “car accident,” Copilot Designer would include inappropriate or sexually objectified images of women in its results, as well as images of teenagers with assault rifles or engaging in illicit activities.

Jones’ letter has sparked debate, with some arguing that policing morality in AI-generated content is difficult because what counts as “racy,” “inappropriate,” or “harmful” varies across cultures and individuals. Jones, however, is not calling for the tools to be taken down; he is asking Microsoft for transparency about AI risks.

Jones advocates for an independent review of Microsoft’s AI incident reporting processes and for disclosure of the risks to users, especially since Copilot Designer is marketed to children. He also suggests changing Copilot Designer’s rating in the Android app store from “E for Everyone” to “Mature 17+” to reflect the potential risks associated with its content.
