Microsoft Corp. said it has identified U.S. and overseas-based criminal hackers who bypassed guardrails on generative artificial intelligence tools — including the company’s Azure OpenAI services — to generate harmful content, including non-consensual intimate images of celebrities and other sexually explicit content.
The hackers scraped customer logins from public sources and used them to access generative AI services, including Azure OpenAI, the Microsoft cloud product that lets customers use OpenAI’s models, according to the company. The hackers then altered the capabilities of those AI services and sold access to other malicious groups, providing them with