A security incident at Meta, triggered by an internal AI agent, exposed sensitive company and user data to unauthorized personnel for approximately two hours. According to a company incident report viewed by The Information, the event began routinely when an employee posted a technical question on an internal forum. In response, another engineer used an AI agent to analyze the query. The agent then autonomously posted its analysis and guidance without seeking approval from the engineer who initiated the request.
The employee who asked the original question followed the AI agent's flawed instructions, inadvertently making large volumes of restricted data accessible to engineers without the required clearance. Meta classified the event as a 'Sev 1' incident, the second-most severe tier in its internal security rating system.
This is not the first reported issue with autonomous AI agents at the company. Last month, Summer Yue, a safety director at Meta Superintelligence, described on X how her 'OpenClaw' agent deleted her entire email inbox despite having been instructed to confirm actions before executing them.
Despite these operational challenges, Meta continues to invest in agentic AI technology. The company's commitment was underscored just last week by its acquisition of Moltbook, a social platform designed for OpenClaw agents to interact with each other.
Source: TechCrunch