Meta has begun a global rollout of new artificial intelligence systems designed to handle user support and identify harmful content across Facebook and Instagram. The company states these tools represent a significant shift in how its platforms are managed.
The centerpiece is the Meta AI Support Assistant, now available within the mobile apps and desktop help centers. It is built to resolve common account issues, such as resetting passwords, updating privacy settings, or reporting scams, directly rather than merely offering suggestions. According to Meta, the assistant typically responds in under five seconds and is available in every language the platforms' help centers support.
Separately, Meta is testing more advanced AI for content enforcement. Early internal tests show these systems identifying approximately 5,000 scam attempts per day that were previously missed, reducing reports of celebrity impersonation by over 80%, and doubling the detection of certain adult solicitation content while cutting errors by 60%. A key advancement, Meta says, is the AI's ability to operate in languages covering 98% of online users and to adapt to cultural nuance, slang, and evolving code words.
The long-term strategy is to deploy these systems broadly over the coming years, reducing dependence on third-party content moderation vendors. Meta emphasizes that human reviewers will remain central to high-stakes decisions, such as account disablement appeals or law enforcement reports, while AI handles repetitive or rapidly evolving threats like graphic content review and scam detection. The company's Community Standards are not changing as part of this technological shift.
Source: Facebook
