Grammarly Halts AI 'Expert Review' After Backlash, Pledges Overhaul

Grammarly has disabled and will redesign a controversial AI feature after it was found to be mimicking the writing styles of journalists and other experts without their consent. The tool, called Expert Review, suggested edits it claimed were "inspired by" the work of specific individuals, including The Verge's editor-in-chief.

The company now says it will rebuild the feature from the ground up, with a core principle being explicit permission from any expert whose style or knowledge is referenced. In a statement, Ailian Gan, Grammarly's director of product management, acknowledged the misstep: "We clearly missed the mark. We are sorry and will do things differently going forward."

The initial response, an email opt-out for writers, was deemed insufficient. CEO Shishir Mehrotra elaborated on LinkedIn, stating the goal is a system where "experts choose to participate, shape how their knowledge is represented, and control their business model."

The incident highlights a critical tension in the 2026 AI ecosystem: the ease of training models on public data versus the right of individuals to control their digital persona. For data engineers, it underscores the growing operational and ethical complexity of sourcing training data. The promise of AI agents that can emulate expert guidance remains, but Grammarly's stumble shows that building them requires more than just technical pipelines—it demands clear consent and collaboration from the people whose expertise fuels the system.

Source: The Verge