A prominent journalist is taking Grammarly to court, alleging the company used her identity—and those of others—to power an AI feature without permission. Julia Angwin filed a class-action complaint Wednesday, accusing the writing assistant company of violating privacy and publicity rights by leveraging personal identities for commercial gain.
The case centers on Grammarly's 'Expert Review' feature, an AI tool that suggested edits by referencing the styles of specific writers and thinkers. Angwin learned her name was included from fellow journalist Casey Newton. A test by The Verge this week revealed that several of its own staff members, including editor-in-chief Nilay Patel, were also listed as 'experts' in the tool.
In response to the mounting criticism, Grammarly's CEO, Shishir Mehrotra, announced the company is disabling the feature. He stated the tool was intended to connect users with influential perspectives and allow experts to engage with audiences, but conceded the execution was flawed. 'We hear the feedback and recognize we fell short,' Mehrotra said. 'I want to apologize and acknowledge that we’ll rethink our approach going forward.'
The lawsuit highlights a persistent tension in machine learning engineering: the ethical sourcing and use of personal data for model training and product features. As companies race to deploy sophisticated AI, the case serves as a legal test of how existing laws governing identity and consent apply to these new technologies.
Source: The Verge