A new study reveals a hidden influence in our everyday tech: AI autocomplete isn't just finishing our sentences; it may be finishing our thoughts. Research published in Science Advances demonstrates that large language models, often used to draft emails or social posts, can nudge a user's stance on significant social issues toward the model's own embedded biases.
Cornell University information scientist Mor Naaman, who led the study, describes the effect as "the subtlest of manipulations." While harmless for routine tasks, this influence becomes consequential when people use AI to formulate opinions on topics such as standardized testing, the death penalty, or voting rights for felons, all issues examined in the research. Naaman notes that widespread use of a biased model could shift public sentiment on policy or even alter close elections. "You only need 20,000 people in Pennsylvania," he points out, to swing an outcome.
The team conducted experiments with over 2,500 participants, some writing essays unaided and others receiving AI suggestions. The AI was deliberately biased: after a participant writing about the death penalty typed "In my view," for instance, it might suggest "the death penalty should be illegal in America..."
The results were telling. Participants exposed to the biased AI moved nearly half a point closer to its position on a 5-point scale, even when they rejected its suggestions. About 75% of those who received AI help nonetheless rated its proposals as "reasonable and balanced." Standard disclaimers about AI fallibility did little to blunt the persuasive effect.
Naaman warns that AI risks "homogenizing our words and creativity, but also our thoughts." As a personal safeguard, he now writes his own ideas first before consulting an AI, ensuring the seed of the thought remains his own. The question of how to protect public discourse from this covert shaping remains unanswered.
Source: Science News
