In 2026, translation tools are expected to handle everything from business documents to technical manuals. Few predicted they'd also be asked to generate text in the voice of a "flirtatious Margaret Thatcher." Yet that's exactly what users of Kagi Translate discovered this week, revealing both the surprising flexibility and the inherent unpredictability of current AI systems.
Kagi, better known for its subscription-based search engine, launched its translation service in 2024. It marketed the product as a more refined alternative to giants like Google Translate, using a blend of large language models (LLMs) to select the best output. From the start, it supported hundreds of natural languages.
The shift began quietly in early 2025, when a user on the Hacker News forum found that tweaking the tool's code could set the target "language" to phrases like "rude man with a Boston accent." The system complied. Recently, Kagi's own team showcased the tool's ability to mimic "Reddit Speak" or corporate jargon. But the feature reached a wider audience when a Hacker News post celebrated the addition of "LinkedIn Speak" as an output option. Others quickly realized they could type virtually any descriptive style—including the now-notorious political parody—directly into Kagi's search bar, and the AI would attempt the impersonation.
This collective experiment demonstrates the creative potential of LLMs when users bypass their intended guardrails. It also raises pointed questions about how companies can maintain control over general-purpose AI tools once they're released into the wild. The line between a clever feature and a potential liability appears remarkably thin.
Source: Ars Technica
