In March 2025, veteran developer Les Orchard published a blog post that struck a nerve. Titled 'Grief and the AI Split,' it wasn't another technical critique. It was a personal account of friendships and professional bonds fracturing over artificial intelligence. Orchard, who spent years at Mozilla and helped build the open web, described watching a community splinter over fundamental questions of consent and ownership.
The divide isn't about utility. It's about principle. On one side, builders see generative AI as a logical next step for technology. On the other, creators feel their life's work—blog posts, code, art—was taken without permission to train corporate systems. The legal battles, like The New York Times' suit against OpenAI, remain unresolved, but the personal breach is already deep.
This split has tangible effects. Open-source maintainers are rewriting licenses to forbid AI training. Artists are deploying technical countermeasures that degrade their work when it is scraped into training datasets. In spaces like Mastodon, enthusiasm for AI tools can lead to social exile.
Yet outside these circles, adoption accelerates. GitHub Copilot is a standard tool for millions. Investment continues to pour in. For many, especially newer developers, these models are just another part of the toolkit.
Orchard's grief reflects a lost shared belief: that the web was a reciprocal commons. The feeling that this contract is broken is particularly acute for those who built that commons. Their response—withdrawing work, restricting access—could reshape the open-source foundation the tech industry relies on. The industry races forward, but the builders who made it possible are asking if the cost is too high. Their uncertainty is now a permanent feature of the landscape.
Source: Webpronews