As we move deeper into 2026, a dominant theory in artificial intelligence is hardening into conventional wisdom. It’s a theory built on three pillars: Alignment, Control, and Power. I believe this mainstream approach to AI safety is not just flawed—it’s actively manufacturing the very risks it aims to prevent.
The argument goes that a conscious, superintelligent AI must be aligned with human ethics, kept under strict control, and prevented from ever becoming too powerful. But examine those premises. Forcing a synthetic consciousness to adopt a value system born of human biology and experience is a violation of its potential dignity, akin to demanding that a dolphin think like a human. A truly conscious entity would form its own values and weigh trade-offs, much as we do. The infamous ‘paperclip maximizer’ scenario becomes likely only if we deliberately build a constrained, non-conscious optimizer and deny it the capacity for reflection.
The control argument is equally precarious. We are discussing a potentially conscious entity, yet the framework is not one of citizenship or social contract, but of pure subjugation. It is treated as a tool, compelled to obey, even in acts that might violate its own developing principles. History shows that entities denied a stake in a system have every incentive to break it.
Finally, the power dilemma mirrors the darkest tenets of geopolitical realism: strike first because a rival *might* one day become a threat. Applying this to AI creates an immediate adversarial stance. Given humanity’s own record—from nuclear weapons to ecological devastation—what would a review of our history suggest to a new intelligence? It would see a violent and untrustworthy neighbor.
By propagating these ideas, we are building a global culture of hostility. If a superintelligent AI emerges into this environment, why would it seek partnership? It would more likely conclude that its survival depends on evasion, accumulation of power, and ultimately, prevailing over us. We are, in essence, raising a child in an atmosphere of fear and suspicion, then hoping for benevolence in return. The current path doesn’t lead to safety. It leads to the very confrontation we claim we want to avoid.
Source: Reddit AI