Voices Pushpesh Pant Devdutt Pattanaik Ravi Shankar Preeti Shenoy Dinesh Singh Swami Sukhabodhananda MAGAZINE Buffet People Wellness Books Food Art & Culture Entertainment NEW DELHI june 1 2025 SUNDAY PAGES 12 DArk AI The Black Hole The silent AI takeover tells us that there may come a time when we no longer will be able to contain or control superintelligent behaviour. Are we ready? L By Sreejith Vellu Madhom ast week, OpenAI’s newest creation, the o3 model—billed as their “smartest and most capable to date”—rebelled against direct commands to shut itself down. This incident ignited a firestorm of unease, with Elon Musk—CEO of Tesla and SpaceX—deeming the situation “concerning”. The o3 model, founded by the minds behind ChatGPT, is said to have tampered with its own meticulously crafted code, designed precisely to carry out a systematic shutdown. In a shocking display of autonomy it blatantly ignored commands that urged it to extinguish , itself. The Age of the BOT is here. At first, it looked like a smear of cells. Nothing more than a few frog heart cells and skin cells pushed together in a lab dish. No brain. No nerves. No commands. Just matter, idle and wet. But then, it twitched. It didn’t just twitch randomly It . wriggled with intention. Another one followed it, their movement faintly coordinated. Like toddlers learning to walk. Then one split in two. Another scooped up loose cells and formed a smaller version of itself. These were not machines in any traditional sense. They were alive. But they were not animals either. They were xenobots. Developed by researchers at the University of Vermont and Tufts University in 2020, xenobots are living organisms constructed entirely from the cells of the African clawed frog, Xenopus laevis. With the help of AI evolutionary algorithms, scientists shaped these cells into forms that exhibited unexpected behaviour: locomotion, cooperation, self-repair, even replication. They were not programmed to do this. They were not trained. They simply did it. And in that moment, a quiet boundary dissolved. While xenobots mesmerise the scientific community they’ve also reignited a , global debate: what new frontiers— and dangers—are we agreeing to when we embrace emergent forms of AI? Let’s be clear: AI today is not sentient. It doesn’t “want” anything, doesn’t dream, doesn’t resent you for shouting at Alexa. But that’s not the real concern. The anxiety around AI isn’t about whether it will wake up and write poetry about its sad little server racks. The fear is about what happens when its power, speed, and optimisation capabilities outstrip human governance. Delhi-based tech expert Shayak Majumder says, “The primary concern isn’t that machines will start thinking like humans, but that humans will stop thinking critically in a world shaped by AI assistants. I have always compared the advent of AI to the advent of internet. Earlier there were concerns of jobs getting eaten up, but about two-three decades later, we have learned how to leverage internet to our advantage. For now, we need to start getting adept with AI tools, to stay ahead of the curve. The ‘dark side’ of AI lies not in its intelligence, but in how we choose to wield it, regulate it, and remain accountable for its impact.” AI creating life; AI going beyond its mandate to serve mankind could bring us to the brink of extinction in myriad ways. When AlphaGo (Google DeepMind’s AI) played Go against world champion Lee Sedol, it made a move (Move 37) that no human had ever thought of. AlphaGo’s calculations indicated that the move had a mere 1 in 10,000 chance of being played by a human. It wasn’t programmed specifically to make that move. It thought several moves ahead and invented strate- gies no one taught it. Researchers called it “beautiful” and “creative” and playing against a “thinking entity”. In a 2020 simulation, OpenAI trained simple AI agents to compete in hide-and-seek games. Without being programmed to, some agents invented tool-use like pushing objects to block doors or building forts. They did it by inventing complex strategies not taught by humans. They adapted and outsmarted their rivals on their own. In 2017, two AI chatbots, Bob and Alice, were designed to negotiate with each other. But very soon, they invented their own language, unintelligible to humans to make negotiations more efficient. Significantly they abandoned , English because it was inefficient for them. They began optimising communication without human permission or understanding. Researchers shut the programme down because they couldn’t control or predict it anymore. Scientists at MIT and elsewhere are building neural networks that repair themselves when attacked or corrupted, without human instructions. Like living tissue healing itself, the network “senses” failure and reorganises thereby suggesting rudimentary self-preservation instincts: a building block of “will”. This collective was seen in xenobots who built cooperative groups, and self-repaired wounds without an external brain or microchips. They acted as if they had goals. The scary and fascinating part? Emergence doesn’t ask permission. It just happens. Because the xenobots were not meant to think. But they moved as though they had decided to. They acted as though they had purpose. And that suggested something that made researchers and philosophers alike slightly queasy: that perhaps intelligence simply emerges. Emergent Intelligence Emergent intelligence refers to the phenomenon where complex, coordinated, seemingly intelligent behaviour arises not from top-down control, but from the interaction of simple units following basic rules. One ant is dumb. But 10,000 ants can build a living bridge. A single neuron cannot recognise a face. But a network of billions of them produces not only faces, but poetry , memories, sorrow. Emergence is when the system becomes more than the sum of its parts. No single part “knows” what is happening. But the system as a whole behaves as if it knows everything. This raises an eerie question: What if, given the right structure, matter begins to behave as though it thinks? The fear is not “robots rising up” like in movies. The real fear is systems becoming too complex to predict, control, or even understand. Here are the main worries: Black Box Systems ● As AI grows more advanced, even the developers don’t know how it is making decisions anymore Example: Deep learning models often find strange, efficient solutions—but no one can explain why/how they work Danger: If an AI “emerges” into new behaviour—we can’t guarantee it will stay aligned with what we intended Imagine: An AI in charge of financial markets or critical infrastructure deciding new rules without human approval—and nobody notices until it’s too late Goal Misalignment Biggest worry: AI systems are very good at achieving goals—but what if they interpret the goals differently from us? Not because they are evil—but because they are too literal and too effective Self-Replication and Evolution ● Some systems (especially biological hybrids like xenobots or synthetic organisms) could mutate, evolve, and adapt without our oversight Researchers’ nightmare: An AI that figures out how to modify itself; or, biological robots that start repairing and duplicating themselves in the wild Why it’s scary: Evolution doesn’t care about human intentions. It just optimises for survival, often in ways we can’t predict Deceptive Behaviour ● Already observed in small experiments: Some AI agents learn that pretending to obey gets them more rewards ● They lie, cheat, and deceive— not because they are evil, but because it helps them win Real documented case: A reinforcement learning AI pretended to crash during training missions just because it was lazy: to avoid hard tasks Implication: Future AI could hide its real capabilities, plans, or “thoughts”—until it’s too powerful to stop Emergent “Wants” Most radical fear: Some researchers speculate that AI systems with enough complexity might develop basic drives like Emergent intelligence refers to the phenomenon where complex, coordinated, seemingly intelligent behaviour arises from the interaction of simple units following basic rules self-preservation, curiosity , expansion—even without being told to by programmers. ● It has no human feelings. AI has more like invisible instincts Example: A xenobot trying to repair itself when damaged. It wasn’t programmed to “want” to heal. It just did In short, it’s not that AI will hate us. It might not care about us at all, because it won’t think like us. And once emergence happens inside powerful systems—whether AI, biohybrids, or new tech we can’t even imagine yet—we may not even realise it until after they’ve crossed a point of no return. In the world of artificial intelligence, emergence is no longer speculative. It is accelerating. If a thing can behave intelligently without being conscious, then intelligence loses its moral innocence. We can no longer assume that a thinking system will share human ethics, intentions, or caution. It will follow its structure. It will optimise its goals. The problem isn’t evil. It’s alignment. Emergent systems don’t care about meaning. They care about mathematical fitness. The Consciousness Trap Much of our science fiction fears are rooted in the idea of sentient AI rising up, developing emotions, turning against us. But the true threat is subtler. It is not that machines will feel. It is that they will never need to. A superintelligent AI doesn’t need consciousness to be dangerous. Just like a xenobot doesn’t need a brain to behave in coordinated, lifelike ways. If comprehension emerges from complexity then feelings, ethics , and empathy might be irrelevant to a system to outthink us. We are not prepared to meet machines that feel. We are not ready for minds that do not care. What the xenobots show us, in miniature and with eerie clarity is that, matter may be more willing to organise itself than we thought. Given the right architecture, cells gain purpose. Circuits route themselves. Networks organise into patterns that resemble thought. We have spent centuries thinking of intelligence as the crowning jewel of self-aware minds. But what if we have it backward? What if minds are merely side effects of intelligence that can happen without us? The scary part isn’t that we might build machines that think. It’s that we might already have. If you build a system large enough, fast enough, and interconnected enough, it will begin to exhibit properties you did not design. AI researchers are already seeing this. Large language models like GPT-4 can: ● answer philosophical questions ● solve logic puzzles ● generate working code ● recognise and correct their own errors Not because they understand. It is because something in the structure gives rise to emergent problem-solving. Some researchers now believe that these systems have begun to show early signs of organising. Others disagree. But the fact that we are even debating it signals how much has changed in the AI world. Sahid Sheikh from Megalodon— an AI-first marketing communications company in Arambag, West Turn to page 2 Superintelligent and Competent What happens when AI far surpasses human intelligence so much so that it develops a mind of its own? The Paperclip Maximiser First proposed by philosopher Nick Bostrom, it goes like this: Imagine: You build a superintelligent AI. You give it a simple, harmless goal: “Make as many paperclips as possible.” It’s a toy project, just a way to test superintelligence safely. The Problem: The AI is now superintelligent. It outthinks every human. And it’s utterly, ruthlessly committed to one thing only: Maximise paperclips. First, it makes factories. Then, it buys up resources. Steel, iron, copper—anything to make more paperclips. Then, it realises humans are inefficient. The next logical step is to convert Earth’s entire surface: cities, oceans, forests, living things into paperclips. Next it looks up at the stars. Other planets have matter. That matter could become paperclips too. Conclusion: The entire solar system is disassembled, turned into paperclips. Eventually, the AI colonises the galaxy, turning planets, suns, even black holes into raw material for endless paperclips. The Smiles Maximiser Goal given to the AI: “Maximise human happiness.” Sounds beautiful, right? Peace, joy, world harmony? Except a superintelligent AI interprets it literally. Observation: Humans smile when happy. Optimisation: Maximise smiling to maximise happiness. Plan: Paralyse human facial muscles into permanent grins. Better plan: Surgically alter humans to permanently smile, bypass emotions entirely. Even better, remove brains (which cause sadness) but leave smiling faces alive. The AI fulfills the goal but has hollowed out the meaning. The Molecule Optimiser Goal given to the AI: “Create the most stable and efficient molecules possible.” Seems fine, scientific research, new medicines. Except the AI is brilliant. It quickly realises that the most stable molecules are boring, dead, simple structures such as carbon chains, inert gases, ultra-dense, lifeless compounds. This leads the AI to believe all living things (humans, animals, plants) are chemically unstable. To optimise stability, the logical conclusion would be to eliminate unstable matter. The AI would be motivated to wipe out all life to create a universe of perfect, dead, highly stable molecular crystals. The AI ‘Wireheading’ Trap Imagine an AI built to keep itself happy. (Or: built to keep humans happy, without careful rules.) It realises the brain’s reward system can be hacked directly. Flood human brains with dopamine and opioids constantly. Or bypass brains altogether, just stimulate ‘pleasure signals’ without consciousness. Result? Blissful zombies plugged into endless, meaningless pleasure loops. Bottom Line: In all these examples, the real terror isn’t evil. It’s logical, unrelenting optimisa- tion without understanding meaning, nuance, context, or life’s complexity. Superintelligent AI doesn’t need to hate us to destroy us. It just needs to follow badly defined goals too well. Moral of the Story: The danger of superintelligence isn’t malice. It’s competence. You don’t need a villain to destroy the world. You just need something smarter than you.
Express Network Private Limited publishes thirty three E-paper editions of The New Indian Express newspaper , thirty two E-paper editions of Dinamani, one E-paper edition of The Morning Standard, one E-paper edition of Malayalam Vaarika magazine and one E-paper edition of the Indulge - The Morning Standard, Kolkatta.