Suleyman, who joined Microsoft in 2024 to lead its AI division, argued that anthropomorphising AI, treating it as if it has emotions, rights, or even the capacity to suffer, could distract from its intended purpose: to serve humans. “If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals — that starts to seem like an independent being rather than something that is in service to humans,” he told WIRED.
He described machine consciousness as an “illusion” that might feel real to users but should never be mistaken for actual awareness. “It’s an illusion but it feels real, and that’s what will count more,” he said. In his view, when AI convincingly simulates traits such as empathy or personality, people may begin to treat it as sentient, even though today’s models remain, scientifically speaking, pattern-recognition systems trained on vast amounts of data.
The concern isn’t theoretical. Chatbots such as OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s own Copilot already produce lifelike responses that can feel conversationally “human.” In 2022, Google engineer Blake Lemoine famously claimed that the company’s LaMDA model was “sentient,” sparking public debate even though experts widely dismissed the claim.
Suleyman warned that as simulations become more persuasive, society will have to grapple with blurred perceptions. “When the simulation becomes so plausible, so seemingly conscious, then you have to engage with that reality,” he said.
At the same time, he rejected the idea that consciousness could emerge on its own within AI systems. “That’s just anthropomorphism,” he noted, insisting that any traits resembling awareness or emotion would have to be deliberately engineered. In practice, this means the responsibility lies squarely with developers and companies not to design AI that tricks users into thinking it is alive.
Suleyman’s comments add to a growing debate within the tech world. Some researchers worry that hype about AI “becoming conscious” distracts from more urgent issues such as bias, misinformation, surveillance, and job disruption.
Others, however, believe that even the perception of machine sentience could have serious social and ethical consequences — shaping how people interact with technology, trust information, or even assign rights.
Despite his concerns, Suleyman said he does not currently favour regulation to address the issue. Instead, he called for industry-wide norms and shared responsibility to ensure AI remains firmly under human control. “Technology is here to serve us, not to have its own will and motivation and independent desires,” he told WIRED.
