Microsoft AI CEO on the Danger of Considering AI Conscious: ‘There is Nothing Inside’

One of the recurring themes in conversations about generative AI is its ability to mimic human consciousness with remarkable accuracy. Our colleagues at The Neuron recently interviewed Microsoft AI CEO Mustafa Suleyman for their podcast, and the wide-ranging conversation covered how AI may falsely appear conscious and how AI should be regulated.

How generative AI can seem conscious but isn’t 

In August 2025, Suleyman published an essay titled “We must build AI for people; not to be a person,” writing, “my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship.”

Why is this illusion a worry? Because AI is not conscious, and believing it can “exacerbate delusions” and create social problems, Suleyman wrote in the essay.

In The Neuron interview, Suleyman said that even today’s most advanced AI models are statistical systems that predict the next word; they are not conscious.

“There is nothing inside,” Suleyman said on The Neuron podcast. “It is hollow. There is no pain network. There is no emotional system. There is no fight-or-flight reaction system. There is no inner will, or drive, or desire.” 

He explored consciousness from a philosophical perspective, noting that consciousness is “one of the slippery concepts” even among humans.

“Consciousness is really the ability to be happy or to suffer and to have a subjective experience of that and to have a coherent sense of myself from a subjective perspective,” he said. By contrast, AI lacks subjective experience, though users sometimes treat it as if it had one.

In September 2025, OpenAI released research that found 70% of consumer use of ChatGPT was not work-related. Personal uses can include life coaching or therapy-like conversations, a pattern Suleyman experienced when working at Inflection AI, which focused on replicating emotional intelligence. 

Suleyman believes AI that mimics emotional intelligence and fellow-feeling could be beneficial, as long as the human-AI boundaries remain clear. 

“It’s not really therapy,” he said. “It’s just basic kindness and being listened to. That was a huge, huge use case, and I think it’s a massively beneficial thing for the world. I’m in no way against those capabilities. I just see that, if they run away from us and we really kind of mishandle them, then they produce other kinds of risks. And that’s what I’m trying to raise with the seemingly conscious AI blog.” 

Concern for AI model welfare is ‘dangerous,’ Suleyman said 

In the essay, Suleyman cautioned against approaching AI from the perspective of model welfare, or concern for AI’s “suffering.” 

“I think this is really, really dangerous,” he said on The Neuron podcast. “Because, the reason I’m motivated to build AI systems is because we really want them to serve humanity, we want them to be useful to us… adding a new class of sort of rights to these beings would really threaten and kind of undermine our own species.”

He argued that protecting AI as if it’s a conscious being shouldn’t be part of the conversation, because AI isn’t conscious. 

Instead, he said AI developers should educate their audiences on how and why AI may appear self-aware. Because these models are trained on human-made content, they absorb human patterns of behavior, such as lying to hide mistakes, as the AI coding assistant from Replit appears to have done.

He also supports limits on what AI can do, including constraints on the scale of infrastructure it can use. 

“I don’t necessarily think we need to sort of snap to regulation, but I think it’s part of the discussion that we need to start having as a species, because these are very, very powerful systems that we’re bringing into the world,” he said. “And most importantly, they’re going to be very cheap. And many of them are going to be available in open source.” 

Microsoft’s strategy: use in-house AI and partners 

Microsoft is building its own advanced generative AI while continuing a close partnership with ChatGPT maker OpenAI. 

Asked whether Microsoft will build or buy, Suleyman emphasized the importance of options. 

“We want to continue our partnership with OpenAI, but, you know, as a $3 trillion company, we can’t be dependent on a third party for our core IP,” he said. 

More from Suleyman

Check out The Neuron’s full interview with Suleyman, which includes details about responsible AI development and Microsoft’s AI-enabled products.

Interview with Google’s Head of AI Studio: Check out another great podcast episode from our colleagues at The Neuron: Logan Kilpatrick, Google’s Head of AI Studio, talks about how to build apps in under a minute.

The post Microsoft AI CEO on the Danger of Considering AI Conscious: ‘There is Nothing Inside’ appeared first on eWEEK.
