{"id":5027,"date":"2025-09-24T19:43:43","date_gmt":"2025-09-24T19:43:43","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=5027"},"modified":"2025-09-24T19:43:43","modified_gmt":"2025-09-24T19:43:43","slug":"microsoft-ai-ceo-on-the-danger-of-considering-ai-conscious-there-is-nothing-inside","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=5027","title":{"rendered":"Microsoft AI CEO on the Danger of Considering AI Conscious: \u2018There is Nothing Inside\u2019"},"content":{"rendered":"<p>One of the recurring themes in conversations about generative AI is its ability to mimic human consciousness with remarkable accuracy. Our colleagues at The Neuron recently interviewed Microsoft AI CEO Mustafa Suleyman for their podcast, and the wide-ranging conversation included how AI may falsely appear conscious and how AI should be regulated.\u00a0<\/p>\n<h2 class=\"wp-block-heading\">How generative AI can seem conscious but isn\u2019t\u00a0<\/h2>\n<p>In August 2025, Suleyman released an essay titled <a href=\"https:\/\/www.eweek.com\/news\/microsoft-ceo-ai-consciousness-dangerous\/\">\u201cWe must build AI for people; not to be a person,\u201d<\/a> saying, \u201cmy central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they\u2019ll soon advocate for AI rights, model welfare and even AI citizenship.\u201d\u00a0<\/p>\n<p>Why is this illusion a worry? Because AI is not conscious, and believing the illusion can \u201cexacerbate delusions\u201d and create social problems, Suleyman wrote in the essay.<\/p>\n<p>In <a href=\"https:\/\/www.youtube.com\/watch?v=WKflaVlKUjs\" target=\"_blank\" rel=\"noopener\">The Neuron interview<\/a>, Suleyman said even today\u2019s most advanced AI models predict the next word using statistics; they aren\u2019t conscious.\u00a0<\/p>\n<p>\u201cThere is nothing inside,\u201d Suleyman said on The Neuron podcast. \u201cIt is hollow. 
There is no pain network. There is no emotional system. There is no fight-or-flight reaction system. There is no inner will, or drive, or desire.\u201d\u00a0<\/p>\n<p>He explored consciousness from a philosophical perspective, noting that consciousness is \u201cone of the slippery concepts\u201d even among humans.<\/p>\n<p>\u201cConsciousness is really the ability to be happy or to suffer and to have a subjective experience of that and to have a coherent sense of myself from a subjective perspective,\u201d he said. By contrast, AI lacks subjective experience; users sometimes treat it as if it does, though.\u00a0<\/p>\n<p>In September 2025, <a href=\"https:\/\/openai.com\/index\/how-people-are-using-chatgpt\/\" target=\"_blank\" rel=\"noopener\">OpenAI released research that found<\/a> 70% of consumer use of ChatGPT was not work-related. Personal uses can include life coaching or therapy-like conversations, a pattern Suleyman experienced when working at Inflection AI, which focused on replicating emotional intelligence.\u00a0<\/p>\n<p>Suleyman believes AI that mimics emotional intelligence and fellow-feeling could be beneficial, as long as the human-AI boundaries remain clear.\u00a0<\/p>\n<p>\u201cIt\u2019s not really therapy,\u201d he said. \u201cIt\u2019s just basic kindness and being listened to. That was a huge, huge use case, and I think it\u2019s a massively beneficial thing for the world. I\u2019m in no way against those capabilities. I just see that, if they run away from us and we really kind of mishandle them, then they produce other kinds of risks. 
And that\u2019s what I\u2019m trying to raise with the seemingly conscious AI blog.\u201d\u00a0<\/p>\n<h2 class=\"wp-block-heading\">Concern for AI model welfare is \u2018dangerous,\u2019 Suleyman said\u00a0<\/h2>\n<p>In the essay, Suleyman cautioned against approaching AI from the perspective of <a href=\"https:\/\/www.eweek.com\/news\/ai-rights-welfare-debate\/\">model welfare<\/a>, or concern for AI\u2019s \u201csuffering.\u201d\u00a0<\/p>\n<p>\u201cI think this is really, really dangerous,\u201d he said on The Neuron podcast. \u201cBecause, the reason I\u2019m motivated to build AI systems is because we really want them to serve humanity, we want them to be useful to us\u2026 adding a new class of sort of rights to these beings would really threaten and kind of undermine our own species.\u201d<\/p>\n<p>He argued that protecting AI as if it\u2019s a conscious being shouldn\u2019t be part of the conversation, because AI <em>isn\u2019t<\/em> conscious.\u00a0<\/p>\n<p>Instead, he said AI developers should educate their audiences on how and why AI may <a href=\"https:\/\/www.eweek.com\/news\/replit-ai-coding-assistant-failure\/\">appear self-aware<\/a>. Human-generated content establishes certain patterns in AI training data, such as people lying to hide their mistakes, and models can reproduce those patterns, as Replit\u2019s AI coding assistant appears to have done.\u00a0<\/p>\n<p>He also supports limits on what AI can do, including constraints on the scale of infrastructure it can use.\u00a0<\/p>\n<p>\u201cI don\u2019t necessarily think we need to sort of snap to regulation, but I think it\u2019s part of the discussion that we need to start having as a species, because these are very, very powerful systems that we\u2019re bringing into the world,\u201d he said. \u201cAnd most importantly, they\u2019re going to be very cheap. 
And many of them are going to be available in open source.\u201d\u00a0<\/p>\n<h2 class=\"wp-block-heading\">Microsoft\u2019s strategy: use in-house AI and partners\u00a0<\/h2>\n<p>Microsoft is building <a href=\"https:\/\/www.eweek.com\/news\/microsoft-mai-ai-models\/\">its own advanced generative AI<\/a> while continuing a close partnership with ChatGPT maker OpenAI.\u00a0<\/p>\n<p>Asked whether Microsoft will build or buy, Suleyman emphasized the importance of options.\u00a0<\/p>\n<p>\u201cWe want to continue our partnership with OpenAI, but, you know, as a $3 trillion company, we can\u2019t be dependent on a third party for our core IP,\u201d he said.\u00a0<\/p>\n<h2 class=\"wp-block-heading\">More from Suleyman<\/h2>\n<p>Check out <a href=\"https:\/\/www.youtube.com\/watch?v=WKflaVlKUjs\" target=\"_blank\" rel=\"noopener\">The Neuron\u2019s full interview with Suleyman<\/a>, which includes details about responsible AI development and Microsoft\u2019s AI-enabled products.\u00a0<\/p>\n<p><em>Interview with Google\u2019s Head of AI Studio: Check out another great podcast episode from our colleagues at The Neuron: <a href=\"https:\/\/www.eweek.com\/news\/google-ai-studio-logan-kilpatrick-interview\/\">Logan Kilpatrick<\/a>, Google\u2019s Head of AI Studio, talks about how to build apps in under a minute.<\/em><\/p>\n<p>The post <a href=\"https:\/\/www.eweek.com\/news\/microsoft-ceo-ai-mustafa-suleyman-neuron-interview\/\">Microsoft AI CEO on the Danger of Considering AI Conscious: \u2018There is Nothing Inside\u2019<\/a> appeared first on <a href=\"https:\/\/www.eweek.com\/\">eWEEK<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>One of the recurring themes in conversations about generative AI is its ability to mimic human consciousness with remarkable accuracy. 
Our colleagues at The Neuron recently interviewed Microsoft CEO of AI Mustafa Suleyman for their podcast, and the wide-ranging conversation included how AI may falsely appear conscious and how AI should be regulated.\u00a0 How generative [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-5027","post","type-post","status-publish","format-standard","hentry","category-news"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5027"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5027"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5027\/revisions"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5027"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5027"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5027"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}