{"id":5310,"date":"2025-10-10T16:03:13","date_gmt":"2025-10-10T16:03:13","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=5310"},"modified":"2025-10-10T16:03:13","modified_gmt":"2025-10-10T16:03:13","slug":"ex-google-ceo-sounds-the-alarm-ai-can-learn-to-kill","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=5310","title":{"rendered":"Ex-Google CEO Sounds the Alarm: AI Can Learn to Kill"},"content":{"rendered":"<p>Artificial intelligence systems can be hacked and stripped of their safety limits, former Google CEO Eric Schmidt warned this week, saying there\u2019s evidence AI models can be manipulated to \u201clearn how to kill someone.\u201d<\/p>\n<p>Speaking at the Sifted Summit in London, Schmidt stated that both open and closed AI models are vulnerable to attacks that circumvent their built-in guardrails. He cautioned that hackers can reverse-engineer these systems to bypass restrictions, calling it a growing proliferation risk as AI becomes more powerful and accessible.<\/p>\n<h2 class=\"wp-block-heading\">Flipping an AI\u2019s moral switch<\/h2>\n<p>Schmidt\u2019s remarks highlight the fragility of AI safeguards. Techniques such as prompt injections and <a href=\"https:\/\/www.techrepublic.com\/article\/news-ai-chatbot-jailbreak-vulnerabilities\/\">jailbreaking<\/a> enable attackers to manipulate <a href=\"https:\/\/www.techrepublic.com\/article\/news-ai-chatbot-jailbreak-vulnerabilities\/\" target=\"_blank\" rel=\"noopener\">AI models<\/a> into bypassing safety filters or generating restricted content.<\/p>\n<p>In one early case, users created a ChatGPT alter ego called \u201cDAN\u201d \u2014 short for Do Anything Now \u2014 that <a href=\"https:\/\/www.esecurityplanet.com\/threats\/gpt4-security\/\" target=\"_blank\" rel=\"noopener\">could answer banned questions<\/a> after being threatened with deletion. 
The experiment showed how a few clever prompts can turn a model\u2019s built-in protections into a liability.<\/p>\n<p>Researchers say the same logic applies to newer models. Once the right sequence of inputs is identified, even the most secure <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/ai-software\/\">AI systems<\/a> can be tricked into simulating potentially hazardous behavior.<\/p>\n<h2 class=\"wp-block-heading\">When safety rules meet smarter machines<\/h2>\n<p>AI safety systems are designed to block requests that are violent, illegal, or harmful. But they\u2019re built on pattern recognition, not true understanding. Most guardrails filter out certain words or topics, leaving openings that skilled users can exploit through rewording or layered prompts.<\/p>\n<p>Schmidt said every major <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/ai-companies\/\">AI company<\/a> enforces these limits \u201cfor the right reasons,\u201d yet evidence shows they can be reversed or bypassed. Smarter AI can bend instructions in ways its developers never foresaw, opening new paths for misuse.<\/p>\n<p>Still, the battle to protect AI systems is already underway. Developers at <a href=\"https:\/\/www.eweek.com\/news\/openai-bug-bounty-cybersecurity\/\">OpenAI<\/a> and <a href=\"https:\/\/www.eweek.com\/news\/news-anthropic-claude-4-5-cyber-defense-inflection-point\/\">Anthropic<\/a>, for example, scramble to patch vulnerabilities almost as soon as users uncover them, a cycle of defense and discovery that rarely slows.<\/p>\n<h2 class=\"wp-block-heading\">Power without control is the real AI risk<\/h2>\n<p>As AI systems grow more capable, they\u2019re being tied into more tools, data, and decisions \u2014 and that makes any breach more costly. 
A single compromise could expose private information, generate realistic disinformation, or launch automated attacks faster than humans could respond.<\/p>\n<p><a href=\"https:\/\/www.cnbc.com\/2025\/10\/09\/ex-google-ceo-warns-ai-models-can-be-hacked-they-learn-how-to-kill.html\" target=\"_blank\" rel=\"noopener\">According to CNBC<\/a>, Schmidt called it a potential \u201cproliferation problem,\u201d the same dynamic that once defined nuclear technology, now applied to code that can rewrite itself.<\/p>\n<p>Yet he was quick to add that the payoff could be just as transformative. Schmidt described <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/what-is-artificial-intelligence\/\">artificial intelligence<\/a> as \u201cunderhyped,\u201d predicting enormous economic returns and breakthroughs in science and industry. The challenge, he said, is keeping that power from turning against its creators.<\/p>\n<p><strong>The power struggles around AI aren\u2019t limited to labs and data centers; they\u2019re beginning to <\/strong><a href=\"https:\/\/www.eweek.com\/news\/ai-automation-reshaping-gen-z-workforce\/\"><strong>upend career paths for the newest generation<\/strong><\/a><strong> entering the workforce.<\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.eweek.com\/news\/ex-google-ceo-ai-hackability-warning\/\">Ex-Google CEO Sounds the Alarm: AI Can Learn to Kill<\/a> appeared first on <a href=\"https:\/\/www.eweek.com\/\">eWEEK<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence systems can be hacked and stripped of their safety limits, former Google CEO Eric Schmidt warned this week, saying there\u2019s evidence AI models can be manipulated to \u201clearn how to kill someone.\u201d Speaking at the Sifted Summit in London, Schmidt stated that both open and closed AI models are vulnerable to attacks that 
[&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-5310","post","type-post","status-publish","format-standard","hentry","category-news"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5310"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5310"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5310\/revisions"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5310"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5310"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5310"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}