{"id":1565,"date":"2025-01-18T03:24:54","date_gmt":"2025-01-18T03:24:54","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=1565"},"modified":"2025-01-18T03:24:54","modified_gmt":"2025-01-18T03:24:54","slug":"secure-ai-dream-on-says-ai-red-team","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=1565","title":{"rendered":"Secure AI? Dream on, says AI red team"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>The group responsible for red teaming of over 100 generative AI products at Microsoft has concluded that the work of building safe and secure AI systems will never be complete.<\/p>\n<p>In a paper published this week, the authors, including Microsoft Azure CTO Mark Russinovich, described some of the team\u2019s work and provided eight recommendations designed to \u201calign <a href=\"https:\/\/www.infoworld.com\/article\/3627088\/the-vital-role-of-red-teaming-in-safeguarding-ai-systems-and-data.html\">red teaming<\/a> efforts with real world risks.\u201d<\/p>\n<p>Lead author Blake Bullwinkel, a researcher on the <a href=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2023\/08\/07\/microsoft-ai-red-team-building-future-of-safer-ai\/\">AI Red Team<\/a> at Microsoft, and his 25 co-authors wrote in <a href=\"https:\/\/arxiv.org\/abs\/2501.07238\">the paper<\/a>, \u00a0\u201cas generative AI (genAI) systems are adopted across an increasing number of domains, AI red teaming has emerged as a central practice for assessing the safety and security of these technologies.\u201d<\/p>\n<p>At its core, they said, \u201cAI red teaming strives to push beyond model-level safety benchmarks by emulating real-world attacks against end-to-end systems. However, there are many open questions about how red teaming operations should be conducted and a healthy dose of skepticism about the efficacy of current AI red teaming efforts.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>The group responsible for red teaming of over 100 generative AI products at Microsoft has concluded that the work of building safe and secure AI systems will never be complete. 