{"id":2853,"date":"2025-04-21T06:30:00","date_gmt":"2025-04-21T06:30:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=2853"},"modified":"2025-04-21T06:30:00","modified_gmt":"2025-04-21T06:30:00","slug":"two-ways-ai-hype-is-worsening-the-cybersecurity-skills-crisis","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=2853","title":{"rendered":"Two ways AI hype is worsening the cybersecurity skills crisis"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>AI was supposed to make security teams more efficient, but instead, it\u2019s making their jobs harder. Security professionals are being pulled in two directions: they\u2019re expected to govern their organization\u2019s AI use while also figuring out how to integrate the technology into their own workflows, often without proper training. The result? Overstretched teams, mounting pressure, and an ever-widening skills gap.<\/p>\n<p>Despite these growing pressures on cybersecurity teams, Richard Addiscott, vice president analyst at Gartner, points out that businesses are embracing AI at an unprecedented pace. \u201cOur research shows 98% of organizations have already adopted or are planning to adopt generative AI or another form of AI. Only 1% plan not to adopt AI, and the other 1% is not sure,\u201d he tells CSO. \u201cBut if you\u2019re the head of the security organization, blocking AI probably won\u2019t do you or your team any favors.\u201d<\/p>\n<p>This adoption, however, adds a new layer of responsibility for cybersecurity professionals, who must oversee AI governance while using AI themselves. 
\u201cAs a security function, where things like cost efficiency, operational productivity, operational continuity, and talent shortages already have an impact, it\u2019s also entirely appropriate that those teams look at, \u2018How can I use AI for the security function\u2019s benefit, whether it\u2019s improving operational efficiency, cost efficiency, or giving my team the opportunity to do more with the same level of resources?\u2019\u201d Addiscott says.<\/p>\n<h2 class=\"wp-block-heading\">The security burden that comes with AI<\/h2>\n<p>One key challenge is that many cybersecurity professionals are expected to deploy and oversee AI tools, often without formal training. An O\u2019Reilly report, <a href=\"https:\/\/www.oreilly.com\/radar\/technology-trends-for-2025\/\">Technology Trends for 2025<\/a>, highlights just how quickly interest in AI-related skills is growing. From 2023 to 2024, interest in artificial intelligence grew by 190%, while generative AI skyrocketed by 289%. But the most telling increases were in AI principles, up 386%, and <a href=\"https:\/\/www.infoworld.com\/article\/2334745\/how-to-get-started-with-prompt-engineering.html\">prompt engineering<\/a>, which jumped 456%.<\/p>\n<p>\u201cIt\u2019s all well and good to want to embrace the organization\u2019s AI ambitions, but if no one in the team understands large language model operations or prompt engineering \u2026 then it\u2019s going to be really difficult,\u201d Addiscott explains. \u201cYour capacity and capability mix needs to shift, which has a fundamental impact on your strategic workforce plan \u2026 and a whole heap of other downstream impacts that we need to think about from a strategic operational perspective in security.\u201d<\/p>\n<p>Anil Appayanna, CISO at India International Insurance and founder of NexisCyber, agrees, noting that organizations often rush to implement AI without ensuring their teams are prepared. 
\u201cThere\u2019s a fear of missing out because everybody in the world today is talking about AI,\u201d he says. \u201cFrankly, if I\u2019m speaking at a seminar and telling people I do this and I do that, there is a lot of pressure on others to go back to their companies and say, \u2018Hey, they are using it, why can\u2019t we use it?\u2019\u00a0<\/p>\n<p>\u201cBut preparedness is very important,\u201d Appayanna says. \u201cIt\u2019s not about just putting things in place, but do you understand where you\u2019re heading? What is it that you\u2019re looking for? And then, do you have the right kind of skills and people in place to implement it?\u201d<\/p>\n<p>Beyond technical skills, AI also requires a mindset shift. Instead of blindly trusting AI recommendations, Appayanna insists that human oversight be maintained to verify any AI-generated results.<\/p>\n<p>\u201cI will never fully automate or over-rely on AI,\u201d he says.<\/p>\n<p>\u201cThere will always have to be one human interface somewhere there. It could be as simple as \u2018see it, forget it, no problem\u2019, but I don\u2019t want it to become \u2018just because AI told me that this is a thing, I have to take its word\u2019. You have to include some kind of human intervention, to look at what exactly is happening.\u201d<\/p>\n<p>But with all the hype, some level of disillusionment is inevitable. The O\u2019Reilly report warns that many organizations adopting AI may not fully understand its capabilities or limitations, particularly in emerging fields like prompt engineering. While searches for prompt engineering grew sharply in 2023, the report indicated early signs of decline in early 2024. 
It also questions whether this is just noise or the first indication of AI fatigue, suggesting that if excitement around prompt engineering fades, broader enthusiasm for machine learning and AI could follow.<\/p>\n<p>Appayanna likens the rush to adopt AI to the digital transformation wave of the past decade, when companies felt pressured to move to the cloud, automate workflows, and embrace digital technologies, but many failed to consider whether those changes actually aligned with business needs.<\/p>\n<p>\u201cThe question is: yes, you can introduce AI, but in what context? You have to define the context and make sure that you meet your business requirements. Only then can AI provide value.\u201d<\/p>\n<p>Appayanna\u2019s own approach to AI has been methodical: he introduced AI to his team to handle repetitive, low-level tasks before gradually expanding its use into more complex areas like <a href=\"https:\/\/www.csoonline.com\/article\/648266\/how-llms-are-making-red-and-blue-teams-more-efficient.html\">automated red teaming<\/a>. \u201cIf I am introducing an AI technology, the first thing I look at is, \u2018Do I have the right skill sets and do the people on my team have the skills to even look at it?\u2019 Because I always make sure that training is first and foremost.\u201d\u00a0<\/p>\n<h2 class=\"wp-block-heading\">Attackers are using AI, but are defenders ready?<\/h2>\n<p>AI isn\u2019t just increasing workloads; it\u2019s also raising expectations. Many security teams are already stretched thin, and the push to integrate AI is adding further strain. 
\u201cIf you\u2019ve got a very lean team that barely has enough time to look above the parapet as it is, and you\u2019re already behind the eight ball when it comes to business as usual, then AI is a challenge,\u201d Addiscott says.<\/p>\n<p>Another critical factor in the AI-skills shortage discussion is that attackers are also leveraging AI, putting defenders at an even greater disadvantage. Cybercriminals are using AI to generate more convincing phishing emails, automate reconnaissance, and develop malware that can evade detection. Meanwhile, security teams are struggling just to keep up.<\/p>\n<p>\u201cAI exacerbates what\u2019s already going on at an accelerated pace,\u201d says Rona Spiegel, cyber risk advisor at GroScale and former cloud governance leader at Wells Fargo and Cisco. \u201cIn cybersecurity, the defenders have to be right all the time, while attackers only have to be right once. AI is increasing the probability of attackers getting it right more often.\u201d<\/p>\n<p>Without proper AI training, security professionals may not even realize they are dealing with AI-generated threats. \u201cWe have a threat environment that is wanting to leverage [AI] to the nth degree \u2026 typically, [the attackers] have a lot more time and money on their hands to be able to play around with this stuff,\u201d Addiscott says.<\/p>\n<h2 class=\"wp-block-heading\">Can AI fix the skills shortage?<\/h2>\n<p>However, not everyone sees AI as a purely negative force in the cybersecurity talent landscape. Spiegel acknowledges the complexity of AI adoption but argues that the issue is more about how leadership goes about acquiring talent. She suggests that cybersecurity teams require a diverse, well-rounded set of skills and experiences.<\/p>\n<p>\u201cI don\u2019t think we have a cybersecurity skills shortage \u2013 I think we have a leadership understanding shortage,\u201d Spiegel argues. 
\u201cLeaders are being pressured to adopt AI at lightning speed, and they\u2019re focused on the efficiencies that can be gained through AI automation, but they\u2019re looking at how they staff with a very narrow view of what cybersecurity is.\u201d<\/p>\n<p>She believes AI could ultimately help alleviate some of the skills shortage.<\/p>\n<p>\u201cCISOs will have to be more tactical in their approach,\u201d she explains. \u201cThere\u2019s so much pressure for them to automate, automate, automate. I think it would be best if they could partner cross-functionally and focus on things like policy and urge the unification and simplification of how policies are adapted\u2026 and make sure we\u2019re educating the entire environment, the entire workforce, not just the cybersecurity [team].\u201d<\/p>\n<p>Appayanna echoes this sentiment, arguing that when used correctly, AI can ease talent shortages rather than exacerbate them. He believes AI frees up security professionals to develop higher-level skills rather than being stuck in mundane, repetitive tasks.<\/p>\n<p>\u201cIf my L1 security analysts spend two or three hours glancing through logs, which my AI can do in five minutes, I want to use AI there \u2014 not as a replacement, but as an augmenter,\u201d he explains.<\/p>\n<p>Despite AI\u2019s potential, the short-term reality remains challenging. Addiscott believes that AI will sit atop existing security responsibilities rather than replace them in the foreseeable future. \u201cWe still need security monitoring, application security, infrastructure security, cloud security, policy security, security awareness \u2014 all of those things are still required,\u201d he says.<\/p>\n<p>\u201cWhat\u2019s going to happen in the short to medium term is that AI will sit on top of all those things. 
And until we start to see the embedding \u2013 which is probably going to take a generational shift \u2013 it\u2019s going to be a long time before we can happily look back and reflect and say, \u2018I think we\u2019ve landed on best practice\u2019.\u201d<\/p>\n<p>Appayanna cautions against the misconception that AI alone can solve security challenges. He believes that organizations that invest in structured AI training and thoughtful implementation will be better positioned to succeed.<\/p>\n<p>\u201cAI tools can only automate. They can help you; they can support you, but they will never replace human expertise. Organizations must manage expectations with their boards and their teams, recognizing AI as an augmentation tool, not as a replacement.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>AI was supposed to make security teams more efficient, but instead, it\u2019s making their jobs harder. Security professionals are being pulled in two directions: they\u2019re expected to govern their organization\u2019s AI use while also figuring out how to integrate the technology into their own workflows, often without proper training. The result? 
Overstretched teams, mounting [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":2841,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-2853","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/2853"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2853"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/2853\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/2841"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2853"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2853"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2853"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}