{"id":2656,"date":"2025-04-07T06:00:00","date_gmt":"2025-04-07T06:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=2656"},"modified":"2025-04-07T06:00:00","modified_gmt":"2025-04-07T06:00:00","slug":"the-risks-of-entry-level-developers-over-relying-on-ai","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=2656","title":{"rendered":"The risks of entry-level developers over relying on AI"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>Whenever tools like ChatGPT go down, it\u2019s not unusual to see software developers step away from their desks, take an unplanned break, or lean back in their chairs in frustration. For many professionals in the tech space, AI-assisted coding tools have become a convenience. And even brief outages, like the one that happened on <a href=\"https:\/\/www.techradar.com\/computing\/live\/chatgpt-down-march-2025\">24 March 2025<\/a>, can bring development to a halt.<\/p>\n<p>\u201cTime to make a coffee and sit in the sun for 15 mins,\u201d a Reddit user<a href=\"https:\/\/www.reddit.com\/r\/ChatGPT\/comments\/1jiqwqi\/comment\/mjh64ny\/\"> wrote<\/a>. \u201cSame,\u201d another responds.<\/p>\n<p>Overreliance on generative AI tools like ChatGPT is steadily growing among tech professionals, including those working in cybersecurity. These tools are changing how developers write code, solve problems, learn, and think \u2014 often boosting short-term efficiency. 
However, this shift comes with a trade-off: developers risk weakening their coding and critical thinking skills, which can ultimately have long-term consequences for both them and the organizations they work for.<\/p>\n<p>\u201cWe\u2019ve observed a trend where junior professionals, especially those entering cybersecurity, struggle with deep system-level understanding,\u201d says Om Moolchandani, co-founder and CISO\/CPO at Tuskira. \u201cMany can generate functional code snippets but struggle to explain the logic behind them or secure them against real-world attack scenarios.\u201d<\/p>\n<p>A<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/lee_2025_ai_critical_thinking_survey.pdf\"> recent survey by Microsoft<\/a> backs Moolchandani\u2019s observations, highlighting that workers who rely on AI to do part of their job tend to engage less deeply in questioning, analyzing, and evaluating their work, especially if they trust that AI will deliver accurate results. \u201cWhen using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship,\u201d the paper reads.<\/p>\n<p>As AI code generators change how developers work, they also reshape how organizations function. 
The challenge for cybersecurity leaders is to leverage this technology without sacrificing <a href=\"https:\/\/www.cio.com\/article\/3841632\/with-critical-thinking-in-decline-it-must-rethink-application-usability.html\">critical thinking<\/a>, creativity, and problem-solving, the very skills that make developers great.<\/p>\n<h2 class=\"wp-block-heading\">Short-term wins, long-term risks<\/h2>\n<p>Some CISOs are concerned about the growing reliance on AI code generators \u2014 especially among junior developers \u2014 while others take a more relaxed, wait-and-see approach, saying that this might be an issue in the future rather than an immediate threat. Karl Mattson, CISO at Endor Labs, argues that the adoption of AI is still in its early stages in most large enterprises and that the benefits of experimentation still <a href=\"https:\/\/www.csoonline.com\/article\/3801012\/gen-ai-strategies-put-cisos-in-a-stressful-bind.html\">outweigh the risks<\/a>.\u00a0<\/p>\n<p>\u201cI haven\u2019t seen clear evidence that AI reliance is leading to a widespread decline in fundamental coding skills,\u201d he says. \u201cRight now, we\u2019re in a zone of creative optimism, prototyping, and finding early successes with AI. A decline in core fundamentals still feels quite a way down the road.\u201d<\/p>\n<p>Others are already seeing some of the effects of overreliance on AI tools for writing code. Sean O\u2019Brien, founder of Yale Privacy Lab and CEO and founder of PrivacySave, voices strong concerns about the growing dependence on generative AI. 
AI-powered tools like ChatGPT and low-code platforms \u201coften encourage a \u2018vibe coding\u2019 mentality, where [developers] are more focused on getting something to work than actually understanding how or why it works,\u201d O\u2019Brien says.<\/p>\n<p>Aviad Hasnis, CTO at Cynet, is also worried, particularly when it comes to junior professionals, who \u201crely heavily on AI-generated code without fully grasping its underlying logic.\u201d According to him, this overreliance creates multiple challenges for both individuals and organizations. \u201cCybersecurity work demands critical thinking, troubleshooting skills, and the ability to assess risks beyond what an AI model suggests,\u201d he says.<\/p>\n<p>While relying on AI code generators can provide quick solutions and short-term gains, over time this dependency can backfire. \u201cAs a result, developers may struggle to adapt when AI systems are unavailable or insufficient, potentially rendering them ineffective as innovators and technologists in the long run,\u201d says Oj Ngo, CTO and co-founder of DH2i.<\/p>\n<h2 class=\"wp-block-heading\">The risks of blind spots, compliance issues, and license violations<\/h2>\n<p>As generative AI becomes more embedded in software development and security workflows, cybersecurity leaders are raising concerns about the blind spots it can introduce.<\/p>\n<p>\u201cAI can produce secure-looking code, but it lacks contextual awareness of the organization\u2019s threat model, compliance needs, and adversarial risk environment,\u201d Moolchandani says.<\/p>\n<p>Tuskira\u2019s CISO lists two major issues: first, that AI-generated security code may not be hardened against evolving attack techniques; and second, that it may fail to reflect the specific security landscape and needs of the organization.
Additionally, AI-generated code might give a false sense of security, as developers, particularly inexperienced ones, often assume it is secure by default.<\/p>\n<p>Furthermore, there are risks associated with compliance and violations of licensing terms or regulatory standards, which can lead to legal issues down the line. \u201cMany AI tools, especially those generating code based on open-source codebases, can inadvertently introduce unvetted, improperly licensed, or even malicious code into your system,\u201d O\u2019Brien says.\u00a0<\/p>\n<p>Open-source licenses, for example, often have specific requirements regarding attribution, redistribution, and modifications, and relying on AI-generated code could mean accidentally violating these licenses. \u201cThis is particularly dangerous in the context of software development for cybersecurity tools, where compliance with open-source licensing is not just a legal obligation but also impacts security posture,\u201d O\u2019Brien adds. \u201cThe risk of inadvertently violating intellectual property laws or triggering legal liabilities is significant.\u201d<\/p>\n<p>From a technological perspective, Wing To, CTO at Digital.ai, points out that AI-generated code should not be seen as a silver bullet. \u201cThe key challenge with AI-generated code \u2014 in security and other domains \u2014 is believing that it is of any better quality than code generated by a human,\u201d he says. \u201cAI-generated code runs the risk of including vulnerabilities, bugs, protected IP, and other quality issues buried in the trained data.\u201d<\/p>\n<p>The rise in AI-generated code reinforces the need for organizations to adopt best practices in their software development and delivery. 
This includes consistently applying independent code reviews and implementing robust CI\/CD processes with automated quality and security checks.<\/p>\n<h2 class=\"wp-block-heading\">Changing the hiring process<\/h2>\n<p>Since generative AI is here to stay, CISOs and the organizations they serve can no longer afford to overlook its impact. In this new normal, it becomes necessary to set up guardrails that promote critical thinking, foster a deep understanding of code, and reinforce accountability across all teams involved in any kind of code writing.<\/p>\n<p>Companies should also rethink how they evaluate technical skills during the hiring process, particularly when recruiting less experienced professionals, says Moolchandani. \u201cCode tests may no longer be sufficient \u2014 there needs to be a greater focus on security reasoning, architecture, and adversarial thinking.\u201d<\/p>\n<p>During DH2i\u2019s hiring process, Ngo says, they assess candidates\u2019 dependence on AI to gauge their ability to think critically and work independently. \u201cWhile we recognize the value of AI in enhancing productivity, we prefer to hire employees who possess a strong foundation in fundamental skills, allowing them to effectively use AI as a tool rather than relying on it as a crutch.\u201d<\/p>\n<p>Don Welch, global CIO at New York University, has a similar perspective, adding that the people who will thrive in this new paradigm will be the ones who stay curious, ask questions, and seek to understand the world around them as best they can. \u201cHire people where growth and learning are important to them,\u201d Welch says.<\/p>\n<p>Some cybersecurity leaders fear that becoming over-reliant on AI can widen the talent shortage crisis the industry already struggles with.
For small and mid-sized organizations, it can become increasingly difficult to find skilled people and then help them grow. \u201cIf the next generation of security professionals is trained primarily to use AI rather than think critically about security challenges, the industry may struggle to cultivate the experienced leaders necessary to drive innovation and resilience,\u201d Hasnis says.<\/p>\n<h2 class=\"wp-block-heading\">Generative AI must not replace coding knowledge<\/h2>\n<p>Early-career professionals who use AI tools to write code without developing a deep technical foundation are at high risk of stagnating. They might not have a good understanding of attack vectors, system internals, or secure software design, says Moolchandani. \u201cMid-to-long term, this could limit their growth into senior security roles, where expertise in threat modelling, exploitability analysis, and security engineering is crucial. Companies will likely differentiate between those who augment their skills with AI and those who depend on AI to bridge fundamental gaps.\u201d<\/p>\n<p>Moolchandani and others recommend organizations increase their training efforts and adjust their methods of transferring knowledge. \u201cOn-the-job training has to be more hands-on, focusing on real-world vulnerabilities, exploitation techniques, and secure coding principles,\u201d he says.<\/p>\n<p>Mattson says that organizations should focus more on helping employees gain relevant skills in the future. Technology will evolve quickly, and training programs alone may not be enough to keep pace. \u201cBut a culture of continuous skill improvement is durable for any change that comes,\u201d Mattson adds.<\/p>\n<p>These training programs should help employees understand both the strengths and limitations of AI, learning when to rely on these tools and when human intervention is mandatory, says Hasnis.
\u201cBy combining AI-driven efficiency with human oversight, companies can harness the power of AI while ensuring their security teams remain engaged, skilled, and resilient,\u201d he says. He advises developers to always question AI outputs, especially in security-sensitive environments.\u00a0\u00a0<\/p>\n<p>O\u2019Brien also believes that AI should go hand in hand with human expertise. \u201cCompanies need to create a culture where AI is seen as a tool: one that can help but not replace a deep understanding of programming and traditional software development and deployment,\u201d he says.<\/p>\n<p>\u201cIt\u2019s essential that companies don\u2019t fall into the trap of just using AI to patch over a lack of expertise.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Whenever tools like ChatGPT go down, it\u2019s not unusual to see software developers step away from their desks, take an unplanned break, or lean back in their chairs in frustration. For many professionals in the tech space, AI-assisted coding tools have become a convenience. 
And even brief outages, like the one that happened on 24 [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":2645,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-2656","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/2656"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2656"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/2656\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/2645"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2656"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2656"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2656"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}