{"id":5031,"date":"2025-09-25T07:00:00","date_gmt":"2025-09-25T07:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=5031"},"modified":"2025-09-25T07:00:00","modified_gmt":"2025-09-25T07:00:00","slug":"ai-coding-assistants-amplify-deeper-cybersecurity-risks","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=5031","title":{"rendered":"AI coding assistants amplify deeper cybersecurity risks"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>The productivity improvements that arise from increasing use of AI coding tools are coming at the cost of greater security risks.<\/p>\n<p>While use of AI coding assistants decrease the number of shallow syntax errors, this is more than offset by an increase in more costly structural flaws, according to <a href=\"https:\/\/apiiro.com\/blog\/4x-velocity-10x-vulnerabilities-ai-coding-assistants-are-shipping-more-risks\/\">research by application security firm Apiiro<\/a>. 
Apiiro\u2019s analysis shows trivial syntax errors in AI-written code dropped and logic bugs fell, but privilege escalation paths jumped and architectural design flaws also increased.<\/p>\n<p>AI is multiplying flaws ranging from open-source dependencies to insecure coding patterns, exposed secrets, and cloud misconfigurations, the researchers found, adding that the fewer, much larger pull requests associated with AI coding tools compound risk.<\/p>\n<h2 class=\"wp-block-heading\">AI code development is \u2018automating risk at scale\u2019<\/h2>\n<p>Independent experts quizzed by CSO agree with Apiiro\u2019s main findings that AI-generated code often introduces deeper architectural vulnerabilities and privilege escalation risks that are both harder to detect and costlier to fix.<\/p>\n<p>Zahra Timsah, co-founder and CEO of i-GENTIC AI, says that Apiiro\u2019s findings highlight what their firm has seen in practice: AI assistants can eliminate trivial bugs while at the same time amplifying deeper systemic vulnerabilities.<\/p>\n<p>\u201cAI tools are not designed to exercise judgment,\u201d Timsah says. \u201cThey do not think about privilege escalation paths, secure architectural patterns, or compliance nuances. 
That is where the risk comes in.\u201d<\/p>\n<p>Timsah adds: \u201cCode gets shipped faster, but if oversight is thin, enterprises are effectively automating risk at scale.\u201d<\/p>\n<p>Raj Dandage, CTO and co-founder of Codespy AI, tells CSO that AI-powered software development often comes at the cost of creating hard-to-find bugs.<\/p>\n<p>\u201cWe very rarely see simple bugs generated by top LLMs; instead, most bugs we come across have made it to testing or even production before being spotted,\u201d Dandage says.<\/p>\n<h2 class=\"wp-block-heading\">\u2018Shadow\u2019 engineers and vibe coding compound risks<\/h2>\n<p>Ashwin Mithra, global head of information security at continuous software development firm Cloudbees, notes that part of the problem is that non-technical teams are using AI to build apps, scripts, and dashboards.<\/p>\n<p>\u201cThese shadow engineers don\u2019t realize they\u2019re part of the software development life cycle, and often bypass critical reviews and security checks,\u201d Mithra explains. 
\u201cFurthermore, foundational security tools like SAST [static application security testing], DAST [dynamic application security testing], and manual reviews weren\u2019t built to catch AI-generated code at time of prompt.\u201d<\/p>\n<p>The result is a growing attack surface, powered by people who were never trained to secure code, Mithra warns.<\/p>\n<p>\u201cWhen anyone can code, risks multiply, and security checks are limited and can\u2019t catch everything, especially context-specific risks or complex vulnerabilities, API leaks, weak authentication, exposed PII [personally identifiable information], and unencrypted data,\u201d Mithra says.<\/p>\n<p>Chetan Conikee, founder and CTO at Qwiet AI, agrees that \u201c<a href=\"https:\/\/www.csoonline.com\/article\/4053635\/when-ai-nukes-your-database-the-dark-side-of-vibe-coding.html\">vibe coding<\/a>\u201d poses a problem in bringing more untrained contributors into production pipelines.<\/p>\n<p>\u201cLarge, multi-touch AI-generated pull requests overwhelm reviewers, diluting oversight and increasing the blast radius of each merge,\u201d Conikee explains.<\/p>\n<h2 class=\"wp-block-heading\">Massive AI pull requests complicate flaw detection<\/h2>\n<p>Roman Rylko, CTO at software development and consulting firm Pynest, says Apiiro\u2019s research matched the problems his firm faced when it began using AI assistants in development, with the elimination of syntax errors being more than offset by an increase in architectural vulnerabilities and cloud configuration errors.<\/p>\n<p>\u201cIn one of the projects for a fintech from Canada, AI generation created a service with ideal code formatting, but with insecure authorization logic, despite the fact that the fix looked obvious, which could lead to privilege escalation between modules,\u201d Rylko says. 
\u201cWithout a deep review, such a bug could easily reach production.\u201d<\/p>\n<p>Another issue comes from AI\u2019s tendency to make massive pull requests that involve dozens of files and even several microservices in one go.<\/p>\n<p>\u201cWe saw this happen in a small retailer project \u2014 one commit by AI involved more than 10 files at the same time, and reviewers struggled to get through all of it line by line,\u201d Rylko explains.<\/p>\n<p>John Otte, senior security consultant at Resultant, agrees that the shift toward fewer but significantly larger AI-generated pull requests \u201camplifies the blast radius of vulnerabilities, making detection, review, and rollback far more challenging for development and security teams.\u201d<\/p>\n<p>\u201cTo mitigate these risks, enterprises should pair AI-driven development with rigorous architectural threat modelling, enforce fine-grained code review policies with automated scanning of dependencies and secrets, and integrate continuous cloud security posture management to catch design-level weaknesses before they reach production,\u201d Otte advises.<\/p>\n<h2 class=\"wp-block-heading\">Verbose AI coding assistants heighten risk<\/h2>\n<p>Neil Carpenter, principal solution architect at application security startup Minimus, says that AI coding assistants often implement more code to do the same amount of work \u2014 which results in more attack vectors and lower reliability.<\/p>\n<p>\u201cAI assistants, when not given proper context, often rebuild or rewrite functionality, instead of calling out to other functions or modules in the application,\u201d Carpenter says.<\/p>\n<p>Mehran Farimani, CEO at RapidFort, supports the assessment that AI coding assistants are prone to generating verbose and difficult-to-understand software components.<\/p>\n<p>\u201cAI tools are generating larger, more complex software that often includes unnecessary components, dependencies, and configuration decisions that teams 
don\u2019t fully consider or review,\u201d Farimani says.<\/p>\n<h2 class=\"wp-block-heading\">Orders of magnitude<\/h2>\n<p>Apiiro used its Deep Code Analysis (DCA) engine to analyze code from tens of thousands of code repositories involving several thousand developers and a variety of coding assistants. By June 2025, AI-generated code was introducing more than 10,000 new security findings per month across the repositories in Apiiro\u2019s study \u2014 a 10-fold spike in just six months.<\/p>\n<p>Flaws ranged from open-source dependencies to insecure coding patterns, exposed secrets, and cloud misconfigurations.<\/p>\n<p>Jeff Williams, co-founder and CTO at runtime application security vendor Contrast Security, disputes Apiiro\u2019s conclusions that AI coding assistants quadruple development speed while resulting in a 10-fold increase in vulnerabilities. Other studies point to much lower figures for both metrics, Williams notes.<\/p>\n<p>\u201cI\u2019m reading studies suggesting 10% increased velocity (Google) to 19% decrease (METR),\u201d Williams tells CSO. \u201cI was also surprised to hear about 10x vulnerabilities. Again, the studies I\u2019m reading are suggesting that AI-generated code is roughly the same amount of vulnerabilities.\u201d<\/p>\n<p>Williams adds: \u201cI wish they had addressed the recent studies (Semgrep) that show AI-based vulnerability detection finding only 10-20% of true positive vulnerabilities along with high false positive rates.\u201d<\/p>\n<p>Reached for comment, Apiiro said differences in the scope, methodology, and population explain the gap between its research and earlier lab-based studies.<\/p>\n<p>\u201cApiiro\u2019s findings reflect a broader scope than earlier studies. We looked not only at code-level flaws, but also at open-source dependency risks and secret exposures, all of which create critical enterprise risk,\u201d says Itay\u00a0Nussbaum, product manager at Apiiro. 
\u201cUnlike Semgrep\u2019s work, we weren\u2019t measuring the accuracy of AI-based vulnerability detection. Instead, our research examined the\u00a0output\u00a0of AI coding assistants in real-world enterprise environments over time.\u201d<\/p>\n<p>Pieter Danhieux, CEO &amp; co-founder of Secure Code Warrior, said his company\u2019s research into LLM comparisons, performed 24 months ago, revealed that while more straightforward vulnerability classes, such as injection flaws, were handled accurately in many cases, more subjective classes such as access control and security misconfiguration had a poor accuracy rate, failing to compete with security-skilled developers.<\/p>\n<p>\u201cAdditionally, our research has shown that AI coding assistants\u00a0\u2014 and the LLM versions they use \u2014 can sometimes be good at producing secure code in one coding language (e.g., TypeScript) but way worse in another (e.g., PHP),\u201d Danhieux says.<\/p>\n<p>\u201cThere is no world yet where the human [developer] should be taken out of the loop,\u201d he adds.<\/p>\n<h2 class=\"wp-block-heading\">AI is not a replacement for accountability<\/h2>\n<p>Rich Marcus, CISO at audit, compliance, and risk management software platform provider AuditBoard, argues that failure to recognize the limitations of AI represents the greatest risk in using the technology.<\/p>\n<p>Before enabling developers with AI, enterprises should provide training on the risks and usage best practices.<\/p>\n<p>\u201cDevelopers must understand that AI is not a replacement for accountability,\u201d Marcus explains. \u201cEach developer is responsible for the code they commit, even if AI wrote it.\u201d<\/p>\n<p>Marcus continues: \u201cThat means AI-generated code is still subject to the same secure software development principles and practices like code review, SCA [software composition analysis], SAST, and manual testing. 
If a flaw in there results in bugs or an incident, they will be called upon to address it \u2014 so they better understand it and own it.\u201d<\/p>\n<p>AI should accelerate workflows but not at the expense of proper vetting, others agree.<\/p>\n<p>\u201cPull requests tied to AI-generated code should always be reviewed by experienced engineers who understand the code, the business logic, and the compliance context,\u201d i-GENTIC AI\u2019s Timsah says. \u201cOrganizations should also prioritize transparency and lineage by treating AI-authored code like any other third-party dependency.\u201d<\/p>\n<p>Timsah adds: \u201cThey need full traceability into who wrote it, what model generated it, and under what parameters, which makes it easier to audit and remediate issues later.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Mitigation strategies<\/h2>\n<p>AI coding assistants can be a force multiplier for development teams, but only if enterprises build guardrails to manage the associated risk.<\/p>\n<p>\u201cWith strong governance, automated oversight, and human accountability, organizations can harness the speed of AI without multiplying vulnerabilities,\u201d i-GENTIC AI\u2019s Timsah advises.<\/p>\n<p>Other experts put forward recommendations on mitigating the risks associated with AI coding assistants:<\/p>\n<ul class=\"wp-block-list\">\n<li>Integrate security tooling into AI code assistants, for example, by taking advantage of MCP (model context protocol) servers.<\/li>\n<li>Limit the volume of AI-generated changes depending on the project so that pull requests remain manageable.<\/li>\n<li>Enforce automatic checks in CI\/CD \u2014 secret scanners, static analysis, and cloud configuration control.<\/li>\n<\/ul>\n<p>Mitigation of flaws created by AI coding assistants <a href=\"https:\/\/www.csoonline.com\/article\/3633403\/how-organizations-can-secure-their-ai-code.html\">requires a different mindset<\/a>, i-GENTIC AI\u2019s Timsah says.<\/p>\n<p>\u201cEnterprises should use AI to watch AI by deploying 
agentic AI solutions that automatically scan AI-generated code against policies, security standards, and regulatory requirements before code is merged,\u201d Timsah argues.<\/p>\n<p>Enterprises should also adopt shift-left security and continuous monitoring.<\/p>\n<p>\u201cSecurity checks cannot be bolted on at the end of the pipeline,\u201d Timsah says. \u201cThey must be integrated directly into CI\/CD processes so that AI-generated code receives the same scrutiny as open-source contributions.\u201d<\/p>\n<p>Pynest\u2019s Rylko adds: \u201cWe treat AI assistants as \u2018junior developers\u2019 \u2014 their code is always checked by seniors.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>The productivity improvements that arise from increasing use of AI coding tools are coming at the cost of greater security risks. While use of AI coding assistants decreases the number of shallow syntax errors, this is more than offset by an increase in more costly structural flaws, according to research by application security firm Apiiro. 
[&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":5032,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-5031","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5031"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5031"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5031\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/5032"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5031"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5031"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5031"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}