Artificial intelligence tools are revamping DevSecOps processes, enabling security and development teams to more effectively build safeguards into software products from the get-go.
But AI’s impact on DevSecOps goes well beyond tooling and processes, altering the scope, skills, and strategies foundational to the discipline as well.
“AI is fundamentally shifting DevSecOps from reactive validation to continuous, intelligent enforcement,” says Siddardha Vangala, senior AI engineer and AI systems architect at engineering and construction company MasTec. “In enterprise environments, the biggest gains are coming from automation that operates alongside development workflows rather than after deployment.”
Revamping DevSecOps processes
AI is reshaping DevSecOps first and foremost by embedding security earlier in development and improving how issues are detected and remediated, says Katie Norton, a research manager for IDC’s DevSecOps and software supply chain security research practice.
Its impact on DevSecOps processes breaks down into three main areas, Norton says. The first is AI-assisted secure coding. “One of the clearest changes is the integration of third-party security tooling into coding assistants and agents,” she says. “Rather than assuming AI-generated code is secure by default, organizations are increasingly embedding security controls into the generation workflow itself.”
These controls provide policy guidance, secure coding patterns, validation checks, secrets detection, and approved dependency or configuration recommendations while code is produced, Norton says. As a result, security’s position within the development lifecycle is changing.
“Security is no longer interacting only with the developer after or alongside code creation,” Norton says. “It is increasingly interacting with the agent that is generating the code. That changes DevSecOps in a practical way. Security controls are moving closer to the point of generation, and [application security] teams are beginning to govern the behavior of AI systems, not just the behavior of human developers.”
The second area is large language model (LLM) vulnerability scanning. “LLMs are increasingly used to analyze code, configurations, and APIs for vulnerabilities, using contextual reasoning rather than fixed rules,” Norton says. “This allows them to identify logic flaws and insecure usage patterns that traditional scanners often miss. This expands detection coverage, particularly in complex or modern application architectures.”
At the same time, scanning itself is evolving, Norton says, becoming more autonomous and in some cases capable of initiating analysis, confirming findings, and integrating more directly into development workflows without requiring explicit human activities.
A third area is automated remediation suggestions and execution. “AI is increasingly used to generate fixes for vulnerabilities, including code changes, dependency updates, and configuration adjustments,” Norton notes. “These suggestions are often integrated directly into developer workflows, such as pull requests or IDEs [integrated development environments]. This reduces mean time to remediation and lowers the expertise required to resolve issues.”
The overall impact of AI on DevSecOps processes is that it’s collapsing the distance between writing code, finding vulnerabilities, and fixing them. “That makes DevSecOps more continuous, but also more machine-mediated,” Norton says. “The key challenge now becomes validating machine-generated code, machine-identified findings, and machine-suggested remediation across a development lifecycle.”
Explicit security requirements elevate AI benefits
While deploying AI with DevSecOps is helping to shift the emphasis on security to earlier in the development lifecycle, this requires “explicit instruction to do it right,” says Noe Ramos, vice president of AI operations at business software provider Agiloft.
“AI coding assistants accelerate development meaningfully, but they optimize for functional code by default, not secure enterprise code,” Ramos says. “Those aren’t the same target. We’ve had to build explicit security requirements into our AI coding prompts and project-level instructions — input validation, secrets management, least privilege, vulnerability patterns — because if you don’t specify it, it won’t reliably appear.”
Once that instruction layer is in place, “it applies consistently at scale in a way human developers working under deadline pressure don’t,” Ramos says.
AI tools are increasingly useful for flagging dependency vulnerabilities, identifying common vulnerability patterns, and suggesting remediation, tasks that previously required dedicated security review cycles, Ramos says. “This is compressing the feedback loop between writing code and catching security issues,” she says.
AI has improved the ability of teams to prioritize vulnerabilities. “Too much noise has been a long-standing problem in the DevSecOps space,” says Monika Malik, lead data/AI software engineer at communications provider AT&T. “Too many findings are generated with little context provided to make informed decisions.”
AI tools provide value by correlating multiple types of findings across code, dependencies, configurations, and runtime behaviors, Malik says. “This allows teams to then focus on those items that represent actual exploits or operationally impactful issues,” she says. “Teams are no longer treating all scanner results as equal.”
For example, AI-assisted analysis identifies actual exposures related to public-facing services, privileged workloads, or sensitive data, Malik says. “This enables teams focused on security engineering [to] spend time addressing relevant issues,” she says.
Transforming DevSecOps as a discipline
Given the impact AI is having in transforming DevSecOps on a larger scale, IT, security, and development leaders need to be on top of what changes when AI is introduced into development strategies.
“Historically, DevSecOps has been centered on application code security, infrastructure security, and software supply chain security,” Malik says. “With the introduction of AI, the scope of concern has expanded significantly. DevSecOps can no longer simply address source code security, container security, pipeline security, and cloud infrastructure security.”
Additional concerns now include model access exposure, prompt abuse/injection risks, sensitive data leakage, data lineage, third-party models and API dependencies, deployment of AI-generated code, and others, Malik says.
Strategic impact and challenges
“From a strategic standpoint, AI is leading DevSecOps towards a more risk-based operating model,” Malik says. “The mature strategy will be to apply different levels of scrutiny to different use cases. Teams will increasingly separate low-risk internal productivity use cases from high-risk use cases based upon customer-facing decisions, regulated data usage, authentication flows, privileged operations, etc.”
Agiloft is treating AI coding governance not as a DevSecOps-specific problem, “but as an enterprise governance problem with a DevSecOps component,” Agiloft’s Ramos says. That means cross-functional alignment among security, IT, AI operations, engineering, legal, and others, rather than expecting DevSecOps to absorb the entire new surface area alone, she says.
“The organizations that will get this right are the ones building governance infrastructure now, before the incidents force it,” Ramos says.
Traditional DevSecOps processes assumed human authorship of code, Ramos says. “AI authorship creates new questions: Who is accountable for AI-generated code that passes review and later causes a breach?” she says. “How do you track provenance? How do you handle the reality that developers are copy-pasting AI-generated code from consumer tools into enterprise codebases, potentially carrying licensing, security, or compliance baggage with it?”
New threat vectors arise
New threats are emerging, many of which stem from the growing use of AI.
“DevSecOps now has to cover a new attack surface it didn’t exist to address,” Ramos says. “AI models themselves, the prompts sent to them, the data used to fine-tune them, the outputs fed into production systems, are all threat vectors. That’s a material scope expansion on top of an already stretched discipline.”
DevSecOps is expanding beyond application and cloud security to include AI systems as “first-class” assets, IDC’s Norton says. “This includes securing models, training data, prompts, and inference pipelines, as well as addressing new attack vectors such as prompt injection, data leakage, and model manipulation,” she adds.
At a strategic level, “organizations are shifting from controlling developer behavior alone to governing AI-assisted development as a system,” Norton says. “This includes standardizing approved tools, defining usage policies, and embedding security controls into developer environments and AI systems.”
Application security teams are increasingly responsible for shaping how code is generated, by influencing the behavior of AI systems rather than relying solely on downstream detection and remediation, Norton says.
Skill sets evolve
AI’s infusion into DevSecOps will have a big impact on skills. “Security and engineering teams need a broader skill set that includes understanding how AI systems behave, how data flows through them, and where they introduce risk,” Norton says.
There is a shift away from developers needing to be deeply knowledgeable about how to write secure code themselves, as more of that responsibility is mediated through AI systems and embedded controls, Norton says.
“Developers need to understand how to use AI coding tools responsibly, while [application security] teams need to define and implement guardrails that shape what AI systems produce,” she says.
The DevSecOps practitioner “now needs enough AI literacy to evaluate risk in AI-assisted code, not just, ‘Does this code have a SQL injection risk?’ But, ‘Did an AI generate this in a way that introduced subtle logic errors or trained-in vulnerabilities?’” Ramos says. “That’s a different kind of code review skill, and most teams haven’t fully developed it yet.”
Among the necessary skill sets for DevSecOps teams, Malik says, are AI threat modeling; the ability to investigate model and prompt abuse scenarios and ensure secure use of coding copilots; data governance and provenance; and knowing how to evaluate supply chain AI models and services.
There is growing demand for engineers who understand both traditional security practices and AI-specific risks such as prompt injection, data leakage, and model misuse, Vangala says. “Teams increasingly need hybrid skills combining DevOps, application security, and AI system architecture,” he says.
Automation in overdrive
One of the biggest impacts of AI in any area is the rise in automation, and applying AI to DevSecOps will make automation increasingly common in the coming months.
DevSecOps practices are becoming more machine-to-machine and more tightly looped, while also reinforcing separation of concerns, IDC’s Norton says. “Security teams are shaping how AI systems generate code through embedded guardrails, while independently scaling detection and remediation through automated workflows,” she says.
The result is less reliance on manual, developer-mediated handoffs and more reliance on coordinated systems where code generation, analysis, and remediation occur through automated interactions, with humans focused on validation and oversight.
“When developers generate code using AI assistants, automated validation checks flag insecure patterns such as unsafe API calls, improper authentication logic, or exposed secrets,” Vangala says. “This reduces the number of security issues reaching downstream testing environments.”
Security logs are increasingly analyzed using AI models to identify anomalies and prioritize alerts, Vangala says. “Instead of manually reviewing large volumes of telemetry, automated systems highlight suspicious activity patterns and reduce alert fatigue by grouping related signals into actionable insights,” he says.
No Responses