AI-powered bug hunting shakes up bounty industry — for better or worse

Tags:

AI-powered bug hunting has changed the calculus of what makes for an effective bounty program by accelerating vulnerability discovery — and subjecting code maintainers to ballooning volumes of AI flaw-hunting slop.

Security researchers are using large language models (LLMs) to automate reconnaissance, reverse engineer APIs, and scan codebases faster than ever. By applying AI tools to techniques ranging from fuzzing and exploit automation to pattern recognition across codebases and websites, researchers are discovering flaws at accelerated rates.

“Over the past year, we’ve entered what we call the era of the ‘bionic hacker,’ which is human researchers using agentic AI systems to collect data, triage, and advance discovery,” says Crystal Hazen, senior bug bounty program manager at HackerOne, which has added AI tools to its platform to help streamline submissions and triage.

Research by HackerOne found a 210% increase in valid AI-related vulnerability reports this year compared to 2024. It also saw a 339% jump in total bounties paid for AI vulnerabilities this year, as bug bounty programs evolve to address vulnerabilities in AI-enabled applications, with prompt-injection flaws, model manipulation, and insecure plug-in design accounting for the majority of findings.

AI slop taxes defenders

Industry experts advise that AI should only be used as a ‘research assistant’ or guide rather than the principle mechanism of vulnerability discovery.

Inti De Ceukelaire, chief hacker officer at bug bounty platform Intigriti, says AI has leveled the playing field for hackers because it can help less skilled researchers to identify potentially vulnerable systems or analyze code for flaws. But the results of AI-based analysis are not always reliable — and this has created practical problems.

“We have seen AI acting as an echo chamber and amplifier for individuals that believe they might be onto something, luring them into a downwards spiral of confirmation bias,” De Ceukelaire tells CSO.

Security teams dealing with external vulnerability reports will need to be increasingly skeptical of inbound reports that show signs of relying heavily on AI.

“Bug bounty platforms offering triage services could help with this, as they are able to measure the track record of researchers over time, and use in-depth technologies to detect and recognize AI slop before it gets to the company,” De Ceukelaire says.

Other security experts agree that outcomes of applying AI tools to bug hunting have thus far been mixed, while arguing that problems can be mitigated by careful triage.

“AI tools, when properly applied and validated, do provide high impact findings, but we’re also seeing programs being overwhelmed by huge numbers of reports, most of which are slop, to put it delicately,” says Bobby Kuzma, director of offensive cyber operations at cybersecurity and compliance consulting firm ProCircular.

Triaging the increasing volume of variable-quality reports generated by some AI tools is putting a strain on under-resourced programs, including those associated with critical open-source software projects.

For example, the curl project — a command-line tool commonly used for downloading files — has put out public entreaties to stop submitting AI-detected bugs. Maintainers of the project complained that they were spending too much time on low-quality bug reports generated using AI tools.

Project lead Daniel Stenberg compared the barrage of unsubstantiated and false reports to a denial-of-service attack. More recently, Stenberg softened his criticism following the submission of some genuine bug reports partly generated by AI tools.

Firehose of ‘false positives’

Gunter Ollmann, CTO at Cobalt.io, warns that AI is exacerbating the existing problem that comes from vendors getting swamped with often low-quality bug submissions.

Security researchers turning to AI is creating a “firehose of noise, false positives, and duplicates,” according to Ollmann.

“The future of security testing isn’t about managing a crowd of bug hunters finding duplicate and low-quality bugs; it’s about accessing on demand the best experts to find and fix exploitable vulnerabilities — as part of a continuous, programmatic, offensive security program,” Ollmann says.

Trevor Horwitz, CISO at UK-based investment research platform TrustNet, adds: “The best results still come from people who know how to guide the tools. AI brings speed and scale, but human judgment is what turns output into impact.”

Gal Nagli, head of threat exposure at cloud security vendor Wiz and a bug bounty hunter, tells CSO that AI tools are yet to make a dramatic difference in bug bounty hunting, at least when it comes to more skilled practitioners. 

For example, researchers who automate infrastructure-based vulnerabilities at scale — like default credentials or subdomain takeovers — already have reliable tooling and detections in place. “AI isn’t needed in those cases,” Nagli says.

“The real value of AI is in augmenting expert researchers, especially when testing authenticated portals or analyzing sprawling codebases and JavaScript files,” Nagli explains. “It helps uncover vulnerabilities that were previously too complex or subtle to detect without AI.”

The latest generation of models can provide real assistance to skilled bug bounty hunters, not by replacing them, but by enhancing what they’re able to find.

“Fully autonomous agents still struggle, especially with authentication and scenarios where human context is critical,” Nagli adds.

Enterprise risk management

Bug bounty programs have matured into extensions of enterprise risk management strategies by constantly surfacing real threats before attackers exploit them.

Security leaders are moving toward continuous, data-driven exposure management, combining human intelligence with automation to maintain real-time visibility across assets, supply chains, and APIs.

HackerOne reports 83% of organizations surveyed now use bug bounties, and payouts grew 13% year-over-year, reaching $81 million across all programs.

As common vulnerability types like cross-site scripting (XSS) and SQL injection become easier to mitigate, organizations are shifting their focus and rewards toward findings that expose deeper systemic risk, including identity, access, and business logic flaws, according to HackerOne.

HackerOne’s latest annual benchmark report shows that improper access control and insecure direct object reference (IDOR) vulnerabilities increased between 18% and 29% year over year, highlighting where both attackers and defenders are now concentrating their efforts.

“The challenge for organizations in 2025 will be balancing speed, transparency, and trust: measuring crowdsourced offensive testing while maintaining responsible disclosure, fair payouts, and AI-augmented vulnerability report validation,” HackerOne’s Hazen concludes.

Categories

No Responses

Leave a Reply

Your email address will not be published. Required fields are marked *