A Closer Look at the AppSec Productivity Gap

Updated May 5, 2025.

Ask any AppSec engineer what their biggest challenge is, and you’re bound to hear there just aren’t enough hands on deck.
Today’s developers outnumber security engineers by as much as 200 to 1. They’re shipping faster, particularly since the adoption of AI and copilots has accelerated delivery and code generation. And this is now happening across stacks that are growing more complex by the week, all while the AppSec function is expected to somehow keep up, mostly with manual effort, scattered tooling, and no extra time on the roadmap.
As a result, product vulnerabilities are appearing faster than they can be resolved. We call this the AppSec Productivity Gap.
In the rest of this article, we'll explore some of the root causes of the AppSec Productivity Gap, and how to address these challenges with AI Agents.
The Most Pressing Challenges in AppSec Today
Jit’s 2025 Developer Survey confirms what many in security already know intuitively: the AppSec model is strained to the point of breaking. But it also reveals a path forward—one where AI doesn’t just offer “copilot” productivity boosts, but fundamentally changes how security work gets done.
Below, we’ll look at the areas where AppSec is struggling most, and how AI can step in to level up the AppSec game.
The report reveals three central areas where developers consistently face friction with application security efforts:
Organizational Neglect and Lack of Prioritization: 60% of developers say security is not a major part of their team’s culture. In many organizations, shipping features remains the primary incentive—security only becomes a priority when it blocks a release, compliance deadline, or customer sale. Leadership teams often expect “secure-by-default” software without allocating time, training, or planning cycles for security itself.
Complex Architectures, Limited Knowledge: The top-cited challenge was the complexity of modern applications. With microservices, sprawling cloud infrastructure, and tangled dependency trees, security is no longer just about writing clean code—it’s about managing an ecosystem of interactions and vulnerabilities. Developers feel this weight, particularly when they lack adequate guidance or domain expertise. One respondent put it plainly: "As a software engineer, I don’t have the expertise to be able to enforce and ensure code security."
Incomplete or Ineffective Tooling: Automated security tools (like SAST, SCA, and secrets detection) were cited as the most preferred strategy for securing code. But here’s the caveat: they often create more problems than they solve. Developers complained about poor integration into CI/CD workflows, noisy outputs, and high false positive rates. These issues erode trust and waste cycles—leaving security bugs buried or ignored.
What Developers Actually Use for Security Guidance
Despite the rise of AI and LLMs, developers still rely most on traditional online documentation (e.g., OWASP), forums like StackOverflow, and colleagues. AI tools like Copilot were rated among the least effective resources for code security. That’s not necessarily because developers don’t believe in the potential of AI—it’s because today’s AI lacks precision, context, and trust in security-critical decisions.
In essence, developers aren’t rejecting automation—they’re rejecting bad automation.
Enter AI, which has the potential to alleviate friction across many of the areas that remain challenging for AppSec. Let’s walk through the areas that generate the most cognitive and manual load, and how AI can help turn the tide for the AppSec domain.
1. Security Isn’t a Priority—Until It’s a Problem
Most developers surveyed said security is not part of their daily workflow. It's not discussed in team rituals, not factored into sprint planning, and not included in estimates. It only surfaces when something breaks or compliance forces a review.
This leaves AppSec teams in reactive mode, constantly chasing down risks long after the code has shipped.
Where AI Can Help:
Security doesn’t have to be a disruptive, manual process. With AI embedded into dev workflows, security checks can happen naturally as part of the coding process. Unlike traditional scanners that flag everything by policy, AI can distinguish what actually matters. It can contextualize vulnerabilities based on code usage, data flows, or exploitability, reducing false positives and alert fatigue. For example, AI can recognize when a vulnerable dependency is unreachable, or when a code pattern introduces an actual authorization bypass. Where traditional scanners tell you what is wrong and force developers to sift through dozens of alerts, AI can tell you why it matters, and even how to fix it.
By analyzing code in context, AI can explain vulnerabilities in natural language, link to relevant documentation, and even propose secure alternatives or open a patch PR. This bridges the knowledge gap that most developers struggle with and turns each issue into a learning opportunity—not just a red flag.
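As a minimal sketch of the reachability-based triage described above, assuming a hypothetical Finding shape produced by a scanner (the fields and rule names here are invented for illustration), the core logic might look like:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str                 # e.g. "sql-injection" (illustrative rule id)
    severity: str             # "low" | "medium" | "high"
    reachable: bool           # did call-graph / data-flow analysis reach this code?
    handles_user_input: bool  # does the flagged code touch user-controlled data?

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3}

def triage(findings):
    """Suppress findings in unreachable code, then rank the rest so issues
    touching user-controlled input float to the top."""
    actionable = [f for f in findings if f.reachable]
    return sorted(
        actionable,
        key=lambda f: (f.handles_user_input, SEVERITY_RANK[f.severity]),
        reverse=True,
    )

findings = [
    Finding("sql-injection", "high", reachable=True, handles_user_input=True),
    Finding("weak-hash", "high", reachable=False, handles_user_input=False),
    Finding("open-redirect", "medium", reachable=True, handles_user_input=True),
]
ranked = triage(findings)  # the unreachable weak-hash finding is filtered out
```

In a real system the reachability and data-flow signals would come from the analysis engine itself; the point of the sketch is that the filter runs before a human ever sees the list.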
2. Modern Architectures Are Too Complex to Manually Threat Model
AppSec professionals know that today's architectures—microservices, serverless, containers, APIs, third-party integrations—create sprawling attack surfaces. But understanding those surfaces requires time, years of domain expertise, oral tradition and institutional knowledge, and documentation that rarely exists.
Threat modeling today is a time-consuming, manual process that simply doesn’t scale, and it largely remains a point-in-time exercise rather than a continuous one.
Where AI can help: Threat modeling is often static, done once per release (if at all), and quickly outdated. AI can make it continuous. By parsing Terraform, Helm charts, API specs, and code changes, AI can detect architectural drift, surface new attack paths, and alert on misaligned configurations as your system evolves, bringing security closer to how developers actually ship software. It can also construct and maintain a live model of your system, identifying trust boundaries and common misconfigurations. Instead of threat modeling once a quarter, you get continuous modeling that evolves with your stack.
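A toy illustration of the drift-detection step, assuming architecture snapshots have already been parsed (say, from Terraform state) into a simple name-to-attributes map. The resource names and the `public` flag are invented for the example:

```python
def diff_exposure(old, new):
    """Compare two snapshots of declared resources (name -> attributes)
    and flag resources that became newly internet-facing, plus resources
    that disappeared and should be pruned from the threat model."""
    alerts = []
    for name, attrs in new.items():
        was_public = old.get(name, {}).get("public", False)
        if attrs.get("public", False) and not was_public:
            alerts.append(f"{name} is now internet-facing")
    for name in old.keys() - new.keys():
        alerts.append(f"{name} was removed; update the threat model")
    return alerts

old_model = {"api-gw": {"public": True}, "orders-db": {"public": False}}
new_model = {"api-gw": {"public": True}, "orders-db": {"public": True}}
alerts = diff_exposure(old_model, new_model)  # flags the newly exposed database
```

A production system would track far richer attributes (trust boundaries, data classifications, ingress rules), but the shape of the job is the same: diff the live model on every change, not once a quarter.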
3. Developers Don’t Know What to Fix—or Why
“Lack of knowledge, training, and guidelines” was one of the top reasons developers said they struggled to secure their code. Security tooling inundates developers with alerts but provides minimal context. Even tools that surface context and propose fixes in the PR (like Jit) may still require engineers to do manual research and verification before securely merging the updates. Other times, issues are simply labeled “high severity” with no real explanation of exploitability, impact, or how they tie into actual business logic.
And that’s assuming the alerts get noticed between all the false positives.
Where AI can help: Instead of scanning code and dumping results, AI, which excels at pattern recognition and categorization, can triage findings based on risk, project history, and business-critical context. It can rewrite an alert in plain language and summarize similar historical issues. Agentic AI can go further: it can analyze this data and open a pull request with a proposed fix, complete with tests and guardrails.
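One small piece of such an agent, the part that drafts the change behind a proposed-fix pull request, might look like the sketch below for a vulnerable dependency. The package names and version numbers are illustrative only, and a real agent would pull the fixed version from an advisory database:

```python
def propose_dependency_fix(requirements: str, package: str, fixed_version: str) -> str:
    """Rewrite a requirements.txt-style file, pinning `package` to a
    known-fixed version; the returned text could seed an automated PR."""
    lines = []
    for line in requirements.splitlines():
        name = line.split("==")[0].strip()
        if name == package:
            lines.append(f"{package}=={fixed_version}")
        else:
            lines.append(line)  # leave unrelated pins untouched
    return "\n".join(lines)

reqs = "flask==2.0.1\nrequests==2.19.0"
patched = propose_dependency_fix(reqs, "requests", "2.32.0")
```

The guardrails matter as much as the patch: the agent should attach the advisory, run the test suite, and leave the merge decision to a human.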
Security should no longer be about overwhelming developers with noise. We need to evolve to a practice that prioritizes delivering the right signal at the right time, with just enough explanation to drive action.
4. Tooling Is Fragmented and Confidence Is Low
SAST, SCA, secrets detection, DAST—the alphabet soup of AppSec tools is growing, but trust in them isn’t. Developers are tired of tools that alert without precision. Security teams are tired of tools that don’t speak the same language or coordinate their findings. And no one knows what coverage they actually have.
Where AI can help: AI agents can become the connective tissue between tools—normalizing their outputs, de-duplicating overlapping findings, and tracking issues through remediation. More advanced agents can use telemetry to verify exploitability in staging or live traffic—shifting security posture from “scan and guess” to “observe and know.” This restores trust in automation by making it tangible, accurate, and consolidated.
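A minimal sketch of the normalize-and-de-duplicate step, assuming each tool's output has already been mapped into a common dict shape (the tool names, rule ids, and fields are invented for the example):

```python
def dedupe(findings):
    """Collapse overlapping findings from different scanners into one
    record per (rule, file, line) fingerprint, remembering every tool
    that reported it."""
    merged = {}
    for f in findings:
        key = (f["rule"], f["file"], f["line"])
        entry = merged.setdefault(key, {**f, "tools": set()})
        entry["tools"].add(f["tool"])
    return list(merged.values())

raw = [
    {"tool": "sast-a",    "rule": "hardcoded-secret", "file": "app.py",   "line": 12},
    {"tool": "secrets-b", "rule": "hardcoded-secret", "file": "app.py",   "line": 12},
    {"tool": "sast-a",    "rule": "xss",              "file": "views.py", "line": 40},
]
merged = dedupe(raw)  # two scanners agreeing becomes one finding, not two tickets
```

Real pipelines typically normalize to a shared format such as SARIF first, and a finding corroborated by multiple tools is itself a useful prioritization signal.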
5. Security Is Still Too Far from the Code
Despite all the “shift left” talk, most developers say they only touch security issues a few times a year—and when they do, it’s usually under pressure. There’s still a massive gap between what security knows and what engineering is doing day-to-day.
Where AI can help: Agentic AI can act as an embedded security advocate inside the dev team. It can sit in the IDE, scan GitHub workflows, and proactively enforce secure defaults, like locking down third-party actions, checking for secrets, or validating infrastructure policies. It doesn’t just throw knowledge over the wall and hope engineers catch it; it becomes a quiet partner that explains (and occasionally nudges), and patches in context.
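As one concrete example of a secure default such an agent could enforce, here is a rough check for GitHub Actions steps that reference a third-party action by a mutable tag rather than a pinned commit SHA. The workflow snippet and line-based parsing are a simplification (a real check would parse the YAML properly):

```python
import re

# A full 40-character hex ref means the action is pinned to an immutable commit.
SHA_REF = re.compile(r"@[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str):
    """Flag actions referenced by tag or branch instead of a commit SHA,
    a common supply-chain hardening check for CI workflows."""
    flagged = []
    for line in workflow_text.splitlines():
        line = line.strip()
        if line.startswith("- uses:") or line.startswith("uses:"):
            ref = line.split("uses:", 1)[1].strip()
            if "@" in ref and not SHA_REF.search(ref):
                flagged.append(ref)
    return flagged

workflow = """
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: some-org/deploy-action@0123456789abcdef0123456789abcdef01234567
"""
flagged = unpinned_actions(workflow)  # the tag-pinned checkout step is flagged
```

Run in a pre-commit hook or PR check, the agent can go one step further than flagging and open a patch that rewrites the tag to the SHA it currently resolves to.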
6. Security Culture Depends on Humans—and That’s Always a Bottleneck
The data shows a clear correlation: when dev and security collaborate well, security culture thrives. But right now, that relationship depends on a few overworked humans—security champions, AppSec leads, and the rare engineers who care enough to get involved. It’s fragile, inconsistent, and hard to scale.
Where AI can help: AI can’t replace the trust and collaboration built between developers and security teams (yet!)—but it can help scale that relationship. Instead of relying on one overworked AppSec engineer to support multiple teams, AI can help track each team’s security activity, highlight areas of risk, and route the right issues to the right people. It can surface trends—like recurring misconfigurations or teams that never fix critical vulnerabilities—so security teams can step in more strategically. Think of it as giving AppSec a dashboard for human relationships, not just technical findings.
The Future of AppSec Isn’t Just AI-Assisted. It’s AI-Orchestrated.
AppSec today is held together by brute force and duct tape: endless alerts, too many fragmented tools with disparate outputs, and never enough time. Agentic AI offers something new: not just automation, but autonomy.
Systems that don’t wait for tickets to be filed, but take action. Tools that don’t just scan and shout, but learn and guide.
The future isn’t about more dashboards. It’s about fewer decisions that need to be made manually. And it’s already starting.
If we want to close the 200:1 gap between developers and security, we don’t need to hire faster—we need to delegate smarter.