I've spent six months working through Harvard's CS50 and an MIT course learning how to build with today's new tech, and while I learned A LOT, what really crystallized for me wasn't just how to write better code, but how little visibility any of us have into what our code actually does. That realization, combined with watching AI transform how software gets written, has led me to an uncomfortable conclusion: the entire discipline of application security has entered its twilight.
The Visibility Problem Was Already Bad
Before we talk about AI, we need to be honest about where we were.
When a classically trained developer writes code in Python, JavaScript, Go, or any modern language, they're working several abstraction layers above reality. They call a function from a standard library. That library calls another library. Somewhere down the stack, something eventually talks to the operating system. At no point does the developer have meaningful visibility into what they've actually invoked.
Standard libraries, and the popular packages layered on top of them, contain hundreds of thousands of lines of code that virtually no application developer has ever read. When you import `requests` or call `fetch()`, you're trusting that someone, somewhere, has audited that code.
They haven't. Not really.
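To make that concrete, here's a minimal sketch (Python, assuming a standard environment with `requests` installed) of how much code a single innocuous import actually drags in:

```python
import sys

# Snapshot what the interpreter has loaded before the "one line" we write.
baseline = set(sys.modules)

import requests  # the single import the application developer actually sees

# Everything below came along for the ride: urllib3, certifi, idna, charset
# detection, plus large swaths of the standard library (ssl, http.client,
# socket, ...). The exact count varies by install, but it is typically
# dozens to a couple of hundred modules.
pulled_in = sorted(set(sys.modules) - baseline)
print(f"'import requests' loaded {len(pulled_in)} additional modules")
```

None of those modules were chosen, read, or reviewed by the person who typed that one line.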
Log4j wasn't an edge case. OpenSSL's Heartbleed wasn't an edge case. These were foundational components that millions of applications depended on, hiding critical vulnerabilities for years. For every CVE we find, how many remain undiscovered? The honest answer: we have no idea.
Application security was already operating on faith. We just didn't talk about it that way.
Now Add AI to the Stack
Claude Code. GitHub Copilot. Cursor. Amazon CodeWhisperer. Cody. The AI coding assistants are multiplying fast, and they're being integrated directly into VS Code, JetBrains, and every other IDE developers actually use.
This changes everything—and not in the ways the marketing copy suggests.
These tools don't understand code the way humans do. They predict statistically likely token sequences based on patterns in training data. When an AI suggests a function, it's not reasoning about security implications. It's pattern-matching against a corpus that includes both secure and insecure code, with no reliable mechanism to distinguish between them.
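A minimal, hypothetical illustration in Python: both functions below appear constantly in public code, one is injectable and one is not, and nothing in a next-token objective reliably prefers the safe one.

```python
import sqlite3

def get_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern-matched from countless tutorials: builds SQL by string
    # interpolation, which is a textbook SQL injection vector.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def get_user_parameterized(conn: sqlite3.Connection, username: str):
    # The safe variant differs by only a few tokens: a parameterized query
    # that lets the database driver handle escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```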
Here's what this means in practice:
A developer using Claude Code or Copilot can generate in an afternoon what might have taken a week. That's a 5x to 10x acceleration in code volume. Which means a corresponding acceleration in attack surface creation.
The AI doesn't know why it made the choices it made. There's nothing to document. Nothing to explain. When a security reviewer asks "why did you implement it this way?"—the honest answer is "the AI suggested it and it worked."
We now have black boxes generating code that calls into other black boxes (standard libraries), reviewed by humans who can't possibly keep pace, secured by tools designed for a world where humans wrote most of the code.
Why Traditional AppSec Can't Survive This
Application security programs were designed with certain assumptions:
- Developers write most of their code deliberately, with intent they can explain
- Code review happens at roughly the pace code is created
- Static analysis can identify patterns in source code
- The dependency tree is enumerable and auditable
None of these hold anymore.
When your application is 5% original code and 95% dependencies plus AI-generated snippets, your SAST tool is analyzing the tip of an iceberg. When code generates faster than humans can review, code review becomes theater. When AI writes code it can't explain, there's no design rationale to evaluate.
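You can put a rough number on that iceberg yourself. A hedged sketch (the `src` and `.venv/lib` paths are placeholders for your own repo and virtual environment):

```python
from pathlib import Path

def count_py_lines(root: Path) -> int:
    """Total lines across every .py file under root."""
    return sum(
        sum(1 for _ in path.open(errors="ignore"))
        for path in root.rglob("*.py")
    )

# Placeholder paths: point these at your application code and its virtualenv.
first_party = count_py_lines(Path("src"))
third_party = count_py_lines(Path(".venv/lib"))

print(f"first-party code: {first_party:,} lines")
print(f"dependency code:  {third_party:,} lines")
print(f"ratio: 1 : {third_party / max(first_party, 1):.0f}")
```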
Your application security program isn't failing because your team is incompetent. It's failing because the model assumes visibility and control that no longer exist.
The Math Doesn't Work
Let's be clear.
If AI-assisted development increases code output by 5x, and your security team's capacity stays flat, you've just created a 5x gap between attack surface creation and security coverage. That gap compounds over time. Every sprint. Every release.
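As a toy model (illustrative numbers only): if teams ship five units of code for every one unit the security team can meaningfully review, the unreviewed backlog never clears; it grows every sprint.

```python
# Toy model with made-up numbers: lines shipped vs. lines reviewed per sprint.
output_per_sprint = 5_000   # shipped with AI assistance (~5x the old pace)
review_capacity = 1_000     # what the security team can meaningfully review

backlog = 0
for sprint in range(1, 13):
    backlog += output_per_sprint - review_capacity
    print(f"sprint {sprint:2d}: unreviewed backlog = {backlog:,} lines")
# Even in this best case the shortfall grows without bound, release after release.
```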
Meanwhile, your adversaries—nation-states, ransomware crews, and the rest—are using the same AI tools to analyze your code, find vulnerabilities, and generate exploits. They're accelerating too.
Application security as a discipline assumed humans could, with enough effort and tooling, maintain meaningful oversight of what software does. That assumption is breaking down in real time.
What Now?
I don't have a product to sell you. I'm not going to pretend a tool exists that solves this.
What I do know is that honesty is the starting point. If you're running an AppSec program today, you should be asking: what are we actually achieving? What visibility do we really have? Are we reducing risk, or are we generating compliance artifacts while the real attack surface grows unchecked?
Maybe the future involves AI-powered security analysis that operates at the same speed as AI-powered development. Maybe it means rethinking how we build software—smaller trusted computing bases, fewer abstractions, more visibility. Maybe it means accepting that certain categories of software simply cannot be secured to the standards we've historically claimed.
What I'm certain of is this: pretending the old model still works is the most dangerous path forward.
The twilight is here. The question is what we build next.
---
*Jeff Stutzman, CISSP, is CEO of Monadnock Cyber LLC with over 30 years of cybersecurity experience. He recently completed CS50 through Harvard/edX and MIT's Developing AI Applications and Services program.*
