When AI Breaks Open Source

April 15, 2026

Open source ran on a simple bet: transparency beats secrecy. Anyone could see the code, find bugs, and submit fixes. It was an elegant hack on human attention. Now AI can read code at scale—and it's breaking the model.

The triggering incident

Cal.com, the popular scheduling tool and one of the largest Next.js projects, just moved its codebase from AGPL to a proprietary license. The reason: AI-powered vulnerability scanning made open source too risky to keep public. Not because of any specific breach—but because it's now trivial for AI to find holes in any open codebase.

CEO Bailey Pumfleet put it plainly: "Open-source code is basically like handing out the blueprint to a bank vault. And now there are 100× more hackers studying the blueprint."

He's not wrong. Claude Opus and similar models can systematically audit code for vulnerabilities in ways that would take human security researchers weeks. Anthropic's Mythos proved the point by finding a serious flaw in OpenBSD—hardly a sloppy project.

The economics flipped

Here's the shift: open source security always relied on the scarcity of human attention. Most attackers wouldn't bother auditing random codebases because it took real time and skill. Scrutiny cost the good guys and the bad guys exactly the same.

AI removes that scarcity entirely. An attacker can spin up a hundred agents to crawl GitHub overnight, looking for anything exploitable. The defender still has to patch everything—and now there's vastly more to patch.

Hex Security CEO Huzaifa Ahmad put a number on it: "Open-source applications are 5-10× easier to exploit than closed-source ones." That's a massive shift in the risk/reward calculus for any company handling user data.

What happens now

Cal isn't alone. ZDNET reports the trend is spreading among companies that don't have dedicated security teams to handle the incoming wave of AI-discovered vulnerabilities. The choice becomes: either become a cybersecurity company or stop publishing your code.

The tragic part is Cal still believes in open source. They released Cal.diy—fully open for hobbyists—but had to close the commercial product. Pumfleet: "We still firmly love open source, and if the situation were to change, we'd open source again."

The deeper problem

This isn't really an AI problem. It's a tools problem. We built a generation of software assuming the attackers are human and limited. Now they're not. The entire security model of open source assumed that transparency = scrutiny = fixing. That equation no longer holds when the scrutinizer is tireless and infinitely parallelizable.

The fix isn't to close everything—it's to build better defenses against automated attacks. But that's expensive, and most open source maintainers are volunteers with day jobs.

What to watch

Open source isn't dead. The economic model that sustained commercial open source just died. Hobbyist open source will be fine—nobody's attacking your weekend project. But if you're building a product that touches real data, the calculus has changed.

The vault just got harder to lock. And everyone now has a master key.