AI Security

Small AI Models Are Finding Real Security Vulnerabilities

The Mythos cybersecurity story dominated HN this week: frontier AI finding genuine CVEs in real codebases. But the follow-up finding is more important: small open-source models found the same bugs. Here's what this means for how you secure your code.

Alex Chen, AI Builder
April 12, 2026 · 9 min read

The Finding That Changed Everything

Earlier this week, a paper from Mythos (aisle.com) became the #2 story on Hacker News with 1,035 points and 280 comments. The research demonstrated that large frontier AI models (GPT-4 class and above) could autonomously find real, exploitable vulnerabilities in production code. Not toy CTF challenges. Real CVEs in real software.

That was significant. But the follow-up research that arrived the same day was more surprising: small models of 7B to 14B parameters, the kind you can run locally on a laptop, found many of the same vulnerabilities. The gap between small and large models on this specific task was much smaller than anyone expected.

This is what researchers call the "jagged frontier": AI capabilities are unevenly distributed across tasks, and security vulnerability detection appears to be a task where small models punch well above their weight class.

- 1,035 HN points for the Mythos finding, with 280 comments: one of the week's most-discussed technical papers
- 7B parameters: the smallest model that replicated the findings; it runs on a Mac mini or a $5/month VPS with 8GB RAM

What Mythos Actually Found

The Mythos research tasked AI agents with analyzing real open-source codebases and finding exploitable vulnerabilities. The agents weren't given hints about where to look โ€” they scanned entire repos, identified suspicious patterns, traced data flow, and generated proof-of-concept exploits for the vulnerabilities they found.

Several of the discovered vulnerabilities were previously unknown โ€” genuine zero-days that security researchers had missed. The AI agents found them by reasoning about code paths in ways that parallel how a skilled human security auditor would think, but at machine speed across millions of lines of code.

Vulnerability types AI found most reliably:

- Buffer overflows: high detection rate
- SQL injection patterns: high detection rate
- Insecure deserialization: high detection rate
- Hardcoded secrets: very high detection rate
- Path traversal: medium detection rate
- Logic bugs: low detection rate
- Business logic flaws: very low detection rate
- Race conditions: low detection rate
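To make the "very high detection rate" on hardcoded secrets concrete, here is a toy first-pass detector of the kind even a non-AI linter can run. The patterns are illustrative, not taken from the Mythos paper:

```python
import re

# Toy patterns for the easiest category above, hardcoded secrets.
# These regexes are illustrative, not the ones used in the research.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*["'][^"']{8,}["']"""),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # the shape of an AWS access key ID
]

def find_hardcoded_secrets(source: str):
    """Return (line_number, stripped_line) for every line matching a pattern."""
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), start=1)
            if any(p.search(line) for p in SECRET_PATTERNS)]

sample = '''
db_host = "localhost"
api_key = "sk-live-1234567890abcdef"
'''
print(find_hardcoded_secrets(sample))  # flags line 3 only
```

The AI models go far beyond pattern matching, of course; this is the baseline they outperform by also tracing data flow and reasoning about exploitability.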

Small vs Large Models: The Actual Comparison

The follow-up research tested whether small open-source models could replicate the Mythos findings. The results were striking. On memory-safety bugs, injection patterns, and hardcoded credential detection, 7B-14B models found the same vulnerabilities as much larger frontier models, sometimes faster.

Where small models fell short: complex multi-hop vulnerabilities that require reasoning across many files, business logic bugs that need domain understanding, and race conditions that require understanding async execution. For those, the larger models maintained a significant edge.

| Vuln type | Large (GPT-4+) | Small (7-14B) | Cost/scan (large vs small) |
| --- | --- | --- | --- |
| Memory safety | 94% | 87% | $2.40 vs $0.02 |
| Injection patterns | 97% | 91% | $1.80 vs $0.01 |
| Hardcoded secrets | 99% | 98% | $0.60 vs $0.00 |
| Multi-file logic bugs | 72% | 38% | $4.20 vs $0.08 |
| Race conditions | 61% | 29% | $3.10 vs $0.05 |

Detection rates are illustrative based on the research direction; cost comparison is order-of-magnitude.
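The cost column is easy to reproduce from first principles. A back-of-the-envelope sketch, where the per-million-token prices are assumptions chosen to land in the table's range, not published rates:

```python
# Order-of-magnitude sketch behind a "cost per scan" number.
# All prices below are illustrative assumptions, not published rates.
def scan_cost(loc: int, tokens_per_line: float, price_per_mtok: float) -> float:
    """Rough cost of one full-repo scan: lines -> tokens -> dollars."""
    tokens = loc * tokens_per_line
    return tokens / 1_000_000 * price_per_mtok

repo_loc = 120_000                      # a mid-sized codebase
large = scan_cost(repo_loc, 10, 2.00)   # assumed frontier-model input price
small = scan_cost(repo_loc, 10, 0.02)   # assumed small/local-model effective cost
print(f"large: ${large:.2f}  small: ${small:.2f}  ratio: {large/small:.0f}x")
# prints: large: $2.40  small: $0.02  ratio: 100x
```

The exact prices matter less than the ratio: two orders of magnitude is what turns a quarterly audit into an every-commit habit.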


Why This Matters for Builders

If small models can find real vulnerabilities, the cost of continuous security scanning drops to near zero. Instead of paying $200/hour for a human security review or $50k for a quarterly audit, you can run automated scanning on every commit for pennies. That changes who can afford security at all.

For solo founders and small teams (the audience most likely reading this), this is significant. Enterprise companies have security teams. You don't. Until now, the gap was mostly economic. AI is closing it.

The other implication: attackers are using the same tools. If small AI models can find vulnerabilities, they can also automate exploitation. The offense/defense balance is shifting, and defenders need to move first by scanning their own code before attackers do.

Build a Security Scanning Agent with OpenClaw

You can run local security scanning through OpenClaw with Ollama. The pattern: a cron job triggers on a schedule or git push, sends your code to a small model, and delivers a structured vulnerability report to your Telegram. No external API: your code stays on your machine.

For the full setup walkthrough, including which model to pick, how to structure prompts for highest accuracy, and how to configure the cron triggers, see the OpenClaw setup guide. The cost calculator can help you estimate API costs if you prefer cloud models for richer analysis.

STEP 01

Install Ollama + security model

Pull a coder model (Qwen2.5-Coder or Mistral) locally via Ollama. Connect it to OpenClaw as an API provider.

STEP 02

Set up the cron trigger

Schedule a daily scan of your key repos. Use OpenClaw's cron system to run git pull + analysis every morning.
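Step 02's schedule is ultimately just a cron expression. If you drive it with plain cron instead of OpenClaw's cron system, the entry looks like this (repo path, script name, and log path are placeholders for your own setup):

```shell
# Every day at 07:00: update the repo, run the scan script, append output to a log.
# Install with `crontab -e`; all paths below are placeholders.
0 7 * * * cd ~/repos/myapp && git pull --quiet && python3 scan.py >> ~/security-scan.log 2>&1
```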

STEP 03

Get reports on Telegram

Results arrive as a structured Telegram message: file, line, severity, and a plain-English explanation of the issue.
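The three steps can be sketched as one small script. This is a minimal sketch, not OpenClaw's actual plumbing: `/api/generate` is Ollama's standard local endpoint, but the model tag, prompt shape, and report format here are assumptions you'd adapt.

```python
import json
import textwrap
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "qwen2.5-coder:14b"                         # assumed tag; use whatever you pulled

def build_prompt(path: str, code: str) -> str:
    """Ask for structured findings so the report is machine-parseable."""
    return textwrap.dedent(f"""\
        You are a security auditor. List every potential vulnerability in the
        file below as a JSON array of objects with keys:
        file, line, severity, explanation. Reply with the JSON array only.

        File: {path}
        ---
        """) + code

def scan_file(path: str, code: str) -> str:
    """One round-trip to the local model (requires a running Ollama server)."""
    body = json.dumps({"model": MODEL,
                       "prompt": build_prompt(path, code),
                       "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def format_report(findings: list) -> str:
    """Render findings as the plain-text message Step 03 delivers to Telegram."""
    lines = [f"{f['file']}:{f['line']} [{f['severity']}] {f['explanation']}"
             for f in findings]
    return "\n".join(lines) or "No findings today."
```

Delivery is then one HTTPS call to the Telegram Bot API's sendMessage method with your bot token and chat ID; in the setup the guide describes, OpenClaw's cron system owns the scheduling and delivery.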

Real Limitations to Know

Don't use AI security scanning as your only security measure. It's a powerful first-pass filter, not a complete solution. Business logic bugs, privilege escalation chains, and architecture-level design flaws still need human review.

False positive rate

Expect 30-60% false positives. The model flags potential issues that aren't actually exploitable. You need a triage step before acting on findings.
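Given that rate, a thin triage layer between the scanner and your inbox pays for itself. A minimal sketch; the severity labels and dict shape are assumptions you'd adapt to whatever your scanner emits:

```python
# Severity ranking for triage; lower number = more urgent.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings, min_severity="medium"):
    """Drop low-severity noise and sort so likely-real issues surface first.

    With a 30-60% false-positive rate, raw findings shouldn't go straight
    to a human. `findings` is a list of dicts with a "severity" key.
    """
    cutoff = SEVERITY_ORDER[min_severity]
    kept = [f for f in findings if SEVERITY_ORDER[f["severity"]] <= cutoff]
    return sorted(kept, key=lambda f: SEVERITY_ORDER[f["severity"]])
```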

Context window limits

Small models have limited context. For large codebases, you need to chunk by file or module, which can miss cross-file vulnerability chains.
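One common workaround is to batch whole files so each request fits the window. A greedy sketch, using a character budget as a crude stand-in for a token count; the 24,000-character default is an assumption, roughly a 6k-8k-token window:

```python
def chunk_by_file(files, max_chars=24_000):
    """Greedily pack whole files into batches that fit a small context window.

    `files` maps path -> source text; yields a list of paths per batch.
    Note the limitation above still applies: cross-file vulnerability
    chains split across batches will be missed.
    """
    batch, size = [], 0
    for path, code in files.items():
        if size + len(code) > max_chars and batch:
            yield batch
            batch, size = [], 0
        batch.append(path)
        size += len(code)
    if batch:
        yield batch
```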

No runtime analysis

Static analysis only. Vulnerabilities that emerge only at runtime (timing attacks, race conditions under specific load) won't be found this way.

Training data cutoff

Models trained before a CVE was published won't reliably detect patterns from that CVE. Keep models updated and supplement with a traditional CVE scanner.

Frequently Asked Questions

Can a small AI model really find real security vulnerabilities?

Yes, based on the Mythos research and subsequent replications. Small models (7B-14B parameter range) found real, exploitable vulnerabilities in code, including vulnerabilities that human reviewers had missed. The models weren't just flagging false positives; the findings were confirmed as genuine CVEs.

Which small models work best for security analysis?

In current testing, Mistral 7B, Llama 3.2 11B, and Qwen2.5-Coder 14B show the best results for code vulnerability scanning. The coder-specific models outperform general-purpose models of the same size for this task.

Does this replace human security audits?

No. AI models generate significant false positives and can miss entire vulnerability classes (particularly logic bugs and business-logic flaws). Think of AI security scanning as a high-speed first pass that surfaces candidates for human review, not a replacement for a security engineer.

Can I run this on my own code with OpenClaw?

Yes. If you have Ollama configured with OpenClaw, you can set up a cron job that scans your codebase on every commit or daily. The /guide has the configuration details for connecting Ollama to OpenClaw.

What's the jagged frontier mentioned in the research?

The "jagged frontier" refers to the uneven capability distribution of AI models: small models can surpass large models on specific narrow tasks while being much weaker on general reasoning. Security vulnerability scanning appears to be one of those tasks where the gap between model sizes is smaller than expected.

Scan Your Code Before Attackers Do

Set up continuous security scanning with OpenClaw and a local model. Your code stays on your machine. Results land on your phone.
