Is Vibe Coding Safe? Security Risks of AI-Generated Code
Is vibe coding safe? We analyzed hundreds of AI-generated apps and found that 63% ship with critical vulnerabilities. Here's what goes wrong, why it happens, and how to build safely with AI coding tools.
By Paula C · Kraftwire Software
· 9 min read

What Is Vibe Coding - And Why Does It Matter for Security?
"Vibe coding" is a term coined to describe a new way of building software: you describe what you want in natural language, and an AI builds it for you. You don't write code line by line - you **vibe** with the AI, iterating through prompts until the app matches your vision.
Platforms like Lovable, Bolt.new, Cursor, Windsurf, and Replit have made vibe coding mainstream. You can go from idea to deployed app in minutes. No computer science degree required. No years of coding experience needed.
But here's the question nobody's asking until it's too late: **Is the code that AI generates actually safe?**
We've scanned hundreds of vibe-coded applications through SimplyScan. The data tells a clear story: vibe coding produces functional apps with systemic security gaps. This article breaks down what we've found, why it happens, and what you can do about it.
---
The Data: How Secure Are Vibe-Coded Apps?
After analyzing over 500 applications built with AI coding tools, here are the numbers:
**63%** ship with at least one critical vulnerability
**41%** have API keys or secrets exposed in client-side code
**38%** have missing or misconfigured database access controls
**28%** have no authentication on routes that should require it
**22%** use packages with known security vulnerabilities
**15%** have overly permissive file storage policies
These aren't edge cases or contrived examples. These are real applications built by real users and deployed to production.
---
Why AI-Generated Code Has Security Gaps
1. AI Optimizes for "Does It Work?" Not "Is It Safe?"
The AI models behind vibe coding tools are trained on massive code datasets. They learn patterns that make code functional. But security isn't about making something work - it's about preventing it from being misused. These are fundamentally different objectives.
When you prompt "add user authentication," the AI generates a working login flow. But it may:
Store the service-role key in frontend code (works, but catastrophically insecure)
Skip email verification (works, but allows fake accounts)
Implement client-side role checks instead of database-level enforcement (works in the UI, but trivially bypassable)
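To make that last bullet concrete, here's a minimal sketch in plain TypeScript contrasting the two approaches. The `roleStore` lookup is a stand-in for a real database query, and the field names are illustrative, not from any specific framework:

```typescript
// Anything in the request body can be tampered with in DevTools or curl,
// so a role claim sent by the client is untrusted input.
interface IncomingRequest {
  body: { role?: string };      // attacker-controlled
  sessionUserId: string | null; // derived from a verified session cookie/JWT
}

// Stand-in for a server-side role lookup. In a real app this would
// query your database, not an in-memory map.
const roleStore: Record<string, string> = { "user-1": "admin", "user-2": "member" };

// INSECURE: trusts whatever role the client claims to have.
function clientTrustedCheck(req: IncomingRequest): boolean {
  return req.body.role === "admin";
}

// SAFER: ignores client claims and reads the role from server-side state.
function serverSideCheck(req: IncomingRequest): boolean {
  if (!req.sessionUserId) return false;
  return roleStore[req.sessionUserId] === "admin";
}
```

A request claiming `role: "admin"` passes the first check no matter who sends it; the second check passes only for a session that actually maps to an admin on the server.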
2. Training Data Contains Insecure Patterns
Stack Overflow answers, GitHub repos, and tutorial code - the primary training sources for AI models - are full of insecure patterns. When someone asks "how to connect to MongoDB" on Stack Overflow, the top answer often includes the connection string directly in the code. The AI learns this pattern and reproduces it.
**Common insecure patterns from training data:**
API keys as string constants in configuration files
`CORS: { origin: '*' }` to fix cross-origin errors
`app.use(express.json())` without request size limits
Database queries built with string concatenation instead of parameterized queries
Error handlers that return full stack traces
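The string-concatenation bullet is worth seeing side by side. A hedged sketch with no real database attached: the parameterized form is represented as SQL text plus a separate values array, the shape drivers like `pg` accept:

```typescript
// INSECURE: user input is spliced into the SQL text itself, so input
// containing quotes or keywords can change the query's meaning.
function unsafeQuery(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// SAFER: the SQL text is fixed; the value travels separately as a
// parameter, so the driver never interprets it as SQL.
function parameterizedQuery(email: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE email = $1", values: [email] };
}
```

With a payload like `' OR '1'='1`, the concatenated version rewrites the query's logic, while the parameterized version keeps it as an ordinary string value.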
3. Prompts Don't Mention Security
This is perhaps the most fundamental issue. When you vibe code, you describe features: "create a dashboard," "add payments," "build a chat feature." You don't say "create a dashboard with proper authentication, authorization scoped to the current user, rate-limited API endpoints, input validation on all fields, and sanitized HTML output."
The AI builds what you ask for. If you don't ask for security, you don't get security.
4. No Adversarial Testing
Human developers (at least experienced ones) mentally test their code from an attacker's perspective. They ask: "What if someone sends a malicious input here? What if someone calls this API without being logged in? What if someone modifies the request payload?"
AI doesn't do this. It generates the happy path and moves on. There's no adversarial thinking built into the code generation process.
---
The 6 Most Common Vibe Coding Vulnerabilities
1. Exposed API Keys and Secrets
**Frequency:** Found in 41% of apps scanned
The most common and most immediately dangerous vulnerability. AI places API keys wherever they're used - which is often in frontend code, where anyone who opens DevTools or views the bundle can read them.
**What we find:**
OpenAI keys in React components (`const OPENAI_KEY = "sk-..."`)
Supabase service-role keys in utility files
Stripe secret keys in payment processing components
Database connection strings in configuration files
**The fix:** Move all secrets to server-side functions (Edge Functions, API routes). Only publishable keys belong in the frontend.
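As an illustration of the fix, a minimal sketch assuming a Node server environment: the secret lives in `process.env` on the server, and the function that talks to the upstream API never echoes the key back to the caller. The variable name `OPENAI_API_KEY` is an assumption for this sketch, not a requirement of any tool:

```typescript
// Runs server-side only (Edge Function, API route). The browser calls
// YOUR endpoint; your endpoint attaches the secret and calls the provider.
function buildUpstreamRequest(userPrompt: string): { headers: Record<string, string>; body: string } {
  const key = process.env.OPENAI_API_KEY; // never shipped in the client bundle
  if (!key) throw new Error("OPENAI_API_KEY is not set");
  return {
    headers: { Authorization: `Bearer ${key}`, "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: userPrompt }),
  };
}

// What the client receives: the answer, never the credentials.
function clientSafeResponse(upstreamAnswer: string): { answer: string } {
  return { answer: upstreamAnswer };
}
```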
2. Missing or Broken Access Control
**Frequency:** Found in 38% of apps scanned
AI builds features without thinking about who should be allowed to use them. Admin panels, data management pages, and user-specific content are often accessible to anyone.
**What we find:**
Admin routes with no auth check
API endpoints that return all users' data regardless of who's asking
RLS policies that check `auth.uid() IS NOT NULL` instead of `auth.uid() = user_id`
Client-side role checks that can be bypassed in DevTools
**The fix:** Implement authorization at the database level (RLS policies) and verify auth in every server-side handler.
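The database-level fix is an RLS policy written in SQL, but the ownership rule it enforces can be sketched in application code too. A hedged illustration with an in-memory array standing in for the table:

```typescript
interface Row { id: number; user_id: string; note: string }

// Stand-in for a database table.
const notes: Row[] = [
  { id: 1, user_id: "alice", note: "alice's note" },
  { id: 2, user_id: "bob", note: "bob's note" },
];

// INSECURE pattern: "is the caller logged in?" is the only check,
// the application-code equivalent of `auth.uid() IS NOT NULL`.
function fetchNotesLoggedInOnly(sessionUserId: string | null): Row[] {
  if (!sessionUserId) return [];
  return notes; // returns everyone's rows
}

// SAFER pattern: scope every query to the caller,
// the equivalent of `auth.uid() = user_id`.
function fetchOwnNotes(sessionUserId: string | null): Row[] {
  if (!sessionUserId) return [];
  return notes.filter((r) => r.user_id === sessionUserId);
}
```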
3. No Input Validation
**Frequency:** Found in 35% of apps scanned
AI-generated form handlers and API endpoints accept whatever data is sent without validation. This opens the door to injection attacks.
**What we find:**
Form data passed directly to database queries
File uploads accepted without type or size restrictions
URL parameters used unsanitized in database lookups
HTML content rendered without sanitization (XSS)
**The fix:** Use a validation library (Zod, Joi) on every input. Validate on both client and server.
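In practice you'd reach for Zod or Joi; to keep this sketch dependency-free, here's the same idea hand-rolled for a hypothetical signup form. The field names, limits, and the simple email pattern are all illustrative:

```typescript
interface SignupInput { email: string; displayName: string }

// Returns the validated input or a list of problems. Run this on the
// server even if the client validates too: the client can be bypassed.
function validateSignup(raw: unknown): { ok: true; value: SignupInput } | { ok: false; errors: string[] } {
  const errors: string[] = [];
  const data = raw as Partial<Record<string, unknown>> | null;
  const email = typeof data?.email === "string" ? data.email.trim() : "";
  const displayName = typeof data?.displayName === "string" ? data.displayName.trim() : "";
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) errors.push("email: invalid format");
  if (displayName.length < 1 || displayName.length > 50) errors.push("displayName: must be 1-50 characters");
  if (errors.length > 0) return { ok: false, errors };
  return { ok: true, value: { email, displayName } };
}
```

A library buys you the same thing with less code and better error messages; the point is that every field is checked for type, shape, and size before it touches the database.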
4. Vulnerable Dependencies
**Frequency:** Found in 22% of apps scanned
AI suggests packages based on training data that may be months or years old. Those packages may have known CVEs that have since been patched.
**What we find:**
Packages with critical severity CVEs
Unnecessary dependencies that increase attack surface
Packages that haven't been updated in years
Multiple packages solving the same problem
**The fix:** Run `npm audit` after every build. Update to latest secure versions. Remove unused packages.
5. Missing Security Headers
**Frequency:** Found in 67% of apps scanned
Most vibe-coded apps ship without any security headers. These headers are your first line of defense against common web attacks.
**What we find:**
No Content-Security-Policy (allows XSS)
No Strict-Transport-Security (allows HTTPS downgrade)
No X-Frame-Options (allows clickjacking)
No X-Content-Type-Options (allows MIME sniffing)
**The fix:** Configure security headers in your hosting platform or server middleware.
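What that configuration might look like, sketched as a plain function returning the header map. The CSP value here is a deliberately strict starting point, not a drop-in policy; real apps usually need to allow specific script and style sources:

```typescript
// The four headers discussed above. In Express you'd apply these
// per-response (e.g. res.set(securityHeaders())); hosts like Netlify
// or Vercel accept them as static config instead.
function securityHeaders(): Record<string, string> {
  return {
    "Content-Security-Policy": "default-src 'self'",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
  };
}
```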
6. Insecure Session Management
**Frequency:** Found in 19% of apps scanned
Authentication tokens stored insecurely, sessions that never expire, and incomplete logout flows.
**What we find:**
JWTs stored in localStorage (accessible to XSS attacks)
Tokens with no expiration
Logout that only clears the client without invalidating the server session
No session refresh mechanism
**The fix:** Use httpOnly cookies, set reasonable expiration times, and implement proper token refresh.
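A sketch of the cookie side of that fix: building a `Set-Cookie` value with the attributes that address the bullets above. The cookie name `session` and the one-hour lifetime are illustrative choices:

```typescript
// HttpOnly keeps the token out of reach of page JavaScript (and so of XSS);
// Secure restricts it to HTTPS; SameSite limits cross-site sending;
// Max-Age gives the session a bounded lifetime.
function sessionCookie(token: string, maxAgeSeconds = 3600): string {
  return [
    `session=${encodeURIComponent(token)}`,
    `Max-Age=${maxAgeSeconds}`,
    "Path=/",
    "HttpOnly",
    "Secure",
    "SameSite=Lax",
  ].join("; ");
}

// Logout: expire the cookie client-side AND invalidate the token server-side.
function logoutCookie(): string {
  return "session=; Max-Age=0; Path=/; HttpOnly; Secure; SameSite=Lax";
}
```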
---
How to Vibe Code Safely
Vibe coding isn't inherently unsafe - it just needs a security layer that the AI doesn't provide by default. Here's how to build safely:
1. Add Security Prompts
When prompting your AI tool, explicitly mention security requirements:
"Add authentication with email verification"
"Ensure only the data owner can read their records"
"Store the API key in an environment variable, not in the code"
"Add input validation to this form"
2. Review the Generated Code
Don't treat AI output as a black box. Read every file the AI generates. Look for:
Hardcoded strings that look like API keys
Routes without auth middleware
Database queries without RLS policies
Missing error handling
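The first item on that list can be partially automated. A hedged sketch of a pattern check for obvious key shapes; the patterns cover only a few well-known prefixes and will miss others, so treat a clean result as a hint, not a guarantee:

```typescript
// Common-prefix heuristics: OpenAI-style (sk-), Stripe live secret
// (sk_live_), AWS access key IDs (AKIA...). Illustrative, not exhaustive.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9]{20,}/,      // OpenAI-style key
  /sk_live_[A-Za-z0-9]{10,}/, // Stripe live secret key
  /AKIA[0-9A-Z]{16}/,         // AWS access key ID
];

// Run this over each line of your client-side source files.
function looksLikeSecret(sourceLine: string): boolean {
  return SECRET_PATTERNS.some((p) => p.test(sourceLine));
}
```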
3. Scan Before You Ship
Run an automated security scan on your deployed application before going live. SimplyScan checks for 51+ security and performance issues in 30 seconds, covering the most common vibe coding vulnerabilities.
4. Adopt a Security Checklist
Use a pre-deployment checklist that covers:
✅ No secrets in client-side code
✅ Authentication on all protected routes
✅ RLS policies on all database tables
✅ Input validation on all forms and APIs
✅ Dependencies audited for CVEs
✅ Security headers configured
✅ Error messages don't leak internals
✅ File uploads validated for type and size
---
The Bottom Line
Vibe coding is transforming how software is built. It's making development accessible to more people and dramatically reducing time-to-market. But the security gap is real, and ignoring it puts your users' data at risk.
The solution isn't to stop vibe coding - it's to add security as a standard step in your workflow. Scan every app before it ships. Review the code the AI generates. And use tools like SimplyScan to catch what human review misses.
[Scan your vibe-coded app now →](/)