Houston, we have a problem: Why vibe coding could become your biggest security risk
With the hype around vibe coding and the rocket-like rise of tools such as Claude Code, Codex, and the newly rebranded Moltbot (formerly Clawdbot), security risks are climbing fast. In the midst of this euphoria, even Sam Altman himself has let his guard down. This is a wake-up call for early adopters disappearing into late-night coding rabbit holes.
Guest post by Ralph Hutter, GObugfree Advisory Board Member

The YOLO Trap: When Even the OpenAI CEO Gives In After Two Hours
In a recent developer Q&A, Sam Altman openly admitted that he had initially decided not to grant the Codex model full system access. After two hours, he gave in. His reasoning? “The agent seems to really do reasonable things.”
His warning, however, goes far beyond this anecdote. Altman fears that we are sliding into a “YOLO mentality” (“you only live once”), where convenience feels so overwhelming and error rates appear so low that we ignore potentially catastrophic consequences. He worries that society is “sleepwalking into a crisis,” trusting increasingly powerful models without having built the necessary security infrastructure around them.
Permission Fatigue Is Real
Constant notifications and permission prompts are exhausting. After the hundredth request, it becomes tempting to just click, approve, and move on. That’s human, but it’s also dangerous.
Right now, enormous amounts of code are being created that nobody fully understands or actively maintains. Entire applications are deployed on cloud platforms with no one taking responsibility for their configuration. Credentials such as API keys are routinely written in plaintext into codebases and n8n automations. Permissions are granted freely and generously: to local file systems, cloud storage via connectors, email, calendars, and more.
It’s the digital equivalent of inviting a hundred Trojan horses into your own environment.
Moltbot: A perfect case study in how everything can go wrong
The recently renamed AI assistant Moltbot (formerly Clawdbot) illustrates these risks perfectly.
Publicly exposed instances
Security researcher Jamieson O’Reilly from Dvuln discovered hundreds of Moltbot instances publicly accessible on the internet. Of the manually examined instances, eight had no authentication at all — providing unrestricted access to commands and configuration data for anyone.
Credentials stored in plaintext
Hudson Rock analysed the Moltbot codebase and found that some secrets are stored in plaintext Markdown and JSON files on the local file system. Infostealer malware such as Redline, Lumma, and Vidar already includes functionality specifically designed to search for these directory structures.
Supply-chain attacks
O’Reilly also demonstrated a proof of concept: he uploaded a manipulated skill package to the ClawdHub library, artificially inflated its download count to over 4,000, and watched developers from seven countries download the poisoned package. The library does not moderate uploads, and downloaded code is automatically treated as trusted.
Warnings from Google
Heather Adkins, VP of Security Engineering at Google Cloud, issued an explicit warning: “Don’t run Clawdbot.” One security researcher went even further, describing Moltbot as “infostealer malware disguised as an AI personal assistant.”
The Uncomfortable Truth
Eric Schwake from Salt Security puts it bluntly: there is a massive gap between the “one-click appeal” that excites users and the level of technical expertise required to operate an agent-based gateway securely.
Jamieson O’Reilly is even more direct:
“We spent 20 years building security boundaries into modern operating systems — sandboxing, process isolation, permission models, firewalls. All of this work was done to limit blast radius and prevent remote access to local resources. AI agents tear all of this down by design. They need to read your files, access credentials, execute commands, and interact with external services. The value proposition requires punching holes through every boundary we’ve built over decades.”
Practical guardrails for vibe coders and Claude coders
If you don’t yet have a clear plan, now is the time to think seriously about one. Here are a few practical tips from the security community:
- Use a zero-knowledge cloud for sensitive data, without connectors or APIs
- Never mix production business data with experimental sandboxes
- Store encrypted backups in a location no AI agent can access
- Grant AI agents access only to dedicated email accounts, calendars, and drives
- Never connect AI agents to your primary Google or Microsoft 365 account
- Use separate browsers for AI plugins
- Never run local servers inside production networks
- Never allow AI assistants to modify firewall rules
- Use a separate system user for all programming experiments
- Regularly clean up sandbox environments
- Never expose experimental services on external ports
- Never keep password managers in AI-enabled browser profiles
- Never store API keys, tokens, or passwords in plaintext code (see the sketch after this list)
- Rotate API keys regularly, especially if used in AI-generated code
- Always review generated code for hardcoded credentials
- Perform regular backups, especially before major AI-driven changes
- Grant AI agents only the permissions they absolutely need
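On the plaintext-credentials point in particular, the simplest habit is to keep secrets out of source files entirely and have the code read them from the environment, or from a proper secrets manager, at runtime. Below is a minimal Python sketch of that pattern; the variable name WEATHER_API_KEY and the endpoint are invented purely for illustration.

```python
import os
import sys
import urllib.request


def load_api_key(var_name: str = "WEATHER_API_KEY") -> str:
    """Read an API key from the environment instead of hardcoding it.

    The key lives only in the shell environment (or in a .env file that
    is listed in .gitignore), so it never ends up in the codebase or in
    AI-generated diffs, and rotating it never requires a code change.
    """
    key = os.environ.get(var_name)
    if not key:
        sys.exit(f"Missing {var_name}. Export it in your shell before running; "
                 "never paste the key into source code.")
    return key


def fetch_forecast(city: str) -> bytes:
    # Hypothetical endpoint, used only to show the key being passed in a
    # request header at runtime rather than committed to the repository.
    api_key = load_api_key()
    request = urllib.request.Request(
        "https://api.example.com/v1/forecast?city=" + city,
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read()


if __name__ == "__main__":
    print(fetch_forecast("Zurich")[:200])
```

The same discipline makes the rotation advice above painless: if a key only ever lives in the environment, swapping it out never requires touching, re-reviewing, or redeploying the AI-generated code that uses it.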
Conclusion: Convenience is the enemy of security
Sam Altman is right. The convenience of these tools is so compelling that we are tempted to throw caution out the window. But “YOLO” is not a security strategy.
Vibe coding is powerful. But vibe responsibly.
References
The Register: Clawdbot becomes Moltbot, but can’t shed security concerns https://www.theregister.com/2026/01/27/clawdbot_moltbot_security_concerns/
The Decoder: OpenAI CEO Altman admits he broke his own AI security rule after just two hours https://the-decoder.com/openai-ceo-altman-admits-he-broke-his-own-ai-security-rule-after-just-two-hours-says-were-all-going-yolo/
Editorial note from GObugfree
From a vulnerability management and triage perspective, we are already seeing a rise in AI-generated vulnerability reports. This makes it even more important to balance efficiency with quality, and to ensure findings are validated, reproducible, and security-relevant. Responsible use of AI applies not only to coding, but also to security research itself.