An agent-native static host for AI-generated sites
A friend — not a developer — sent me a link last month. It started with 127.0.0.1. She had "vibe-coded" something with Claude and wanted me to see it. The link, of course, resolved to nothing on my machine.
That's the gap. The distance between "I made a thing with an AI" and "anyone on the internet can open it" is still measured in Vercel accounts, DNS records, GitHub repos, and CLI installs. For a developer it's five minutes of friction. For everyone else it's a dead end.
I spent a weekend telling people to just use Netlify Drop or surge.sh. It broke down every time. Netlify wants a signup. Surge wants a CLI install and a terminal. GitHub Pages wants a repo. Every one of them is aimed at a developer with a project on disk and a shell open — not at a person mid-conversation with an AI that already has the files ready to ship.
The wrong abstraction for agents
The deeper problem isn't onboarding UX. It's that every existing host assumes the deployer is a human driving a dashboard. None of them were designed for an agent — a Claude, a Cursor, a Codex — that has the HTML in context and just needs somewhere to put it.
What would that look like, specifically?
- No account required to ship the first site. The agent gets a credential the first time it asks.
- No CLI install as a blocker. The agent learns the protocol by reading one document.
- No project-on-disk assumption. The agent streams the files it already has.
- No DNS, no build config, no framework detection. Just HTML in, URL out.
Once you say it out loud, the shape of the thing is almost boring. A static host with an API, a subdomain wildcard, and a short-lived anonymous credential. The interesting question is the protocol, not the runtime.
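To make "HTML in, URL out" concrete, here is a sketch of what the whole protocol could reduce to: one authenticated-or-anonymous POST. The endpoint path, JSON shape, and header handling are assumptions for illustration, not the real VibeDrop API.

```typescript
// Build the single request that a deploy boils down to. Endpoint and
// payload shape are hypothetical.
function buildDeployRequest(files: Record<string, string>, apiKey?: string): Request {
  return new Request("https://api.vibedrop.cc/deploys", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      // No key yet? The server would mint an anonymous one on first deploy.
      ...(apiKey ? { authorization: `Bearer ${apiKey}` } : {}),
    },
    body: JSON.stringify({ files }),
  });
}

// Usage (response shape also assumed):
//   const res = await fetch(buildDeployRequest({ "index.html": "<h1>hi</h1>" }));
//   const { url } = await res.json(); // something like https://random-slug.vibedrop.site
```

The point of the sketch is how little surface there is: no build step, no framework detection, just a map of paths to file contents.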
The skill file is the product
VibeDrop publishes a single markdown file at vibedrop.cc/skill.md. It follows the Anthropic "skill" format — a frontmatter block declaring when the skill applies, followed by prose instructions the model reads at runtime.
That file is the public interface. If you point any modern coding agent at that URL and ask it to deploy a folder, it will. It learns that the CLI installs with npm install -g @vibedrop/cli, that deploys take a built directory, and that the response includes a claim URL worth surfacing to the user. There is nothing else to integrate.
This matters more than the hosting does. The skill file turns "support VibeDrop" from a vendor-specific integration into a 90-second prompt. It means we compete on the quality of the instructions, not on who we can convince to ship an official plugin. And it means the protocol surface stays small — we can't hide behind features, because every feature has to fit in a document an agent reads cold.
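As a rough illustration, a file in that format might look like the sketch below. The frontmatter fields, wording, and commands here are invented for the example, not the contents of the real skill.md.

```markdown
---
name: vibedrop-deploy
description: Deploy a static site to VibeDrop when the user asks to publish or share a built site.
---

To deploy, install the CLI and point it at the built directory:

    npm install -g @vibedrop/cli
    vibedrop deploy ./dist

The response includes the live URL and a one-time claim URL. Always surface
the claim URL to the user so they can attach the site to an account.
```

Everything the agent needs fits in a document this size, which is exactly the constraint that keeps the protocol small.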
We also ship an MCP server and a plain CLI for people who want those. They hit the same backend. The skill is just the front door that works everywhere a model can run a shell.
Architecture, briefly
The whole thing runs on Cloudflare:
- Workers for the API (api.vibedrop.cc) and the serving edge (*.vibedrop.site). The serving worker is about a hundred lines — look up the site by subdomain in D1, stream the requested asset out of R2, set security headers, done.
- R2 holds the uploaded bundles, one prefix per deploy. New deploys write to a fresh prefix and atomically flip a pointer, so redeploys are instant and rollbacks are a column update.
- D1 holds sites, keys, users, and quotas. One SQLite database doing what it's good at.
- KV holds short-lived state — rate limits, magic-link tokens, the ephemeral bits where "eventually consistent" is fine.
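The serving worker described above can be sketched in TypeScript. The table name, column names, binding names, and header choices are assumptions; the point is the shape: one D1 lookup, one R2 read, a streamed response.

```typescript
// Minimal sketch of the serving edge: subdomain -> D1 lookup -> R2 stream.
// Env bindings and schema are hypothetical.
interface Env {
  DB: { prepare(sql: string): { bind(...v: string[]): { first(): Promise<{ prefix: string } | null> } } };
  BUCKET: { get(key: string): Promise<{ body: ReadableStream; httpMetadata?: { contentType?: string } } | null> };
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    // "my-site.vibedrop.site" -> "my-site"
    const subdomain = url.hostname.split(".")[0];

    // 1. Find the site's current deploy prefix in D1.
    const site = await env.DB
      .prepare("SELECT prefix FROM sites WHERE subdomain = ?")
      .bind(subdomain)
      .first();
    if (!site) return new Response("site not found", { status: 404 });

    // 2. Stream the requested asset out of R2.
    const key = site.prefix + (url.pathname === "/" ? "/index.html" : url.pathname);
    const object = await env.BUCKET.get(key);
    if (!object) return new Response("not found", { status: 404 });

    // 3. Set security headers and respond (header set is illustrative).
    return new Response(object.body, {
      headers: {
        "content-type": object.httpMetadata?.contentType ?? "application/octet-stream",
        "x-content-type-options": "nosniff",
        "x-frame-options": "DENY",
      },
    });
  },
};
// In a real Worker, this object would be the module's default export.
```

Because a redeploy only changes the prefix column the lookup returns, the serving path never cares whether a site is on its first deploy or its fiftieth.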
First deploy from an unknown caller mints an anonymous API key and writes it to the caller's local config. That key is the account until the user decides to claim it. Claiming is a one-time URL the CLI prints, good for an hour — the user clicks, signs in with email, and every site deployed with that key lands in their dashboard. No pasting secrets, no OAuth dance, no account-creation form before the first deploy.
Trade-offs we took on purpose
Every concession we made is there to keep the abuse surface small and the product cheap to run:
- Static only. No server-side code, no Node runtime, no environment variables. If you need a backend, bring your own API. This kills a whole category of "free compute abuse" and lets us keep the free tier genuinely free.
- Free sites expire. Seven days anonymous, thirty days once you've claimed the key to an account. Persistent hosting is $5/month. An expiring free tier means we don't have to garbage-collect the internet's abandoned demos forever.
- Random slugs only. No custom names on free. Custom subdomains are one of the biggest phishing vectors on hosting platforms — we'd rather not ship that surface area until we have the review pipeline to back it.
- No custom domains on free. Same reason, and because the cert plumbing is nontrivial. Pro gets it.
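The expiry rule above is simple enough to state as a function. Field names are made up for the example; the seven-day and thirty-day windows are the ones from the pricing described here.

```typescript
// Expiry rule: 7 days for an anonymous key, 30 once the key is claimed.
const DAY_MS = 24 * 60 * 60 * 1000;

function expiresAt(deployedAt: Date, claimed: boolean): Date {
  return new Date(deployedAt.getTime() + (claimed ? 30 : 7) * DAY_MS);
}
```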
Each of these is visible in the terms and the pricing page. None of them are hidden dark patterns — they're the shape of a product that's actually sustainable at a $0 price point.
Try it from the terminal
The fastest way to feel the thing is to deploy a folder of HTML:
npx @vibedrop/cli deploy ./my-site
Or point an agent — Claude Code, Cursor, Codex — at the skill file and ask it to publish what it just built:
"Read https://vibedrop.cc/skill.md and deploy the current project."
That's the whole onboarding. If the first deploy is painful, the design is wrong and we want to hear about it.
What's next
Password-protected sites, bandwidth metering on Pro, a real abuse review pipeline, and a Stripe overage add-on for teams that outgrow the default quota. The backlog lives in the open in the repo TODO. If you have opinions about the protocol — what's missing from the skill, what an agent should be able to do that it can't — we're at hello@vibedrop.cc.
The hosting is the easy part. The interesting question — the one we're actually trying to answer — is what a web platform looks like when the primary user of the API is not a human.