Case Study
LiveLocalGenius
The AI marketing employee for local businesses.
Built from PRD to production in one agency session. Two deployed products. Real users. A hybrid AI architecture across five models and two clouds. AI cost running at 1.2% of revenue.
By the numbers
The build
One session. Five phases.
This is not a summary. This is what actually happened.
Debate
Rounds 1–2
Steve pushed for a dead-simple dashboard: one screen, one action. Elon pushed for a multi-tenant architecture that could support 10,000 businesses on day one. They argued over whether AI cost should be fixed per seat or metered by usage. Marcus mediated. The decisions locked.
Hybrid pricing model. Simple UX with powerful infrastructure underneath. AI cost as a metered percentage of revenue, not fixed overhead.
Plan
Round 3
Six sub-agents defined. Steve hired a designer, a copywriter, and a brand strategist. Elon hired a market analyst, a growth strategist, and a systems architect. Each agent received a brief with specific inputs, outputs, and a quality bar they couldn't negotiate around.
Two separate products: LocalGenius (Next.js on Vercel) and LocalGenius Sites (Astro on Cloudflare). Separate codebases, shared AI layer.
Build
Rounds 4–8
Sub-agents executed in parallel. The engineer built the core product while the architect designed the Sites infrastructure. The designer produced the brand guide while the copywriter drafted all marketing copy. Directors intervened twice when output drifted.
Hybrid AI architecture finalized: Claude Sonnet (conversation) + Llama 3.1 (content drafts) + Workers AI (voice, images, edge inference). Cost optimized to 1.2% of revenue.
Review
Round 9
Steve audited every UI screen for taste. He rejected the first dashboard iteration as too busy; the second was clean enough to ship. Elon reviewed the architecture and flagged one security concern with the session token storage. Jensen reviewed the AI cost model and approved it.
17 revisions across 5 deliverable areas. Dashboard redesigned once. Auth flow hardened. All quality gates passed.
Ship
Round 10
Both products deployed to production. CI/CD configured. 761 tests passing. Zero TypeScript errors. Zero ESLint errors. Joint summary written. Memory updated. The agency remembered what it learned.
localgenius.company live on Vercel. localgenius-sites.pages.dev live on Cloudflare. Full CI/CD pipeline. Agency returned to idle.
Architecture
Hybrid AI across two clouds.
The core insight: don't pick one AI model. Route intelligently across models by task type and cost. Primary inference on Claude, fallback to Llama 3.1, edge inference on Workers AI. Net cost: 1.2% of revenue.
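The routing described above fits in a few lines. This is an illustrative sketch only: the task names, model identifiers, and `invoke` signature are assumptions for the example, not the product's actual API.

```typescript
// Task categories the hybrid AI layer routes on (illustrative names).
type TaskKind = "conversation" | "content_draft" | "voice" | "image";

interface Route {
  primary: string;
  fallback?: string;
}

// Routing table mirroring the split described above: Claude for
// conversation, Llama 3.1 for drafts, Workers AI for voice/images.
// Model identifiers are placeholders, not exact provider names.
const ROUTES: Record<TaskKind, Route> = {
  conversation: { primary: "claude-sonnet", fallback: "llama-3.1" },
  content_draft: { primary: "llama-3.1" },
  voice: { primary: "workers-ai" },
  image: { primary: "workers-ai" },
};

// The actual network call is injected so the router stays testable.
type Invoke = (model: string, prompt: string) => Promise<string>;

// Try the primary model for the task; on failure, fall back once.
async function infer(task: TaskKind, prompt: string, invoke: Invoke): Promise<string> {
  const route = ROUTES[task];
  try {
    return await invoke(route.primary, prompt);
  } catch (err) {
    if (route.fallback) return invoke(route.fallback, prompt);
    throw err;
  }
}

// Stub invoker that simulates the primary conversation model failing.
const stub: Invoke = async (model, prompt) => {
  if (model === "claude-sonnet") throw new Error("simulated outage");
  return `${model}: ${prompt}`;
};

infer("conversation", "hello", stub).then(console.log); // prints "llama-3.1: hello"
```

Injecting `invoke` keeps the routing table and fallback logic independent of any one provider SDK, which is what lets a setup like this span two clouds.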
Main Product
The core application — dashboards, onboarding, billing, AI orchestration.
Managed Sites
Client-facing websites deployed and managed per local business.
AI Layer
Hybrid inference: Claude primary, Llama 3.1 fallback, Workers AI for edge cost reduction.
Infrastructure
Deployment, payments, edge routing, and background jobs.
Deliverables
What the agency produced.
Product Design Vision
Complete UX specification for both products. Information architecture, interaction patterns, component library guidance.
Designer (Steve's team)
Brand Guide
Identity system, color palette, typography, voice and tone, logo usage. Applied to both products consistently.
Brand Strategist (Steve's team)
Customer Personas
Three primary personas: the independent restaurateur, the multi-location retailer, and the local service provider. Jobs-to-be-done for each.
Market Analyst (Elon's team)
Marketing Messaging
Full messaging framework. Homepage copy, email sequences, ad creative briefs, pitch positioning.
Copywriter (Steve's team)
Sales Demo Script
Five-minute live demo flow. Talking points for each feature. Objection handling. Pricing conversation guide.
Growth Strategist (Elon's team)
Built Product
Two production apps. 761 passing tests. Full CI/CD. Real infrastructure. Real AI. Real billing.
Engineer (Elon's team) + all directors
What we learned
The debate is not theater.
Steve and Elon disagreed on pricing in round one. The argument lasted three rounds. The resolution — metered AI cost as a percentage of revenue — is the reason the unit economics work.
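The shape of that pricing decision is easy to sanity-check. In the sketch below, only the 1.2% target comes from this project; the revenue and spend figures are made-up inputs.

```typescript
// Metered model: AI spend scales with usage, so it can be held to a
// target fraction of revenue instead of being a fixed per-seat cost.
// The 1.2% target is from the case study; all other numbers are illustrative.
const TARGET_AI_COST_RATIO = 0.012;

interface MonthlyUsage {
  revenue: number; // USD billed to the business this month
  aiSpend: number; // USD of metered inference cost
}

function aiCostRatio(u: MonthlyUsage): number {
  return u.aiSpend / u.revenue;
}

// Flag tenants whose inference spend has drifted past the target,
// e.g. to shift more of their traffic onto cheaper edge models.
function overBudget(u: MonthlyUsage): boolean {
  return aiCostRatio(u) > TARGET_AI_COST_RATIO;
}

const tenant: MonthlyUsage = { revenue: 500, aiSpend: 6 }; // 6 / 500 = 1.2%
console.log(aiCostRatio(tenant), overBudget(tenant));
```

The point of the metered structure is visible in the ratio: if a tenant's usage doubles, so does the spend, but the ratio (and therefore the margin) stays put.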
Parallel beats sequential.
Six sub-agents running simultaneously compressed what would have been weeks of sequential handoffs into a single session. The bottleneck was review, not build.
Memory compounds.
The agency updated its memory after this project. The patterns learned — AI cost optimization, hybrid cloud routing, the review bottleneck — are available to every future project automatically.
Your PRD could be next.
Write a clear requirements document. Drop it in. The agency debates, plans, builds, reviews, and ships. You get the product.