One Prompt.
Complete Application.
8 specialized AI agents collaborate autonomously — designing, coding, testing, and deploying production apps across iOS, macOS, Web, Android, and ML. No human intervention required.
Intelligent Orchestration
The Architect analyzes your prompt, assembles the optimal team, and orchestrates the entire lifecycle — from design to deployment.
Prompt Analysis
The Architect dissects your prompt — detects project type, complexity, required platforms, and optimal team composition.
Team Assembly
Automatically selects from 13+ team templates. Each agent gets role-specific instructions, skills, and the best available LLM.
Autonomous Execution
Agents collaborate via EventBus and SharedMemory. PM writes specs → Designer creates mockups → Developer codes → QA tests.
Quality & Deploy
8-pass Supervisor system auto-fixes errors. Senior Developer handles escalations. Then auto-deploys to your chosen target.
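The four-step lifecycle above can be sketched as a simple pipeline. A minimal illustration only: every function and data shape here is hypothetical, not Neo's actual API.

```python
# Hypothetical sketch of the four-phase lifecycle: analysis -> assembly ->
# execution -> quality/deploy. All names are invented for illustration.

def analyze_prompt(prompt: str) -> dict:
    # 1. Prompt Analysis: detect platforms and complexity from the prompt.
    platforms = [p for p in ("ios", "macos", "web", "android") if p in prompt.lower()]
    return {"platforms": platforms or ["web"], "complexity": "medium"}

def assemble_team(plan: dict) -> list[str]:
    # 2. Team Assembly: pick roles to match the plan.
    team = ["PM", "Designer", "Developer", "QA"]
    if len(plan["platforms"]) > 1:
        team.append("Senior Developer")
    return team

def execute(team: list[str], plan: dict) -> dict:
    # 3. Autonomous Execution: agents produce artifacts.
    return {"code": f"// built by {', '.join(team)}", "plan": plan}

def quality_and_deploy(artifacts: dict) -> dict:
    # 4. Quality & Deploy: gate on QA, then ship.
    return {"status": "deployed", **artifacts}

def run_pipeline(prompt: str) -> dict:
    plan = analyze_prompt(prompt)
    team = assemble_team(plan)
    return quality_and_deploy(execute(team, plan))
```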
8 Specialized Agents
Not a single AI assistant — a full team of specialized agents, each with unique skills and capabilities.
Product Manager
Analyzes requirements, writes PRDs, delegates tasks
Designer
Creates mockups, design tokens, component specs
Developer
Writes production code across all platforms
QA
Automated testing, edge cases, regression checks
DevOps
Build automation, deployment, CI/CD pipelines
Database
Schema design, ORM generation, migrations
Senior Developer
Escalation handler for complex build errors
The Architect
Meta-intelligence: optimizes the entire system
9 Runtime Modules
The infrastructure that makes multi-agent collaboration possible. Real-time communication, shared state, fault tolerance, and continuous learning.
EventBus
Pub/sub event distribution
SharedMemory
Project-scoped KV store
MessageBus
Agent-to-agent messaging
ArtifactStore
File & code persistence
AuditLog
Full action traceability
ApprovalGate
Human-in-the-loop checkpoints
Resilience
Retry, fallback, checkpoint
TaskDelegator
Sub-task orchestration
LearningStore
Continuous improvement
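The EventBus pattern at the heart of this list can be sketched in a few lines. This is an illustrative pub/sub skeleton, not Neo's implementation:

```python
from collections import defaultdict

class EventBus:
    """Minimal pub/sub bus: agents subscribe to topics and react to events."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Fan the event out to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(payload)

# Agents coordinate by reacting to each other's events:
bus = EventBus()
log = []
bus.subscribe("spec.ready", lambda spec: log.append(f"Designer got: {spec}"))
bus.publish("spec.ready", "PRD v1")
```

The same decoupling is what lets the PM publish a spec without knowing which agents consume it.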
The Architect — Meta-Intelligence
An LLM-powered orchestrator that observes the entire system and intervenes at 4 lifecycle points: pre-task planning, inter-phase monitoring, post-task learning, and system evolution. It optimizes prompts, identifies risks, extracts lessons, and continuously improves team performance.
8-Pass Supervisor
An automated build → fix → rebuild cycle: 30+ regex patterns plus LLM-powered error resolution, with a 90%+ success rate.
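The build → fix → rebuild loop can be sketched as a bounded retry with pattern-matched fixes. The compiler stub, error strings, and fix rules below are invented for illustration; only the 8-pass shape comes from the text above.

```python
import re

# Invented fix rules: (error pattern, source transformer).
FIXES = [
    (re.compile(r"missing semicolon"), lambda src: src + ";"),
    (re.compile(r"unused import (\w+)"), lambda src: src.replace("import os\n", "")),
]

def build(src: str):
    # Stand-in "compiler": reports an error while the marker is present.
    return "unused import os" if "import os" in src else None

def supervise(src: str, max_passes: int = 8):
    """Build, match the error against known fixes, patch, and rebuild."""
    for attempt in range(1, max_passes + 1):
        error = build(src)
        if error is None:
            return src, attempt
        for pattern, fix in FIXES:
            if pattern.search(error):
                src = fix(src)
                break
    raise RuntimeError("unresolved after max passes")  # would escalate to Senior Developer
```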
6 Platforms. One System.
Build for any platform from a single prompt. Auto-detection chooses the right frameworks and build pipeline.
iOS
SwiftUI, UIKit
macOS
SwiftUI, AppKit
Web
React, Next.js, Vue
Android
Jetpack Compose
Backend
Python, Node, Go
ML/AI
PyTorch, TensorFlow
Construct & Construct Pro
AI-powered UI code generation. Describe your design — get production-ready code for any framework.
Construct
Claude Sonnet • 8 credits
Production UI code generation with 5 frameworks and 8 style presets.
Construct Pro
Claude Opus + Self-Review • 20 credits
Premium quality with Opus generation + Sonnet review pass. Same frameworks, superior output.
🎨 Figma Integration
Extract design tokens, components, and assets directly from Figma files.
🗄️ 10 DB Targets
Generate schemas, ORM models, migrations, and seed data for SQLite, Supabase, Firebase, Prisma, and more.
🖼️ Auto Icon Gen
9 visual styles, keyword-based colors. Auto-generates .icns and Assets.xcassets for native apps.
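Schema generation of the kind described for the DB targets can be sketched as a spec-to-DDL translation. The spec format and type map below are assumptions for illustration, not Neo's real generator:

```python
import sqlite3

# Invented mapping from a toy field spec to SQLite column types.
TYPE_MAP = {"int": "INTEGER", "str": "TEXT", "float": "REAL"}

def to_ddl(table: str, fields: dict[str, str]) -> str:
    """Turn a simple model spec into a CREATE TABLE statement."""
    cols = ["id INTEGER PRIMARY KEY AUTOINCREMENT"]
    cols += [f"{name} {TYPE_MAP[t]}" for name, t in fields.items()]
    return f"CREATE TABLE {table} ({', '.join(cols)});"

ddl = to_ddl("meals", {"name": "str", "calories": "int"})
conn = sqlite3.connect(":memory:")
conn.execute(ddl)  # the generated schema is valid SQLite
```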
5 Deploy Targets
From build to production in one step. Stable URLs, persistent projects, automatic triggering after QA passes.
Vercel
Instant web deployment with stable URLs
GitHub Pages
Static site hosting via GitHub
TestFlight
iOS/macOS beta distribution
Local Install
Direct to /Applications
Email
Share builds via email delivery
Your Phone is a Worker Node
The iOS app runs LLMs on-device with Metal GPU acceleration. Use your phone as a compute node in the distributed worker network.
4-Layer LLM Architecture
Cloud API
Remote · Claude Opus/Sonnet, GPT-4o — maximum capability
llama.cpp (GGUF)
On-Device · Qwen3.5 2B/4B on Metal GPU — real-time inference
Core ML
On-Device · Apple Neural Engine optimized models
Foundation Models
Free · Apple's built-in ~3B model, iOS 26+
Worker Mode: Your phone connects to the orchestrator via WebSocket, receives tasks, and returns results using on-device or cloud LLMs.
Commander Mode: Full project management — create projects, manage teams, send tasks, monitor activity, deploy — all from your phone.
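Worker Mode's receive → infer → reply cycle can be sketched as below. The message shapes and the queue standing in for the WebSocket transport are assumptions, not Neo's real protocol:

```python
import json
import queue

# Stub transport standing in for the WebSocket link to the orchestrator.
inbox: "queue.Queue[str]" = queue.Queue()
outbox: list[str] = []

def run_on_device_llm(prompt: str) -> str:
    # Stand-in for llama.cpp / Core ML inference on the phone.
    return f"echo: {prompt}"

def worker_loop():
    """Drain tasks from the orchestrator, run inference, send results back."""
    while not inbox.empty():
        task = json.loads(inbox.get())
        result = run_on_device_llm(task["prompt"])
        outbox.append(json.dumps({"task_id": task["id"], "result": result}))

inbox.put(json.dumps({"id": 1, "prompt": "summarize spec"}))
worker_loop()
```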
Neo vs. Everyone Else
AI coding assistants help you write code. Neo builds, tests, and deploys complete applications autonomously.
| Feature | Neo | Cursor | Copilot | v0 | Replit |
|---|---|---|---|---|---|
| True Multi-Agent Collaboration | ✓ | — | — | — | — |
| Cross-Platform (iOS + Web + Android) | ✓ | — | — | — | — |
| Autonomous End-to-End Pipeline | ✓ | — | — | — | — |
| Auto Quality Assurance (8-pass) | ✓ | — | — | — | — |
| Built-in Deployment | ✓ | — | — | ✓ | ✓ |
| On-Device LLM (llama.cpp) | ✓ | — | — | — | — |
| Distributed Worker Nodes | ✓ | — | — | — | — |
| Self-Learning System | ✓ | — | — | — | — |
| 100% Local / Privacy Mode | ✓ | — | — | — | — |
| BYOK (Bring Your Own Key) | ✓ | — | — | — | — |
Others assist with coding. Neo replaces the entire development team.
White Rabbit CLI
"Follow the white rabbit." Full TUI dashboard with real-time agent activity. Built in Go.
rabbit quickstart
Zero-config project creation
rabbit deploy
Deploy to any target
rabbit ollama
Manage local AI models
Built by Neo. Running Now.
These apps were generated entirely by Neo — from a single prompt to a running application in /Applications.
FeedMe
Meal recommendation with AI
Comodore
Pomodoro timer, Turkish voice
Yiyoo
Diet tracker + Claude Vision
Minty
Selfie to pixel art converter
Complete Privacy. Zero Cloud.
Run entirely on your machine with Ollama. No API keys, no cloud calls, no data leaving your device. Your code stays yours — Neo generates standard source code, not locked-in proprietary formats.
Start Free. Scale Infinitely.
Free tier with full local AI. BYOK for zero-cost cloud models. Credits per task, not per token.
Starter
Pro
Premium
Business
Credits are charged per task, not per token. Use your own API keys (BYOK) and pay 0 credits for LLM usage. Ollama is always free.
Ready to Build with
AI Agent Teams?
One prompt. 8 agents. Complete application. From idea to production — automatically.