About This Architecture
The Ghost-OS Edge AI Split-Compute Architecture distributes inference and control between a Raspberry Pi 4/5 "nervous system" and a zero-footprint laptop host, with local GPU inference via Ollama and fallback to cloud LLMs. The Raspberry Pi runs FastAPI, the Ghost-Claw AI Agent, hardware abstraction layers (GPIO, I2C, camera), and persistent state via SQLite and btrfs snapshots; the laptop hosts an Electron desktop app with a React dashboard that communicates over WebSocket and REST via USB-C Ethernet or Wi-Fi mDNS.

This split enables offline-first autonomous operation with graceful degradation to the Gemini API and other cloud fallbacks, making it well suited to robotics, edge vision, and field deployments where latency and connectivity are hard constraints.

Fork and customize this diagram on Diagrams.so to adapt the split-compute model to your own edge AI stack, whether that means adding sensors, swapping inference engines, or integrating alternative cloud providers. The modular design, which separates presentation, application logic, inference, and hardware control, scales from single-robot prototypes to multi-agent deployments.
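The offline-first degradation path described above can be sketched as an ordered chain of inference backends: try the local Ollama host first, and only fall through to a cloud LLM such as the Gemini API when the local call fails. This is a minimal illustration, not the project's actual code; the backend callables and their names are assumptions for the example.

```python
from typing import Callable, Optional, Sequence

def generate_with_fallback(prompt: str,
                           backends: Sequence[Callable[[str], str]]) -> str:
    """Try each inference backend in order; return the first success.

    Mirrors the graceful-degradation chain: local GPU inference first,
    then cloud fallbacks, so the system stays autonomous when offline.
    """
    last_err: Optional[Exception] = None
    for backend in backends:
        try:
            return backend(prompt)
        except Exception as err:  # network down, model missing, quota hit
            last_err = err
    raise RuntimeError("all inference backends failed") from last_err

# Hypothetical backends standing in for real Ollama and Gemini clients.
def local_ollama(prompt: str) -> str:
    # Simulate the local GPU host being unreachable in the field.
    raise ConnectionError("Ollama not reachable")

def cloud_gemini(prompt: str) -> str:
    return f"[cloud] {prompt}"

print(generate_with_fallback("status?", [local_ollama, cloud_gemini]))
```

Keeping the fallback order in plain data (a sequence of callables) makes it easy to swap inference engines or insert additional providers without touching the control logic, which is the same flexibility the fork-and-customize workflow targets.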