Why We Built an AI Agent That Lives on Your Phone

Every AI assistant I tried was a chat window that forgot everything the moment I closed it. No memory, no files, no ability to actually run code or schedule work. I wanted an agent that could keep going while I slept — so I built one. Here's the thinking behind Forge OS and why on-device matters more than people realize.

Read more →

How We Built a Three-Tier Memory System for a Mobile AI Agent

Getting an LLM to remember things across sessions on a phone — without a server — is harder than it sounds. We ended up with three distinct tiers: working memory for the current turn, daily memory for today's context, and long-term semantic embeddings for everything worth keeping. This post walks through the architecture and the tradeoffs we made.

Read more →
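The three tiers described above can be sketched in a few lines. This is a minimal illustration, not the Forge OS implementation: the class and method names are hypothetical, and the cosine-similarity recall stands in for whatever embedding store the post actually describes.

```python
import math
from dataclasses import dataclass, field
from datetime import date


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


@dataclass
class ThreeTierMemory:
    working: list[str] = field(default_factory=list)   # tier 1: current turn
    daily: dict[str, list[str]] = field(default_factory=dict)  # tier 2: keyed by ISO date
    long_term: list[tuple[list[float], str]] = field(default_factory=list)  # tier 3

    def remember_turn(self, text: str) -> None:
        self.working.append(text)

    def end_turn(self) -> None:
        # Promote the finished turn into today's context, then clear working memory.
        self.daily.setdefault(date.today().isoformat(), []).extend(self.working)
        self.working.clear()

    def commit_long_term(self, embedding: list[float], text: str) -> None:
        self.long_term.append((embedding, text))

    def recall(self, query_embedding: list[float], k: int = 3) -> list[str]:
        # Rank stored memories by similarity to the query embedding.
        ranked = sorted(self.long_term,
                        key=lambda item: cosine(item[0], query_embedding),
                        reverse=True)
        return [text for _, text in ranked[:k]]
```

The key design point the post hints at: working memory is cheap and disposable, daily memory is date-scoped, and only embeddings survive indefinitely, which keeps on-device storage bounded.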

Why We Built Safety Into Companion Mode Before Shipping

Forge OS includes a Companion mode for everyday conversation. Before shipping it, we had to decide what kind of product we were building. This is the reasoning behind the crisis lines, dependency monitoring, safety filter, memory transparency screen, and the no-dark-patterns audit — and an honest admission of what we still don't know.

Read more →

Running Python 3.11 on Android Without a Server

Chaquopy makes it possible, but there are real constraints: no native extensions that require compilation, a limited subset of the stdlib, and a 64-bit ARM-only target. We also needed AST-based import filtering so the agent can't import things it shouldn't. Here's exactly how we set it up, what packages we ship, and where the edges are.

Read more →
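The AST-based import filtering mentioned above can be sketched with the stdlib `ast` module: parse the code before executing it and reject any import whose root module is on a denylist. The denylist below is a hypothetical example for illustration; the actual policy is whatever the full post describes.

```python
import ast

# Hypothetical denylist for illustration only.
BLOCKED_MODULES = {"socket", "subprocess", "ctypes"}


def find_blocked_imports(source: str) -> list[str]:
    """Parse `source` and return every import that touches a blocked module.

    Catches both `import x.y` and `from x.y import z` by checking the
    root package name against the denylist.
    """
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] in BLOCKED_MODULES:
                    violations.append(alias.name)
        elif isinstance(node, ast.ImportFrom):
            module = node.module or ""
            if module.split(".")[0] in BLOCKED_MODULES:
                violations.append(module)
    return violations
```

Static filtering like this runs before any code executes, which is why it works even without OS-level sandboxing; the tradeoff is that dynamic imports (e.g. `__import__` with a computed string) need separate handling.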