Forge OS is an on-device, agentic Android operating-system layer for Large Language Models. It provides a complete environment for AI agents to execute tasks, manage files, automate workflows, and interact with your device—all running locally with no server.
Forge OS requires Android 8.0 (API 26) or higher. We recommend Android 14 (API 34) or later for best performance. You'll need at least 2GB of RAM and 500MB of free storage. The app runs on 64-bit ARM processors.
Forge OS works with OpenAI, Anthropic, Groq, Google Gemini, OpenRouter, xAI, DeepSeek, Mistral, Together, Cerebras, and local Ollama. You can add multiple providers and set up a fallback chain so if one provider fails, the agent automatically tries the next one.
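A fallback chain can be sketched in a few lines. This is an illustrative model, not Forge OS's actual provider API; `ProviderError` and the `(name, call)` pairs are assumptions for the example.

```python
class ProviderError(Exception):
    """Raised when a provider call fails (rate limit, outage, etc.)."""


def call_with_fallback(prompt, providers):
    """Try each (name, call) pair in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderError as exc:
            errors.append((name, exc))  # record the failure, try the next one
    raise ProviderError(f"all providers failed: {errors}")
```

The key design point is that a failure is recorded and silently skipped rather than surfaced to the user, so the agent loop keeps running as long as any provider in the chain responds.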
Yes. Everything runs on your device. Your API keys are encrypted on-device using EncryptedSharedPreferences. Memory, files, and conversation history are stored locally. No data is sent to our servers. You're responsible for your LLM provider's privacy policy.
Forge OS requires an internet connection to communicate with your LLM provider. However, it's designed to keep working on flaky networks and will retry automatically. Local Ollama support allows you to run models entirely on-device if you have sufficient resources.
Forge OS includes Python 3.11 via Chaquopy. Pre-installed packages include numpy, pandas, pillow, requests, beautifulsoup4, and many others. You can execute Python scripts with timeouts and captured output.
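Script execution with a timeout and captured output can be modeled with the standard library. This is a desktop sketch for illustration; Forge OS itself runs Python in-process via Chaquopy rather than spawning an interpreter.

```python
import subprocess
import sys


def run_script(code, timeout_s=10):
    """Run a Python snippet in a subprocess, returning (rc, stdout, stderr).

    On timeout, rc is None and stderr carries the reason.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.returncode, result.stdout, result.stderr
    except subprocess.TimeoutExpired:
        return None, "", f"timed out after {timeout_s}s"
```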
The pre-installed packages cover most common use cases. For additional packages, you can create plugins or use the agent's Python execution environment. Some packages may not work on Android due to platform limitations.
Forge OS uses a three-tier memory architecture: working memory (the current conversation), daily memory (today's events), and long-term memory (semantic embeddings). The agent writes important information to memory and searches it before doing fresh research, avoiding redundant work.
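The three tiers can be pictured as a single store with a search that spans all of them. This is a toy model: the real long-term tier is searched via semantic embeddings, not the substring match used here.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Toy three-tier memory: working, daily, and long-term lists."""
    working: list = field(default_factory=list)    # current conversation
    daily: list = field(default_factory=list)      # today's events
    long_term: list = field(default_factory=list)  # durable facts

    def remember(self, tier, text):
        getattr(self, tier).append(text)

    def search(self, query):
        # Search every tier before doing fresh research.
        return [m for tier in (self.working, self.daily, self.long_term)
                for m in tier if query.lower() in m.lower()]
```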
Use plain-English syntax like "every 30 m" or "daily at 9 am". Alarms support actions: NOTIFY (send notification), RUN_TOOL (execute a tool), RUN_PYTHON (run Python code), or PROMPT_AGENT (run a prompt through the agent loop). Every fire is logged.
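A minimal parser for the two schedule shapes above might look like the following. The grammar shown is a hypothetical subset for illustration; Forge OS's actual parser accepts a richer syntax.

```python
import re


def parse_schedule(text):
    """Parse a plain-English schedule into (kind, value).

    "every 30 m" -> ("interval", 30)   # minutes between fires
    "daily at 9 am" -> ("daily", 9)    # hour of day, 24h clock
    """
    s = text.strip().lower()
    m = re.fullmatch(r"every (\d+) ?(m|h)", s)
    if m:
        minutes = int(m.group(1)) * (60 if m.group(2) == "h" else 1)
        return ("interval", minutes)
    m = re.fullmatch(r"daily at (\d{1,2}) ?(am|pm)", s)
    if m:
        hour = int(m.group(1)) % 12 + (12 if m.group(2) == "pm" else 0)
        return ("daily", hour)
    raise ValueError(f"unrecognized schedule: {text!r}")
```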
Plugins are Python packages (.fp or .zip files) that extend the agent's tool surface. You can install them from the Plugins screen or have the agent create new plugins mid-loop. Plugins survive app upgrades.
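Since a plugin is just a zip-based archive, packaging one can be sketched with the standard library. The manifest fields below are assumptions for the example, not the official Forge OS plugin schema.

```python
import io
import json
import zipfile


def build_plugin(name, tools):
    """Package a hypothetical plugin as an in-memory .fp (zip) archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        # Illustrative manifest: name plus the tools the plugin exposes.
        zf.writestr("manifest.json", json.dumps({"name": name, "tools": tools}))
        # Tool implementations live in a normal Python package.
        zf.writestr(f"{name}/__init__.py", "# tool implementations go here\n")
    return buf.getvalue()
```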
MCP (Model Context Protocol) is a standard for connecting AI agents to external tools and resources. Forge OS includes an MCP client so you can connect to MCP servers and import their tools. Configure in Settings → Tools → MCP.
Companion mode is a warmer, everyday conversation mode with persona, episodic memory, and safety features. It includes crisis-aware responses, dependency monitoring, real-world nudges (opt-in), and daily token budgets. All data is stored locally.
Yes. Companion mode includes safeguards against romantic/sexual content and dependency language. It has region-aware crisis lines, dependency monitoring, and memory transparency controls. You can delete all Companion data at any time.
Yes. Forge OS exposes a permission-gated AIDL interface so other Android apps can call it as an on-device LLM service.
Yes. Forge OS includes first-class git tools (init, add, commit, branch, log, diff, push, pull, clone) backed by JGit. Personal access tokens are stored in encrypted memory; HTTPS only.
Forge OS includes a cost meter that tracks token and USD spend per call, session, and lifetime. Optional Compact Mode shrinks prompts to reduce costs.
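The accounting behind a cost meter is simple: per-call USD is tokens divided by 1,000 times the provider's rate, accumulated into session totals. The rates below are made-up placeholders, not real provider pricing.

```python
from dataclasses import dataclass


@dataclass
class CostMeter:
    """Toy token/USD tracker; rates are assumed USD per 1k tokens."""
    input_per_1k: float
    output_per_1k: float
    tokens: int = 0
    usd: float = 0.0

    def record(self, input_tokens, output_tokens):
        """Account for one call; return its cost and update running totals."""
        call_usd = (input_tokens / 1000) * self.input_per_1k \
                 + (output_tokens / 1000) * self.output_per_1k
        self.tokens += input_tokens + output_tokens
        self.usd += call_usd
        return call_usd
```

Compact Mode attacks the `input_tokens` term directly: a shorter prompt lowers every call's cost without touching the provider's rates.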
Forge OS requests: SCHEDULE_EXACT_ALARM (for alarms), POST_NOTIFICATIONS (for notifications), and file access (for the workspace). All permissions are optional and can be managed in Settings.
Yes. Forge OS is available on GitHub under the MIT license. You can review the code, contribute, or build your own version.
Report issues on the GitHub repository: github.com/thekingsmediastudio/forge-os/issues