Gumm — Documentation

Gumm is a self-hosted, modular AI brain. It runs on your own infrastructure — your memory, files, and credentials stay on your server (only LLM requests go out, via OpenRouter).


What is Gumm?

Gumm is not just a chatbot. It is a brain you own and deploy, capable of growing without limits through a module system. Think of it as a personal AI infrastructure:

  • The brain lives on a server or VPS. It knows who you are, remembers your conversations, and can use any tool you give it.
  • The CLI turns any of your machines (laptop, desktop, server) into a connected agent. Run gumm up on a machine and the brain can control it remotely — open apps, run commands, take screenshots, read and write files — triggered from chat or Telegram.
  • Modules extend the brain’s capabilities infinitely: Spotify, Gmail, weather, calendar, reminders, GitHub, and anything else you can code or install. Each module adds tools that the LLM calls automatically when relevant.
  • Telegram is your mobile interface. Ask the brain to do something from your phone, and it happens on your server or on one of your connected machines.

The more modules you install, the more powerful the brain becomes. There is no hard limit on what it can do.
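To make the module idea concrete, here is a hypothetical Python sketch — not Gumm's actual module API — of how a module might bundle named tools that the brain dispatches when the LLM emits a tool call. All class and function names here (`Module`, `Brain`, `get_weather`) are illustrative assumptions:

```python
# Hypothetical sketch of the module idea -- NOT Gumm's actual API.
# A module bundles named tools; the brain routes LLM tool calls
# to whichever installed module provides the requested tool.

class Module:
    def __init__(self, name):
        self.name = name
        self.tools = {}          # tool name -> handler function

    def tool(self, name):
        """Decorator that registers a handler under a tool name."""
        def register(fn):
            self.tools[name] = fn
            return fn
        return register

class Brain:
    def __init__(self):
        self.modules = []

    def install(self, module):
        self.modules.append(module)

    def dispatch(self, tool_name, **args):
        """Route a tool call (as the LLM would emit it) to its handler."""
        for module in self.modules:
            if tool_name in module.tools:
                return module.tools[tool_name](**args)
        raise KeyError(f"no module provides tool {tool_name!r}")

# Example: a tiny weather module adding one tool.
weather = Module("weather")

@weather.tool("get_weather")
def get_weather(city):
    return f"Sunny in {city}"    # a real module would call a weather API

brain = Brain()
brain.install(weather)
print(brain.dispatch("get_weather", city="Paris"))  # -> Sunny in Paris
```

Installing another module is just another `brain.install(...)` call, which is why capability grows with the number of modules.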


What Gumm can do

  • Chat — Full conversation with any OpenRouter-supported LLM (GPT-4, Claude, Mistral, Llama…)
  • Persistent memory — Auto-extracts and stores personal facts from conversations
  • Self-evolving knowledge — The brain writes and maintains its own knowledge base
  • Unlimited modules — Any capability can be added via hot-swappable modules
  • Remote machine control — Run gumm up on any machine and the brain can control it from chat or Telegram
  • Distributed storage — Any machine can become a file storage node for the brain
  • Telegram — Interact via a Telegram bot from any device, anytime
  • CRON schedules — Modules can run recurring tasks (daily briefings, reminders, snapshots)
  • Secure mesh networking — Private VPN between brain and devices via Tailscale or NetBird
  • Secrets vault — Store credentials safely; sensitive patterns are auto-redacted before reaching the LLM
  • Device registry — The brain tracks all connected CLI machines and their status
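The auto-redaction described for the secrets vault can be sketched generically: scan outgoing text against known sensitive patterns and substitute placeholders before the prompt ever reaches the LLM. This is an illustrative sketch, not Gumm's implementation, and the patterns below are examples, not an exhaustive list:

```python
import re

# Illustrative sketch of pre-LLM redaction -- not Gumm's actual code.
# Each pattern maps one class of credential to a placeholder.
SENSITIVE_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),       # OpenAI-style keys
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "[REDACTED_GITHUB_TOKEN]"),  # GitHub PATs
    (re.compile(r"(?i)password\s*[:=]\s*\S+"), "password=[REDACTED]"),
]

def redact(text: str) -> str:
    """Replace known sensitive patterns before text leaves the server."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Use key sk-abcdefghijklmnopqrstuvwx to call the API"
print(redact(prompt))  # -> Use key [REDACTED_API_KEY] to call the API
```

Redacting at the boundary, rather than inside each module, means a leaky module cannot accidentally forward a stored credential to the LLM.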

What Gumm cannot do (current limitations)

  • Single-user only — Gumm is a personal assistant, not a multi-tenant platform. One admin account.
  • No real-time web browsing by default — the brain doesn’t search the web unless a module provides that tool.
  • Requires OpenRouter — LLM calls go through OpenRouter. You need an API key and internet access for the AI to work.
  • No native mobile app — mobile access is via the browser or Telegram.
  • Module sandboxing is logical, not OS-level — modules run in the same process; a buggy module is isolated via try/catch but is not a full OS sandbox.
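"Logical" sandboxing, as described in the last limitation, usually means wrapping each module call so that one module's exception cannot crash the whole process — while OS-level isolation (separate processes, containers, seccomp) is absent. A generic sketch of that trade-off, not Gumm's code:

```python
# Sketch of logical (in-process) isolation -- not Gumm's implementation.
# A failing tool is caught and reported, but it still shares the process:
# it could, for example, exhaust memory or mutate globals. That is the
# limitation the text describes.

def call_tool_safely(tool, *args, **kwargs):
    """Run a module tool; convert any crash into an error result."""
    try:
        return {"ok": True, "result": tool(*args, **kwargs)}
    except Exception as exc:           # logical isolation: catch, don't die
        return {"ok": False, "error": f"{type(exc).__name__}: {exc}"}

def buggy_tool():
    raise RuntimeError("module bug")

print(call_tool_safely(buggy_tool))
# -> {'ok': False, 'error': 'RuntimeError: module bug'}
```

The brain keeps running after the failed call, but nothing stops a misbehaving module from touching shared state — which is exactly why this is weaker than an OS sandbox.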

Requirements

  • Server / VPS — Any x86_64 or ARM64 Linux machine, 512 MB RAM minimum (1 GB+ recommended)
  • OpenRouter account — Free tier available at openrouter.ai
  • Domain / reverse proxy — Optional, but recommended for HTTPS

The recommended install method handles Docker and all dependencies automatically. You just need a fresh Linux VPS.


Quick start — one command

Run this on a fresh Linux VPS and the script handles everything: Docker, VPN, Caddy, CLI, and the full Gumm setup.

curl -fsSL https://raw.githubusercontent.com/gumm-ai/gumm/main/scripts/setup-server.sh | sudo bash

The installer is interactive — it asks for your admin password, VPN choice (Tailscale or NetBird), and Telegram token. When it finishes, it prints your dashboard URL, ready to open.

→ Full installation guide: installation/README.md


Documentation

  • Installation — Deploy Gumm: automated script or manual Docker
  • VPN (Tailscale) — Secure mesh networking with Tailscale
  • VPN (NetBird) — Secure mesh networking with NetBird (EU)
  • Getting Started — Setup wizard, first login, first message
  • Chat — Using the conversation interface
  • Brain & Memory — Identity, memory, and knowledge base
  • Modules — Installing and managing modules
  • CLI & Agent Mode — Remote machine control with the CLI
  • Telegram — Connecting a Telegram bot
  • Technical Overview — Architecture, configuration, API reference
  • Security — Security model, hardening, and best practices
  • Troubleshooting — Common issues and how to fix them