
# Configuration

OpenMLR can be configured via environment variables (a `.env` file) or through the Settings UI after login.

## Environment Variables

### Required vs Optional

**Key point:** No environment variables are strictly required to start OpenMLR. The app will start and guide you through configuration via the onboarding flow.

However, for the app to be functional, you need:

1. A database connection (auto-configured in Docker)
2. At least one LLM provider API key (configurable via UI)

### Variable Categories

| Category | Required to Start? | Can Configure via UI? |
| --- | --- | --- |
| Database | Yes (auto in Docker) | No |
| Security | Auto-generated in dev | No |
| LLM Providers | No | Yes |
| Development | No | No |
| Background Jobs | No | No |
| Tools & Integrations | No | Yes |
| Sandbox | No | Partial |

### Database

Required for the app to function, but auto-configured when using Docker Compose.

| Variable | Default | Description |
| --- | --- | --- |
| `DATABASE_URL` | See below | PostgreSQL connection string |

Docker Compose default:

```
postgresql+asyncpg://openmlr:openmlr@db:5432/openmlr
```

Local development:

```bash
DATABASE_URL="postgresql+asyncpg://user:pass@localhost:5432/openmlr"
```

### Security

| Variable | Default | Description |
| --- | --- | --- |
| `JWT_SECRET_KEY` | Auto-generated in dev | Secret for signing auth tokens |
| `CORS_ORIGINS` | `http://localhost:3000,http://localhost:5173` | Allowed CORS origins |
| `ENVIRONMENT` | `development` | `development` or `production` |

**Production:** Always set a strong `JWT_SECRET_KEY` when running in production:

```bash
JWT_SECRET_KEY=$(openssl rand -hex 32)
```
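
A common pattern is to generate the key and persist it to `.env` in one step (adjust the file path to your setup):

```bash
# Generate a 64-character hex secret and append it to .env
echo "JWT_SECRET_KEY=$(openssl rand -hex 32)" >> .env
```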

### LLM Providers

All LLM keys are optional at startup. Configure them via Settings > Providers in the UI, or set them in `.env`.

**UI Configuration:** API keys set via the Settings UI are stored encrypted in the database and take precedence over `.env` values. This is the recommended approach.

#### Cloud Providers

| Variable | Provider | Get Key |
| --- | --- | --- |
| `OPENAI_API_KEY` | OpenAI | platform.openai.com |
| `ANTHROPIC_API_KEY` | Anthropic | console.anthropic.com |
| `OPENROUTER_API_KEY` | OpenRouter | openrouter.ai |
| `OPENCODE_GO_API_KEY` | OpenCode Go | opencode.ai |

#### Local Models

For self-hosted models with OpenAI-compatible APIs:

| Variable | Default | Description |
| --- | --- | --- |
| `OLLAMA_API_BASE` | `http://localhost:11434/v1` | Ollama server URL |
| `OLLAMA_MODEL` | `llama3.1` | Default Ollama model |
| `LMSTUDIO_API_BASE` | `http://localhost:1234/v1` | LM Studio server URL |
| `LMSTUDIO_MODEL` | `default` | Default LM Studio model |
| `LOCAL_API_BASE` | `http://localhost:8000/v1` | Custom OpenAI-compatible API |
| `LOCAL_MODEL` | `local/default` | Custom model name |
| `LOCAL_API_KEY` | `not-needed` | API key, if the server requires one |
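
Before pointing OpenMLR at a local server, you can confirm it is reachable by hitting its OpenAI-compatible model-listing endpoint (Ollama's default URL shown; swap in the LM Studio or custom base as needed):

```bash
# List the models exposed by a local OpenAI-compatible server
curl http://localhost:11434/v1/models
```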

#### Custom Providers

You can add custom OpenAI-compatible or Anthropic-compatible providers via Settings > Providers > Add Custom Provider. Each custom provider requires:

| Field | Description |
| --- | --- |
| Display Name | Human-readable name shown in the model picker |
| Provider ID | Prefix for model IDs (e.g., `my-org` makes models like `my-org/model-name`) |
| SDK Type | OpenAI SDK, Anthropic SDK, OpenRouter, or LiteLLM |
| API Base URL | The provider's API endpoint |
| API Key | Authentication key |

After saving, use the **Fetch Models** button to retrieve the provider's model list via its `/models` endpoint. Fetched models appear in the model picker alongside standard providers.
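
Roughly speaking, Fetch Models issues an authenticated GET against the base URL's `/models` path, following the OpenAI API convention. If a provider returns no models, you can reproduce the call by hand to debug it (the URL and key below are placeholders):

```bash
# Manually fetch the model list from an OpenAI-compatible provider
curl -H "Authorization: Bearer $API_KEY" https://api.example.com/v1/models
```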

#### Model Selection

Models are auto-detected based on configured keys. The model picker shows:

- Recently used models (top 5) for quick access
- Models grouped by provider with logos, sorted by release date (newest first)
- Live model lists from models.dev for standard providers

Override the default in Settings > Agent or via the model dropdown.

| Key Present | Example Model |
| --- | --- |
| `ANTHROPIC_API_KEY` | `anthropic/claude-sonnet-4` |
| `OPENAI_API_KEY` | `openai/gpt-4o` |
| `OPENROUTER_API_KEY` | `openrouter/anthropic/claude-sonnet-4` |
| `OLLAMA_MODEL` | `ollama/llama3.1` |

### Development

| Variable | Default | Description |
| --- | --- | --- |
| `DEV_MODE` | `false` | Enables Swagger UI at `/docs`, disables static frontend serving |

When `DEV_MODE=true`:

- Swagger UI is available at `http://localhost:3000/docs`
- ReDoc is available at `http://localhost:3000/redoc`
- The root URL (`/`) redirects to `/docs`
- The static frontend bundle is not served (use the Vite dev server on port 5173 instead)

This is auto-set in Docker development mode (`docker-compose.yml`).
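
For reference, a local invocation might look like the sketch below. The Swagger UI and ReDoc endpoints suggest a FastAPI backend served by uvicorn, but the module path here is a guess; substitute the project's real entrypoint:

```bash
# Hypothetical dev startup; replace app.main:app with the actual application module
DEV_MODE=true uvicorn app.main:app --port 3000 --reload
```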


### Background Jobs

Background jobs enable persistent processing that survives browser refreshes. They are auto-configured in Docker Compose; for standalone setups, see the Redis sketch below.

| Variable | Default | Description |
| --- | --- | --- |
| `REDIS_URL` | `redis://localhost:6379/0` | Redis connection string |
| `USE_BACKGROUND_JOBS` | `false` | Enable Celery workers |
| `USE_REDIS_PUBSUB` | `false` | Enable Redis pub/sub for SSE |

When enabled:

- The agent continues processing if you close the browser
- Multiple conversations process in parallel
- Interrupt commands actually kill running worker tasks
- SSE reconnection catches up on missed events
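
If you run without the project's Docker Compose setup, you need to provide Redis yourself. A minimal standalone sketch (this is not the project's actual compose file):

```yaml
# docker-compose.redis.yml - minimal local Redis for background jobs
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
```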

### Tools & Integrations

All of these are optional. Configure them via Settings > Providers in the UI.

| Variable | Service | Description |
| --- | --- | --- |
| `BRAVE_API_KEY` | Brave Search | Web search capability |
| `GITHUB_TOKEN` | GitHub | Repository access, code search |
| `OPENALEX_API_KEY` | OpenAlex | Optional key for higher rate limits |
| `SEMANTIC_SCHOLAR_API_KEY` | Semantic Scholar | API key for paper search with abstracts |

**Research tools work without keys:** Paper search (OpenAlex, arXiv, CrossRef) works without any API keys; keys only provide higher rate limits.


### Sandbox

Controls how the agent executes code.

| Variable | Default | Description |
| --- | --- | --- |
| `OPEN_MLR_DOCKER_IMAGE` | `python:3.12-slim` | Docker image for sandboxed execution |
| `OPENMLR_ALLOW_DIRECT_EXEC` | `false` | Allow host execution (security risk) |
| `OPENMLR_WORKSPACE_ROOT` | Current directory | Base path for file operations |
| `MODAL_TOKEN_ID` | - | Modal.com token ID |
| `MODAL_TOKEN_SECRET` | - | Modal.com token secret |
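
As an illustration, a `.env` fragment pinning the sandbox image and workspace might look like this (the paths and values are examples, not additional defaults):

```bash
# Illustrative sandbox settings
OPEN_MLR_DOCKER_IMAGE=python:3.12-slim
OPENMLR_WORKSPACE_ROOT=/home/me/openmlr-workspace
# OPENMLR_ALLOW_DIRECT_EXEC=true  # runs agent code directly on the host; a security risk
```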

## MCP Servers

OpenMLR supports the Model Context Protocol (MCP) for connecting external tools. Configure MCP servers in Settings > MCP Servers.

### Adding an MCP Server

1. Go to Settings > MCP Servers
2. Click **Add Server**
3. Enter a unique server name
4. Configure the transport:
    - **HTTP**: enter the server URL (e.g., `http://localhost:8080/mcp`)
    - **stdio**: enter the command and arguments (e.g., `npx -y @modelcontextprotocol/server-filesystem /path/to/dir`)

### Example Configuration

HTTP server:

```
Name: my-tools
Transport: HTTP
URL: http://localhost:3001/mcp
```

stdio server (filesystem access):

```
Name: filesystem
Transport: stdio
Command: npx
Arguments: -y, @modelcontextprotocol/server-filesystem, /Users/me/projects
```

MCP servers are connected when you start a new session. Tools from connected servers appear alongside built-in tools.
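
To sanity-check that an HTTP MCP server is up before adding it, you can send a JSON-RPC `initialize` request by hand. This is a rough probe rather than anything OpenMLR requires; the headers and fields follow the MCP spec, and your server may expect a different protocol version:

```bash
# Rough liveness probe for an HTTP MCP server (JSON-RPC initialize)
curl -X POST http://localhost:3001/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"probe","version":"0.0.1"}}}'
```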


## Settings Pages

Settings are accessible from the sidebar after login:

| Route | What You Configure |
| --- | --- |
| `/settings/providers` | API keys for LLM providers and tools |
| `/settings/agent` | Default model, research model, max iterations |
| `/settings/sandbox` | Execution environment (Docker/SSH/Modal) |
| `/settings/writing` | Citation style, export format |
| `/settings/mcp` | MCP server connections |

## Full .env Example

```bash
# ═══════════════════════════════════════════════════════════
# REQUIRED FOR LOCAL DEVELOPMENT (auto-configured in Docker)
# ═══════════════════════════════════════════════════════════

DATABASE_URL="postgresql+asyncpg://openmlr:openmlr@localhost:5432/openmlr"
JWT_SECRET_KEY=change-me-in-production

# ═══════════════════════════════════════════════════════════
# LLM PROVIDERS (configure at least one - via .env or UI)
# ═══════════════════════════════════════════════════════════

# Cloud providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
OPENROUTER_API_KEY=sk-or-...

# Local models
# OLLAMA_API_BASE=http://localhost:11434/v1
# OLLAMA_MODEL=llama3.1

# ═══════════════════════════════════════════════════════════
# BACKGROUND JOBS (optional, auto-configured in Docker)
# ═══════════════════════════════════════════════════════════

REDIS_URL=redis://localhost:6379/0
USE_BACKGROUND_JOBS=true
USE_REDIS_PUBSUB=true

# ═══════════════════════════════════════════════════════════
# TOOLS (optional - configure via UI or here)
# ═══════════════════════════════════════════════════════════

BRAVE_API_KEY=...
GITHUB_TOKEN=ghp_...
```

## Agent Configuration

`backend/configs/agent_config.yaml` controls agent defaults:

```yaml
model_name: ""              # empty = auto-detect from available providers
max_iterations: 300         # max tool calls per conversation turn
stream: true                # stream responses
paper_search_budget: 25     # max papers per search
require_plan_approval: true # require approval before executing plans
```

Override per-user via Settings > Agent.