Configuring OpenClaw with Docker: A Practical Guide
How I built a self-hosted AI assistant infrastructure, with model recommendations and hard-earned lessons from the setup process.
OpenClaw is an autonomous AI assistant framework that runs on your infrastructure. Here's how I set it up in Docker, configured cloud models, and integrated it with my homelab services.
Prerequisites
- Linux server with Docker installed
- Neo4j instance (for memory storage)
- Discord server (for notifications and interaction)
- Ollama Cloud API access (or local Ollama with cloud relay)
Step 1: Deploy the OpenClaw Container
Deploy OpenClaw as a Docker container. Key configuration:
- Repository: openclaw/openclaw:latest
- Port mapping: 18789 (gateway) → host port of your choice
- Volume mapping: /mnt/user/appdata/openclaw → /root/.openclaw
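Putting those settings together, a minimal `docker run` invocation might look like this (the image, port, and volume path come from the list above; the container name and restart policy are my own choices, and your host port may differ):

```shell
docker run -d \
  --name openclaw \
  --restart unless-stopped \
  -p 18789:18789 \
  -v /mnt/user/appdata/openclaw:/root/.openclaw \
  openclaw/openclaw:latest
```

Swap the left-hand side of `-p 18789:18789` for whatever host port you chose.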
Step 2: Configure Models
This is where most people get stuck. Here's what worked for me:
Cloud Models (Recommended)
Edit openclaw.json in your workspace:
{
"models": {
"default": "ollama/qwen3.5:cloud",
"available": [
{ "id": "ollama/qwen3.5:cloud", "name": "Qwen 3.5" },
{ "id": "ollama/devstral-small-2:cloud", "name": "Devstral Small 2" },
{ "id": "ollama/glm-5:cloud", "name": "GLM-5" },
{ "id": "ollama/ministral-3:8b-cloud", "name": "Ministral 3" }
]
}
}
Model recommendations:
- Qwen3.5:cloud (397B) - Primary model, handles everything including vision
- Devstral Small 2:cloud - Coding tasks
- GLM-5:cloud - Heavy text tasks (blog writing, analysis)
- Ministral 3:8b-cloud - Heartbeats and simple cron jobs
Local Models (If You Have the Hardware)
If you have a GPU with 24GB+ VRAM, you can run smaller models locally:
docker exec openclaw ollama pull llama3.2:3b
Then update your config to use ollama/llama3.2:3b for simple tasks.
Step 3: Set Up Neo4j Memory
OpenClaw uses Neo4j as its primary memory store. Configure the connection in TOOLS.md:
## Neo4j Memory Graph
**Connection:** bolt://your-server-ip:7687
**Auth:** neo4j / your-password
The memory system automatically ingests data from markdown files and creates graph nodes for people, projects, goals, services, and events.
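Once the connection details are in place, you can sanity-check that memories are actually landing in the graph with cypher-shell (assuming it's installed on a machine that can reach the server; the query is generic and doesn't depend on OpenClaw's exact node labels):

```shell
# Count nodes per label to confirm the memory graph is being populated.
cypher-shell -a bolt://your-server-ip:7687 -u neo4j -p 'your-password' \
  "MATCH (n) RETURN labels(n) AS label, count(*) AS nodes ORDER BY nodes DESC"
```

If this returns zero rows after OpenClaw has been running for a while, check the connection block in TOOLS.md before debugging anything else.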
Step 4: Configure Discord Integration
In the OpenClaw config, add your Discord bot token and server ID:
{
"plugins": {
"entries": {
"discord": {
"token": "your-bot-token",
"guildId": "your-server-id"
}
}
}
}
Create channels for:
- #status - Hourly health checks
- #updates - Daily summaries
- #blog-prompts - Daily writing prompts (optional)
Step 5: Set Up Cron Jobs
OpenClaw includes a cron system for scheduled tasks. Here are the essential jobs I configured:
Hourly Status Check
{
"name": "hourly-alive-check",
"schedule": "0 * * * *",
"model": "ministral-3:8b-cloud",
"task": "Check system health and post to Discord"
}
Media Manager (Every 30 Minutes)
{
"name": "media-manager",
"schedule": "*/30 * * * *",
"model": "ministral-3:8b-cloud",
"task": "Check Radarr/Sonarr/Lidarr for missing content and stalled downloads"
}
Daily Blog Prompt (6 AM)
{
"name": "blog-prompt",
"schedule": "0 6 * * *",
"model": "glm-5:cloud",
"task": "Generate a blog writing prompt and post to #blog-prompts"
}
Step 6: Configure Exec Permissions
For OpenClaw to run shell commands (like checking Docker containers or rebooting nodes), you need to configure exec permissions. Edit exec-approvals.json:
{
"defaults": {
"security": "full",
"ask": "off"
}
}
Security note: Only use security: "full" in trusted home network environments. For multi-user or exposed deployments, prefer security: "allowlist" with ask: "on-miss".
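For reference, a locked-down variant along the lines the security note suggests might look like the following. This is a sketch: the "allowlist" entries and their field names are illustrative, so check your OpenClaw version's exec-approvals schema for the exact shape.

```json
{
  "defaults": {
    "security": "allowlist",
    "ask": "on-miss",
    "allowlist": [
      { "command": "docker ps" },
      { "command": "docker restart openclaw" }
    ]
  }
}
```

With ask: "on-miss", anything outside the allowlist triggers an approval prompt instead of running silently.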
Step 7: Add Node Support (Optional)
OpenClaw supports remote nodes (Windows, macOS, Linux) for distributed tasks. I set up a Windows gaming PC as a node for browser automation:
- Install OpenClaw Node on the Windows machine
- Configure gateway URL: ws://your-server-ip:18789
- Enable WinRM for remote reboot capabilities
The main instance can now reboot the Windows node if it goes offline—a huge quality-of-life improvement.
Hard-Earned Lessons
- Don't run large models locally unless you have enterprise hardware. I wasted days trying to make 70B models work on consumer GPUs. Cloud models are worth it.
- Neo4j is non-negotiable. Without graph-based memory, your AI is just a chatbot. Invest time in setting this up properly.
- Start with one cron job. Don't try to automate everything at once. Begin with hourly status checks, then expand.
- Monitor the monitor. Set up alerts for when OpenClaw itself goes down. I use a simple watchdog script that restarts the container if it crashes.
- Document everything. Your future self will thank you when you need to debug why the media manager stopped working at 3 AM.
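The watchdog mentioned above can be as simple as a cron-driven shell script. This is a sketch of my approach rather than an OpenClaw feature; the container name matches the `docker exec openclaw` command used earlier, and the restart logic is factored into a small function so it's easy to adjust:

```shell
#!/bin/sh
# Watchdog: restart the OpenClaw container if it is not running.
# Schedule via cron, e.g.: */5 * * * * /usr/local/bin/openclaw-watchdog.sh
CONTAINER="openclaw"

# Decide from a container status string whether a restart is needed.
needs_restart() {
  [ "$1" != "running" ]
}

# Only act on hosts where Docker is available.
if command -v docker >/dev/null 2>&1; then
  status=$(docker inspect -f '{{.State.Status}}' "$CONTAINER" 2>/dev/null || echo "missing")
  if needs_restart "$status"; then
    echo "watchdog: $CONTAINER status is '$status', restarting"
    docker restart "$CONTAINER"
  fi
fi
```

Statuses like "exited" or "dead" (or a missing container) trigger a restart; "running" leaves it alone.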
What's Next
With OpenClaw running, you can start building automations. Some ideas:
- Email triage and categorization
- Calendar management and meeting prep
- Infrastructure monitoring with intelligent alerts
- Content generation (blog posts, reports, summaries)
The framework is flexible—what you build depends on your needs. If you run into issues, the community Discord is active and helpful.