Guilgo Blog

Notes from my daily work with technology.

Those who know me know I’m a bit of a nerd, and my posts make that clear. I’ve been automating home and network tasks with my own script (cachifo.py) and n8n workflows. What drew me to OpenClaw is its proactive approach: it doesn’t rely only on cronjobs or manual interaction. The idea is to integrate it into that existing setup.

OpenClaw is a conversational assistant you can deploy on your own homelab to use local LLMs (Ollama) or APIs like OpenAI via Telegram. The guided installation looks straightforward on paper, but in a real deployment with Docker, Telegram and CPU-only Ollama, several frustrating errors show up that give the impression of a “broken install”. This post documents the issues found during a real installation and how to fix them.

It complements the official OpenClaw guide and references like the 2026 Complete Guide from el diario IA, focusing on the practical gotchas of a homelab deployment with Docker, Telegram and CPU-only Ollama.


OpenClaw homelab CPU-only installation checklist

Before going into the details: following this order avoids about 80% of first-day failures:

  1. Telegram webhook cleared (avoid 409 error)
  2. Telegram pairing approved
  3. Model: confirm context ≥16k (avoid “Model context window too small”)
  4. Ollama: correct baseUrl from Docker, model warm-up
  5. Timeouts: timeoutSeconds increased on CPU
  6. Cloudflare (if applicable): WebSockets, correct endpoint, bind=lan

Telegram: 409 conflict (webhook vs getUpdates/polling)

Symptom

Logs show something like:

  • getUpdates conflict (409): terminated by setWebhook request

And the bot stops responding or becomes erratic.

Cause

The same bot/token is being used by webhook (e.g. n8n) and polling (OpenClaw with getUpdates) at the same time. Telegram does not allow both.

Solution

Before debugging OpenClaw, check the webhook:

curl "https://api.telegram.org/bot<BOT_TOKEN>/getWebhookInfo"
curl "https://api.telegram.org/bot<BOT_TOKEN>/deleteWebhook?drop_pending_updates=true"

If you use n8n or another system with a webhook on that bot, you cannot use it at the same time as OpenClaw in polling mode.
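
If you want to script the check, a minimal sketch (assuming curl and jq are installed on the host and your token is in TELEGRAM_BOT_TOKEN) looks like this:

# Sketch: clear the webhook only if one is actually set
TOKEN="${TELEGRAM_BOT_TOKEN:?set your bot token first}"
URL=$(curl -s "https://api.telegram.org/bot${TOKEN}/getWebhookInfo" | jq -r '.result.url')
if [ -n "$URL" ]; then
  echo "Webhook set to $URL, deleting it so polling can take over"
  curl -s "https://api.telegram.org/bot${TOKEN}/deleteWebhook?drop_pending_updates=true"
else
  echo "No webhook set; the 409 likely comes from another getUpdates consumer"
fi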


Telegram: “You are not authorized” due to dmPolicy pairing

Symptom

/status or other commands return: “You are not authorized to use this command.”

Cause

By default, the Telegram channel has dmPolicy: "pairing". Until the owner approves the pairing, the bot rejects commands.

Solution

  1. Message the bot in DM → it returns a pairing code.
  2. Approve:
openclaw pairing approve telegram <CODE>
# or with Docker:
docker compose run --rm openclaw-cli pairing approve telegram <CODE>

For initial testing you can switch to dmPolicy: "open" (only for testing, not recommended for deployments exposed to the internet):

jq '.channels.telegram.dmPolicy = "open" | .channels.telegram.allowFrom = ["*"]' ~/.openclaw/openclaw.json > tmp && mv tmp ~/.openclaw/openclaw.json
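
When you finish testing, revert to the stricter default with the same jq pattern (and tighten allowFrom again if you had opened it):

jq '.channels.telegram.dmPolicy = "pairing"' ~/.openclaw/openclaw.json > tmp && mv tmp ~/.openclaw/openclaw.json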

Docker + Ollama: “localhost” is not the host (baseUrl and network)

Symptom

OpenClaw doesn’t detect Ollama, or requests fail with timeouts or fetch failed, when pointing to 127.0.0.1:11434 from inside the container.

Cause

Inside Docker, 127.0.0.1 is the container itself, not the host.

Solution

Ollama in another container (recommended): same docker-compose network and baseUrl: "http://ollama:11434".

Ollama on host: use the host IP or host-gateway. On Docker Desktop (Mac/Windows), host.docker.internal:11434 works; on native Linux it doesn’t exist by default, so if you need it, add this to the container service:

extra_hosts:
  - "host.docker.internal:host-gateway"

And use baseUrl: "http://host.docker.internal:11434". Never use 127.0.0.1 from inside the container.
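
A quick way to confirm the container side can actually reach Ollama is to run a throwaway curl container on the same compose network and hit Ollama’s /api/tags endpoint (the network name openclaw_default is an assumption; check the first command for the real one):

docker network ls | grep -i openclaw          # find the actual compose network name
docker run --rm --network openclaw_default curlimages/curl -s http://ollama:11434/api/tags

If this returns a JSON list of models, the network and baseUrl are fine; if it hangs or the connection is refused, fix the network or the baseUrl before touching anything else.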


Model: the agent requires ≥16k for tools and conversational memory

Symptom

  • Model context window too small (8192 tokens). Minimum is 16000.

Cause

Common model configs (e.g. llama3.2 at 8192 tokens) don’t meet OpenClaw’s minimum (16k), and onboarding may write the model entry with values that are too low.

Solution

Use models with ≥16k context; ideally 32k on CPU to be safe. Valid example: mistral (32768).

Verify:

ollama show mistral | grep -i "context length"

If the config has low contextWindow or maxTokens, fix it:

jq '.models.providers.ollama.models[0].contextWindow = 32768 | .models.providers.ollama.models[0].maxTokens = 8192' ~/.openclaw/openclaw.json > tmp && mv tmp ~/.openclaw/openclaw.json
jq '.agents.defaults.contextTokens = 16384' ~/.openclaw/openclaw.json > tmp && mv tmp ~/.openclaw/openclaw.json
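
To confirm the new values were actually written, read them back with jq:

jq '.models.providers.ollama.models[0] | {contextWindow, maxTokens}' ~/.openclaw/openclaw.json
jq '.agents.defaults.contextTokens' ~/.openclaw/openclaw.json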

Timeouts: the first reply on CPU can take a while (warm-up and timeoutSeconds)

Symptom

“Request timed out before a response was generated.”

Cause

On CPU, the first model load can take several minutes and the request times out. The first run may include model compilation/optimisation in Ollama as well as loading into memory. The default timeout (600 s) is not enough.

Solution

  • Increase timeout in config:
jq '.agents.defaults.timeoutSeconds = 1800' ~/.openclaw/openclaw.json > tmp && mv tmp ~/.openclaw/openclaw.json
  • Warm up the model:
ollama run mistral "hello"
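
On top of the one-off warm-up, you can ask Ollama to keep the model in memory for a while so the next request doesn’t pay the load cost again. A sketch using Ollama’s keep_alive parameter, run from the host and assuming the default port:

# An empty generate request just loads the model; keep_alive also accepts
# values like "30m" or -1 to keep it loaded indefinitely
curl -s http://localhost:11434/api/generate -d '{"model": "mistral", "keep_alive": "1h"}'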

Other common homelab deployment issues

CRLF in scripts (Linux)

Symptom: errors like install.sh: line 4: set: -: invalid option or $'\r': command not found

Solution:

sed -i 's/\r$//' install.sh preflight.sh add-telegram.sh
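
To spot other scripts with the same problem before they bite, grep for carriage returns (GNU grep assumed):

grep -rlI $'\r' --include='*.sh' .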

Missing config / Gateway won’t start

Symptom: Missing config. Run openclaw setup or set gateway.mode=local

Cause: onboarding was never run. Solution: run the non-interactive onboarding with Ollama as a custom provider:

docker compose run --rm openclaw-cli onboard --non-interactive --accept-risk --no-install-daemon \
  --auth-choice custom-api-key \
  --custom-base-url "http://ollama:11434/v1" \
  --custom-model-id "mistral" \
  --custom-api-key "ollama-local" \
  --custom-provider-id "ollama" \
  --custom-compatibility openai \
  --gateway-port 18789 --gateway-bind lan --skip-skills

Cloudflare / exposure: WebSockets checklist

If you expose OpenClaw with Cloudflare Tunnel or reverse proxy:

  • Enable WebSockets
  • Point to the correct port (e.g. http://<IP>:18789)
  • Confirm bind=lan and token auth in the UI
  • If the proxy requires auth, check gateway token or API key config in OpenClaw docs
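
A basic reachability sanity check before blaming OpenClaw (your-domain.example is a placeholder; any HTTP response, even a 401 or 404, proves connectivity, while a timeout points at the tunnel or the bind setting):

curl -I http://<LAN_IP>:18789          # from the LAN, straight to the gateway
curl -I https://your-domain.example    # through Cloudflare / the reverse proxy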

Ollama vs OpenAI API on a modest homelab

On a real homelab with 12 GB RAM and CPU-only, Mistral 7B with Ollama can push CPU to ~350% (across cores) and RAM to ~4.7 GB. The system goes into heavy swap and responses hit the timeout even with timeoutSeconds at 1800.

Practical alternative: use the OpenAI API (e.g. gpt-4o-mini). Immediate latency, no local resource pressure. Requires an API key and pay-per-use, but on modest homelabs it’s often the viable option for a bot that answers in seconds.
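
If you go the API route, one option is to re-run onboarding pointing at OpenAI. This is a sketch, assuming the same onboarding flags shown above for the custom provider also accept a hosted endpoint, and that OPENAI_API_KEY holds your key:

docker compose run --rm openclaw-cli onboard --non-interactive --accept-risk --no-install-daemon \
  --auth-choice custom-api-key \
  --custom-base-url "https://api.openai.com/v1" \
  --custom-model-id "gpt-4o-mini" \
  --custom-api-key "$OPENAI_API_KEY" \
  --custom-provider-id "openai" \
  --custom-compatibility openai \
  --gateway-port 18789 --gateway-bind lan --skip-skills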

Criteria         | Ollama + Mistral (local)                      | OpenAI API (gpt-4o-mini)
-----------------|-----------------------------------------------|-----------------------------
Privacy          | 100% local                                    | Data leaves to the provider
Local resources  | 4+ GB RAM, high CPU                           | Minimal
Latency          | Minutes on CPU-only                           | Seconds
Cost             | Free (electricity)                            | Pay-per-token
Modest homelab   | Not very practical for frequent conversation | Recommended

To integrate Telegram and AI into monitoring projects (alerts, bots), you can combine OpenClaw with Wazuh and Telegram, or with other Docker stacks, depending on your use case.


Quick health check (Ollama + OpenClaw)

ollama list
docker compose run --rm openclaw-cli models list

Useful log filter:

docker compose logs -f openclaw-gateway 2>&1 | grep -E "(error|fail|timeout|telegram|getUpdates|ollama)"
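
And a quick check from the host that the Ollama API itself is answering (default port assumed):

curl -s http://localhost:11434/api/tags | jq -r '.models[].name'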

Conclusion

With these tweaks, OpenClaw goes from seeming unstable to behaving predictably in home setups. Most initial issues are not software bugs but implicit assumptions about network, model context and inference latency.