From d32a2a1329f6f65f0bc07d2409b9223d003c9fbf Mon Sep 17 00:00:00 2001
From: Rubens Oliveira <149012455+jfxdev01@users.noreply.github.com>
Date: Thu, 16 Apr 2026 10:23:44 -0300
Subject: [PATCH] docs: add Ollama launch integration documentation (#716)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Document the new `ollama launch openclaude` command as a shortcut for
running OpenClaude through a local Ollama instance. This is now
supported in Ollama's launch system and handles all environment
variable setup automatically — no manual env vars needed.

Changes:

- README.md: Add "Using Ollama's launch command" section after the
  manual Ollama env var setup, and update the provider table to list
  `ollama launch` as a setup path for Ollama
- docs/advanced-setup.md: Add `ollama launch` as the recommended
  method at the top of the Ollama section, with the manual env var
  approach kept below as an alternative
---
 README.md              | 12 +++++++++++-
 docs/advanced-setup.md | 10 ++++++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 1e2bbe6e..52ecc7b8 100644
--- a/README.md
+++ b/README.md
@@ -92,6 +92,16 @@ $env:OPENAI_MODEL="qwen2.5-coder:7b"
 openclaude
 ```
 
+### Using Ollama's launch command
+
+If you have [Ollama](https://ollama.com) installed, you can skip the env var setup entirely:
+
+```bash
+ollama launch openclaude --model qwen2.5-coder:7b
+```
+
+This automatically sets `ANTHROPIC_BASE_URL`, model routing, and auth so all API traffic goes through your local Ollama instance. Works with any model you have pulled — local or cloud.
+
 ## Setup Guides
 
 Beginner-friendly guides:
@@ -114,7 +124,7 @@ Advanced and source-build guides:
 | GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
 | Codex OAuth | `/provider` | Opens ChatGPT sign-in in your browser and stores Codex credentials securely |
 | Codex | `/provider` | Uses existing Codex CLI auth, OpenClaude secure storage, or env credentials |
-| Ollama | `/provider` or env vars | Local inference with no API key |
+| Ollama | `/provider`, env vars, or `ollama launch` | Local inference with no API key |
 | Atomic Chat | advanced setup | Local Apple Silicon backend |
 | Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |
 
diff --git a/docs/advanced-setup.md b/docs/advanced-setup.md
index 75c401ee..291aee7d 100644
--- a/docs/advanced-setup.md
+++ b/docs/advanced-setup.md
@@ -84,6 +84,16 @@ OpenRouter model availability changes over time. If a model stops working, try a
 
 ### Ollama
 
+Using `ollama launch` (recommended if you have Ollama installed):
+
+```bash
+ollama launch openclaude --model llama3.3:70b
+```
+
+This handles all environment setup automatically — no env vars needed. Works with any local or cloud model available in your Ollama instance.
+
+Using environment variables manually:
+
 ```bash
 ollama pull llama3.3:70b