docs: add Ollama launch integration documentation (#716)

Document the new `ollama launch openclaude` command as a shortcut
for running OpenClaude through a local Ollama instance. OpenClaude
is now a supported target in Ollama's launch system, and the command
handles all environment variable setup automatically, so no manual
env vars are needed.

Changes:
- README.md: Add "Using Ollama's launch command" section after the
  manual Ollama env var setup, and update the provider table to
  list `ollama launch` as a setup path for Ollama
- docs/advanced-setup.md: Add `ollama launch` as the recommended
  method at the top of the Ollama section, with the manual env var
  approach kept below as an alternative
Rubens Oliveira
2026-04-16 10:23:44 -03:00
committed by GitHub
parent fbcd928f7f
commit d32a2a1329
2 changed files with 21 additions and 1 deletion

@@ -92,6 +92,16 @@ $env:OPENAI_MODEL="qwen2.5-coder:7b"
openclaude
```
### Using Ollama's launch command
If you have [Ollama](https://ollama.com) installed, you can skip the env var setup entirely:
```bash
ollama launch openclaude --model qwen2.5-coder:7b
```
This automatically sets `ANTHROPIC_BASE_URL`, model routing, and auth so that all API traffic goes through your local Ollama instance. It works with any model available to your Ollama instance, whether pulled locally or served from Ollama's cloud.
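For reference, the manual setup this replaces looks roughly like the sketch below (assumptions: the variable names follow the manual env var section above, and Ollama is listening on its default local port, 11434; check that section for the exact values):
```bash
# Sketch only, not the exact commands: the variable names and URL are
# assumed from the manual env var section above and Ollama's default port.
export ANTHROPIC_BASE_URL="http://localhost:11434"
export OPENAI_MODEL="qwen2.5-coder:7b"
openclaude
```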
## Setup Guides
Beginner-friendly guides:
@@ -114,7 +124,7 @@ Advanced and source-build guides:
| GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
| Codex OAuth | `/provider` | Opens ChatGPT sign-in in your browser and stores Codex credentials securely |
| Codex | `/provider` | Uses existing Codex CLI auth, OpenClaude secure storage, or env credentials |
| Ollama | `/provider` or env vars | Local inference with no API key |
| Ollama | `/provider`, env vars, or `ollama launch` | Local inference with no API key |
| Atomic Chat | advanced setup | Local Apple Silicon backend |
| Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |