OpenClaude Local Agent Playbook
This playbook is a practical guide to running OpenClaude with a local model (Ollama), working safely, and getting strong day-to-day results.
1. What You Have
- A CLI agent loop that can read/write files, run terminal commands, and help with coding workflows.
- A local provider profile system (profile:init and dev:profile).
- Runtime checks (doctor:runtime) and reporting (doctor:report).
- A local model profile currently set to llama3.1:8b.
2. Daily Start (Fast Path)
Run this in your project root:
bun run dev:profile
For quick switches:
# low latency preset
bun run dev:fast
# better coding quality preset
bun run dev:code
If everything is healthy, OpenClaude starts directly.
3. One-Time Setup (If Needed)
3.1 Initialize a local profile
bun run profile:init -- --provider ollama --model llama3.1:8b
Or let OpenClaude recommend the best local model for your goal:
bun run profile:init -- --provider ollama --goal coding
Preview recommendations before saving:
bun run profile:recommend -- --goal coding --benchmark
3.2 Confirm profile file
Get-Content .\.openclaude-profile.json
3.3 Validate environment
bun run doctor:runtime
4. Health and Diagnostics
4.1 Human-readable checks
bun run doctor:runtime
4.2 JSON diagnostics (automation/logging)
bun run doctor:runtime:json
4.3 Persist runtime report
bun run doctor:report
Report output:
reports/doctor-runtime.json
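To inspect the persisted report from PowerShell, a minimal sketch (assuming the file contains a single JSON document) is:
# pretty-print the saved runtime report
Get-Content -Raw .\reports\doctor-runtime.json | ConvertFrom-Json | Format-List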
4.4 Hardening checks
# practical checks (smoke + runtime doctor)
bun run hardening:check
# strict checks (includes typecheck)
bun run hardening:strict
5. Provider Modes
5.1 Local mode (Ollama)
bun run profile:init -- --provider ollama --model llama3.1:8b
bun run dev:profile
Expected behavior:
- No API key required.
- OPENAI_BASE_URL should be http://localhost:11434/v1 (a direct endpoint check is sketched below).
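To sanity-check that endpoint directly, you can ask Ollama's OpenAI-compatible API for its model list (a quick probe, assuming the default port 11434):
# should return a JSON list of locally available models
Invoke-RestMethod http://localhost:11434/v1/models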
5.2 OpenAI mode
bun run profile:init -- --provider openai --api-key sk-... --model gpt-4o
bun run dev:profile
Expected behavior:
- Real API key required.
- Placeholder values fail fast.
6. Troubleshooting Matrix
6.1 Script not found "dev"
Cause:
- You ran the command in the wrong folder.
Fix:
cd "C:\Users\Lucas Pedry\Documents\openclaude\openclaude"
bun run dev:profile
6.2 ollama: term not recognized
Cause:
- Ollama not installed or PATH not loaded in this terminal.
Fix:
- Install Ollama from https://ollama.com/download/windows or run winget install Ollama.Ollama.
- Open a new terminal and run:
ollama --version
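With the CLI available, it also helps to confirm the profile's model is present locally. These are standard Ollama commands (the model name assumes the llama3.1:8b profile used in this playbook):
# download the model referenced by the profile (no-op if already pulled)
ollama pull llama3.1:8b
# list models available locally
ollama list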
6.3 Provider reachability failed for localhost
Cause:
- Ollama service not running.
Fix:
ollama serve
Then, in another terminal:
bun run doctor:runtime
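If reachability still fails after starting the server, probe it directly; current Ollama builds answer a plain GET on the root with a short status string:
# expected output: Ollama is running
Invoke-RestMethod http://localhost:11434/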
6.4 Missing key for non-local provider URL
Cause:
- OPENAI_BASE_URL points to a remote endpoint without an API key.
Fix:
- Re-initialize profile for ollama:
bun run profile:init -- --provider ollama --model llama3.1:8b
Or auto-pick a local profile:
bun run profile:auto -- --goal balanced
6.5 Placeholder key (SUA_CHAVE) error
Cause:
- A placeholder value (SUA_CHAVE, Portuguese for "your key") was used instead of a real key.
Fix:
- For OpenAI: use a real key.
- For Ollama: no key needed; keep localhost base URL.
7. Recommended Local Models
- Fast/general: llama3.1:8b
- Better coding quality (if your hardware supports it): qwen2.5-coder:14b
- Low-resource fallback: a smaller instruct model
Switch model quickly:
bun run profile:init -- --provider ollama --model qwen2.5-coder:14b
bun run dev:profile
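Pulling the target model ahead of time avoids a long first-request stall. These are standard Ollama commands, not OpenClaude scripts:
# fetch the coder model before pointing the profile at it
ollama pull qwen2.5-coder:14b
# confirm it shows up locally
ollama list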
Preset shortcuts are already configured:
bun run profile:fast # llama3.2:3b
bun run profile:code # qwen2.5-coder:7b
Goal-based auto-selection:
bun run profile:auto -- --goal latency
bun run profile:auto -- --goal balanced
bun run profile:auto -- --goal coding
8. Practical Prompt Playbook (Copy/Paste)
8.1 Code understanding
- "Map this repository architecture and explain the execution flow from entrypoint to tool invocation."
- "Find the top 5 risky modules and explain why."
8.2 Refactoring
- "Refactor this module for clarity without behavior change, then run checks and summarize diff impact."
- "Extract shared logic from duplicated functions and add minimal tests."
8.3 Debugging
- "Reproduce the failure, identify root cause, implement fix, and validate with commands."
- "Trace this error path and list likely failure points with confidence levels."
8.4 Reliability
- "Add runtime guardrails and fail-fast messages for invalid provider env vars."
- "Create a diagnostic command that outputs JSON report for CI artifacts."
8.5 Review mode
- "Do a code review of unstaged changes, prioritize bugs/regressions, and suggest concrete patches."
9. Safe Working Rules
- Run doctor:runtime before debugging provider issues.
- Prefer dev:profile over manual env edits.
- Keep .openclaude-profile.json local (already gitignored; a quick check is sketched below).
- Use doctor:report before asking for help so you have a reproducible snapshot.
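A minimal way to confirm the profile file is actually ignored (standard git, run at the repo root):
# prints the matching ignore rule; no output means the file is NOT ignored
git check-ignore -v .openclaude-profile.json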
10. Quick Recovery Checklist
When something breaks, run in order:
bun run doctor:runtime
bun run doctor:report
bun run smoke
If responses are very slow, check the processor mode:
ollama ps
If PROCESSOR shows CPU, your setup is valid but latency will be higher for large models.
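If CPU-only inference makes the current model too slow, the presets from section 7 give a quick downgrade path using commands already covered above:
# switch to the small low-latency preset and relaunch
bun run profile:fast
bun run dev:profile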
If local model mode is failing:
ollama --version
ollama serve
bun run doctor:runtime
bun run dev:profile
11. Command Reference
# profile
bun run profile:init -- --provider ollama --model llama3.1:8b
bun run profile:init -- --provider openai --api-key sk-... --model gpt-4o
# launch
bun run dev:profile
bun run dev:ollama
bun run dev:openai
# diagnostics
bun run doctor:runtime
bun run doctor:runtime:json
bun run doctor:report
# quality
bun run smoke
bun run hardening:check
bun run hardening:strict
12. Success Criteria
Your setup is healthy when:
- bun run doctor:runtime passes provider and reachability checks.
- bun run dev:profile opens the CLI normally.
- The model shown in the UI matches your selected profile model.