docs: organize Python helpers and refresh README (#334)
* docs: organize Python helpers and refresh README
* docs: add README status badges
* test: centralize Python helper test imports
* docs: add short provenance disclaimer
README.md (134 lines changed)
@@ -1,33 +1,24 @@
# OpenClaude

-OpenClaude is an open-source coding-agent CLI that works with more than one model provider.
+OpenClaude is an open-source coding-agent CLI for cloud and local model providers.

-Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping the same terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.
+Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping one terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.

+[](https://github.com/Gitlawb/openclaude/actions/workflows/pr-checks.yml)
+[](https://github.com/Gitlawb/openclaude/tags)
+[](https://github.com/Gitlawb/openclaude/discussions)
+[](SECURITY.md)
+[](LICENSE)

[Quick Start](#quick-start) | [Setup Guides](#setup-guides) | [Providers](#supported-providers) | [Source Build](#source-build-and-local-development) | [VS Code Extension](#vs-code-extension) | [Community](#community)
## Why OpenClaude

-- Use one CLI across cloud and local model providers
+- Use one CLI across cloud APIs and local model backends
- Save provider profiles inside the app with `/provider`
- Run locally with Ollama or Atomic Chat
- Keep core coding-agent workflows: bash, file tools, grep, glob, agents, tasks, MCP, and web tools

## Provenance & Legal Notice

OpenClaude is derived from Anthropic's Claude Code CLI source code, which was inadvertently exposed in March 2026 through a packaging error in npm. The original Claude Code source is proprietary software owned by Anthropic PBC.

This project adds multi-provider support, strips telemetry, and adapts the codebase for open use. It is not an authorized fork or open-source release by Anthropic.

**"Claude" and "Claude Code" are trademarks of Anthropic PBC.**

Contributors should be aware that the legal status of distributing code derived from Anthropic's proprietary source is unresolved. See the LICENSE file for details.

---

- Run with OpenAI-compatible services, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported providers
- Keep coding-agent workflows in one place: bash, file tools, grep, glob, agents, tasks, MCP, and web tools
- Use the bundled VS Code extension for launch integration and theme support

## Quick Start
@@ -37,7 +28,7 @@ file for details.
npm install -g @gitlawb/openclaude
```

-If the npm install path later reports `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.
+If the install later reports `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.

### Start
@@ -47,8 +38,8 @@ openclaude

Inside OpenClaude:

-- run `/provider` for guided setup of OpenAI-compatible, Gemini, Ollama, or Codex profiles
-- run `/onboard-github` for GitHub Models setup
+- run `/provider` for guided provider setup and saved profiles
+- run `/onboard-github` for GitHub Models onboarding

### Fastest OpenAI setup

@@ -94,8 +85,6 @@ $env:OPENAI_MODEL="qwen2.5-coder:7b"
openclaude
```
---

## Setup Guides

Beginner-friendly guides:

@@ -109,40 +98,26 @@ Advanced and source-build guides:
- [Advanced Setup](docs/advanced-setup.md)
- [Android Install](ANDROID_INSTALL.md)

---

## Supported Providers

| Provider | Setup Path | Notes |
| --- | --- | --- |
-| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and compatible local `/v1` servers |
-| Gemini | `/provider` or env vars | Google Gemini support through the runtime provider layer (API key, access token, or local ADC) |
+| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and other compatible `/v1` servers |
+| Gemini | `/provider` or env vars | Supports API key, access token, or local ADC workflow on current `main` |
| GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
| Codex | `/provider` | Uses existing Codex credentials when available |
| Ollama | `/provider` or env vars | Local inference with no API key |
| Atomic Chat | advanced setup | Local Apple Silicon backend |
| Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |
For Gemini, `/provider` can save the API-key path, a securely stored access-token path, or a local ADC profile.
---

## What Works

-- Tool-driven coding workflows
-  Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
-- Streaming responses
-  Real-time token output and tool progress
-- Tool calling
-  Multi-step tool loops with model calls, tool execution, and follow-up responses
-- Images
-  URL and base64 image inputs for providers that support vision
-- Provider profiles
-  Guided setup plus saved `.openclaude-profile.json` support
-- Local and remote model backends
-  Cloud APIs, local servers, and Apple Silicon local inference
----
+- **Tool-driven coding workflows**: Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
+- **Streaming responses**: Real-time token output and tool progress
+- **Tool calling**: Multi-step tool loops with model calls, tool execution, and follow-up responses
+- **Images**: URL and base64 image inputs for providers that support vision
+- **Provider profiles**: Guided setup plus saved `.openclaude-profile.json` support
+- **Local and remote model backends**: Cloud APIs, local servers, and Apple Silicon local inference
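The image support in the list above accepts URL or base64 inputs. As an illustration, raw image bytes can be packed into the widely used OpenAI-compatible `image_url` data-URI shape; the helper below is hypothetical and is not OpenClaude's actual payload code:

```python
import base64

def image_block_from_bytes(data: bytes, mime: str = "image/png") -> dict:
    """Build an OpenAI-compatible vision content block from raw image bytes.

    Hypothetical helper for illustration; OpenClaude's real payload
    construction lives in the CLI source, not in this README.
    """
    b64 = base64.b64encode(data).decode("ascii")
    return {
        "type": "image_url",
        "image_url": {"url": f"data:{mime};base64,{b64}"},
    }

# A user message mixing text and a placeholder image payload:
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image"},
        image_block_from_bytes(b"\x89PNG\r\n"),
    ],
}
```

URL inputs skip the encoding step entirely: the `url` field carries the remote address instead of a data URI.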
## Provider Notes

@@ -155,13 +130,9 @@ OpenClaude supports multiple providers, but behavior is not identical across all

For best results, use models with strong tool/function calling support.

---
## Agent Routing

Route different agents to different AI providers within the same session. Useful for cost optimization (cheap model for code review, powerful model for complex coding) or leveraging model strengths.

### Configuration

OpenClaude can route different agents to different models through settings-based routing. This is useful for cost optimization or splitting work by model strength.

Add to `~/.claude/settings.json`:

@@ -187,29 +158,19 @@ Add to `~/.claude/settings.json`:
}
```
### How It Works

- **agentModels**: Maps model names to OpenAI-compatible API endpoints
- **agentRouting**: Maps agent types or team member names to model names
- **Priority**: `name` > `subagent_type` > `"default"` > global provider
- **Matching**: Case-insensitive, hyphen/underscore equivalent (`general-purpose` = `general_purpose`)
- **Teams**: Team members are routed by their `name` — no extra config needed

-When no routing match is found, the global provider (env vars) is used as fallback.
+When no routing match is found, the global provider remains the fallback.

> **Note:** `api_key` values in `settings.json` are stored in plaintext. Keep this file private and do not commit it to version control.
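The matching and priority rules above can be sketched as a small lookup. This is an illustrative reimplementation of the documented behavior, not OpenClaude's actual routing code; the function and variable names are invented:

```python
def norm(key: str) -> str:
    # Matching is case-insensitive and treats hyphen/underscore as equivalent.
    return key.lower().replace("-", "_")

def resolve_model(agent_routing: dict, name=None, subagent_type=None):
    routing = {norm(k): v for k, v in agent_routing.items()}
    # Priority: name > subagent_type > "default". Returning None means
    # "fall back to the global provider (env vars)".
    for key in (name, subagent_type, "default"):
        if key is not None and norm(key) in routing:
            return routing[norm(key)]
    return None

routing = {"general-purpose": "gpt-4o-mini", "default": "deepseek-chat"}
assert resolve_model(routing, subagent_type="general_purpose") == "gpt-4o-mini"
assert resolve_model(routing, name="some-reviewer") == "deepseek-chat"
```

The second assertion shows the `"default"` entry catching an agent with no explicit route.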

---

## Web Search and Fetch
-By default, `WebSearch` now works on non-Anthropic models using DuckDuckGo. This gives GPT-4o, DeepSeek, Gemini, Ollama, and other OpenAI-compatible providers a free web search path out of the box.
+By default, `WebSearch` works on non-Anthropic models using DuckDuckGo. This gives GPT-4o, DeepSeek, Gemini, Ollama, and other OpenAI-compatible providers a free web search path out of the box.

> **Note:** DuckDuckGo fallback works by scraping search results and may be rate-limited, blocked, or subject to DuckDuckGo's Terms of Service. If you want a more reliable supported option, configure Firecrawl.

-For Anthropic-native backends (Anthropic/Vertex/Foundry) and Codex responses, OpenClaude keeps the native provider web search behavior.
+For Anthropic-native backends and Codex responses, OpenClaude keeps the native provider web search behavior.
-`WebFetch` works but uses basic HTTP plus HTML-to-markdown conversion. That fails on JavaScript-rendered pages (React, Next.js, Vue SPAs) and sites that block plain HTTP requests.
+`WebFetch` works, but its basic HTTP plus HTML-to-markdown path can still fail on JavaScript-rendered sites or sites that block plain HTTP requests.
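The failure mode described above follows from how a plain HTTP fetch works: only markup present in the response body can be converted, so content a browser would render with JavaScript never appears. A minimal stdlib sketch of the HTML-to-text step (the real converter emits markdown and is more involved than this):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style bodies."""
    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

html = "<html><body><h1>Docs</h1><script>renderApp()</script><p>Static text survives.</p></body></html>"
p = TextExtractor()
p.feed(html)
print(" ".join(p.parts))  # JS-rendered content is simply absent here
```

A single-page app whose `<body>` is an empty `<div id="root">` yields almost nothing through this path, which is exactly the gap Firecrawl fills below.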

Set a [Firecrawl](https://firecrawl.dev) API key if you want Firecrawl-powered search/fetch behavior:

@@ -219,14 +180,12 @@ export FIRECRAWL_API_KEY=your-key-here

With Firecrawl enabled:

-- `WebSearch` can use Firecrawl's search API (while DuckDuckGo remains the default free path for non-Claude models)
+- `WebSearch` can use Firecrawl's search API while DuckDuckGo remains the default free path for non-Claude models
- `WebFetch` uses Firecrawl's scrape endpoint instead of raw HTTP, handling JS-rendered pages correctly

Free tier at [firecrawl.dev](https://firecrawl.dev) includes 500 credits. The key is optional.
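With a key set, a Firecrawl-style fetch is one authenticated POST. The sketch below only builds the request and never sends it; the `/v1/scrape` endpoint and `formats` field follow Firecrawl's public docs at the time of writing, but treat the exact shape as an assumption and verify it at firecrawl.dev:

```python
import json
import os
import urllib.request

def build_scrape_request(url: str) -> urllib.request.Request:
    """Construct (but do not send) a Firecrawl-style scrape request."""
    api_key = os.environ.get("FIRECRAWL_API_KEY", "")
    body = json.dumps({"url": url, "formats": ["markdown"]}).encode()
    return urllib.request.Request(
        "https://api.firecrawl.dev/v1/scrape",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_scrape_request("https://example.com")
print(req.full_url, req.get_method())
```

Sending the request with `urllib.request.urlopen(req)` would return JSON containing the rendered-page markdown when the key is valid.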

---

-## Source Build
+## Source Build And Local Development

```bash
bun install
@@ -239,20 +198,31 @@ Helpful commands:
- `bun run dev`
- `bun run smoke`
- `bun run doctor:runtime`
- `bun run verify:privacy`
- focused `bun test ...` runs for the areas you touch

---
## Repository Structure

- `src/` - core CLI/runtime
- `scripts/` - build, verification, and maintenance scripts
- `docs/` - setup, contributor, and project documentation
- `python/` - standalone Python helpers and their tests
- `vscode-extension/openclaude-vscode/` - VS Code extension
- `.github/` - repo automation, templates, and CI configuration
- `bin/` - CLI launcher entrypoints
## VS Code Extension

-The repo includes a VS Code extension in [`vscode-extension/openclaude-vscode`](vscode-extension/openclaude-vscode) for OpenClaude launch integration and theme support.
+The repo includes a VS Code extension in [`vscode-extension/openclaude-vscode`](vscode-extension/openclaude-vscode) for OpenClaude launch integration, provider-aware control-center UI, and theme support.

---
## Security

If you believe you found a security issue, see [SECURITY.md](SECURITY.md).

---

## Community

- Use [GitHub Discussions](https://github.com/Gitlawb/openclaude/discussions) for Q&A, ideas, and community conversation
- Use [GitHub Issues](https://github.com/Gitlawb/openclaude/issues) for confirmed bugs and actionable feature work

## Contributing
@@ -264,16 +234,12 @@ For larger changes, open an issue first so the scope is clear before implementat
- `bun run smoke`
- focused `bun test ...` runs for touched areas

---

## Disclaimer
-OpenClaude is an independent community project and is not affiliated with, endorsed by, or sponsored by Anthropic.
-
-"Claude" and "Claude Code" are trademarks of Anthropic.
+OpenClaude originated from the Claude Code codebase and has since been substantially modified to support multiple providers and open use. "Claude" and "Claude Code" are trademarks of Anthropic PBC. See [LICENSE](LICENSE) for details.

---

## License

-MIT
+See [LICENSE](LICENSE).
python/__init__.py (new file, 1 line)

@@ -0,0 +1 @@
+# Python helper package for standalone provider-side utilities.
python/tests/__init__.py (new file, 1 line)

@@ -0,0 +1 @@
+# Pytest package marker for the Python helper test suite.
python/tests/conftest.py (new file, 5 lines)

@@ -0,0 +1,5 @@
+from pathlib import Path
+import sys
+
+# Make the sibling `python/` helper modules importable from this test package.
+sys.path.insert(0, str(Path(__file__).resolve().parents[1]))
@@ -1,6 +1,6 @@
"""
test_atomic_chat_provider.py
-Run: pytest test_atomic_chat_provider.py -v
+Run: pytest python/tests/test_atomic_chat_provider.py -v
"""

import pytest
@@ -1,6 +1,6 @@
"""
test_ollama_provider.py
-Run: pytest test_ollama_provider.py -v
+Run: pytest python/tests/test_ollama_provider.py -v
"""

import pytest
@@ -13,25 +13,31 @@ from ollama_provider import (
    check_ollama_running,
)


def test_normalize_strips_prefix():
    assert normalize_ollama_model("ollama/llama3:8b") == "llama3:8b"


def test_normalize_no_prefix():
    assert normalize_ollama_model("codellama:34b") == "codellama:34b"


def test_normalize_empty():
    assert normalize_ollama_model("") == ""
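The three tests above fully pin down `normalize_ollama_model` for these cases; an implementation consistent with them can be this small (a sketch, not the actual `python/ollama_provider.py` code):

```python
def normalize_ollama_model(model: str) -> str:
    """Strip an optional "ollama/" routing prefix from a model name."""
    prefix = "ollama/"
    return model[len(prefix):] if model.startswith(prefix) else model

assert normalize_ollama_model("ollama/llama3:8b") == "llama3:8b"
assert normalize_ollama_model("codellama:34b") == "codellama:34b"
assert normalize_ollama_model("") == ""
```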


def test_converts_string_content():
    messages = [{"role": "user", "content": "Hello!"}]
    result = anthropic_to_ollama_messages(messages)
    assert result == [{"role": "user", "content": "Hello!"}]


def test_converts_text_block_list():
    messages = [{"role": "user", "content": [{"type": "text", "text": "What is Python?"}]}]
    result = anthropic_to_ollama_messages(messages)
    assert result[0]["content"] == "What is Python?"
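The two tests above exercise a conversion from Anthropic-style content (a plain string or a list of typed blocks) to Ollama's plain-string `content`. A sketch consistent with those tests follows; the placeholder text and block-joining behavior are assumptions, and the real helper may differ:

```python
def anthropic_to_ollama_messages(messages: list) -> list:
    out = []
    for msg in messages:
        content = msg["content"]
        if isinstance(content, list):
            # Keep text blocks; non-text blocks (e.g. images) become placeholders.
            parts = [
                b["text"] if b.get("type") == "text" else "[image]"
                for b in content
            ]
            content = " ".join(parts)  # hypothetical separator choice
        out.append({"role": msg["role"], "content": content})
    return out

assert anthropic_to_ollama_messages(
    [{"role": "user", "content": [{"type": "text", "text": "What is Python?"}]}]
)[0]["content"] == "What is Python?"
```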


def test_converts_image_block_to_placeholder():
    messages = [{"role": "user", "content": [{"type": "image", "source": {}}, {"type": "text", "text": "Describe this"}]}]
    result = anthropic_to_ollama_messages(messages)

@@ -68,6 +74,7 @@ def test_converts_multi_turn():
    assert len(result) == 3
    assert result[1]["role"] == "assistant"


@pytest.mark.asyncio
async def test_ollama_running_true():
    mock_response = MagicMock()

@@ -77,6 +84,7 @@ async def test_ollama_running_true():
    result = await check_ollama_running()
    assert result is True


@pytest.mark.asyncio
async def test_ollama_running_false_on_exception():
    with patch("ollama_provider.httpx.AsyncClient") as MockClient:

@@ -84,6 +92,7 @@ async def test_ollama_running_false_on_exception():
    result = await check_ollama_running()
    assert result is False


@pytest.mark.asyncio
async def test_list_models_returns_names():
    mock_response = MagicMock()

@@ -95,6 +104,7 @@ async def test_list_models_returns_names():
    models = await list_ollama_models()
    assert "llama3:8b" in models


@pytest.mark.asyncio
async def test_ollama_chat_returns_anthropic_format():
    mock_response = MagicMock()

@@ -115,9 +125,11 @@ async def test_ollama_chat_returns_anthropic_format():
    assert result["role"] == "assistant"
    assert "42" in result["content"][0]["text"]


@pytest.mark.asyncio
async def test_ollama_chat_prepends_system():
    captured = {}

    async def mock_post(url, json=None, **kwargs):
        captured.update(json or {})
        m = MagicMock()

@@ -134,7 +146,7 @@ async def test_ollama_chat_prepends_system():
    await ollama_chat(
        model="llama3:8b",
        messages=[{"role": "user", "content": "Hi"}],
-        system="Be helpful."
+        system="Be helpful.",
    )
    assert captured["messages"][0]["role"] == "system"
    assert "helpful" in captured["messages"][0]["content"]
@@ -2,7 +2,7 @@
test_smart_router.py
--------------------
Tests for the SmartRouter.
-Run: pytest test_smart_router.py -v
+Run: pytest python/tests/test_smart_router.py -v
"""

import pytest
@@ -18,6 +18,7 @@ from smart_router import SmartRouter, Provider
def fake_api_key(monkeypatch):
    monkeypatch.setenv("FAKE_KEY", "test-key")


def make_provider(name, healthy=True, configured=True,
                  latency=100.0, cost=0.002, errors=0, requests=0):
    p = Provider(

@@ -33,7 +34,7 @@ def make_provider(name, healthy=True, configured=True,
    p.error_count = errors
    p.request_count = requests
    if not configured:
-        p.api_key_env = ""  # makes is_configured False for non-ollama
+        p.api_key_env = ""  # makes is_configured False for non-local providers
    return p