# OpenClaude
OpenClaude is an open-source coding-agent CLI that works with more than one model provider.
Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping the same terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.
## Why OpenClaude
- Use one CLI across cloud and local model providers
- Save provider profiles inside the app with `/provider`
- Run locally with Ollama or Atomic Chat
- Keep core coding-agent workflows: bash, file tools, grep, glob, agents, tasks, MCP, and web tools
## Quick Start

### Install

```shell
npm install -g @gitlawb/openclaude
```
If the npm-installed CLI later reports that ripgrep was not found, install ripgrep system-wide and confirm that `rg --version` works in the same terminal before starting OpenClaude.
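A quick check for this (a minimal sketch; the install command for ripgrep itself varies by OS and package manager):

```shell
# Prints the ripgrep version if it is on PATH, otherwise a hint to install it
command -v rg >/dev/null && rg --version || echo "ripgrep not found"
```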
### Start

```shell
openclaude
```
Inside OpenClaude:
- run `/provider` for guided setup of OpenAI-compatible, Gemini, Ollama, or Codex profiles
- run `/onboard-github` for GitHub Models setup
### Fastest OpenAI setup

macOS / Linux:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
openclaude
```
Windows PowerShell:

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"
openclaude
```
### Fastest local Ollama setup

macOS / Linux:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=qwen2.5-coder:7b
openclaude
```
Windows PowerShell:

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="qwen2.5-coder:7b"
openclaude
```
## Setup Guides
Beginner-friendly guides:
Advanced and source-build guides:
## Supported Providers

| Provider | Setup Path | Notes |
|---|---|---|
| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and compatible local `/v1` servers |
| Gemini | `/provider` or env vars | Google Gemini support through the runtime provider layer |
| GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
| Codex | `/provider` | Uses existing Codex credentials when available |
| Ollama | `/provider` or env vars | Local inference with no API key |
| Atomic Chat | advanced setup | Local Apple Silicon backend |
| Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |
## What Works

- **Tool-driven coding workflows**: Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
- **Streaming responses**: real-time token output and tool progress
- **Tool calling**: multi-step tool loops with model calls, tool execution, and follow-up responses
- **Images**: URL and base64 image inputs for providers that support vision
- **Provider profiles**: guided setup plus saved `.openclaude-profile.json` support
- **Local and remote model backends**: cloud APIs, local servers, and Apple Silicon local inference
## Provider Notes
OpenClaude supports multiple providers, but behavior is not identical across all of them.
- Anthropic-specific features may not exist on other providers
- Tool quality depends heavily on the selected model
- Smaller local models can struggle with long multi-step tool flows
- Some providers impose lower output caps than the CLI defaults, and OpenClaude adapts where possible
For best results, use models with strong tool/function calling support.
## Agent Routing
Route different agents to different AI providers within the same session. This is useful for cost optimization (a cheap model for code review, a powerful model for complex coding) or for leveraging the strengths of specific models.
### Configuration

Add to `~/.claude/settings.json`:
```json
{
  "agentModels": {
    "deepseek-chat": {
      "base_url": "https://api.deepseek.com/v1",
      "api_key": "sk-your-key"
    },
    "gpt-4o": {
      "base_url": "https://api.openai.com/v1",
      "api_key": "sk-your-key"
    }
  },
  "agentRouting": {
    "Explore": "deepseek-chat",
    "Plan": "gpt-4o",
    "general-purpose": "gpt-4o",
    "frontend-dev": "deepseek-chat",
    "default": "gpt-4o"
  }
}
```
### How It Works

- `agentModels`: maps model names to OpenAI-compatible API endpoints
- `agentRouting`: maps agent types or team member names to model names
- Priority: `name` > `subagent_type` > `"default"` > global provider
- Matching: case-insensitive, with hyphens and underscores treated as equivalent (`general-purpose` = `general_purpose`)
- Teams: team members are routed by their `name`; no extra config needed
When no routing match is found, the global provider (env vars) is used as fallback.
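The priority and matching rules above can be sketched in TypeScript. This is a hypothetical illustration of the documented behavior, not OpenClaude's actual code; `resolveModel`, `Settings`, and `norm` are invented names:

```typescript
// Minimal sketch of the routing lookup described above.
type Settings = {
  agentRouting?: Record<string, string>;
};

// Normalize names so matching is case-insensitive and
// `general-purpose` equals `general_purpose`.
const norm = (s: string) => s.toLowerCase().replace(/-/g, "_");

function resolveModel(
  settings: Settings,
  agent: { name?: string; subagent_type?: string },
): string | undefined {
  const routing = settings.agentRouting ?? {};
  const table = new Map(
    Object.entries(routing).map(([k, v]) => [norm(k), v]),
  );
  // Priority: name > subagent_type > "default". Returning undefined
  // means "fall back to the global provider from env vars".
  for (const key of [agent.name, agent.subagent_type, "default"]) {
    if (key !== undefined && table.has(norm(key))) {
      return table.get(norm(key));
    }
  }
  return undefined;
}
```

Team members would resolve the same way, with their `name` checked first.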
Note: `api_key` values in `settings.json` are stored in plaintext. Keep this file private and do not commit it to version control.
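One way to keep the file private on macOS/Linux (a minimal sketch; adjust the path if your settings live elsewhere):

```shell
# Create the settings file if needed, then restrict it to the owner only
mkdir -p ~/.claude
touch ~/.claude/settings.json
chmod 600 ~/.claude/settings.json
```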
## Web Search and Fetch
WebFetch works out of the box.
WebSearch and richer JS-aware fetching work best with a Firecrawl API key:

```shell
export FIRECRAWL_API_KEY=your-key-here
```
With Firecrawl enabled:
- `WebSearch` is available across more provider setups
- `WebFetch` can handle JavaScript-rendered pages more reliably
Firecrawl is optional. Without it, OpenClaude falls back to the built-in behavior.
## Source Build

```shell
bun install
bun run build
node dist/cli.mjs
```
Helpful commands:

- `bun run dev`
- `bun run smoke`
- `bun run doctor:runtime`
## VS Code Extension

The repo includes a VS Code extension in `vscode-extension/openclaude-vscode` for OpenClaude launch integration and theme support.
## Security

If you believe you found a security issue, see `SECURITY.md`.
## Contributing
Contributions are welcome.
For larger changes, open an issue first so the scope is clear before implementation. Helpful validation commands include:

- `bun run build`
- `bun run smoke`
- focused `bun test ...` runs for touched areas
## Disclaimer
OpenClaude is an independent community project and is not affiliated with, endorsed by, or sponsored by Anthropic.
"Claude" and "Claude Code" are trademarks of Anthropic.
## License
MIT