docs: rewrite README for OpenClaude — setup instructions for all providers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: did:key:z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr
Date: 2026-03-31 23:16:40 +08:00
parent 619b5fb603
commit fd108243eb
README.md
# OpenClaude

Use Claude Code with **any LLM** — not just Claude.

OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API.

All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.

---

## Quick Start

### 1. Set 3 environment variables

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
```

### 2. Run Claude Code

```bash
claude
```

That's it. The tool system, streaming, file editing, multi-step reasoning — everything works through the model you picked.

---
## Provider Examples

### OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
```

### DeepSeek

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
```
### Google Gemini (via OpenRouter)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash
```

### Ollama (local, free)

```bash
ollama pull llama3.3:70b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
# no API key needed for local models
```
### LM Studio (local)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
```

### Together AI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
```

### Groq

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
```

### Mistral

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
```

### Azure OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
```
---

## Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models like Ollama) |
| `OPENAI_MODEL` | Yes | Model name (e.g. `gpt-4o`, `deepseek-chat`, `llama3.3:70b`) |
| `OPENAI_BASE_URL` | No | API endpoint (defaults to `https://api.openai.com/v1`) |

You can also set `ANTHROPIC_MODEL` as a fallback model name; if both are set, `OPENAI_MODEL` takes priority.
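That precedence can be sketched as follows (a minimal illustration; `resolveModel` and its default are hypothetical, while the real lookup lives in `src/utils/model/model.ts`):

```typescript
// Hypothetical sketch of the model-name precedence; not the actual code.
type Env = Record<string, string | undefined>;

// OPENAI_MODEL wins over ANTHROPIC_MODEL, which wins over an assumed default.
function resolveModel(env: Env, fallback = "gpt-4o"): string {
  return env.OPENAI_MODEL ?? env.ANTHROPIC_MODEL ?? fallback;
}
```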
---
## What Works

- **All tools**: Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks
- **Streaming**: Real-time token streaming
- **Tool calling**: Multi-step tool chains (the model calls tools, gets results, continues)
- **Images**: Base64 and URL images passed to vision models
- **Slash commands**: `/commit`, `/review`, `/compact`, `/diff`, `/doctor`, etc.
- **Sub-agents**: AgentTool spawns sub-agents using the same provider
- **Memory**: Persistent memory system

## What's Different

- **No thinking mode**: Anthropic's extended thinking is disabled (OpenAI models use different reasoning)
- **No prompt caching**: Anthropic-specific cache headers are skipped
- **No beta features**: Anthropic-specific beta headers are ignored
- **Token limits**: Defaults to 32K max output — some models may cap lower, which is handled gracefully
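The token-limit behavior can be pictured as a simple clamp (a hypothetical sketch under the assumptions above, not the shim's actual code):

```typescript
// Hypothetical sketch: clamp the requested output budget to the 32K default
// and, when known, to a provider-specific cap, rather than erroring out.
const DEFAULT_MAX_OUTPUT_TOKENS = 32_000;

function clampMaxTokens(requested: number, providerCap?: number): number {
  const clamped = Math.min(requested, DEFAULT_MAX_OUTPUT_TOKENS);
  return providerCap !== undefined ? Math.min(clamped, providerCap) : clamped;
}
```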
---
## How It Works

The shim (`src/services/api/openaiShim.ts`) sits between Claude Code and the LLM API:

```
Claude Code Tool System
          |
          v
Anthropic SDK interface (duck-typed)
          |
          v
openaiShim.ts   <-- translates formats
          |
          v
OpenAI Chat Completions API
          |
          v
Any compatible model
```

It translates:

- Anthropic message blocks → OpenAI messages
- Anthropic tool_use/tool_result → OpenAI function calls
- OpenAI SSE streaming → Anthropic stream events
- Anthropic system prompt arrays → OpenAI system messages

The rest of Claude Code doesn't know it's talking to a different model.
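For a concrete flavor of the tool-call translation, here is a minimal sketch: an Anthropic `tool_use` content block becomes an OpenAI function call with JSON-encoded arguments. The types are trimmed to the relevant fields and the shim's real implementation differs.

```typescript
// Trimmed shapes of the two public API formats.
interface AnthropicToolUse {
  type: "tool_use";
  id: string;
  name: string;
  input: Record<string, unknown>; // structured arguments
}

interface OpenAIToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string }; // arguments are a JSON string
}

// One direction of the translation the shim performs (illustrative only).
function toOpenAIToolCall(block: AnthropicToolUse): OpenAIToolCall {
  return {
    id: block.id,
    type: "function",
    function: { name: block.name, arguments: JSON.stringify(block.input) },
  };
}
```

The reverse direction parses the JSON arguments back into a structured `input` object before handing results to the tool system.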
---
## Model Quality Notes

Not all models are equal at agentic tool use. Here's a rough guide:

| Model | Tool Calling | Code Quality | Speed |
|-------|--------------|--------------|-------|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very Fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very Fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Smaller models (<7B) | Limited | Limited | Very Fast |

For best results, use models with strong function/tool calling support.

---
## Files Changed from Original

```
src/services/api/openaiShim.ts  — NEW: OpenAI-compatible API shim (724 lines)
src/services/api/client.ts      — Routes to shim when CLAUDE_CODE_USE_OPENAI=1
src/utils/model/providers.ts    — Added 'openai' provider type
src/utils/model/configs.ts      — Added openai model mappings
src/utils/model/model.ts        — Respects OPENAI_MODEL for defaults
src/utils/auth.ts               — Recognizes OpenAI as valid 3P provider
```

6 files changed. 786 lines added. Zero dependencies added.

---
## Origin

This is a fork of [instructkr/claude-code](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code), which mirrored the Claude Code source snapshot that became publicly accessible through an npm source map exposure on March 31, 2026.

The original Claude Code source is the property of Anthropic. This repository is not affiliated with or endorsed by Anthropic.

---

## License

This repository is provided for educational and research purposes. The original source code is subject to Anthropic's terms. The OpenAI shim additions are public domain.