docs: rewrite README for OpenClaude — setup instructions for all providers
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

This commit is contained in:
parent 619b5fb603
commit fd108243eb

390
README.md

@@ -1,280 +1,222 @@
# Claude Code Source Snapshot for Security Research

# OpenClaude

> This repository mirrors a **publicly exposed Claude Code source snapshot** that became accessible on **March 31, 2026** through a source map exposure in the npm distribution. It is maintained for **educational, defensive security research, and software supply-chain analysis**.

Use Claude Code with **any LLM** — not just Claude.

OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API.

All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.

---
## Research Context

## Quick Start

This repository is maintained by a **university student** studying:

### 1. Set 3 environment variables

- software supply-chain exposure and build artifact leaks
- secure software engineering practices
- agentic developer tooling architecture
- defensive analysis of real-world CLI systems

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
```

This archive is intended to support:

### 2. Run Claude Code

- educational study
- security research practice
- architecture review
- discussion of packaging and release-process failures

```bash
claude
```

It does **not** claim ownership of the original code, and it should not be interpreted as an official Anthropic repository.

That's it. The tool system, streaming, file editing, multi-step reasoning — everything works through the model you picked.

---
## How the Public Snapshot Became Accessible

## Provider Examples

[Chaofan Shou (@Fried_rice)](https://x.com/Fried_rice) publicly noted that Claude Code source material was reachable through a `.map` file exposed in the npm package:

### OpenAI

> **"Claude code source code has been leaked via a map file in their npm registry!"**
>
> — [@Fried_rice, March 31, 2026](https://x.com/Fried_rice/status/2038894956459290963)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
```

The published source map referenced unobfuscated TypeScript sources hosted in Anthropic's R2 storage bucket, which made the `src/` snapshot publicly downloadable.

### DeepSeek

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
```

---

## Repository Scope

### Google Gemini (via OpenRouter)

Claude Code is Anthropic's CLI for interacting with Claude from the terminal to perform software engineering tasks such as editing files, running commands, searching codebases, and coordinating workflows.

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash
```

This repository contains a mirrored `src/` snapshot for research and analysis.

### Ollama (local, free)

- **Public exposure identified on**: 2026-03-31
- **Language**: TypeScript
- **Runtime**: Bun
- **Terminal UI**: React + [Ink](https://github.com/vadimdemedes/ink)
- **Scale**: ~1,900 files, 512,000+ lines of code

---

```bash
ollama pull llama3.3:70b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
# no API key needed for local models
```
## Directory Structure

### LM Studio (local)

```text
src/
├── main.tsx           # Entrypoint orchestration (Commander.js-based CLI path)
├── commands.ts        # Command registry
├── tools.ts           # Tool registry
├── Tool.ts            # Tool type definitions
├── QueryEngine.ts     # LLM query engine
├── context.ts         # System/user context collection
├── cost-tracker.ts    # Token cost tracking
│
├── commands/          # Slash command implementations (~50)
├── tools/             # Agent tool implementations (~40)
├── components/        # Ink UI components (~140)
├── hooks/             # React hooks
├── services/          # External service integrations
├── screens/           # Full-screen UIs (Doctor, REPL, Resume)
├── types/             # TypeScript type definitions
├── utils/             # Utility functions
│
├── bridge/            # IDE and remote-control bridge
├── coordinator/       # Multi-agent coordinator
├── plugins/           # Plugin system
├── skills/            # Skill system
├── keybindings/       # Keybinding configuration
├── vim/               # Vim mode
├── voice/             # Voice input
├── remote/            # Remote sessions
├── server/            # Server mode
├── memdir/            # Persistent memory directory
├── tasks/             # Task management
├── state/             # State management
├── migrations/        # Config migrations
├── schemas/           # Config schemas (Zod)
├── entrypoints/       # Initialization logic
├── ink/               # Ink renderer wrapper
├── buddy/             # Companion sprite
├── native-ts/         # Native TypeScript utilities
├── outputStyles/      # Output styling
├── query/             # Query pipeline
└── upstreamproxy/     # Proxy configuration
```

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
```
### Together AI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
```

### Groq

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
```

### Mistral

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
```

### Azure OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
```

---
## Architecture Summary

## Environment Variables

### 1. Tool System (`src/tools/`)

| Variable | Required | Description |
|----------|----------|-------------|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models like Ollama) |
| `OPENAI_MODEL` | Yes | Model name (e.g. `gpt-4o`, `deepseek-chat`, `llama3.3:70b`) |
| `OPENAI_BASE_URL` | No | API endpoint (defaults to `https://api.openai.com/v1`) |

Every tool Claude Code can invoke is implemented as a self-contained module. Each tool defines its input schema, permission model, and execution logic.

You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.
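That precedence can be sketched as follows. This is an illustrative helper, not the shim's actual API; `resolveOpenAIConfig` and the config shape are assumptions based on the table above.

```typescript
// Illustrative resolution of the provider config from the environment.
// OPENAI_MODEL wins over ANTHROPIC_MODEL; the base URL defaults to api.openai.com.
interface OpenAIConfig {
  model: string
  baseURL: string
  apiKey?: string
}

function resolveOpenAIConfig(env: Record<string, string | undefined>): OpenAIConfig | null {
  if (env.CLAUDE_CODE_USE_OPENAI !== '1') return null // shim disabled
  const model = env.OPENAI_MODEL ?? env.ANTHROPIC_MODEL
  if (!model) throw new Error('Set OPENAI_MODEL (or ANTHROPIC_MODEL)')
  return {
    model,
    baseURL: env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1',
    apiKey: env.OPENAI_API_KEY, // may be undefined for local servers like Ollama
  }
}
```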
| Tool | Description |
|---|---|
| `BashTool` | Shell command execution |
| `FileReadTool` | File reading (images, PDFs, notebooks) |
| `FileWriteTool` | File creation / overwrite |
| `FileEditTool` | Partial file modification (string replacement) |
| `GlobTool` | File pattern matching search |
| `GrepTool` | ripgrep-based content search |
| `WebFetchTool` | Fetch URL content |
| `WebSearchTool` | Web search |
| `AgentTool` | Sub-agent spawning |
| `SkillTool` | Skill execution |
| `MCPTool` | MCP server tool invocation |
| `LSPTool` | Language Server Protocol integration |
| `NotebookEditTool` | Jupyter notebook editing |
| `TaskCreateTool` / `TaskUpdateTool` | Task creation and management |
| `SendMessageTool` | Inter-agent messaging |
| `TeamCreateTool` / `TeamDeleteTool` | Team agent management |
| `EnterPlanModeTool` / `ExitPlanModeTool` | Plan mode toggle |
| `EnterWorktreeTool` / `ExitWorktreeTool` | Git worktree isolation |
| `ToolSearchTool` | Deferred tool discovery |
| `CronCreateTool` | Scheduled trigger creation |
| `RemoteTriggerTool` | Remote trigger |
| `SleepTool` | Proactive mode wait |
| `SyntheticOutputTool` | Structured output generation |

---
### 2. Command System (`src/commands/`)

## What Works

User-facing slash commands invoked with the `/` prefix.

- **All tools**: Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks
- **Streaming**: Real-time token streaming
- **Tool calling**: Multi-step tool chains (the model calls tools, gets results, continues)
- **Images**: Base64 and URL images passed to vision models
- **Slash commands**: `/commit`, `/review`, `/compact`, `/diff`, `/doctor`, etc.
- **Sub-agents**: `AgentTool` spawns sub-agents using the same provider
- **Memory**: Persistent memory system
| Command | Description |
|---|---|
| `/commit` | Create a git commit |
| `/review` | Code review |
| `/compact` | Context compression |
| `/mcp` | MCP server management |
| `/config` | Settings management |
| `/doctor` | Environment diagnostics |
| `/login` / `/logout` | Authentication |
| `/memory` | Persistent memory management |
| `/skills` | Skill management |
| `/tasks` | Task management |
| `/vim` | Vim mode toggle |
| `/diff` | View changes |
| `/cost` | Check usage cost |
| `/theme` | Change theme |
| `/context` | Context visualization |
| `/pr_comments` | View PR comments |
| `/resume` | Restore previous session |
| `/share` | Share session |
| `/desktop` | Desktop app handoff |
| `/mobile` | Mobile app handoff |

## What's Different

### 3. Service Layer (`src/services/`)

- **No thinking mode**: Anthropic's extended thinking is disabled (OpenAI models use different reasoning)
- **No prompt caching**: Anthropic-specific cache headers are skipped
- **No beta features**: Anthropic-specific beta headers are ignored
- **Token limits**: Defaults to 32K max output — some models may cap lower, which is handled gracefully
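The token-limit behavior can be sketched as a simple clamp. Both names below are hypothetical; the only facts taken from the text above are the 32K default and the per-model cap.

```typescript
// Illustrative clamp: request up to the 32K default, but never exceed a
// given model's own output ceiling when one is known.
const DEFAULT_MAX_OUTPUT_TOKENS = 32_000

function clampMaxTokens(requested: number, modelCap?: number): number {
  const want = Math.min(requested, DEFAULT_MAX_OUTPUT_TOKENS)
  return modelCap !== undefined ? Math.min(want, modelCap) : want
}
```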
| Service | Description |
|---|---|
| `api/` | Anthropic API client, file API, bootstrap |
| `mcp/` | Model Context Protocol server connection and management |
| `oauth/` | OAuth 2.0 authentication flow |
| `lsp/` | Language Server Protocol manager |
| `analytics/` | GrowthBook-based feature flags and analytics |
| `plugins/` | Plugin loader |
| `compact/` | Conversation context compression |
| `policyLimits/` | Organization policy limits |
| `remoteManagedSettings/` | Remote managed settings |
| `extractMemories/` | Automatic memory extraction |
| `tokenEstimation.ts` | Token count estimation |
| `teamMemorySync/` | Team memory synchronization |

---
### 4. Bridge System (`src/bridge/`)

## How It Works

A bidirectional communication layer connecting IDE extensions (VS Code, JetBrains) with the Claude Code CLI.

The shim (`src/services/api/openaiShim.ts`) sits between Claude Code and the LLM API:

- `bridgeMain.ts` — Bridge main loop
- `bridgeMessaging.ts` — Message protocol
- `bridgePermissionCallbacks.ts` — Permission callbacks
- `replBridge.ts` — REPL session bridge
- `jwtUtils.ts` — JWT-based authentication
- `sessionRunner.ts` — Session execution management

### 5. Permission System (`src/hooks/toolPermission/`)

Checks permissions on every tool invocation. Either prompts the user for approval/denial or automatically resolves based on the configured permission mode (`default`, `plan`, `bypassPermissions`, `auto`, etc.).
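A minimal sketch of how such a mode-based decision might look. The function, the `ask` result, and the per-mode rules are assumptions for illustration; only the mode names come from the text above, and the real hook is far more involved.

```typescript
// Hypothetical permission resolution keyed on the configured mode.
type PermissionMode = 'default' | 'plan' | 'bypassPermissions' | 'auto'
type PermissionResult = 'allow' | 'deny' | 'ask'

function resolveToolPermission(mode: PermissionMode, toolMutates: boolean): PermissionResult {
  switch (mode) {
    case 'bypassPermissions':
      return 'allow'                        // everything auto-approved
    case 'plan':
      return toolMutates ? 'deny' : 'allow' // read-only while planning
    case 'auto':
      return toolMutates ? 'ask' : 'allow'  // prompt only for mutating tools
    case 'default':
      return 'ask'                          // always prompt the user
  }
}
```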
### 6. Feature Flags

Dead code elimination via Bun's `bun:bundle` feature flags:

```typescript
import { feature } from 'bun:bundle'

// Inactive code is completely stripped at build time
const voiceCommand = feature('VOICE_MODE')
  ? require('./commands/voice/index.js').default
  : null
```

```text
Claude Code Tool System
        |
        v
Anthropic SDK interface (duck-typed)
        |
        v
openaiShim.ts   <-- translates formats
        |
        v
OpenAI Chat Completions API
        |
        v
Any compatible model
```

Notable flags: `PROACTIVE`, `KAIROS`, `BRIDGE_MODE`, `DAEMON`, `VOICE_MODE`, `AGENT_TRIGGERS`, `MONITOR_TOOL`
It translates:

- Anthropic message blocks → OpenAI messages
- Anthropic tool_use/tool_result → OpenAI function calls
- OpenAI SSE streaming → Anthropic stream events
- Anthropic system prompt arrays → OpenAI system messages

The rest of Claude Code doesn't know it's talking to a different model.
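One direction of that translation can be sketched as below. The shapes are deliberately simplified and the function name is hypothetical; the real shim also handles streaming, text blocks, and many edge cases.

```typescript
// Simplified: an Anthropic-style tool_use block becoming an OpenAI-style
// assistant message carrying a tool call.
interface AnthropicToolUse {
  type: 'tool_use'
  id: string
  name: string
  input: Record<string, unknown>
}

interface OpenAIToolCallMessage {
  role: 'assistant'
  tool_calls: Array<{
    id: string
    type: 'function'
    function: { name: string; arguments: string }
  }>
}

function toOpenAIToolCall(block: AnthropicToolUse): OpenAIToolCallMessage {
  return {
    role: 'assistant',
    tool_calls: [{
      id: block.id,
      type: 'function',
      // OpenAI expects arguments as a JSON string, not an object
      function: { name: block.name, arguments: JSON.stringify(block.input) },
    }],
  }
}
```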
---

## Key Files in Detail

## Model Quality Notes

### `QueryEngine.ts` (~46K lines)

Not all models are equally good at agentic tool use. Here's a rough guide:

The core engine for LLM API calls. Handles streaming responses, tool-call loops, thinking mode, retry logic, and token counting.
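The retry logic mentioned above could be sketched like this. `withRetries` and the backoff schedule are illustrative assumptions, not taken from the source.

```typescript
// Hypothetical retry helper with exponential backoff. The sleep function is
// injectable so the schedule can be observed (or zeroed out) in tests.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
  sleep: (ms: number) => Promise<void> = ms => new Promise(r => setTimeout(r, ms)),
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt < maxAttempts - 1) {
        await sleep(baseDelayMs * 2 ** attempt) // 500ms, 1s, 2s, ...
      }
    }
  }
  throw lastError
}
```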
| Model | Tool Calling | Code Quality | Speed |
|-------|--------------|--------------|-------|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Smaller models (<7B) | Limited | Limited | Very fast |

### `Tool.ts` (~29K lines)

Defines base types and interfaces for all tools — input schemas, permission models, and progress state types.
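A toy version of what such a base interface might look like, inferred only from the description above (the actual `Tool.ts` types are far richer and use Zod schemas for validation):

```typescript
// Hypothetical minimal tool shape: input validation + permission + execution.
interface ToolDefinition<In, Out> {
  name: string
  description: string
  validateInput(raw: unknown): In     // schema check (Zod in the real code)
  needsPermission(input: In): boolean // drives the permission prompt
  execute(input: In): Promise<Out>
}

// Toy example implementing the interface.
const echoTool: ToolDefinition<{ text: string }, string> = {
  name: 'Echo',
  description: 'Returns its input unchanged',
  validateInput(raw) {
    if (typeof raw !== 'object' || raw === null || typeof (raw as { text?: unknown }).text !== 'string') {
      throw new Error('invalid input')
    }
    return raw as { text: string }
  },
  needsPermission: () => false,
  execute: async ({ text }) => text,
}
```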
### `commands.ts` (~25K lines)

Manages registration and execution of all slash commands. Uses conditional imports to load different command sets per environment.

### `main.tsx`

Commander.js-based CLI parsing and React/Ink renderer initialization. At startup, it overlaps MDM settings reads, keychain prefetch, and GrowthBook initialization for faster boot.

For best results, use models with strong function/tool-calling support.

---
## Tech Stack

## Files Changed from Original

| Category | Technology |
|---|---|
| Runtime | [Bun](https://bun.sh) |
| Language | TypeScript (strict) |
| Terminal UI | [React](https://react.dev) + [Ink](https://github.com/vadimdemedes/ink) |
| CLI Parsing | [Commander.js](https://github.com/tj/commander.js) (extra-typings) |
| Schema Validation | [Zod v4](https://zod.dev) |
| Code Search | [ripgrep](https://github.com/BurntSushi/ripgrep) |
| Protocols | [MCP SDK](https://modelcontextprotocol.io), LSP |
| API | [Anthropic SDK](https://docs.anthropic.com) |
| Telemetry | OpenTelemetry + gRPC |
| Feature Flags | GrowthBook |
| Auth | OAuth 2.0, JWT, macOS Keychain |
---

## Notable Design Patterns

### Parallel Prefetch

Startup time is optimized by prefetching MDM settings, keychain reads, and API preconnect in parallel before heavy module evaluation begins.

```typescript
// main.tsx — fired as side-effects before other imports
startMdmRawRead()
startKeychainPrefetch()
```

```text
src/services/api/openaiShim.ts — NEW: OpenAI-compatible API shim (724 lines)
src/services/api/client.ts     — Routes to shim when CLAUDE_CODE_USE_OPENAI=1
src/utils/model/providers.ts   — Added 'openai' provider type
src/utils/model/configs.ts     — Added openai model mappings
src/utils/model/model.ts       — Respects OPENAI_MODEL for defaults
src/utils/auth.ts              — Recognizes OpenAI as valid 3P provider
```

### Lazy Loading

Heavy modules (OpenTelemetry, gRPC, analytics, and some feature-gated subsystems) are deferred via dynamic `import()` until actually needed.
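The pattern can be sketched by caching the `import()` promise so the heavy module loads at most once, and only on first use. Here `node:zlib` merely stands in for a heavy dependency; the real code defers OpenTelemetry and gRPC, and these helper names are invented.

```typescript
// Cache the dynamic import promise: the first call kicks off loading,
// later calls reuse the same in-flight or resolved promise.
let heavyModule: Promise<typeof import('node:zlib')> | null = null

function loadHeavy() {
  heavyModule ??= import('node:zlib')
  return heavyModule
}

async function gzipSize(data: string): Promise<number> {
  const zlib = await loadHeavy() // module is only loaded here, on demand
  return zlib.gzipSync(data).length
}
```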
### Agent Swarms

Sub-agents are spawned via `AgentTool`, with `coordinator/` handling multi-agent orchestration. `TeamCreateTool` enables team-level parallel work.

### Skill System

Reusable workflows defined in `skills/` are executed through `SkillTool`. Users can add custom skills.

### Plugin Architecture

Built-in and third-party plugins are loaded through the `plugins/` subsystem.

6 files changed. 786 lines added. Zero dependencies added.

---
## Research / Ownership Disclaimer

## Origin

- This repository is an **educational and defensive security research archive** maintained by a university student.
- It exists to study source exposure, packaging failures, and the architecture of modern agentic CLI systems.
- The original Claude Code source remains the property of **Anthropic**.
- This repository is **not affiliated with, endorsed by, or maintained by Anthropic**.

This is a fork of [instructkr/claude-code](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code), which mirrored the Claude Code source snapshot that became publicly accessible through an npm source map exposure on March 31, 2026.

The original Claude Code source is the property of Anthropic. This repository is not affiliated with or endorsed by Anthropic.

---

## License

This repository is provided for educational and research purposes. The original source code is subject to Anthropic's terms. The OpenAI shim additions are public domain.