gnanam1990 942d09ca9c security: fix 5 findings from issue #42 — env leak, ant gate, depth DoS, URL parse, CA cert
Finding 1 [CRITICAL] — sessionRunner leaks full process.env to child
Extract buildChildEnv() with an explicit allowlist of safe OS/runtime vars.
Child process no longer inherits ANTHROPIC_API_KEY, OPENAI_API_KEY, DB
credentials, or any other secret present in the parent shell environment.
Only CLAUDE_CODE_* bridge vars, PATH, HOME, and standard OS env are passed.
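A minimal sketch of the allowlist approach, assuming hypothetical names (buildChildEnv, SAFE_ENV_VARS) and an illustrative allowlist; the actual list in sessionRunner may differ:

```typescript
// Illustrative allowlist of safe OS/runtime vars; the real list may differ.
const SAFE_ENV_VARS = ["PATH", "HOME", "SHELL", "LANG", "TERM", "TMPDIR", "USER"];

function buildChildEnv(parentEnv: Record<string, string | undefined>): Record<string, string> {
  const childEnv: Record<string, string> = {};
  for (const [key, value] of Object.entries(parentEnv)) {
    if (value === undefined) continue;
    // Pass only allowlisted OS vars plus the bridge's own namespace.
    // Anything else (ANTHROPIC_API_KEY, DB credentials, ...) is dropped.
    if (SAFE_ENV_VARS.includes(key) || key.startsWith("CLAUDE_CODE_")) {
      childEnv[key] = value;
    }
  }
  return childEnv;
}
```

The key property is deny-by-default: a new secret added to the parent shell is excluded automatically instead of requiring a new denylist entry.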

Finding 2 [HIGH] — USER_TYPE=ant activatable by external users
Add isAntEmployee() -> false constant in src/utils/buildConfig.ts.
Replace all three direct process.env.USER_TYPE === 'ant' checks in
setup.ts and onChangeAppState.ts so no external user can activate
Anthropic-internal code paths (commit attribution, system prompt clearing,
dangerously-skip-permissions bypass) by setting USER_TYPE in their shell.
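A sketch of the gate, per the commit's description of a hard-coded false constant (the call-site shapes are illustrative):

```typescript
// Hard-coded false in public builds, so external users cannot activate
// Anthropic-internal code paths by setting USER_TYPE in their shell.
function isAntEmployee(): boolean {
  return false;
}

// Before: if (process.env.USER_TYPE === 'ant') { /* internal path */ }
// After:  if (isAntEmployee())                 { /* internal path */ }
```

Centralizing the check in one function also means future internal-only features have a single gate to audit instead of scattered env comparisons.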

Finding 3 [HIGH] — memoryScan.ts unlimited directory walk
Add MAX_DEPTH=3 guard on readdir({ recursive: true }) results.
Deep or symlink-looped memory directories no longer cause an unbounded
blocking walk before the MAX_MEMORY_FILES cap takes effect.
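A sketch of the depth guard under the assumption that it filters the relative paths returned by readdir({ recursive: true }); helper names are hypothetical and the real mechanism may differ:

```typescript
import { readdirSync } from "node:fs";

const MAX_DEPTH = 3;

// Depth of a relative path by component count: "a/b/c.md" has depth 3.
function withinDepth(relPath: string, maxDepth: number = MAX_DEPTH): boolean {
  return relPath.split(/[\\/]/).length <= maxDepth;
}

function listMemoryFiles(root: string): string[] {
  const entries = readdirSync(root, { recursive: true }) as string[];
  // Drop anything deeper than MAX_DEPTH before the MAX_MEMORY_FILES cap
  // is applied, so pathological trees cannot flood later processing.
  return entries.filter((p) => withinDepth(p));
}
```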

Finding 5 [HIGH] — buildSdkUrl uses string.includes for protocol detection
Replace apiBaseUrl.includes('localhost') with new URL(apiBaseUrl).hostname
comparison so a remote URL containing 'localhost' in its path no longer
incorrectly gets ws:// (unencrypted) instead of wss://.
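A sketch of the corrected protocol selection; buildSdkUrl's internals are not shown in this summary, so the function shape and loopback list are illustrative:

```typescript
function selectWsProtocol(apiBaseUrl: string): "ws:" | "wss:" {
  // Parse properly instead of substring matching, so a URL like
  // https://evil.example/localhost can no longer trigger plaintext ws://.
  const { hostname } = new URL(apiBaseUrl);
  // "[::1]" is the WHATWG serialization of the IPv6 loopback hostname.
  const isLoopback =
    hostname === "localhost" || hostname === "127.0.0.1" || hostname === "[::1]";
  return isLoopback ? "ws:" : "wss:";
}
```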

Finding 6 [HIGH] — upstream proxy writes unvalidated CA cert to disk
Add isValidPemContent() validation before writeFile in the CA cert download
path. A compromised proxy sending non-PEM data (HTML, JSON, scripts) is now
rejected before it can be appended to the system CA bundle.
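A minimal sketch of what isValidPemContent could check; the real validation in the commit may be stricter (for example, verifying the base64 body):

```typescript
// Reject anything that is not framed as a PEM certificate before it can
// be written into the system CA bundle. HTML, JSON, or script payloads
// from a compromised proxy fail this check.
function isValidPemContent(data: string): boolean {
  const trimmed = data.trim();
  return (
    trimmed.startsWith("-----BEGIN CERTIFICATE-----") &&
    trimmed.endsWith("-----END CERTIFICATE-----")
  );
}
```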

Each fix is covered by new unit tests (25 tests across 5 new test files).
All 52 tests pass. Build verified clean on v0.1.7.

Fixes #42

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 21:04:10 +05:30

OpenClaude

Use Claude Code with any LLM — not just Claude.

OpenClaude is a fork of the Claude Code source leak (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for codexplan and codexspark, and local inference via Atomic Chat on Apple Silicon.

All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.


Start Here

If you are new to terminals or just want the easiest path, start with the Beginner Install and Fastest Setup sections below.

If you want source builds, Bun workflows, profile launchers, or full provider examples, use the Advanced Setup guide.


Beginner Install

For most users, install the npm package:

npm install -g @gitlawb/openclaude

The package name is @gitlawb/openclaude, but the command you run is:

openclaude

If you install via npm and later see a "ripgrep not found" error, install ripgrep system-wide and confirm that rg --version works in the same terminal before starting OpenClaude.


Fastest Setup

Windows PowerShell

npm install -g @gitlawb/openclaude

$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"

openclaude

macOS / Linux

npm install -g @gitlawb/openclaude

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o

openclaude

That is enough to start with OpenAI.


Choose Your Guide

Beginner

  • New to terminals, or just want the fastest working setup: follow the Beginner Install and Fastest Setup sections above.

Advanced

  • Want source builds, Bun, local profiles, runtime checks, or more provider choices: Advanced Setup

Common Beginner Choices

OpenAI

Best default if you already have an OpenAI API key.

Ollama

Best if you want to run models locally on your own machine.

Codex

Best if you already use the Codex CLI or ChatGPT Codex backend.

Atomic Chat

Best if you want local inference on Apple Silicon with Atomic Chat. See Advanced Setup.


What Works

  • All tools: Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks
  • Streaming: Real-time token streaming
  • Tool calling: Multi-step tool chains (the model calls tools, gets results, continues)
  • Images: Base64 and URL images passed to vision models
  • Slash commands: /commit, /review, /compact, /diff, /doctor, etc.
  • Sub-agents: AgentTool spawns sub-agents using the same provider
  • Memory: Persistent memory system

What's Different

  • No thinking mode: Anthropic's extended thinking is disabled (OpenAI models use different reasoning)
  • No prompt caching: Anthropic-specific cache headers are skipped
  • No beta features: Anthropic-specific beta headers are ignored
  • Token limits: Defaults to 32K max output — some models may cap lower, which is handled gracefully

How It Works

The shim (src/services/api/openaiShim.ts) sits between Claude Code and the LLM API:

Claude Code Tool System
        |
        v
  Anthropic SDK interface (duck-typed)
        |
        v
  openaiShim.ts  <-- translates formats
        |
        v
  OpenAI Chat Completions API
        |
        v
  Any compatible model

It translates:

  • Anthropic message blocks → OpenAI messages
  • Anthropic tool_use/tool_result → OpenAI function calls
  • OpenAI SSE streaming → Anthropic stream events
  • Anthropic system prompt arrays → OpenAI system messages

The rest of Claude Code doesn't know it's talking to a different model.
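The first two translations above can be sketched in simplified form. The types and function below are illustrative, not the shim's real interfaces, and only the text-block direction is shown (the actual shim also handles tool_use/tool_result, images, and SSE streaming):

```typescript
type AnthropicBlock = { type: "text"; text: string };
type AnthropicMessage = { role: "user" | "assistant"; content: AnthropicBlock[] };
type OpenAIMessage = { role: "system" | "user" | "assistant"; content: string };

function toOpenAIMessages(system: string[], messages: AnthropicMessage[]): OpenAIMessage[] {
  const out: OpenAIMessage[] = [];
  // Anthropic accepts an array of system prompts; OpenAI expects each
  // as a separate system-role message at the front of the conversation.
  for (const s of system) out.push({ role: "system", content: s });
  for (const m of messages) {
    // Flatten Anthropic content blocks into the single string OpenAI expects.
    out.push({ role: m.role, content: m.content.map((b) => b.text).join("\n") });
  }
  return out;
}
```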


Model Quality Notes

Not all models are equal at agentic tool use. Here's a rough guide:

Model                  Tool Calling   Code Quality   Speed
GPT-4o                 Excellent      Excellent      Fast
DeepSeek-V3            Great          Great          Fast
Gemini 2.0 Flash       Great          Good           Very Fast
Llama 3.3 70B          Good           Good           Medium
Mistral Large          Good           Good           Fast
GPT-4o-mini            Good           Good           Very Fast
Qwen 2.5 72B           Good           Good           Medium
Smaller models (<7B)   Limited        Limited        Very Fast

For best results, use models with strong function/tool calling support.


Files Changed from Original

src/services/api/openaiShim.ts   — NEW: OpenAI-compatible API shim (724 lines)
src/services/api/client.ts       — Routes to shim when CLAUDE_CODE_USE_OPENAI=1
src/utils/model/providers.ts     — Added 'openai' provider type
src/utils/model/configs.ts       — Added openai model mappings
src/utils/model/model.ts         — Respects OPENAI_MODEL for defaults
src/utils/auth.ts                — Recognizes OpenAI as valid 3P provider

6 files changed. 786 lines added. Zero dependencies added.


Origin

This is a fork of instructkr/claude-code, which mirrored the Claude Code source snapshot that became publicly accessible through an npm source map exposure on March 31, 2026.

The original Claude Code source is the property of Anthropic. This repository is not affiliated with or endorsed by Anthropic.


License

This repository is provided for educational and research purposes. The original source code is subject to Anthropic's terms. The OpenAI shim additions are public domain.
