The vendored-binary lookup at vendor/ripgrep/<arch>-<platform>/rg never
resolved in this fork — that directory does not ship — so users without
a system rg had no working fallback. Switch to the @vscode/ripgrep
package so Microsoft maintains the platform/arch matrix and the binary
is delivered via npm.
- src/utils/ripgrep.ts: replace hand-rolled vendor-path resolution with
rgPath from @vscode/ripgrep. Lazy require so a missing package falls
through to the system rg branch instead of throwing at import.
Drop builtinExists from the config args; builtinCommand is now a
string-or-null. The system override (USE_BUILTIN_RIPGREP=0), the
Bun-compiled standalone embedded mode, the macOS codesign hook, and
all retry/timeout/error logic are preserved untouched.
- scripts/build.ts: mark @vscode/ripgrep as external. The package
resolves rgPath via __dirname at runtime, so bundling would freeze
the build host's absolute path into dist/cli.mjs.
- src/utils/ripgrep.test.ts: update for the new config shape and add
tests covering USE_BUILTIN_RIPGREP=0, embedded mode, last-resort
fallback, and null builtin path.
Tested locally on Linux (Bun 1.3.13). macOS (codesign hook) and
Windows (rg.exe extension) need contributor verification.
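A minimal sketch of the lazy-require fallback described above (identifiers are illustrative, not the actual src/utils/ripgrep.ts code):

```typescript
// Sketch: prefer the npm-delivered binary from @vscode/ripgrep, and fall
// through to a system-installed `rg` when the package is missing.
function resolveRipgrep(): { command: string; builtin: boolean } {
  try {
    // Lazy require: a missing package throws here at call time,
    // not at module import, so the fallback branch can catch it.
    const { rgPath } = require("@vscode/ripgrep") as { rgPath: string };
    return { command: rgPath, builtin: true };
  } catch {
    // Last resort: assume a system rg on PATH.
    return { command: "rg", builtin: false };
  }
}
```

The real implementation also honors USE_BUILTIN_RIPGREP=0 and the embedded standalone mode; this sketch shows only the lazy-require shape.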
* gRPC Server
* gRPC fix
* Update proto definitions
* fix: address PR review feedback for gRPC server
- Update bun.lock for new dependencies (frozen-lockfile CI fix)
- Add multi-turn session persistence via initialMessages
- Replace hardcoded done payload with real token counts
- Default bind to localhost instead of 0.0.0.0
* fix(grpc): startup parity, cancel interrupt, and cli text fallback
- Replace enableConfigs() with await init() in start-grpc.ts for full
bootstrap parity with the main CLI (env vars, CA certs, mTLS, proxy,
OAuth, Windows shell)
- Call engine.interrupt() before call.end() in the cancel handler so
in-flight model/tool execution is actually stopped
- Show done.full_text in the CLI client when no text_chunk was received,
preventing silent drops when streaming is unavailable
* fix(grpc): wire session_id end-to-end and remove dead provider field
- Move session_id from ClientMessage into ChatRequest to fix proto-loader
oneofs encoding bug and make the field functional
- Implement in-memory session store so reconnecting with the same
session_id resumes conversation context across streams
- Remove ChatRequest.provider — per-request provider routing requires
global process.env mutation, unsafe for concurrent clients; provider
is configured via env vars at server startup
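The in-memory session store can be sketched as follows (types and function names are illustrative, not the server's actual identifiers):

```typescript
// Sketch: sessions keyed by session_id. A reconnecting stream that sends a
// known session_id gets its prior messages back to seed initialMessages;
// an unknown id starts fresh.
type Message = { role: "user" | "assistant"; content: string };

const sessions = new Map<string, Message[]>();

function loadSession(sessionId: string): Message[] {
  return sessions.get(sessionId) ?? [];
}

function saveSession(sessionId: string, messages: Message[]): void {
  sessions.set(sessionId, messages);
}
```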
* fix(grpc): mirror CLI auth bootstrap in start-grpc and fix tool_name field
scripts/start-grpc.ts now runs the same provider/auth bootstrap as the
normal CLI entrypoint: enableConfigs, safe env vars, Gemini/GitHub token
hydration, saved-profile resolution with warn-and-fallback, and provider
validation before the server binds.
ToolCallResult.tool_name was being populated with the tool_use_id UUID.
Added a toolNameById map (filled in canUseTool) so tool_name now carries
the actual tool name (e.g. "Bash"). The UUID moves to a new tool_use_id
field (proto field 4) for client-side correlation.
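The toolNameById correlation can be sketched like this (a simplified shape, not the actual canUseTool signature):

```typescript
// Sketch: record the tool name when canUseTool fires, keyed by the
// tool_use_id UUID, so the result handler can emit the real name
// (e.g. "Bash") instead of the UUID.
const toolNameById = new Map<string, string>();

function onCanUseTool(toolName: string, toolUseID: string): void {
  toolNameById.set(toolUseID, toolName);
}

function buildToolCallResult(toolUseID: string) {
  return {
    tool_name: toolNameById.get(toolUseID) ?? "",
    tool_use_id: toolUseID, // proto field 4, for client-side correlation
  };
}
```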
* fix(grpc): add tool_use_id to ToolCallStart and interrupt engine on stream close
Two blocker-level issues flagged in code review:
- ToolCallStart was missing tool_use_id, making it impossible for clients
to correlate tool_start events with tool_result when the same tool runs
multiple times. Added tool_use_id = 3 to the proto message and populated
it from the toolUseID parameter in canUseTool.
- On stream close without an explicit CancelSignal the server only nulled
the engine reference, leaving the underlying model/tool work running
as an orphan. Added engine.interrupt() in the call.on('end') handler
to stop work immediately when the client disconnects.
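The interrupt-on-close wiring amounts to the following (the Engine interface and helper names here are illustrative):

```typescript
// Sketch: on stream close, interrupt in-flight engine work instead of
// merely nulling the reference, so no orphaned model/tool work survives
// a client disconnect.
interface Engine {
  interrupt(): void;
}

function wireStreamClose(
  call: { on(event: "end", cb: () => void): void },
  getEngine: () => Engine | null,
  clearEngine: () => void,
): void {
  call.on("end", () => {
    getEngine()?.interrupt(); // stop work immediately
    clearEngine();
  });
}
```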
* fix(grpc): resolve pending promises on disconnect and guard post-cancel writes
Four lifecycle and contract issues identified during proactive review:
- Pending permission Promises in canUseTool would hang forever if the
client disconnected mid-stream. On call 'end', all pending resolvers
are now called with 'no' so the engine can unblock and terminate.
- The done message and session save could fire after call.end() when
a CancelSignal arrived mid-generation. Added an `interrupted` flag
set on both cancel and stream close to gate all post-loop writes.
- The session map had no eviction policy, allowing unbounded memory
growth. Capped at MAX_SESSIONS=1000 with FIFO eviction of the
oldest entry.
- Field 3 was silently absent from ChatRequest. Added `reserved 3`
to document the gap and prevent accidental reuse in future.
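Two of these fixes, the FIFO session cap and the deny-on-disconnect cleanup, can be sketched as (names are illustrative):

```typescript
// Sketch: cap the session store at MAX_SESSIONS with FIFO eviction.
// A JS Map iterates keys in insertion order, so the first key is the
// oldest entry.
const MAX_SESSIONS = 1000;
const sessionStore = new Map<string, unknown>();

function saveSessionCapped(id: string, data: unknown): void {
  if (!sessionStore.has(id) && sessionStore.size >= MAX_SESSIONS) {
    const oldest = sessionStore.keys().next().value as string;
    sessionStore.delete(oldest);
  }
  sessionStore.set(id, data);
}

// Sketch: pending permission resolvers are tracked so that on stream
// close every one can be answered "no", unblocking the engine.
const pendingResolvers = new Set<(answer: "yes" | "no") => void>();

function onStreamEnd(): void {
  for (const resolve of pendingResolvers) resolve("no");
  pendingResolvers.clear();
}
```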
* fix(grpc): reset previousMessages on each new request to prevent session history leak
previousMessages was declared at stream scope and only overwritten when
the incoming session_id already existed in the session store. A second
request on the same stream with a new session_id would silently inherit
the first request's conversation history in initialMessages instead of
starting fresh, violating the session contract.
Fix: reset previousMessages to [] at the start of each ChatRequest
before the session-store lookup.
* fix(grpc): reset interrupted flag between requests and guard against concurrent ChatRequest
Two stream-scoped state bugs found during proactive audit:
- The `interrupted` flag was never reset between requests on the same
stream. If the first request was cancelled, all subsequent requests
would silently skip the done message, causing the client to hang.
- A second ChatRequest arriving while the first was still processing
would overwrite the engine reference, corrupting the lifecycle of
both requests. Now returns ALREADY_EXISTS error instead. Engine is
nulled after the for-await loop completes so subsequent requests
can proceed normally.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* security: force lodash-es 4.18.0 for transitive dependencies
PR #225 bumped the direct lodash-es dependency to 4.18.0, but
@anthropic-ai/sandbox-runtime still pulled lodash-es@4.17.23 via its
own ^4.17.23 range. The transitive copy was vulnerable to:
- HIGH: Code Injection via _.template (GHSA-r5fr-rjxr-66jc)
- MODERATE: Prototype Pollution via _.unset/_.omit (GHSA-f23m-r3pf-42rh)
Added overrides field in package.json to force all copies to 4.18.0.
bun audit now reports zero vulnerabilities.
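The package.json change has roughly this shape (abridged; the actual file carries many more dependencies):

```json
{
  "dependencies": {
    "lodash-es": "4.18.0"
  },
  "overrides": {
    "lodash-es": "4.18.0"
  }
}
```

The overrides field forces every transitive copy, including the one pulled in by @anthropic-ai/sandbox-runtime, onto the pinned version.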
* fix: use lodash-es 4.18.1 instead of deprecated 4.18.0
lodash-es 4.18.0 is explicitly deprecated by the maintainer with
the message "Bad release. Please use lodash-es@4.17.23 instead."
Updated both the direct dependency and the override to 4.18.1, which
is the latest non-deprecated release that patches the CVEs.
* added DuckDuckGo backend for the WebSearch tool, enabling free searching
* update readme
* Replace @phukon/duckduckgo-search with duck-duck-scrape, fix Firecrawl routing priority, and add DDG error handling
* refactor: streamline DuckDuckGo search fallback to use Firecrawl directly on rate limit
* docs: update README to clarify DuckDuckGo web search fallback and its limitations with TOS
WebSearch is currently disabled for all non-Anthropic providers (OpenAI
shim, DeepSeek, Ollama, etc.) because those providers have no native
search backend. This adds Firecrawl as a fallback that activates when
FIRECRAWL_API_KEY is set, unlocking web search for every model
openclaude supports.
WebFetch uses basic HTTP + Turndown for HTML-to-markdown conversion,
which fails silently on JS-rendered SPAs and bot-protected pages.
Firecrawl scrape replaces the fetch layer when FIRECRAWL_API_KEY is set,
returning clean markdown that handles dynamic content correctly.
Changes:
- WebSearchTool: add runFirecrawlSearch() using @mendable/firecrawl-js,
respects allowed_domains (post-filter) and blocked_domains (-site: operators),
includes result snippets alongside links. shouldUseFirecrawl() ensures
firstParty/Vertex/Foundry/Codex providers keep their native backends.
- WebFetchTool: add scrapeWithFirecrawl(), drops into the existing
applyPromptToMarkdown() pipeline so prompt processing is unchanged.
- Remove "Web search is only available in the US" restriction from
prompt when Firecrawl is active (it works globally).
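The domain handling in runFirecrawlSearch() can be sketched as follows (function names and the post-filter shape are illustrative, not the tool's actual code):

```typescript
// Sketch: blocked_domains become -site: operators appended to the query;
// allowed_domains are applied as a post-filter on the returned results.
function buildSearchQuery(query: string, blockedDomains: string[]): string {
  const exclusions = blockedDomains.map((d) => `-site:${d}`).join(" ");
  return exclusions ? `${query} ${exclusions}` : query;
}

function filterByAllowedDomains<T extends { url: string }>(
  results: T[],
  allowedDomains: string[],
): T[] {
  if (allowedDomains.length === 0) return results;
  return results.filter((r) =>
    allowedDomains.some((d) => new URL(r.url).hostname.endsWith(d)),
  );
}
```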
Removes caret (^) ranges from all 74 dependencies in package.json,
locking each to the exact version resolved in bun.lock.
Motivation: the axios supply chain attack of March 31, 2026 demonstrated
that caret ranges are a live attack vector. axios@^1.14.0 would have
resolved to the trojanized 1.14.1 (bundled plain-crypto-js RAT, C2
sfrclak.com). Both 1.14.1 and 0.30.4 were unpublished within 24h.
Key pins:
- axios     ^1.14.0  → 1.14.0   (trojanized 1.14.1 blocked)
- undici    ^7.3.0   → 7.24.6   (7 CVEs between 7.3 and 7.24)
- yaml      ^2.7.0   → 2.8.3    (CVE-2026-33532 fix)
- ajv       ^8.17.0  → 8.18.0   (ReDoS fix)
- lodash-es ^4.17.21 → 4.17.23  (prototype pollution fix)
- zod       ^3.24.0  → 3.25.76  (large range locked)
All 74 deps verified: integrity hashes match npm registry, no known
supply chain incidents, no postinstall scripts in lockfile.
- package.json with all 70+ dependencies
- Bun build script with feature flag shims, native module stubs, otel externals
- Stubs for ~15 missing source files (snapshot gaps)
- tsconfig.json for TypeScript
- bin/openclaude entry point
- Builds to single 19MB dist/cli.mjs
- Verified: --version and --help work
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>