* gRPC Server
* gRPC fix
* Update proto definitions
* fix: address PR review feedback for gRPC server
- Update bun.lock for new dependencies (frozen-lockfile CI fix)
- Add multi-turn session persistence via initialMessages
- Replace hardcoded done payload with real token counts
- Default bind to localhost instead of 0.0.0.0
* fix(grpc): startup parity, cancel interrupt, and cli text fallback
- Replace enableConfigs() with await init() in start-grpc.ts for full
bootstrap parity with the main CLI (env vars, CA certs, mTLS, proxy,
OAuth, Windows shell)
- Call engine.interrupt() before call.end() in the cancel handler so
in-flight model/tool execution is actually stopped
- Show done.full_text in the CLI client when no text_chunk was received,
preventing silent drops when streaming is unavailable
* fix(grpc): wire session_id end-to-end and remove dead provider field
- Move session_id from ClientMessage into ChatRequest to fix proto-loader
oneofs encoding bug and make the field functional
- Implement in-memory session store so reconnecting with the same
session_id resumes conversation context across streams
- Remove ChatRequest.provider — per-request provider routing requires
global process.env mutation, unsafe for concurrent clients; provider
is configured via env vars at server startup
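
The in-memory session store described above can be sketched roughly as follows. This is a minimal illustration, not the actual source: `SessionStore`, `Message`, and the method names are assumptions; only the behavior (resuming conversation context by `session_id` across streams) comes from the commit.

```typescript
// Hypothetical sketch of the in-memory session store. A reconnecting
// client that sends a known session_id gets its prior messages back
// as initialMessages; an unknown session_id starts fresh.
type Message = { role: "user" | "assistant"; content: string };

class SessionStore {
  private sessions = new Map<string, Message[]>();

  // Return prior messages for a session, or [] for a new session.
  load(sessionId: string): Message[] {
    return this.sessions.get(sessionId) ?? [];
  }

  // Persist the full conversation after a request completes.
  save(sessionId: string, messages: Message[]): void {
    this.sessions.set(sessionId, messages);
  }
}

const store = new SessionStore();
store.save("abc", [{ role: "user", content: "hi" }]);
const resumed = store.load("abc"); // non-empty: context carries across streams
```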
* fix(grpc): mirror CLI auth bootstrap in start-grpc and fix tool_name field
scripts/start-grpc.ts now runs the same provider/auth bootstrap as the
normal CLI entrypoint: enableConfigs, safe env vars, Gemini/GitHub token
hydration, saved-profile resolution with warn-and-fallback, and provider
validation before the server binds.
ToolCallResult.tool_name was being populated with the tool_use_id UUID.
Added a toolNameById map (filled in canUseTool) so tool_name now carries
the actual tool name (e.g. "Bash"). The UUID moves to a new tool_use_id
field (proto field 4) for client-side correlation.
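
The correlation map can be sketched like this. The `toolNameById` map and `canUseTool` hook are named in the commit; the surrounding function shapes are illustrative assumptions.

```typescript
// Sketch: canUseTool records each tool_use_id -> tool name, so the
// ToolCallResult message can carry both the human-readable name and
// the UUID for client-side correlation.
const toolNameById = new Map<string, string>();

// Called when the engine asks permission to run a tool.
function canUseTool(toolUseID: string, toolName: string): boolean {
  toolNameById.set(toolUseID, toolName);
  return true; // real permission logic elided
}

// Called when a tool finishes; builds the result message fields.
function toolCallResult(toolUseID: string) {
  return {
    tool_name: toolNameById.get(toolUseID) ?? toolUseID, // e.g. "Bash"
    tool_use_id: toolUseID, // UUID kept for correlation (proto field 4)
  };
}
```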
* fix(grpc): add tool_use_id to ToolCallStart and interrupt engine on stream close
Two blocker-level issues flagged in code review:
- ToolCallStart was missing tool_use_id, making it impossible for clients
to correlate tool_start events with tool_result when the same tool runs
multiple times. Added tool_use_id = 3 to the proto message and populated
it from the toolUseID parameter in canUseTool.
- On stream close without an explicit CancelSignal the server only nulled
the engine reference, leaving the underlying model/tool work running
as an orphan. Added engine.interrupt() in the call.on('end') handler
to stop work immediately when the client disconnects.
* fix(grpc): resolve pending promises on disconnect and guard post-cancel writes
Four lifecycle and contract issues identified during proactive review:
- Pending permission Promises in canUseTool would hang forever if the
client disconnected mid-stream. On call 'end', all pending resolvers
are now called with 'no' so the engine can unblock and terminate.
- The done message and session save could fire after call.end() when
a CancelSignal arrived mid-generation. Added an `interrupted` flag
set on both cancel and stream close to gate all post-loop writes.
- The session map had no eviction policy, allowing unbounded memory
growth. Capped at MAX_SESSIONS=1000 with FIFO eviction of the
oldest entry.
- Field 3 was silently absent from ChatRequest. Added `reserved 3`
to document the gap and prevent accidental reuse in future.
* fix(grpc): reset previousMessages on each new request to prevent session history leak
previousMessages was declared at stream scope and only overwritten when
the incoming session_id already existed in the session store. A second
request on the same stream with a new session_id would silently inherit
the first request's conversation history in initialMessages instead of
starting fresh, violating the session contract.
Fix: reset previousMessages to [] at the start of each ChatRequest
before the session-store lookup.
* fix(grpc): reset interrupted flag between requests and guard against concurrent ChatRequest
Two stream-scoped state bugs found during proactive audit:
- The `interrupted` flag was never reset between requests on the same
stream. If the first request was cancelled, all subsequent requests
would silently skip the done message, causing the client to hang.
- A second ChatRequest arriving while the first was still processing
would overwrite the engine reference, corrupting the lifecycle of
both requests. Now returns ALREADY_EXISTS error instead. Engine is
nulled after the for-await loop completes so subsequent requests
can proceed normally.
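
Both fixes can be sketched together, assuming per-stream `engine` and `interrupted` state as described; the function names and return shapes here are illustrative, not the actual handlers.

```typescript
// Sketch of per-stream request lifecycle state.
let engine: { interrupt(): void } | null = null;
let interrupted = false;

// A new ChatRequest on the stream: reject if one is already in flight,
// otherwise reset the per-request flag and start the engine.
function onChatRequest(start: () => { interrupt(): void }): string {
  if (engine !== null) return "ALREADY_EXISTS";
  interrupted = false; // reset per request, not per stream
  engine = start();
  return "OK";
}

// CancelSignal (or stream close): stop in-flight work and mark it.
function onCancel(): void {
  interrupted = true;
  engine?.interrupt();
}

// After the for-await loop: gate the done message on the flag and
// null the engine so the next request can proceed.
function onRequestDone(): { sendDone: boolean } {
  const sendDone = !interrupted;
  engine = null;
  return { sendDone };
}
```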
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
163 lines
5.7 KiB
JSON
{
  "name": "@gitlawb/openclaude",
  "version": "0.1.8",
  "description": "Claude Code opened to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models",
  "type": "module",
  "bin": {
    "openclaude": "./bin/openclaude"
  },
  "files": [
    "bin/",
    "dist/cli.mjs",
    "README.md"
  ],
  "scripts": {
    "build": "bun run scripts/build.ts",
    "dev": "bun run build && node dist/cli.mjs",
    "dev:profile": "bun run scripts/provider-launch.ts",
    "dev:profile:fast": "bun run scripts/provider-launch.ts auto --fast --bare",
    "dev:codex": "bun run scripts/provider-launch.ts codex",
    "dev:openai": "bun run scripts/provider-launch.ts openai",
    "dev:gemini": "bun run scripts/provider-launch.ts gemini",
    "dev:ollama": "bun run scripts/provider-launch.ts ollama",
    "dev:ollama:fast": "bun run scripts/provider-launch.ts ollama --fast --bare",
    "dev:atomic-chat": "bun run scripts/provider-launch.ts atomic-chat",
    "profile:init": "bun run scripts/provider-bootstrap.ts",
    "profile:recommend": "bun run scripts/provider-recommend.ts",
    "profile:auto": "bun run scripts/provider-recommend.ts --apply",
    "profile:codex": "bun run profile:init -- --provider codex --model codexplan",
    "profile:fast": "bun run profile:init -- --provider ollama --model llama3.2:3b",
    "profile:code": "bun run profile:init -- --provider ollama --model qwen2.5-coder:7b",
    "dev:fast": "bun run profile:fast && bun run dev:ollama:fast",
    "dev:code": "bun run profile:code && bun run dev:profile",
    "dev:grpc": "bun run scripts/start-grpc.ts",
    "dev:grpc:cli": "bun run scripts/grpc-cli.ts",
    "start": "node dist/cli.mjs",
    "test": "bun test",
    "test:coverage": "bun test --coverage --coverage-reporter=lcov --coverage-dir=coverage --max-concurrency=1 && bun run scripts/render-coverage-heatmap.ts",
    "test:coverage:ui": "bun run scripts/render-coverage-heatmap.ts",
    "security:pr-scan": "bun run scripts/pr-intent-scan.ts",
    "test:provider-recommendation": "bun test src/utils/providerRecommendation.test.ts src/utils/providerProfile.test.ts",
    "typecheck": "tsc --noEmit",
    "smoke": "bun run build && node dist/cli.mjs --version",
    "verify:privacy": "bun run scripts/verify-no-phone-home.ts",
    "build:verified": "bun run build && bun run verify:privacy",
    "test:provider": "bun test src/services/api/*.test.ts src/utils/context.test.ts",
    "doctor:runtime": "bun run scripts/system-check.ts",
    "doctor:runtime:json": "bun run scripts/system-check.ts --json",
    "doctor:report": "bun run scripts/system-check.ts --out reports/doctor-runtime.json",
    "hardening:check": "bun run smoke && bun run doctor:runtime",
    "hardening:strict": "bun run typecheck && bun run hardening:check",
    "prepack": "npm run build"
  },
  "dependencies": {
    "@alcalzone/ansi-tokenize": "0.3.0",
    "@anthropic-ai/bedrock-sdk": "0.26.4",
    "@anthropic-ai/foundry-sdk": "0.2.3",
    "@anthropic-ai/sandbox-runtime": "0.0.46",
    "@anthropic-ai/sdk": "0.81.0",
    "@anthropic-ai/vertex-sdk": "0.14.4",
    "@commander-js/extra-typings": "12.1.0",
    "@growthbook/growthbook": "1.6.5",
    "@grpc/grpc-js": "^1.14.3",
    "@grpc/proto-loader": "^0.8.0",
    "@mendable/firecrawl-js": "4.18.1",
    "@modelcontextprotocol/sdk": "1.29.0",
    "@opentelemetry/api": "1.9.1",
    "@opentelemetry/api-logs": "0.214.0",
    "@opentelemetry/core": "2.6.1",
    "@opentelemetry/exporter-logs-otlp-http": "0.214.0",
    "@opentelemetry/exporter-trace-otlp-grpc": "0.57.2",
    "@opentelemetry/resources": "2.6.1",
    "@opentelemetry/sdk-logs": "0.214.0",
    "@opentelemetry/sdk-metrics": "2.6.1",
    "@opentelemetry/sdk-trace-base": "2.6.1",
    "@opentelemetry/sdk-trace-node": "2.6.1",
    "@opentelemetry/semantic-conventions": "1.40.0",
    "ajv": "8.18.0",
    "auto-bind": "5.0.1",
    "axios": "1.14.0",
    "bidi-js": "1.0.3",
    "chalk": "5.6.2",
    "chokidar": "4.0.3",
    "cli-boxes": "3.0.0",
    "cli-highlight": "2.1.11",
    "code-excerpt": "4.0.0",
    "commander": "12.1.0",
    "cross-spawn": "7.0.6",
    "diff": "8.0.3",
    "duck-duck-scrape": "^2.2.7",
    "emoji-regex": "10.6.0",
    "env-paths": "3.0.0",
    "execa": "9.6.1",
    "fflate": "0.8.2",
    "figures": "6.1.0",
    "fuse.js": "7.1.0",
    "get-east-asian-width": "1.5.0",
    "google-auth-library": "9.15.1",
    "https-proxy-agent": "7.0.6",
    "ignore": "7.0.5",
    "indent-string": "5.0.0",
    "jsonc-parser": "3.3.1",
    "lodash-es": "4.18.1",
    "lru-cache": "11.2.7",
    "marked": "15.0.12",
    "p-map": "7.0.4",
    "picomatch": "4.0.4",
    "proper-lockfile": "4.1.2",
    "qrcode": "1.5.4",
    "react": "19.2.4",
    "react-compiler-runtime": "1.0.0",
    "react-reconciler": "0.33.0",
    "semver": "7.7.4",
    "sharp": "^0.34.5",
    "shell-quote": "1.8.3",
    "signal-exit": "4.1.0",
    "stack-utils": "2.0.6",
    "strip-ansi": "7.2.0",
    "supports-hyperlinks": "3.2.0",
    "tree-kill": "1.2.2",
    "turndown": "7.2.2",
    "type-fest": "4.41.0",
    "undici": "7.24.6",
    "usehooks-ts": "3.1.1",
    "vscode-languageserver-protocol": "3.17.5",
    "wrap-ansi": "9.0.2",
    "ws": "8.20.0",
    "xss": "1.0.15",
    "yaml": "2.8.3",
    "zod": "3.25.76"
  },
  "devDependencies": {
    "@types/bun": "1.3.11",
    "@types/node": "25.5.0",
    "@types/react": "19.2.14",
    "tsx": "^4.21.0",
    "typescript": "5.9.3"
  },
  "engines": {
    "node": ">=20.0.0"
  },
  "repository": {
    "type": "git",
    "url": "https://gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude"
  },
  "keywords": [
    "claude-code",
    "openai",
    "llm",
    "cli",
    "agent",
    "deepseek",
    "ollama",
    "gemini"
  ],
  "license": "SEE LICENSE FILE",
  "publishConfig": {
    "access": "public"
  },
  "overrides": {
    "lodash-es": "4.18.1"
  }
}