NikitaBabenko 26eef92fe7 feat: add headless gRPC server for external agent integration (#278)
* gRPC Server

* gRPC fix

* Update proto definitions

* fix: address PR review feedback for gRPC server

- Update bun.lock for new dependencies (frozen-lockfile CI fix)
- Add multi-turn session persistence via initialMessages
- Replace hardcoded done payload with real token counts
- Default bind to localhost instead of 0.0.0.0
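
A minimal sketch of the bind default, assuming @grpc/grpc-js; the
GRPC_HOST/GRPC_PORT overrides here are hypothetical:

```typescript
import * as grpc from '@grpc/grpc-js';

const host = process.env.GRPC_HOST ?? '127.0.0.1'; // localhost, not 0.0.0.0
const port = process.env.GRPC_PORT ?? '50051';     // hypothetical override

const server = new grpc.Server();
server.bindAsync(
  `${host}:${port}`,
  grpc.ServerCredentials.createInsecure(),
  (err, boundPort) => {
    if (err) throw err;
    console.log(`gRPC server listening on ${host}:${boundPort}`);
  },
);
```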

* fix(grpc): startup parity, cancel interrupt, and cli text fallback

- Replace enableConfigs() with await init() in start-grpc.ts for full
  bootstrap parity with the main CLI (env vars, CA certs, mTLS, proxy,
  OAuth, Windows shell)
- Call engine.interrupt() before call.end() in the cancel handler so
  in-flight model/tool execution is actually stopped (sketched after
  this list)
- Show done.full_text in the CLI client when no text_chunk was received,
  preventing silent drops when streaming is unavailable
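
A minimal sketch of the cancel ordering: engine.interrupt() and
call.end() are named above, while the Engine and OutStream shapes are
illustrative assumptions:

```typescript
interface Engine {
  interrupt(): Promise<void> | void;
}

interface OutStream {
  end(): void;
}

// Cancel handler: stop in-flight model/tool execution first, then
// close the stream, so nothing keeps running after the client bails.
async function handleCancel(engine: Engine | null, call: OutStream): Promise<void> {
  await engine?.interrupt();
  call.end();
}
```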

* fix(grpc): wire session_id end-to-end and remove dead provider field

- Move session_id from ClientMessage into ChatRequest to fix a
  proto-loader oneofs encoding bug and make the field functional
- Implement in-memory session store so reconnecting with the same
  session_id resumes conversation context across streams (see the
  sketch after this list)
- Remove ChatRequest.provider — per-request provider routing requires
  global process.env mutation, unsafe for concurrent clients; provider
  is configured via env vars at server startup
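
A minimal sketch of the in-memory session store; the Message shape is
an assumption, only session_id and initialMessages come from this
change:

```typescript
type Message = { role: 'user' | 'assistant'; content: string };

const sessions = new Map<string, Message[]>();

// On a new ChatRequest: resume prior context if session_id is known.
function loadInitialMessages(sessionId?: string): Message[] {
  if (!sessionId) return [];
  return sessions.get(sessionId) ?? [];
}

// After a turn completes: persist the transcript for reconnects.
function saveSession(sessionId: string, messages: Message[]): void {
  sessions.set(sessionId, messages);
}
```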

* fix(grpc): mirror CLI auth bootstrap in start-grpc and fix tool_name field

scripts/start-grpc.ts now runs the same provider/auth bootstrap as the
normal CLI entrypoint: enableConfigs, safe env vars, Gemini/GitHub token
hydration, saved-profile resolution with warn-and-fallback, and provider
validation before the server binds.
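
A sketch of the bootstrap ordering under stated assumptions: apart
from enableConfigs, every helper below is a hypothetical stand-in for
the real code, and only the sequence is the point:

```typescript
// Hypothetical stubs standing in for the real helpers.
declare function enableConfigs(): void;
declare function applySafeEnvVars(): void;
declare function hydrateGeminiAndGitHubTokens(): Promise<void>;
declare function resolveSavedProfile(): string | null;
declare function validateProvider(profile: string | null): void;

async function bootstrapLikeCli(): Promise<void> {
  enableConfigs();                       // same config load as the main CLI
  applySafeEnvVars();
  await hydrateGeminiAndGitHubTokens();  // Gemini/GitHub token hydration
  const profile = resolveSavedProfile(); // warns and falls back if invalid
  validateProvider(profile);             // fail fast before the server binds
}
```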

ToolCallResult.tool_name was being populated with the tool_use_id UUID.
Added a toolNameById map (filled in canUseTool) so tool_name now carries
the actual tool name (e.g. "Bash"). The UUID moves to a new tool_use_id
field (proto field 4) for client-side correlation.
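
A sketch of the toolNameById bookkeeping; canUseTool's actual
signature is not shown here, so its parameters are assumptions:

```typescript
// Maps the tool_use_id UUID back to the real tool name.
const toolNameById = new Map<string, string>();

function canUseTool(toolName: string, toolUseID: string): boolean {
  toolNameById.set(toolUseID, toolName); // remember e.g. "Bash" for this id
  return true; // actual permission logic elided
}

// ToolCallResult now carries the real name plus the UUID in the new
// tool_use_id field (proto field 4) for client-side correlation.
function buildToolCallResult(toolUseID: string, output: string) {
  return {
    tool_name: toolNameById.get(toolUseID) ?? 'unknown',
    tool_use_id: toolUseID,
    output, // hypothetical payload field
  };
}
```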

* fix(grpc): add tool_use_id to ToolCallStart and interrupt engine on stream close

Two blocker-level issues flagged in code review:

- ToolCallStart was missing tool_use_id, making it impossible for clients
  to correlate tool_start events with tool_result when the same tool runs
  multiple times. Added tool_use_id = 3 to the proto message and populated
  it from the toolUseID parameter in canUseTool (sketched after this
  list).

- On stream close without an explicit CancelSignal the server only nulled
  the engine reference, leaving the underlying model/tool work running
  as an orphan. Added engine.interrupt() in the call.on('end') handler
  to stop work immediately when the client disconnects.
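
A sketch of the event shape after the fix; the field names follow the
proto change above, the builder function itself is hypothetical:

```typescript
interface ToolCallStart {
  tool_name: string;
  tool_use_id: string; // proto field 3: correlates start with result
}

function buildToolCallStart(toolName: string, toolUseID: string): ToolCallStart {
  return {
    tool_name: toolName,    // e.g. "Bash"
    tool_use_id: toolUseID, // from canUseTool's toolUseID parameter
  };
}
```

The second fix mirrors the cancel handler sketched earlier: call
engine.interrupt() inside the call.on('end') handler before dropping
the engine reference.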

* fix(grpc): resolve pending promises on disconnect and guard post-cancel writes

Four lifecycle and contract issues identified during proactive review:

- Pending permission Promises in canUseTool would hang forever if the
  client disconnected mid-stream. On call 'end', all pending resolvers
  are now called with 'no' so the engine can unblock and terminate
  (see the sketch after this list).

- The done message and session save could fire after call.end() when
  a CancelSignal arrived mid-generation. Added an `interrupted` flag
  set on both cancel and stream close to gate all post-loop writes.

- The session map had no eviction policy, allowing unbounded memory
  growth. Capped at MAX_SESSIONS=1000 with FIFO eviction of the
  oldest entry.

- Field 3 was silently absent from ChatRequest. Added `reserved 3`
  to document the gap and prevent accidental reuse in future.
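
A combined sketch of the first two fixes, under assumed shapes; only
canUseTool, the 'no' answer, and the `interrupted` flag are named
above:

```typescript
type PermissionAnswer = 'yes' | 'no';

const pendingResolvers = new Set<(answer: PermissionAnswer) => void>();
let interrupted = false;

// canUseTool parks on this promise until the client answers.
function awaitPermission(): Promise<PermissionAnswer> {
  return new Promise<PermissionAnswer>((resolve) => {
    const wrapped = (answer: PermissionAnswer) => {
      pendingResolvers.delete(wrapped);
      resolve(answer);
    };
    pendingResolvers.add(wrapped);
  });
}

// Runs on CancelSignal and on stream close: unblock the engine and
// gate any writes that would otherwise fire after call.end().
function onInterrupt(): void {
  interrupted = true;
  for (const resolve of [...pendingResolvers]) resolve('no');
}

// After the generation loop: only emit done / save the session when
// the stream is still live.
function maybeFinish(emitDone: () => void): void {
  if (!interrupted) emitDone();
}
```

The MAX_SESSIONS cap can extend the session-store sketch the same way:
before inserting a new key, evict the Map's first key, which is the
oldest in insertion order.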

* fix(grpc): reset previousMessages on each new request to prevent session history leak

previousMessages was declared at stream scope and only overwritten when
the incoming session_id already existed in the session store. A second
request on the same stream with a new session_id would silently inherit
the first request's conversation history in initialMessages instead of
starting fresh, violating the session contract.

Fix: reset previousMessages to [] at the start of each ChatRequest
before the session-store lookup.
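
A sketch of the reset, with the surrounding stream scope assumed:

```typescript
type Message = { role: 'user' | 'assistant'; content: string };
const sessions = new Map<string, Message[]>();

let previousMessages: Message[] = []; // stream-scoped, as described above

function onChatRequest(sessionId?: string): void {
  previousMessages = []; // reset first: no leak from a prior session_id
  if (sessionId) {
    previousMessages = sessions.get(sessionId) ?? [];
  }
  // previousMessages now safely feeds initialMessages for this request.
}
```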

* fix(grpc): reset interrupted flag between requests and guard against concurrent ChatRequest

Two stream-scoped state bugs found during proactive audit:

- The `interrupted` flag was never reset between requests on the same
  stream. If the first request was cancelled, all subsequent requests
  would silently skip the done message, causing the client to hang
  (see the sketch after this list).

- A second ChatRequest arriving while the first was still processing
  would overwrite the engine reference, corrupting the lifecycle of
  both requests. Now returns ALREADY_EXISTS error instead. Engine is
  nulled after the for-await loop completes so subsequent requests
  can proceed normally.
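
A sketch of both guards, assuming @grpc/grpc-js for the ALREADY_EXISTS
status code; the reject callback and Engine shape are illustrative:

```typescript
import { status } from '@grpc/grpc-js';

interface Engine {
  interrupt(): void;
}

let engine: Engine | null = null;
let interrupted = false;

function beginChatRequest(
  reject: (code: number, message: string) => void,
): boolean {
  if (engine !== null) {
    // A request is still in flight on this stream: refuse rather than
    // clobber the live engine reference.
    reject(status.ALREADY_EXISTS, 'a ChatRequest is already in progress');
    return false;
  }
  interrupted = false; // a prior cancel must not swallow this done message
  return true;
}

// After the for-await generation loop completes:
function finishChatRequest(): void {
  engine = null; // the next request on this stream can now proceed
}
```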

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-06 17:54:10 +08:00