Compare commits


38 Commits

Author SHA1 Message Date
github-actions[bot]
e6af990375 chore(main): release 0.8.0 2026-04-29 16:59:26 +00:00
KRATOS
ee0d930093 fix(ripgrep): use @vscode/ripgrep package as the builtin source (#911) (#932)
The vendored-binary lookup at vendor/ripgrep/<arch>-<platform>/rg never
resolved in this fork — that directory does not ship — so users without
a system rg had no working fallback. Switch to the @vscode/ripgrep
package so Microsoft maintains the platform/arch matrix and the binary
is delivered via npm.

- src/utils/ripgrep.ts: replace hand-rolled vendor-path resolution with
  rgPath from @vscode/ripgrep. Lazy require so a missing package falls
  through to the system rg branch instead of throwing at import.
  Drop builtinExists from the config args; builtinCommand is now a
  string-or-null. The system override (USE_BUILTIN_RIPGREP=0), the
  Bun-compiled standalone embedded mode, the macOS codesign hook, and
  all retry/timeout/error logic are preserved untouched.
- scripts/build.ts: mark @vscode/ripgrep as external. The package
  resolves rgPath via __dirname at runtime, so bundling would freeze
  the build host's absolute path into dist/cli.mjs.
- src/utils/ripgrep.test.ts: update for the new config shape and add
  tests covering USE_BUILTIN_RIPGREP=0, embedded mode, last-resort
  fallback, and null builtin path.

Tested locally on Linux (Bun 1.3.13). macOS (codesign hook) and
Windows (rg.exe extension) need contributor verification.
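The lazy-require fallback described above can be sketched as follows. This is a hypothetical reduction: `resolveRipgrep` and its parameters are illustrative stand-ins, not the real shape of src/utils/ripgrep.ts, and the loader argument models `require('@vscode/ripgrep').rgPath`.

```typescript
// Illustrative sketch (assumed names) of the builtin-vs-system resolution.
type RgConfig = { builtinCommand: string | null; systemCommand: string };

function resolveRipgrep(
  loadBuiltin: () => string, // stand-in for () => require('@vscode/ripgrep').rgPath
  useBuiltin: boolean,       // false when USE_BUILTIN_RIPGREP=0
): RgConfig {
  let builtinCommand: string | null = null;
  if (useBuiltin) {
    try {
      // Lazy load: a missing package falls through to the system-rg branch
      // instead of throwing at import time.
      builtinCommand = loadBuiltin();
    } catch {
      builtinCommand = null;
    }
  }
  // builtinCommand is string-or-null; callers fall back to system rg.
  return { builtinCommand, systemCommand: 'rg' };
}
```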
2026-04-30 00:58:46 +08:00
ArkhAngelLifeJiggy
0ca4333537 feat: add streaming token counter (#797)
* feat: add streaming token counter

- Add StreamingTokenCounter for real-time token counting during generation
- Tracks output tokens as they arrive from stream
- Calculates tokens per second rate
- Add tests (4 passing)

PR 4A: Streaming Token Counter (Features 1.2, 1.7)

* refactor: move StreamingTokenCounter to separate file

- Extract StreamingTokenCounter from tokens.ts to streamingTokenCounter.ts
- Add getEstimatedRemainingTokens() method
- Update test import

* fix: word-boundary token counting for stable stream totals

- Accumulate raw content, count only at word boundaries
- Eliminates instability from arbitrary chunk boundaries
- Add finalize() to flush remaining content on stream end
- Add characterCount getter for raw content tracking
- Rename getEstimatedRemainingTokens -> getEstimatedGenerationTimeMs
- Add comprehensive tests

* fix: update streamingTokens test for word-boundary API

- Add finalize() call before checking output tokens
- Use characterCount for interim checks
- Add spaces to trigger word boundary counting

* fix: add estimateRemainingTokens/Time methods

- Add estimateRemainingTokens(target) method
- Add estimateRemainingTimeMs(target) method
- Covers non-blocking: now properly estimates remaining tokens

* fix: PR 797 - fix word boundary counting, consolidate tests

Blockers (Vasanthdev2004):
- recountAtWordBoundary now searches forward from lastCountedIndex+1
- Finds NEXT space after already-counted region, not before it
- Provides accurate live token counts during streaming, not just finalize()

Non-blocking (gnanam1990):
- Delete streamingTokens.test.ts, merge tests into streamingTokenCounter.test.ts
- Added interim-counting test to verify counting updates during streaming

* fix: PR 797 - fix word boundary advancement after space

Blocking:
- Fix recountAtWordBoundary to skip past space when searching for next boundary
- After counting at a space, indexOf(' ') returns 0 (the space itself)
- Now starts search from index 1 to find the NEXT word boundary
- Short chunks now properly trigger count advancement

Non-blocking:
- Add test verifying count increases after each word boundary
- Add test for space-skipping behavior
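The word-boundary scheme from the commits above can be sketched as a minimal counter. Assumptions: the ~4 chars/token ratio and the class/method names are illustrative, not the real streamingTokenCounter.ts API; the `+ 1` in the forward search is the space-skipping fix the last commit describes.

```typescript
// Minimal sketch (assumed ratio and names) of word-boundary token counting.
class WordBoundaryCounter {
  private buffer = '';
  private lastCountedIndex = -1; // index of the last space counted up to
  private tokens = 0;

  addChunk(chunk: string): void {
    this.buffer += chunk;
    // Search forward from just past the last counted space; without the +1,
    // indexOf(' ') would return the already-counted space (index 0 relative
    // to it) and short chunks would never advance the count.
    let next = this.buffer.indexOf(' ', this.lastCountedIndex + 1);
    while (next !== -1) {
      this.tokens = Math.ceil(next / 4); // recount only the stable prefix
      this.lastCountedIndex = next;
      next = this.buffer.indexOf(' ', next + 1);
    }
  }

  finalize(): number {
    // Flush the trailing partial word once the stream ends.
    this.tokens = Math.ceil(this.buffer.length / 4);
    return this.tokens;
  }

  get outputTokens(): number {
    return this.tokens;
  }
}
```

Counting only at spaces keeps interim totals stable regardless of how the provider chunks the stream.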
2026-04-29 16:17:00 +08:00
ArkhAngelLifeJiggy
92d297e50e feat: context preloading and hybrid context strategy (#860)
* feat: context preloading and hybrid context strategy

PR 2D - Section 2.7, 2.8:
- Add contextPreload.ts with pattern-based prediction
- Add hybridContextStrategy.ts with cache/fresh balancing
- Optimize for cost vs accuracy
- Add comprehensive tests (13 passing)

* feat: wire hybrid context strategy into API path

- Apply hybrid strategy after normalizeMessagesForAPI
- Feature-flag controlled (HYBRID_CONTEXT_STRATEGY)
- Optimizes cache/fresh balance for API requests

* fix: resolve PR 2D blocking issues

- Fix predictContextNeeds self-assign bug (matchedCategory = category)
- Add test for non-empty predictedNeed
- Preserve conversation tail in hybridStrategy (never drop last 3 messages)
- Add comment for hardcoded 200k cap in claude.ts

Fixes reviewer feedback from gnanam1990 and Vasanthdev2004

* fix: preserve tool_use/tool_result chains in hybridStrategy

- Increase MIN_TAIL to 5 (tool_use -> tool_result -> assistant -> user -> next)
- Add getMessageChain() to preserve paired messages
- Chains kept together in final selection

* fix: PR 860 - tool_use/tool_result pairing and safe token counting

Blocking:
- getMessageChain() now pairs by tool_use.id (block ID) not msg.message.id
- Find tool_use blocks by id, pair with tool_result having matching tool_use_id
- Fixes tool_result surviving while paired tool_use dropped

- Token counting now includes array content (tool_use, tool_result, thinking)
- Not just string content, prevents undercounting prompt size

- Deduplicate messages by UUID when combining chains + split + tail
- Prevents duplicate messages in final request

Non-blocking:
- Add regression test for tool_use/tool_result pairing

* fix: PR 860 - account for actual structured payload size in token counting

Blocking:
- getMessageTokenCount now calculates actual token count for structured blocks
- tool_use: uses JSON.stringify(input).length / 4 + base
- tool_result: counts actual content (string or array of text blocks)
- thinking: counts actual thinking text length / 4
- is_error flag adds small overhead

Non-blocking:
- Add tests for large tool_use input and large thinking blocks
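The structured-payload estimate described in the blocking fix above could look roughly like this. The `/4` chars-per-token heuristic matches the commit message; the block shapes, `BASE_OVERHEAD` constant, and function name are assumptions for illustration.

```typescript
// Hedged sketch of per-block token estimation for structured content.
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'thinking'; thinking: string }
  | { type: 'tool_use'; id: string; name: string; input: unknown }
  | { type: 'tool_result'; tool_use_id: string;
      content: string | { type: 'text'; text: string }[]; is_error?: boolean };

const BASE_OVERHEAD = 8; // rough per-block framing cost (assumption)

function estimateBlockTokens(block: ContentBlock): number {
  switch (block.type) {
    case 'text':
      return Math.ceil(block.text.length / 4);
    case 'thinking':
      // Count the actual thinking text, not a flat constant.
      return Math.ceil(block.thinking.length / 4);
    case 'tool_use':
      // Serialize the input so large payloads are not undercounted.
      return Math.ceil(JSON.stringify(block.input).length / 4) + BASE_OVERHEAD;
    case 'tool_result': {
      const text = typeof block.content === 'string'
        ? block.content
        : block.content.map(b => b.text).join('');
      // is_error adds a small flag overhead.
      return Math.ceil(text.length / 4) + BASE_OVERHEAD + (block.is_error ? 1 : 0);
    }
  }
}
```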
2026-04-29 15:49:46 +08:00
emsanakhchivan
91f93ce615 feat: SDK Foundation — Type Declarations, Errors, and Utilities (#866)
* feat(sdk): add SDK foundation — type declarations, errors, and utilities

Adds standalone SDK building blocks with no SDK source dependencies:
- sdk.d.ts: ambient type declarations for SDK bundle
- coreSchemas.ts + coreTypes.generated.ts: Zod schemas and generated types
- errors.ts: SDK-specific error classes
- validation.ts: input validation utilities
- messageFilters.ts: extracted message filter logic
- handlePromptSubmit.ts: imports from messageFilters
- 16 generated-types tests

* fix(sdk): narrow assertFunction type from broad Function to callable signature

Code review finding: assertFunction used `asserts value is Function` which
accepts any function-like value without narrowing. Changed to
`(...args: any[]) => any` for better type safety.

* fix(sdk): update sdk.d.ts header — manually maintained, not generated

Reviewer noted the header said "Generated from index.ts" but no generator
produces this file. Updated to "Manually maintained — keep in sync with
index.ts". Drift detection added in validate-externals.ts (PR 3).

* fix(sdk): align sdk.d.ts types with canonical coreTypes.generated.ts

Tighten SDK public type contract to resolve reviewer blockers:

- PermissionResult: unknown[] → precise 6-shape discriminated union
  (addRules/replaceRules/removeRules/setMode/addDirectories/removeDirectories)
- SDKSessionInfo: snake_case → camelCase (sessionId, lastModified, etc.)
- ForkSessionResult: session_id → sessionId
- SDKPermissionRequestMessage: uuid + session_id now required
- SDKPermissionTimeoutMessage: added uuid + session_id
- SessionMessage: parent_uuid → parentUuid
- SDKMessage/SDKUserMessage/SDKResultMessage: replaced loose inline
  definitions with re-exports from coreTypes.generated.ts

---------

Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
2026-04-29 14:53:01 +08:00
KRATOS
5943c5c269 fix(input): strip leading ! when entering bash mode (#947)
The PromptInput onChange handler had two branches for entering bash
mode: a single-char path that just toggled the mode and a multi-char
paste path that also stripped the leading `!` from the buffer. The
single-char path returned without stripping, so typing a bare `!` into
empty input switched modes but left the literal `!` visible.

Consolidated both paths through a new pure helper `detectModeEntry`
that returns the new mode plus the stripped buffer value, so there is
no longer a branch where the mode character can leak into the buffer.

Fixes #662
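The consolidated helper described above might look like this sketch. The signature is an assumption (the real detectModeEntry in PromptInput may carry more modes and state); it shows only the single-keystroke and paste paths collapsing into one branch.

```typescript
// Hypothetical sketch of the pure mode-entry helper.
type InputMode = 'prompt' | 'bash';

function detectModeEntry(prev: string, next: string): { mode: InputMode; value: string } {
  // Entering bash mode: the buffer was empty and the new value starts with
  // '!', whether it arrived as a single keystroke or a multi-char paste.
  // Both paths strip the mode char, so it can never leak into the buffer.
  if (prev === '' && next.startsWith('!')) {
    return { mode: 'bash', value: next.slice(1) };
  }
  return { mode: 'prompt', value: next };
}
```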
2026-04-29 10:29:59 +08:00
Kevin Codex
c0b5535d86 docs: add Atomic Chat partner (#942)
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-28 23:35:25 +08:00
Vasanth T
d321c8fc6a fix: avoid legacy Windows PasswordVault reads by default (#941)
* fix: avoid legacy Windows PasswordVault reads by default

* fix: isolate model capability override cache

---------

Co-authored-by: OpenClaude Worker 3 <worker-3@openclaude.local>
2026-04-28 23:30:48 +08:00
KRATOS
8106880855 fix(typecheck): make bun run typecheck actionable on main (#473) (#938)
Issue #473 reported that `bun run typecheck` fails on main with ~4400
errors due to repo-foundation drift, masking branch-specific
regressions. Per kevincodex1's guidance ("lets narrow the typecheck
scope for now and then we expand step by step") this PR addresses the
foundational root causes and brings the error count down 60% so the
gate is actionable for branch reviews.

Changes:

- tsconfig.json: bump target to ES2023 + add lib ["ES2023", "DOM"]
  so Array.findLast / findLastIndex resolve (kills 41 TS2550 errors).
  Add `noEmit: true` for typecheck-only mode and
  `allowImportingTsExtensions: true` (kills 40 TS5097 errors). Set
  `noImplicitAny: false` because cleaning up TSX-component implicit
  any is explicitly out of scope per the issue.

- src/global.d.ts: ambient declaration for the build-time MACRO
  global injected by scripts/build.ts via Bun's `define` option
  (kills 9 TS2304 'Cannot find name MACRO' errors).

- src/types/{message,utils,tools}.ts: stubs for the highest-impact
  missing modules from the partial source snapshot (~21 importers
  for message alone). Document the snapshot caveat at the top of each
  stub and reference issue #473 so future readers know they're
  placeholders.

- src/entrypoints/sdk/controlTypes.ts and src/constants/querySource.ts:
  similar one-file stubs unblocking 18 + 19 importers respectively.

- src/entrypoints/agentSdkTypes.ts: append `any`-typed aliases for
  ~70 SDK names that callers expect on the public surface but that
  live in stubbed sub-files (PermissionMode, SDKCompactBoundaryMessage,
  HookEvent, ModelUsage, ModelInfo, etc. — exactly the list from
  auriti's bug-report enumeration).

Verified locally on Linux:
- baseline `bunx tsc --noEmit` on stashed main: 4434 errors
- with PR applied:                              1782 errors (60% drop)
- `bun run build`:                              passes (v0.7.0)
- `bun test`:                                   1632 pass; the 4
   remaining failures (StartupScreen, thinking) reproduce on main
   and are unrelated.
- TS2550 (lib): 41 → 0
- TS5097 (.ts imports): 40 → 0
- TS2304 'MACRO': 9 → 0
- TS2307 missing modules: 587 → 325

Remaining errors are localized to specific stubbed modules and can
be addressed in smaller follow-up issues, matching the issue's
"Definition of done" criterion.
2026-04-28 17:44:26 +08:00
Kevin Codex
4c93a9f9f1 feat: add Opus 4.7 as default model and fix alias/thinking bugs (#928)
- Add CLAUDE_OPUS_4_7_CONFIG and register it in ALL_MODEL_CONFIGS
- Set Opus 4.7 as default for firstParty in getDefaultOpusModel() (3P stays on 4.6 until rollout)
- Fix sonnet[1m] → 404 bug: query.ts was passing raw alias to API without resolving via parseUserSpecifiedModel
- Add opus-4-7 to modelSupportsAdaptiveThinking so it uses { type: 'adaptive' } not { type: 'enabled' }
- Fix duplicate opus47 case and wrong opus46[1m] fallthrough in getPublicModelDisplayName switch
- Update user-facing display strings (picker labels, plan mode description) to reference Opus 4.7
- Add 3P fallback suggestion chain for opus-4-7 → opus-4-6 in validateModel

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-28 17:31:06 +08:00
viudes
6ea3eb6483 feat(api): deterministic request-body serialization via stableStringify (#882)
* feat(api): deterministic request-body serialization via stableStringify

Add `stableStringify` helper that emits JSON with object keys sorted
lexicographically at every depth (arrays preserved). Adopt it in the
OpenAI-compatible shim and the Codex Responses-API shim for the outgoing
request body.

WHY: OpenAI / Kimi / DeepSeek / Codex use implicit prefix caching keyed
on exact request bytes. Spurious insertion-order differences in
spread-merged body objects otherwise invalidate the cache on every turn.
Also a pre-requisite for Anthropic `cache_control` breakpoint hits.

Byte-equivalent to `JSON.stringify` when keys already happen to be in
lexical insertion order, so strictly additive across providers.

* fix(api): preserve circular-ref TypeError in stableStringify + cover GitHub fallback

Replace two-pass sortingReplacer approach with a single-pass deepSort that
tracks ancestor objects via WeakSet, throwing TypeError on cycles (same
contract as native JSON.stringify) and correctly handling DAGs via
try/finally cleanup. Switch the GitHub Copilot /responses fallback in
openaiShim.ts from JSON.stringify to stableStringify so that path is also
byte-stable for prefix caching.

Regression coverage added: top-level cycle, deep nested cycle, DAG safety.

* fix(api): align stableStringify with native JSON.stringify pre-processing

Replicate native JSON.stringify pre-processing inside deepSort so
serialization output matches native behavior beyond key ordering:

- invoke toJSON(key) when present (Date, URL, user classes); pass ''
  at top-level, property name for nested values, index string for
  array elements
- unbox Number/String/Boolean wrappers via valueOf() so new Boolean(false)
  doesn't get truthy-coerced
- run cycle detection on the post-toJSON value so a toJSON returning
  an ancestor still throws TypeError; DAGs continue to not throw
- drop properties whose toJSON returns undefined, matching native

Add focused stableStringify.test.ts (21 cases) asserting equality with
JSON.stringify across toJSON paths, wrapper unboxing, cycle/DAG handling,
and sortKeysDeep parity.
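The core of the approach above can be sketched as a single-pass deep sort. This sketch covers key ordering and the cycle/DAG contract only; the PR's full version additionally replicates native toJSON invocation and wrapper unboxing, which are omitted here.

```typescript
// Sketch: deterministic JSON with keys sorted at every depth, arrays
// preserved, and the native circular-structure TypeError contract.
function stableStringify(value: unknown): string {
  const ancestors = new WeakSet<object>();

  function deepSort(v: unknown): unknown {
    if (v === null || typeof v !== 'object') return v;
    if (ancestors.has(v as object)) {
      // Same contract as native JSON.stringify on cycles.
      throw new TypeError('Converting circular structure to JSON');
    }
    ancestors.add(v as object);
    try {
      if (Array.isArray(v)) return v.map(deepSort); // arrays keep order
      const sorted: Record<string, unknown> = {};
      for (const key of Object.keys(v).sort()) {
        sorted[key] = deepSort((v as Record<string, unknown>)[key]);
      }
      return sorted;
    } finally {
      // Removing the ancestor on exit lets DAGs (shared, non-cyclic
      // references) serialize instead of false-positiving as cycles.
      ancestors.delete(v as object);
    }
  }

  return JSON.stringify(deepSort(value));
}
```

Because only key order changes, output is byte-identical to `JSON.stringify` whenever keys already happen to be in lexical order, which is what makes the change strictly additive for prefix caching.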
2026-04-27 23:33:15 +08:00
vrdons
f699c1f2fc fix routing path (#923) 2026-04-27 20:05:17 +08:00
github-actions[bot]
52b4c5c2ff chore(main): release 0.7.0 (#817)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-27 11:47:52 +08:00
FluxLuFFy
c6c5f0608c fix: bugs (#885)
* fix: error output truncation (10KB→40KB) and MCP tool bugs

- toolErrors.ts: increase error truncation limit from 10KB to 40KB
  Shell output can be up to 30KB, so 10KB was silently cutting off
  error logs from systemctl, apt, python, etc.

- MCPTool: cache compiled AJV validators (was recompiling every call)
- MCPTool: fix validateInput error message showing [object Object]
- MCPTool: null-guard mapToolResultToToolResultBlockParam
- MCPTool: explicit null check in isResultTruncated
- ReadMcpResourceTool: null-guard mapToolResultToToolResultBlockParam

Tests (84 passing):
- src/utils/toolErrors.test.ts (13 tests)
- src/tools/BashTool/commandSemantics.test.ts (24 tests)
- src/tools/BashTool/utils.test.ts (32 tests)
- src/tools/MCPTool/MCPTool.test.ts (15 tests)

* fix: address review blockers from PR #885

Blocker 1: Fix abort path in callMCPTool
- Previously returned { content: undefined } on AbortError, which masked
  the cancellation and caused mapToolResultToToolResultBlockParam to send
  empty content to the API as if it were a successful result.
- Now converts abort errors to our AbortError class and re-throws, so the
  tool execution framework handles it properly (skips logging, creates
  is_error: true result with [Request interrupted by user for tool use]).

Blocker 2: Fix memory leak in AJV validator cache
- Changed compiledValidatorCache from Map to WeakMap so schemas from
  disconnected/refreshed MCP tools can be garbage collected instead of
  accumulating strong references indefinitely.

Also: null guards now return descriptive indicators instead of empty
strings, making it clear when content is unexpectedly missing.

---------

Co-authored-by: FluxLuFFy <FluxLuFFy@users.noreply.github.com>
Co-authored-by: Fix Bot <fix@openclaw.ai>
2026-04-26 23:11:19 +08:00
Kevin Codex
46a9d3eec4 chore: rebrand user-facing copy to OpenClaude (#851)
* chore: rebrand user-facing copy to OpenClaude

Replace lingering Claude Code branding in CLI, tips, and runtime UI with OpenClaude/openclaude, including the startup tip Gitlawb mention.

Co-Authored-By: Claude GPT-5.4 <noreply@openclaude.dev>

* chore: address branding-sweep review feedback

- PermissionRequest.tsx: rebrand the two remaining "Claude needs your
  approval/permission" notifications to OpenClaude (review-artifact and
  generic tool permission paths).
- main.tsx, teleport.tsx, session.tsx, WebFetchTool/utils.ts,
  skills/bundled/{debug,updateConfig}.ts: replace leftover `claude --…`
  CLI hints and "Claude Code" labels missed by the original sweep.
- main.tsx: drop the inline gitlawb.com marketing copy from the
  stale-prompt tip; keep it a pure rebrand.
- auth.ts: finish the half-rename so both `claude setup-token` and
  `claude auth login` references in the same error block now read
  `openclaude …`.
- mcp/client.ts: keep `name: 'claude-code'` for MCP server allowlist
  compatibility (now explicit via comment) and replace the
  "Anthropic's agentic coding tool" description with an OpenClaude one.
- MCPSettings.tsx: point the empty-server-list hint at
  https://github.com/Gitlawb/openclaude instead of code.claude.com.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: replace help link with OpenClaude repo URL

Replace https://code.claude.com/docs/en/overview with
https://github.com/Gitlawb/openclaude in the help screen.

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

---------

Co-authored-by: Claude GPT-5.4 <noreply@openclaude.dev>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-26 22:14:36 +08:00
Kevin Codex
2586a9cddb feat: add xAI as official provider (#865)
* feat: add xAI as official provider

- Add xAI preset to ProviderManager (alphabetical order)
- Add xAI provider detection via XAI_API_KEY
- Add xAI startup screen heuristic (x.ai base URL or grok model)
- Add xAI status display properties
- Add grok-4 and grok-3 context windows
- Add xAI model fallbacks across all tiers
- Fix JSDoc priority order in providerAutoDetect

Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>

* fix(xai): persist relaunch classification for xAI profiles

Addresses reviewer feedback on feat/xai-official-provider:
- isProcessEnvAlignedWithProfile now validates XAI_API_KEY for x.ai
  base URLs, mirroring the Bankr pattern. Without this, relaunch
  skips re-applying the profile, XAI_API_KEY stays unset, and
  getAPIProvider() falls back to 'openai'.
- buildOpenAICompatibleStartupEnv now sets XAI_API_KEY when syncing
  active xAI profile to the legacy fallback file.
- Adds 'xai' to VALID_PROVIDERS and --provider xai CLI flag support.
- Adds xAI detection to providerDiscovery label heuristics.
- Adds 'xai' to legacy ProviderProfile type/isProviderProfile guard.
- Adds targeted tests for relaunch alignment, flag application, and
  discovery labeling.

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@openclaude.dev>
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-26 21:26:44 +08:00
Rayan Alkhelaiwi
d45628c413 fix(startup): show --model flag override on startup screen (#898)
The startup screen was only reading model from env vars and settings,
ignoring the --model CLI flag since it's parsed by Commander.js after
the banner prints. Now eagerly parses --model from argv before rendering
so the displayed model matches what the session will actually use.
2026-04-26 20:24:44 +08:00
TechBrewBoss
6dedffe5ff Add OpenAI responses mode and custom auth headers (#906)
* Add OpenAI profile responses and custom auth header support

* Fix knowledge graph config reference in query loop

* Address OpenAI profile review edge cases

* Remove unused getGlobalConfig import

Delete an unused import of getGlobalConfig from src/query.ts. This cleans up dead code and avoids unused-import lint warnings; no functional behavior changes.

* Address follow-up OpenAI profile review comments

* Refine OpenAI responses auth review fixes

* Fix custom auth header default scheme
2026-04-26 20:24:03 +08:00
emsanakhchivan
a3e728a114 fix(agent): provider-aware fallback for haiku/sonnet aliases (#908)
* fix(agent): provider-aware fallback for haiku/sonnet aliases

Explore agent fails on custom providers (Z.AI GLM, Alibaba Anthropic-compatible,
local OpenAI endpoints) because 'haiku' alias resolves to a non-existent model.

Changes:
- Add isClaudeNativeProvider check (Bedrock, Vertex, Foundry, official Anthropic)
- For non-Claude-native providers, haiku/sonnet aliases inherit parent model
- Add 8 tests for provider-aware fallback behavior

Fixes Explore agent "model not found" errors on custom Anthropic-compatible APIs.
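The provider-aware fallback described above might reduce to a sketch like this. The provider identifiers and function name mirror the commit message but are assumptions; the real isClaudeNativeProvider check lives in src and is not reproduced here.

```typescript
// Hypothetical sketch of provider-aware alias resolution for subagents.
const CLAUDE_NATIVE_PROVIDERS = new Set(['anthropic', 'bedrock', 'vertex', 'foundry']);

function resolveAgentModel(alias: string, provider: string, parentModel: string): string {
  const isClaudeAlias = alias === 'haiku' || alias === 'sonnet';
  if (isClaudeAlias && !CLAUDE_NATIVE_PROVIDERS.has(provider)) {
    // Custom providers (Z.AI GLM, local OpenAI endpoints, ...) have no
    // 'haiku'/'sonnet' models, so inherit the parent session's model.
    return parentModel;
  }
  return alias;
}
```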

* test(agent): use Bun mock.module() for provider tests

Replace env manipulation with proper Bun mock.module() to reliably
mock getAPIProvider() and isFirstPartyAnthropicBaseUrl() functions.
This ensures tests work correctly on CI where module caching caused
false negatives.

---------

Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
2026-04-26 20:08:55 +08:00
Kevin Codex
818689b2ee fix(query): restore system prompt structure and add missing config import (#907)
- import getGlobalConfig — six call sites referenced it without an import;
  five short-circuited via feature() gates, but src/query.ts:1896 always
  ran and crashed every queryLoop iteration with "getGlobalConfig is not
  defined" (e.g. Explore subagent: "Agent failed: getGlobalConfig is not
  defined").
- stop coercing SystemPrompt (string[]) into a template-string before
  appendSystemContext — that made [...systemPrompt] spread the string
  character-by-character, replacing the structured prompt with thousands
  of one-char system blocks. Append arcSummary as its own array element
  instead.
- gate the finalizeArcTurn call behind feature('CONVERSATION_ARC') so it
  matches the rest of the memory-PR call sites and gets dead-code-
  eliminated for users without the flag.

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-26 12:45:09 +08:00
Kevin Codex
d9ae56bc58 fix provider switch not persisting in session (#903)
* fix provider switch not persisting in session

* fix broken tests
2026-04-26 11:15:25 +08:00
Pedry
af9a3caa4d Fix file path and update placeholder key in PLAYBOOK.md (#886)
Updated file paths and placeholder key in PLAYBOOK.md.
2026-04-26 08:20:25 +08:00
chioarub
a0d657ee18 feat(zai): add Z.AI GLM Coding Plan provider preset (#896)
* feat(zai): add Z.AI GLM Coding Plan provider preset

Add dedicated Z.AI provider support for the GLM Coding Plan, enabling
use of GLM-5.1, GLM-5-Turbo, GLM-4.7, and GLM-4.5-Air models through
the OpenAI-compatible shim with proper thinking mode (reasoning_content),
max_tokens handling, and context window sizing.

* fix(zai): unify GLM max output token limits across casing variants

glm-5/glm-4.7 had conservative 16K max output while GLM-5/GLM-4.7
had 131K. Use consistent Z.AI coding plan limits for all GLM variants.

* fix(zai): restore DashScope GLM limits, enable GLM thinking support

- Restore lowercase glm-5/glm-4.7 to 16_384 max output (DashScope limits)
  while keeping Z.AI coding plan high limits on uppercase GLM-* keys only
- Add GLM model support to modelSupportsThinking() so reasoning_content
  is enabled when using GLM-5.x/GLM-4.7 models on Z.AI

* fix(zai): tighten GLM regexes, fix misleading context window comment

- Use precise regex in thinking.ts: exact GLM model matches only,
  no false positives on glm-50/glm-4, includes glm-4.5-air
- Use uppercase-only match in StartupScreen rawModel fallback so
  DashScope lowercase glm-* models aren't mislabeled as Z.AI
- Clarify context window comment: lowercase glm-5.1/glm-5-turbo/
  glm-4.5-air are Z.AI-specific aliases, not DashScope

* fix(zai): scope GLM detection to Z.AI

* improve readability of max_completion_tokens check

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
2026-04-26 08:18:59 +08:00
3kin0x
29f7579377 feat(memory): implement persistent project-level Knowledge Graph and RAG (#899)
- Shift memory from session-scope to persistent project-scope
- Add native JSON RAG with BM25-lite ranking
- Implement passive technical concept extraction (IPs, versions, frameworks)
- Orchestrate hierarchical context injection in the conversation loop
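A "BM25-lite" ranker of the kind named above could be sketched as follows. This is a generic illustration, not the PR's implementation: the function name, the token-array document shape, and the standard k1/b defaults are all assumptions.

```typescript
// Hedged sketch of BM25-style scoring over tokenized documents.
function bm25LiteScore(
  query: string[],
  doc: string[],
  docs: string[][],
  k1 = 1.2,   // term-frequency saturation (standard default, assumed)
  b = 0.75,   // length-normalization strength (standard default, assumed)
): number {
  const avgLen = docs.reduce((s, d) => s + d.length, 0) / docs.length;
  let score = 0;
  for (const term of query) {
    const tf = doc.filter(t => t === term).length;
    if (tf === 0) continue;
    const df = docs.filter(d => d.includes(term)).length;
    // +1 inside the log keeps idf non-negative for very common terms.
    const idf = Math.log((docs.length - df + 0.5) / (df + 0.5) + 1);
    score += idf * (tf * (k1 + 1)) /
      (tf + k1 * (1 - b + (b * doc.length) / avgLen));
  }
  return score;
}
```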
2026-04-26 08:17:02 +08:00
viudes
9e23c2bec4 feat(api): expose cache metrics in REPL + normalize across providers (#813)
* feat(api): expose cache metrics in REPL + /cache-stats command

* fix(api): normalize Kimi/DeepSeek/Gemini cache fields through shim layer

* test(api): cover /cache-stats rendering + fix CacheMetrics docstring drift

* fix(api): always reset cache turn counter + include date in /cache-stats rows

* refactor(api): unify shim usage builder + add cost-tracker wiring test

* fix(api): classify private-IP/self-hosted OpenAI endpoints as N/A instead of cold

* fix(api): require colon guard on IPv6 ULA prefix to avoid public-host over-match

* perf(api): ring buffer for cache history + hit rate clamp + .localhost TLD

* fix(api): null guards on formatters + document Codex Responses API shape

* fix(api): defensive start-of-turn reset + config gate fallback + env var docs

* fix(api): trust forwarded cache data on self-hosted URLs (data-driven)

* refactor(api): delegate streaming Responses usage to shared makeUsage helper
2026-04-25 12:38:25 +08:00
JATMN
9070220292 Add Kimi Code provider preset and rename Moonshot API preset (#862)
* Add Kimi Code provider preset

* fix desc.

Co-authored-by: Copilot <copilot@github.com>

* more desc. fixes.

* Fix release validation tests

---------

Co-authored-by: Copilot <copilot@github.com>
2026-04-25 12:36:54 +08:00
JATMN
26413f6d30 feat(minimax): add /usage support and fix MiniMax quota parsing (#869)
* Add MiniMax usage UI and API support

* Fix MiniMax usage parsing and refresh UI

* Refactor MiniMax usage handling
2026-04-25 12:33:22 +08:00
3kin0x
44f9cac70d Feature/memory pr (#894)
* feat: multi-turn context and conversation arc memory

PR 2E - Section 2.9, 2.10:
- Add multiTurnContext.ts with turn tracking and state preservation
- Add conversationArc.ts with goal/decision/milestone tracking
- Wire into query.ts after tool execution
- Feature-flags: MULTI_TURN_CONTEXT, CONVERSATION_ARC
- Add comprehensive tests (22 passing)

* feat(cli): add /knowledge command to manage native memory

- Add /knowledge enable <yes|no> to toggle Knowledge Graph learning
- Add /knowledge clear to reset memory
- Add persistent knowledgeGraphEnabled setting to global config
- Integrate the user setting into the query execution loop

* feat(cli): add /knowledge command (stable local-jsx version)

- Resolve conflicts between .ts and .tsx files
- Align with LocalJSXCommandCall signature
- Fix onDone and args errors

* test(cli): fix knowledge command tests by properly isolating global config

* fix(cli): make knowledge command defensive against undefined args and leaky tests

* fix(cli): correct data source for entity count and fix test isolation

* fix(cli): reinforce knowledge test by explicitly defining property on test config

* fix(cli): explicitly define property in test config to avoid undefined in CI

* fix(cli): make knowledge tests resistant to global config mocks in CI

* chore(memory): surgical improvements from architectural audit

- Fix: Implement entity deduplication in Knowledge Graph
- Fix: Ensure fact extraction from user messages in query loop
- Fix: Refine regexes for better quality learning (less noise)

---------

Co-authored-by: LifeJiggy <Bloomtonjovish@gmail.com>
2026-04-25 07:19:41 +08:00
JATMN
ff2a380723 Add DeepSeek V4 flash/pro support and DeepSeek thinking compatibility (#877)
* Add DeepSeek V4 support and thinking compatibility

* Fix DeepSeek profile persistence regression

* Align multi-model handling with openai-multi-model
2026-04-25 02:29:46 +08:00
JATMN
c4cb98a4f0 fix: normalize /provider multi-model selection and semicolon parsing (#841)
* fix provider multi-model selection

* fix provider manager multi-model save path
2026-04-25 02:28:14 +08:00
3kin0x
b5f7047358 Feature/memory pr (#889)
* feat: multi-turn context and conversation arc memory

PR 2E - Section 2.9, 2.10:
- Add multiTurnContext.ts with turn tracking and state preservation
- Add conversationArc.ts with goal/decision/milestone tracking
- Wire into query.ts after tool execution
- Feature-flags: MULTI_TURN_CONTEXT, CONVERSATION_ARC
- Add comprehensive tests (22 passing)

* feat(memory): resolve review blockers and integrate native Knowledge Graph into Conversation Arcs

- Fix: Extract text from production block arrays in phase detector
- Fix: Ensure proper turn segmentation in query loop
- Fix: Respect options in multi-turn context tracker
- Feat: Add native Knowledge Graph (Entities/Relations) to ConversationArc architecture
- Test: Comprehensive test suite for all fixes and new graph features

* test(perf): add automated performance benchmarks for Knowledge Graph extraction and summary

---------

Co-authored-by: LifeJiggy <Bloomtonjovish@gmail.com>
2026-04-25 02:26:02 +08:00
Kevin Codex
64b1014b9a Feat/bankr provider (#888)
* feat(provider): add Bankr LLM Gateway support

Add Bankr as an OpenAI-compatible provider preset with dedicated env vars:
- BNKR_API_KEY, BANKR_BASE_URL, BANKR_MODEL
- Uses X-API-Key header instead of Authorization Bearer
- Base URL: https://llm.bankr.bot/v1
- Default model: claude-opus-4.6

Changes:
- Add 'bankr' to VALID_PROVIDERS and provider flag handling
- Add buildBankrProfileEnv() with env key registration
- Add Bankr detection in startup screen and provider discovery
- Map Bankr env vars to OpenAI-compatible vars in shim
- Add Bankr preset to ProviderManager (alphabetical order)
- Update PRESET_ORDER test to include Bankr

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

* fixup(provider): address Bankr PR review feedback

1. Map BNKR_API_KEY → OPENAI_API_KEY in providerFlag.ts so
   --provider bankr works with BNKR_API_KEY in non-interactive startup.

2. Remove unconditional BANKR_MODEL read from model.ts; it maps to
   OPENAI_MODEL via providerFlag.ts and openaiShim.ts, preventing
   cross-provider leakage.

3. Use X-API-Key for Bankr model discovery in openaiModelDiscovery.ts
   and providerDiscovery.ts, matching chat request auth.

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

---------

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-24 23:03:45 +08:00
TechBrewBoss
5a21d05741 Persist active provider profile across restarts (#833)
* Persist active provider profile across restarts

* Clear stale startup provider overrides

* Fix provider profile restart fallback

* Fix provider profile restart fallback

* Omit empty OpenAI API key from startup env

* Fix startup override settings typing
2026-04-24 19:36:21 +08:00
Kevin Codex
038f715b7a feat(model): add GPT-5.5 support for Codex provider (#880)
- Bump Codex provider defaults from gpt-5.4 to gpt-5.5 across all ModelConfigs
- Update codexplan alias to resolve to gpt-5.5
- Add gpt-5.5 and gpt-5.5-mini to model picker with reasoning effort mappings
- Add context window and max output token specs for gpt-5.5 family
- Add gpt-5.5 entries to COPILOT_MODELS registry
- Keep official OpenAI API preset at gpt-5.4 (API availability pending)
- Update codexShim tests to expect gpt-5.5 from codexplan alias

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-24 19:06:36 +08:00
Kevin Codex
b694ccfff1 Add sponsors section to README (#874) 2026-04-24 11:47:55 +08:00
KRATOS
dcbe29558a fix(mcp): disable MCP_SKILLS feature flag — source not mirrored (#872)
Closes #856.

MCP servers that expose resources (e.g. RepoPrompt) failed to load
their tools in the open build with:

    Error fetching tools/commands/resources:
    fetchMcpSkillsForClient is not a function

Root cause: scripts/build.ts set MCP_SKILLS: true, which made
feature('MCP_SKILLS') evaluate to true at build time. The guards
around the dynamic skill discovery path therefore stayed live. The
underlying source file src/skills/mcpSkills.ts is not mirrored into
the open tree, so the bundler fell back to its generic missing-module
stub — which only exports `default` for require()-style imports, not
the named `fetchMcpSkillsForClient` binding. At runtime the require
returned an object without that property, and calling it threw.

`openclaude mcp doctor` reported RepoPrompt as healthy because doctor
does not exercise the skills-fetch path.

Fix: flip MCP_SKILLS to false and move it into the "Disabled: missing
source" group. With the flag off, every `if (feature('MCP_SKILLS'))`
guard becomes a no-op at build time, the require() branch is dead
code, and MCP servers with resources load normally via the existing
`Promise.resolve([])` fallbacks already present at each call site.

Also adds scripts/feature-flags-source-guard.test.ts to fail fast if
MCP_SKILLS (or any future flag in the same category) is re-enabled
without the corresponding source file being mirrored first.
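
The guard pattern this relies on can be sketched as below; `feature`, `FLAGS`, and `fetchSkills` are hypothetical stand-ins for the real build machinery, shown only to illustrate why a false flag makes the dynamic branch dead code:

```typescript
// Illustrative sketch: with the build-time flag false, the dynamic
// skill-discovery branch is never taken and every call site gets the
// empty-list fallback instead of a throwing missing-module stub.
const FLAGS: Record<string, boolean> = { MCP_SKILLS: false };
const feature = (name: string): boolean => FLAGS[name] === true;

async function fetchSkills(loadSkills?: () => Promise<string[]>): Promise<string[]> {
  if (feature("MCP_SKILLS") && loadSkills) {
    return loadSkills(); // dead code while the flag is off
  }
  return Promise.resolve([]); // fallback present at each call site
}
```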

Verification:
  - Test fails on main, passes with this fix
  - `bun run build` produces a bundle with no
    `missing-module-stub:../../skills/mcpSkills.js` reference
  - Full `bun test` — 1222 pass / 12 fail (same pre-existing 12 as
    main; new test adds the +1 pass)
2026-04-24 11:35:59 +08:00
KRATOS
a4c6757023 fix(shell): recover when CWD path was replaced by a non-directory (#871)
* fix(shell): recover when CWD path was replaced by a non-directory

Closes #844.

When the session's cached working directory is renamed on disk and
a file is subsequently created at the old path (e.g. `mv orig renamed
&& touch orig`), every Bash tool invocation failed with
`ENOTDIR: not a directory, posix_spawn '/usr/bin/zsh'` (exit 126),
and `!`-prefixed commands silently failed. No recovery was possible
without restarting the session.

Root cause: the pre-spawn guard in `src/utils/Shell.ts:exec()` used
`realpath(cwd)` to detect a missing CWD. `realpath()` succeeds on
any existing path — file or directory — so a path that was replaced
with a regular file slipped past the check. spawn() was then called
with `cwd` pointing at a non-directory and failed with ENOTDIR.

Fix: replace `realpath()` with `stat().isDirectory()` for both the
primary CWD check and the `getOriginalCwd()` fallback check. When
the cached CWD is no longer a directory, fall back to the original
CWD (as before) and update state so subsequent tools recover
transparently.
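
A minimal sketch of the check difference, assuming hypothetical helper names (`isUsableCwd`, `resolveCwd`) rather than the actual Shell.ts internals:

```typescript
import { promises as fs } from "node:fs";

// realpath() succeeds on any existing path, file or directory, so a
// regular file at the old CWD slips past it. stat().isDirectory()
// catches that case. Illustrative sketch, not the real Shell.ts code.
async function isUsableCwd(cwd: string): Promise<boolean> {
  try {
    const st = await fs.stat(cwd);
    return st.isDirectory(); // false when the path was replaced by a file
  } catch {
    return false; // path no longer exists at all
  }
}

async function resolveCwd(cached: string, original: string): Promise<string> {
  // Fall back to the original CWD when the cached one is unusable.
  return (await isUsableCwd(cached)) ? cached : original;
}
```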

Verification:
  - Repro: `mkdir -p /tmp/x/orig && mv /tmp/x/orig /tmp/x/renamed
    && touch /tmp/x/orig`, then exec with stale cwd=/tmp/x/orig
  - Before: exit 126, stderr "ENOTDIR: not a directory, posix_spawn"
  - After:  exit 0, cwd transparently recovered to originalCwd
  - `bun test` — no new regressions (pre-existing model/provider
    test failures are unrelated and present on main)

* fix(shell): drop now-unused realpath import
2026-04-24 11:34:08 +08:00
KRATOS
6e58b81937 fix(update): show real package version and give actionable guidance (#870)
The `openclaude update` / `openclaude upgrade` command printed
`Current version: 99.0.0` and, in the development-build branch, exited
with only `Warning: Cannot update development build` (closes #852).

Root cause: `MACRO.VERSION` is hardcoded to `'99.0.0'` in
`scripts/build.ts` as an internal compatibility sentinel so OpenClaude
passes upstream minimum-version guards. The real package version is
exposed separately as `MACRO.DISPLAY_VERSION`. `update.ts` was using
`MACRO.VERSION` for both the version shown to the user and for every
`latestVersion` comparison, which meant:

- Users always saw `99.0.0` as their "current version".
- `99.0.0 >= <any real npm version>`, so the "up to date" and
  "update available" checks could never fire correctly.

Fix (scoped to `src/cli/update.ts`):

- Use `MACRO.DISPLAY_VERSION` for all user-facing version strings and
  version comparisons.
- Replace the dead-end `Warning: Cannot update development build`
  (which exited 1 with no guidance) with actionable instructions for
  both source builds (`git pull && bun install && bun run build`) and
  npm installs (`npm install -g @gitlawb/openclaude@latest`).
- Extend the existing third-party-provider branch to also show the
  current version and the npm reinstall command, so users who
  installed via npm aren't told only to rebuild from source.
2026-04-24 11:33:03 +08:00
224 changed files with 15980 additions and 766 deletions

View File

@@ -145,9 +145,27 @@ ANTHROPIC_API_KEY=sk-ant-your-key-here
 # CLAUDE_CODE_USE_OPENAI=1
 # OPENAI_API_KEY=sk-your-key-here
 # OPENAI_MODEL=gpt-4o
+# For DeepSeek, set:
+# OPENAI_BASE_URL=https://api.deepseek.com/v1
+# OPENAI_MODEL=deepseek-v4-flash
+# Optional: OPENAI_MODEL=deepseek-v4-pro
+# Legacy aliases also work: deepseek-chat and deepseek-reasoner
+# For Z.AI GLM Coding Plan, set:
+# OPENAI_BASE_URL=https://api.z.ai/api/coding/paas/v4
+# OPENAI_MODEL=GLM-5.1
+# Optional: OPENAI_MODEL=GLM-5-Turbo, GLM-4.7, or GLM-4.5-Air
 # Use a custom OpenAI-compatible endpoint (optional — defaults to api.openai.com)
 # OPENAI_BASE_URL=https://api.openai.com/v1
+# Choose the OpenAI-compatible API surface (optional — defaults to chat_completions)
+# Supported: chat_completions, responses
+# OPENAI_API_FORMAT=chat_completions
+# Choose a custom auth header for OpenAI-compatible providers (optional).
+# Authorization defaults to Bearer; custom headers default to the raw API key.
+# Set OPENAI_AUTH_HEADER_VALUE when the header value differs from OPENAI_API_KEY.
+# OPENAI_AUTH_HEADER=api-key
+# OPENAI_AUTH_SCHEME=raw
+# OPENAI_AUTH_HEADER_VALUE=your-header-value-here
 # Fallback context window size (tokens) when the model is not found in the
 # built-in table (default: 128000). Increase this for models with larger
@@ -294,6 +312,20 @@ ANTHROPIC_API_KEY=sk-ant-your-key-here
 # Useful for users who want full transparency over what the model sees
 # OPENCLAUDE_DISABLE_TOOL_REMINDERS=1
+# Log structured per-request token usage (including cache metrics) to stderr.
+# Useful for auditing cache hit rate / debugging cost spikes outside the REPL.
+# Any truthy value enables it ("verbose", "1", "true").
+#
+# Complements (does NOT replace) CLAUDE_CODE_ENABLE_TOKEN_USAGE_ATTACHMENT —
+# they serve different audiences:
+# - OPENCLAUDE_LOG_TOKEN_USAGE is user-facing: one JSON line per API
+#   request on stderr, intended for humans inspecting cost/caching.
+# - CLAUDE_CODE_ENABLE_TOKEN_USAGE_ATTACHMENT is model-facing: injects
+#   a context-usage attachment INTO the prompt so the model can reason
+#   about its own remaining context. Does not touch stderr.
+# Turn on whichever audience you're debugging; both can run together.
+# OPENCLAUDE_LOG_TOKEN_USAGE=verbose
 # Custom timeout for API requests in milliseconds (default: varies)
 # API_TIMEOUT_MS=60000

View File

@@ -1,3 +1,3 @@
 {
-".": "0.6.0"
+".": "0.8.0"
 }

View File

@@ -1,5 +1,58 @@
 # Changelog
+## [0.8.0](https://github.com/Gitlawb/openclaude/compare/v0.7.0...v0.8.0) (2026-04-29)
+### Features
+* add Opus 4.7 as default model and fix alias/thinking bugs ([#928](https://github.com/Gitlawb/openclaude/issues/928)) ([4c93a9f](https://github.com/Gitlawb/openclaude/commit/4c93a9f9f168217d4bdd53d103337e43f28be074))
+* add streaming token counter ([#797](https://github.com/Gitlawb/openclaude/issues/797)) ([0ca4333](https://github.com/Gitlawb/openclaude/commit/0ca43335375beec6e58711b797d5b0c4bb5019b8))
+* **api:** deterministic request-body serialization via stableStringify ([#882](https://github.com/Gitlawb/openclaude/issues/882)) ([6ea3eb6](https://github.com/Gitlawb/openclaude/commit/6ea3eb64830ccfec1436bcebe2406158e14a7e81))
+* context preloading and hybrid context strategy ([#860](https://github.com/Gitlawb/openclaude/issues/860)) ([92d297e](https://github.com/Gitlawb/openclaude/commit/92d297e50efcc7225f57f0d3cb0ba989dc40d624))
+* SDK Foundation — Type Declarations, Errors, and Utilities ([#866](https://github.com/Gitlawb/openclaude/issues/866)) ([91f93ce](https://github.com/Gitlawb/openclaude/commit/91f93ce61533a9cadd1d107e09a442451c09f5db))
+### Bug Fixes
+* avoid legacy Windows PasswordVault reads by default ([#941](https://github.com/Gitlawb/openclaude/issues/941)) ([d321c8f](https://github.com/Gitlawb/openclaude/commit/d321c8fc6a0be6731c1ccfec0fca8023b1a8b67e))
+* **input:** strip leading ! when entering bash mode ([#947](https://github.com/Gitlawb/openclaude/issues/947)) ([5943c5c](https://github.com/Gitlawb/openclaude/commit/5943c5c269cdeba45879dac0d8da0082e28cc2a2)), closes [#662](https://github.com/Gitlawb/openclaude/issues/662)
+* **ripgrep:** use @vscode/ripgrep package as the builtin source ([#911](https://github.com/Gitlawb/openclaude/issues/911)) ([#932](https://github.com/Gitlawb/openclaude/issues/932)) ([ee0d930](https://github.com/Gitlawb/openclaude/commit/ee0d9300939db0c6178bfad4707a0be45f126d1f))
+* **typecheck:** make `bun run typecheck` actionable on main ([#473](https://github.com/Gitlawb/openclaude/issues/473)) ([#938](https://github.com/Gitlawb/openclaude/issues/938)) ([8106880](https://github.com/Gitlawb/openclaude/commit/8106880855ee0bb4b5bbca8827cfe97fe99558b8))
+## [0.7.0](https://github.com/Gitlawb/openclaude/compare/v0.6.0...v0.7.0) (2026-04-26)
+### Features
+* add model-specific tokenizers and compression ratio detection ([#799](https://github.com/Gitlawb/openclaude/issues/799)) ([e92e527](https://github.com/Gitlawb/openclaude/commit/e92e5274b223d935d380b1fbd234cb631ab03211))
+* add OPENCLAUDE_DISABLE_TOOL_REMINDERS env var to suppress hidden tool-output reminders ([#837](https://github.com/Gitlawb/openclaude/issues/837)) ([28de94d](https://github.com/Gitlawb/openclaude/commit/28de94df5dcd7718cb334e2e793e9472f5b291c5)), closes [#809](https://github.com/Gitlawb/openclaude/issues/809)
+* add streaming optimizer and structured request logging ([#703](https://github.com/Gitlawb/openclaude/issues/703)) ([5b9cd21](https://github.com/Gitlawb/openclaude/commit/5b9cd21e373823a77fd552d6e02f5d4b68ae06b1))
+* add xAI as official provider ([#865](https://github.com/Gitlawb/openclaude/issues/865)) ([2586a9c](https://github.com/Gitlawb/openclaude/commit/2586a9cddbd2512826bca81cb5deb3ec97f00f0f))
+* **api:** expose cache metrics in REPL + normalize across providers ([#813](https://github.com/Gitlawb/openclaude/issues/813)) ([9e23c2b](https://github.com/Gitlawb/openclaude/commit/9e23c2bec43697187762601db5b1585c9b0fb1a3))
+* implement Hook Chains runtime integration for self-healing agent mesh MVP ([#711](https://github.com/Gitlawb/openclaude/issues/711)) ([44a2c30](https://github.com/Gitlawb/openclaude/commit/44a2c30d5f9b98027e454466c680360f6b4625fc))
+* **memory:** implement persistent project-level Knowledge Graph and RAG ([#899](https://github.com/Gitlawb/openclaude/issues/899)) ([29f7579](https://github.com/Gitlawb/openclaude/commit/29f757937732be0f8cca2bc0627a27eeafc2a992))
+* **minimax:** add /usage support and fix MiniMax quota parsing ([#869](https://github.com/Gitlawb/openclaude/issues/869)) ([26413f6](https://github.com/Gitlawb/openclaude/commit/26413f6d307928a4f14c9c61c9860a28f8d81358))
+* **model:** add GPT-5.5 support for Codex provider ([#880](https://github.com/Gitlawb/openclaude/issues/880)) ([038f715](https://github.com/Gitlawb/openclaude/commit/038f715b7ab9714340bda421b73a86d8590cf531))
+* **tools:** resilient web search and fetch across all providers ([#836](https://github.com/Gitlawb/openclaude/issues/836)) ([531e3f1](https://github.com/Gitlawb/openclaude/commit/531e3f10592a73d81f26675c2479d46a3d5b55f5))
+* **zai:** add Z.AI GLM Coding Plan provider preset ([#896](https://github.com/Gitlawb/openclaude/issues/896)) ([a0d657e](https://github.com/Gitlawb/openclaude/commit/a0d657ee188f52f8a4ceaad1658c81343a32fdad))
+### Bug Fixes
+* **agent:** provider-aware fallback for haiku/sonnet aliases ([#908](https://github.com/Gitlawb/openclaude/issues/908)) ([a3e728a](https://github.com/Gitlawb/openclaude/commit/a3e728a114f6379b80daefc8abcac17a752c5f96))
+* bugs ([#885](https://github.com/Gitlawb/openclaude/issues/885)) ([c6c5f06](https://github.com/Gitlawb/openclaude/commit/c6c5f0608cf6509b412b121954547d72b3f3a411))
+* make OpenAI fallback context window configurable + support external model lookup ([#861](https://github.com/Gitlawb/openclaude/issues/861)) ([b750e9e](https://github.com/Gitlawb/openclaude/commit/b750e9e97d15926d094d435772b2d6d12e5e545c))
+* **mcp:** disable MCP_SKILLS feature flag — source not mirrored ([#872](https://github.com/Gitlawb/openclaude/issues/872)) ([dcbe295](https://github.com/Gitlawb/openclaude/commit/dcbe29558ab9c74d335b138488005a6509aa906a))
+* normalize /provider multi-model selection and semicolon parsing ([#841](https://github.com/Gitlawb/openclaude/issues/841)) ([c4cb98a](https://github.com/Gitlawb/openclaude/commit/c4cb98a4f092062da02a4728cf59fed0fc3a6d3f))
+* **openai-shim:** echo reasoning_content on assistant tool-call messages for Moonshot ([#828](https://github.com/Gitlawb/openclaude/issues/828)) ([67de6bd](https://github.com/Gitlawb/openclaude/commit/67de6bd2cffc3381f0f28fd3ffce043970611667))
+* **query:** restore system prompt structure and add missing config import ([#907](https://github.com/Gitlawb/openclaude/issues/907)) ([818689b](https://github.com/Gitlawb/openclaude/commit/818689b2ee71cb6966cb4dc5a5ebd90fd22b0fcb))
+* **shell:** recover when CWD path was replaced by a non-directory ([#871](https://github.com/Gitlawb/openclaude/issues/871)) ([a4c6757](https://github.com/Gitlawb/openclaude/commit/a4c67570238794317d049a225396672b465fdbfc))
+* **startup:** show --model flag override on startup screen ([#898](https://github.com/Gitlawb/openclaude/issues/898)) ([d45628c](https://github.com/Gitlawb/openclaude/commit/d45628c41300b83b466e6a97983099615a50e7d7))
+* **startup:** url authoritative over model name in banner provider detect ([#864](https://github.com/Gitlawb/openclaude/issues/864)) ([e346b8d](https://github.com/Gitlawb/openclaude/commit/e346b8d5ec2d58a4e8db337918d52d844ee52766)), closes [#855](https://github.com/Gitlawb/openclaude/issues/855)
+* surface actionable error when DuckDuckGo web search is rate-limited ([#834](https://github.com/Gitlawb/openclaude/issues/834)) ([3c4d843](https://github.com/Gitlawb/openclaude/commit/3c4d8435c42e1ee04f9defd31c4c589017f524c5))
+* **test:** add missing teammate exports to hookChains integration mock ([#840](https://github.com/Gitlawb/openclaude/issues/840)) ([23e8cfb](https://github.com/Gitlawb/openclaude/commit/23e8cfbd5b22179684276bef4131e26b830ce69c)), closes [#839](https://github.com/Gitlawb/openclaude/issues/839)
+* **update:** show real package version and give actionable guidance ([#870](https://github.com/Gitlawb/openclaude/issues/870)) ([6e58b81](https://github.com/Gitlawb/openclaude/commit/6e58b819370128b923dda4fcc774bb556f4b951a))
 ## [0.6.0](https://github.com/Gitlawb/openclaude/compare/v0.5.2...v0.6.0) (2026-04-22)

View File

@@ -132,7 +132,7 @@ Cause:
 Fix:
 ```powershell
-cd C:\Users\Lucas Pedry\Documents\openclaude\openclaude
+cd <PATH>
 bun run dev:profile
 ```
@@ -189,7 +189,7 @@ Or pick a local Ollama profile automatically by goal:
 bun run profile:init -- --provider ollama --goal balanced
 ```
-## 6.5 Placeholder key (`SUA_CHAVE`) error
+## 6.5 Placeholder key (`YOUR_KEY`) error
 Cause:
View File

@@ -13,7 +13,31 @@ Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex OAuth, Codex, Ollama, A
 OpenClaude is also mirrored to GitLawb:
 [gitlawb.com/node/repos/z6MkqDnb/openclaude](https://gitlawb.com/node/repos/z6MkqDnb/openclaude)
-[Quick Start](#quick-start) | [Setup Guides](#setup-guides) | [Providers](#supported-providers) | [Source Build](#source-build-and-local-development) | [VS Code Extension](#vs-code-extension) | [Community](#community)
+[Quick Start](#quick-start) | [Setup Guides](#setup-guides) | [Providers](#supported-providers) | [Source Build](#source-build-and-local-development) | [VS Code Extension](#vs-code-extension) | [Sponsors](#sponsors) | [Community](#community)
+## Sponsors
+<p align="center">
+<a href="https://gitlawb.com">
+<img src="https://gitlawb.com/logo.png" alt="GitLawb logo" width="96">
+</a>
+&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://bankr.bot">
+<img src="https://bankr.bot/favicon.svg" alt="Bankr.bot logo" width="96">
+</a>
+&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://atomic.chat/">
+<img src="docs/assets/atomic-chat-logo.png" alt="Atomic Chat logo" width="96">
+</a>
+</p>
+<p align="center">
+<a href="https://gitlawb.com"><strong>GitLawb</strong></a>
+&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://bankr.bot"><strong>Bankr.bot</strong></a>
+&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://atomic.chat/"><strong>Atomic Chat</strong></a>
+</p>
 ## Star History
@@ -152,12 +176,12 @@ For best results, use models with strong tool/function calling support.
 OpenClaude can route different agents to different models through settings-based routing. This is useful for cost optimization or splitting work by model strength.
-Add to `~/.claude/settings.json`:
+Add to `~/.openclaude.json`:
 ```json
 {
 "agentModels": {
-"deepseek-chat": {
+"deepseek-v4-flash": {
 "base_url": "https://api.deepseek.com/v1",
 "api_key": "sk-your-key"
 },
@@ -167,10 +191,10 @@ Add to `~/.claude/settings.json`:
 }
 },
 "agentRouting": {
-"Explore": "deepseek-chat",
+"Explore": "deepseek-v4-flash",
 "Plan": "gpt-4o",
 "general-purpose": "gpt-4o",
-"frontend-dev": "deepseek-chat",
+"frontend-dev": "deepseek-v4-flash",
 "default": "gpt-4o"
 }
 }
View File

@@ -28,6 +28,7 @@
 "@opentelemetry/sdk-trace-base": "2.6.1",
 "@opentelemetry/sdk-trace-node": "2.6.1",
 "@opentelemetry/semantic-conventions": "1.40.0",
+"@vscode/ripgrep": "^1.17.1",
 "ajv": "8.18.0",
 "auto-bind": "5.0.1",
 "axios": "1.15.0",
@@ -461,6 +462,8 @@
 "@types/react": ["@types/react@19.2.14", "", { "dependencies": { "csstype": "^3.2.2" } }, "sha512-ilcTH/UniCkMdtexkoCN0bI7pMcJDvmQFPvuPvmEaYA/NSfFTAgdUSLAoVjaRJm7+6PvcM+q1zYOwS4wTYMF9w=="],
+"@vscode/ripgrep": ["@vscode/ripgrep@1.17.1", "", { "dependencies": { "https-proxy-agent": "^7.0.2", "proxy-from-env": "^1.1.0", "yauzl": "^2.9.2" } }, "sha512-xTs7DGyAO3IsJYOCTBP8LnTvPiYVKEuyv8s0xyJDBXfs8rhBfqnZPvb6xDT+RnwWzcXqW27xLS/aGrkjX7lNWw=="],
 "accepts": ["accepts@2.0.0", "", { "dependencies": { "mime-types": "^3.0.0", "negotiator": "^1.0.0" } }, "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng=="],
 "agent-base": ["agent-base@7.1.4", "", {}, "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ=="],
@@ -491,6 +494,8 @@
 "bowser": ["bowser@2.14.1", "", {}, "sha512-tzPjzCxygAKWFOJP011oxFHs57HzIhOEracIgAePE4pqB3LikALKnSzUyU4MGs9/iCEUuHlAJTjTc5M+u7YEGg=="],
+"buffer-crc32": ["buffer-crc32@0.2.13", "", {}, "sha512-VO9Ht/+p3SN7SKWqcrgEzjGbRSJYTx+Q1pTQC0wrWqHx0vpJraQ6GtHx8tvcg1rlK1byhU5gccxgOgj7B0TDkQ=="],
 "buffer-equal-constant-time": ["buffer-equal-constant-time@1.0.1", "", {}, "sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA=="],
 "bun-types": ["bun-types@1.3.11", "", { "dependencies": { "@types/node": "*" } }, "sha512-1KGPpoxQWl9f6wcZh57LvrPIInQMn2TQ7jsgxqpRzg+l0QPOFvJVH7HmvHo/AiPgwXy+/Thf6Ov3EdVn1vOabg=="],
@@ -609,6 +614,8 @@
 "fast-xml-parser": ["fast-xml-parser@5.5.8", "", { "dependencies": { "fast-xml-builder": "^1.1.4", "path-expression-matcher": "^1.2.0", "strnum": "^2.2.0" }, "bin": { "fxparser": "src/cli/cli.js" } }, "sha512-Z7Fh2nVQSb2d+poDViM063ix2ZGt9jmY1nWhPfHBOK2Hgnb/OW3P4Et3P/81SEej0J7QbWtJqxO05h8QYfK7LQ=="],
+"fd-slicer": ["fd-slicer@1.1.0", "", { "dependencies": { "pend": "~1.2.0" } }, "sha512-cE1qsB/VwyQozZ+q1dGxR8LBYNZeofhEdUNGSMbQD3Gw2lAzX9Zb3uIU6Ebc/Fmyjo9AWWfnn0AUCHqtevs/8g=="],
 "fflate": ["fflate@0.8.2", "", {}, "sha512-cPJU47OaAoCbg0pBvzsgpTPhmhqI5eJjh/JIu8tPj5q+T7iLvW/JAYUqmE7KOB4R1ZyEhzBaIQpQpardBF5z8A=="],
 "figures": ["figures@6.1.0", "", { "dependencies": { "is-unicode-supported": "^2.0.0" } }, "sha512-d+l3qxjSesT4V7v2fh+QnmFnUWv9lSpjarhShNTgBOfA0ttejbQUAlHLitbjkoRiDulW0OPoQPYIGhIC8ohejg=="],
@@ -787,6 +794,8 @@
 "path-to-regexp": ["path-to-regexp@8.4.1", "", {}, "sha512-fvU78fIjZ+SBM9YwCknCvKOUKkLVqtWDVctl0s7xIqfmfb38t2TT4ZU2gHm+Z8xGwgW+QWEU3oQSAzIbo89Ggw=="],
+"pend": ["pend@1.2.0", "", {}, "sha512-F3asv42UuXchdzt+xXqfW1OGlVBe+mxa2mqI0pg5yAHZPvFmY3Y6drSf/GQ1A86WgWEN9Kzh/WrgKa6iGcHXLg=="],
 "picomatch": ["picomatch@4.0.4", "", {}, "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A=="],
 "pkce-challenge": ["pkce-challenge@5.0.1", "", {}, "sha512-wQ0b/W4Fr01qtpHlqSqspcj3EhBvimsdh0KlHhH8HRZnMsEa0ea2fTULOXOS9ccQr3om+GcGRk4e+isrZWV8qQ=="],
@@ -801,7 +810,7 @@
 "proxy-addr": ["proxy-addr@2.0.7", "", { "dependencies": { "forwarded": "0.2.0", "ipaddr.js": "1.9.1" } }, "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg=="],
-"proxy-from-env": ["proxy-from-env@2.1.0", "", {}, "sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA=="],
+"proxy-from-env": ["proxy-from-env@1.1.0", "", {}, "sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg=="],
 "qrcode": ["qrcode@1.5.4", "", { "dependencies": { "dijkstrajs": "^1.0.1", "pngjs": "^5.0.0", "yargs": "^15.3.1" }, "bin": { "qrcode": "bin/qrcode" } }, "sha512-1ca71Zgiu6ORjHqFBDpnSMTR2ReToX4l1Au1VFLyVeBTFavzQnv5JxMFr3ukHVKpSrSA2MCk0lNJSykjUfz7Zg=="],
@@ -953,6 +962,8 @@
 "yargs-parser": ["yargs-parser@21.1.1", "", {}, "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw=="],
+"yauzl": ["yauzl@2.10.0", "", { "dependencies": { "buffer-crc32": "~0.2.3", "fd-slicer": "~1.1.0" } }, "sha512-p4a9I6X6nu6IhoGmBqAcbJy1mlC4j27vEPZX9F4L4/vZT3Lyq1VkFHw/V/PUcB9Buo+DG3iHkT0x3Qya58zc3g=="],
 "yoctocolors": ["yoctocolors@2.1.2", "", {}, "sha512-CzhO+pFNo8ajLM2d2IW/R93ipy99LWjtwblvC1RsoSUMZgyLbYFr221TnSNT7GjGdYui6P459mw9JH/g/zW2ug=="],
 "zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="],
@@ -1369,6 +1380,8 @@
 "@smithy/uuid/tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="],
+"axios/proxy-from-env": ["proxy-from-env@2.1.0", "", {}, "sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA=="],
 "cli-highlight/chalk": ["chalk@4.1.2", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA=="],
 "cli-highlight/yargs": ["yargs@16.2.0", "", { "dependencies": { "cliui": "^7.0.2", "escalade": "^3.1.1", "get-caller-file": "^2.0.5", "require-directory": "^2.1.1", "string-width": "^4.2.0", "y18n": "^5.0.5", "yargs-parser": "^20.2.2" } }, "sha512-D1mvvtDG0L5ft/jGWkLpG1+m0eQxOfaBvTNELraWj22wSVUMWxZUvYgJYcKh6jGGIkJFhH4IZPQhR4TKpc8mBw=="],
@@ -1429,6 +1442,8 @@
 "@aws-sdk/nested-clients/@smithy/util-base64/@smithy/util-buffer-from": ["@smithy/util-buffer-from@4.2.2", "", { "dependencies": { "@smithy/is-array-buffer": "^4.2.2", "tslib": "^2.6.2" } }, "sha512-FDXD7cvUoFWwN6vtQfEta540Y/YBe5JneK3SoZg9bThSoOAC/eGeYEua6RkBgKjGa/sz6Y+DuBZj3+YEY21y4Q=="],
+"@mendable/firecrawl-js/axios/proxy-from-env": ["proxy-from-env@2.1.0", "", {}, "sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA=="],
 "@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/core/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="],
 "@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/otlp-transformer/@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.57.2", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-uIX52NnTM0iBh84MShlpouI7UKqkZ7MrUszTmaypHBu4r7NofznSnQRfJ+uUeDtQDj6w8eFGg5KBLDAwAPz1+A=="],
@@ -1509,6 +1524,8 @@
 "cliui/wrap-ansi/ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="],
+"firecrawl/axios/proxy-from-env": ["proxy-from-env@2.1.0", "", {}, "sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA=="],
 "form-data/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="],
 "qrcode/yargs/cliui": ["cliui@6.0.0", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.0", "wrap-ansi": "^6.2.0" } }, "sha512-t6wbgtoCXvAzst7QgXxJYqPt0usEfbgQdftEPbLL/cvv6HPE5VgvqCuAIDR0NgU52ds6rFwqrgakNLrHEjCbrQ=="],

View File

@@ -68,9 +68,11 @@ openclaude
export CLAUDE_CODE_USE_OPENAI=1 export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-... export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1 export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat export OPENAI_MODEL=deepseek-v4-flash
``` ```
Use `deepseek-v4-pro` when you want the stronger model. `deepseek-chat` and `deepseek-reasoner` remain available as DeepSeek's legacy API aliases.
### Google Gemini via OpenRouter ### Google Gemini via OpenRouter
```bash ```bash
@@ -169,12 +171,13 @@ export OPENAI_MODEL=gpt-4o
|----------|----------|-------------| |----------|----------|-------------|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider | | `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | Your API key (`*` not needed for local models like Ollama or Atomic Chat) | | `OPENAI_API_KEY` | Yes* | Your API key (`*` not needed for local models like Ollama or Atomic Chat) |
| `OPENAI_MODEL` | Yes | Model name such as `gpt-4o`, `deepseek-chat`, or `llama3.3:70b` | | `OPENAI_MODEL` | Yes | Model name such as `gpt-4o`, `deepseek-v4-flash`, or `llama3.3:70b` |
| `OPENAI_BASE_URL` | No | API endpoint, defaulting to `https://api.openai.com/v1` | | `OPENAI_BASE_URL` | No | API endpoint, defaulting to `https://api.openai.com/v1` |
| `CODEX_API_KEY` | Codex only | Codex or ChatGPT access token override | | `CODEX_API_KEY` | Codex only | Codex or ChatGPT access token override |
| `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file | | `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
| `CODEX_HOME` | Codex only | Alternative Codex home directory | | `CODEX_HOME` | Codex only | Alternative Codex home directory |
| `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` | No | Suppress the default `Co-Authored-By` trailer in generated git commits | | `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` | No | Suppress the default `Co-Authored-By` trailer in generated git commits |
| `OPENCLAUDE_LOG_TOKEN_USAGE` | No | When truthy (e.g. `verbose`), emits one JSON line on stderr per API request with input/output/cache tokens and the resolved provider. **User-facing debug output** — complements the REPL display controlled by `/config showCacheStats`. Distinct from `CLAUDE_CODE_ENABLE_TOKEN_USAGE_ATTACHMENT`, which is **model-facing** (injects context usage info into the prompt itself). Both can run together. |
You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority. You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.
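A quick way to exercise `OPENCLAUDE_LOG_TOKEN_USAGE` from the table above (a sketch — the flag value and `-p` usage come from this document; the redirect target is illustrative, and stderr may also carry other diagnostics):

```bash
# Emit one JSON line per API request on stderr, keeping stdout clean.
export OPENCLAUDE_LOG_TOKEN_USAGE=verbose

# Collect the per-request usage lines for later inspection.
openclaude -p "summarize this repo" 2> usage.jsonl
```

This is the user-facing debug channel; it complements the REPL display controlled by `/config showCacheStats` and is independent of the model-facing `CLAUDE_CODE_ENABLE_TOKEN_USAGE_ATTACHMENT`.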

Binary file not shown (image added, 52 KiB).

View File

@@ -41,11 +41,13 @@ openclaude
export CLAUDE_CODE_USE_OPENAI=1 export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here export OPENAI_API_KEY=sk-your-key-here
export OPENAI_BASE_URL=https://api.deepseek.com/v1 export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat export OPENAI_MODEL=deepseek-v4-flash
openclaude openclaude
``` ```
Use `deepseek-v4-pro` when you want the stronger model. `deepseek-chat` and `deepseek-reasoner` still work as DeepSeek's legacy API aliases.
### Option C: Ollama ### Option C: Ollama
Install Ollama first from: Install Ollama first from:

View File

@@ -41,11 +41,13 @@ openclaude
$env:CLAUDE_CODE_USE_OPENAI="1" $env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here" $env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_BASE_URL="https://api.deepseek.com/v1" $env:OPENAI_BASE_URL="https://api.deepseek.com/v1"
$env:OPENAI_MODEL="deepseek-chat" $env:OPENAI_MODEL="deepseek-v4-flash"
openclaude openclaude
``` ```
Use `deepseek-v4-pro` when you want the stronger model. `deepseek-chat` and `deepseek-reasoner` still work as DeepSeek's legacy API aliases.
### Option C: Ollama ### Option C: Ollama
Install Ollama first from: Install Ollama first from:

View File

@@ -1,7 +1,7 @@
{ {
"name": "@gitlawb/openclaude", "name": "@gitlawb/openclaude",
"version": "0.6.0", "version": "0.8.0",
"description": "Claude Code opened to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models", "description": "OpenClaude opens coding-agent workflows to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models",
"type": "module", "type": "module",
"bin": { "bin": {
"openclaude": "./bin/openclaude" "openclaude": "./bin/openclaude"
@@ -74,6 +74,7 @@
"@opentelemetry/sdk-trace-base": "2.6.1", "@opentelemetry/sdk-trace-base": "2.6.1",
"@opentelemetry/sdk-trace-node": "2.6.1", "@opentelemetry/sdk-trace-node": "2.6.1",
"@opentelemetry/semantic-conventions": "1.40.0", "@opentelemetry/semantic-conventions": "1.40.0",
"@vscode/ripgrep": "^1.17.1",
"ajv": "8.18.0", "ajv": "8.18.0",
"auto-bind": "5.0.1", "auto-bind": "5.0.1",
"axios": "1.15.0", "axios": "1.15.0",

View File

@@ -472,6 +472,11 @@ ${exports}
'@aws-sdk/credential-providers', '@aws-sdk/credential-providers',
'@azure/identity', '@azure/identity',
'google-auth-library', 'google-auth-library',
// @vscode/ripgrep ships a platform-specific binary alongside its
// index.js and resolves the path via __dirname at runtime. Bundling
// would freeze the build host's absolute path into dist/cli.mjs, so we
// keep it external and rely on the npm package being installed.
'@vscode/ripgrep',
], ],
}) })
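The `@vscode/ripgrep` comment above pairs with the lazy-require fallback described in the commit message. A minimal sketch of that pattern — not the fork's actual `src/utils/ripgrep.ts`; the function name is illustrative:

```typescript
import { createRequire } from 'node:module'

// createRequire gives us CommonJS `require` inside an ES module, keeping the
// lookup lazy: nothing is loaded until the function is actually called.
const lazyRequire = createRequire(import.meta.url)

// Returns the bundled rg binary path when @vscode/ripgrep is installed,
// or null so the caller can fall back to a system `rg` found on PATH.
function resolveBuiltinRipgrep(): string | null {
  try {
    const { rgPath } = lazyRequire('@vscode/ripgrep') as { rgPath: string }
    return rgPath
  } catch {
    // Missing package (e.g. marked external but not installed): fall through
    // to the system-rg branch instead of throwing at import time.
    return null
  }
}
```

Keeping the package external and requiring it lazily is what makes the "missing package falls through to the system rg branch" behavior possible.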

View File

@@ -70,13 +70,13 @@ export async function isBridgeEnabledBlocking(): Promise<boolean> {
export async function getBridgeDisabledReason(): Promise<string | null> { export async function getBridgeDisabledReason(): Promise<string | null> {
if (feature('BRIDGE_MODE')) { if (feature('BRIDGE_MODE')) {
if (!isClaudeAISubscriber()) { if (!isClaudeAISubscriber()) {
return 'Remote Control requires a claude.ai subscription. Run `claude auth login` to sign in with your claude.ai account.' return 'Remote Control requires a claude.ai subscription. Run `openclaude auth login` to sign in with your claude.ai account.'
} }
if (!hasProfileScope()) { if (!hasProfileScope()) {
return 'Remote Control requires a full-scope login token. Long-lived tokens (from `claude setup-token` or CLAUDE_CODE_OAUTH_TOKEN) are limited to inference-only for security reasons. Run `claude auth login` to use Remote Control.' return 'Remote Control requires a full-scope login token. Long-lived tokens (from `openclaude setup-token` or CLAUDE_CODE_OAUTH_TOKEN) are limited to inference-only for security reasons. Run `openclaude auth login` to use Remote Control.'
} }
if (!getOauthAccountInfo()?.organizationUuid) { if (!getOauthAccountInfo()?.organizationUuid) {
return 'Unable to determine your organization for Remote Control eligibility. Run `claude auth login` to refresh your account information.' return 'Unable to determine your organization for Remote Control eligibility. Run `openclaude auth login` to refresh your account information.'
} }
if (!(await checkGate_CACHED_OR_BLOCKING('tengu_ccr_bridge'))) { if (!(await checkGate_CACHED_OR_BLOCKING('tengu_ccr_bridge'))) {
return 'Remote Control is not yet enabled for your account.' return 'Remote Control is not yet enabled for your account.'
@@ -166,7 +166,7 @@ export function checkBridgeMinVersion(): string | null {
minVersion: string minVersion: string
}>('tengu_bridge_min_version', { minVersion: '0.0.0' }) }>('tengu_bridge_min_version', { minVersion: '0.0.0' })
if (config.minVersion && lt(MACRO.VERSION, config.minVersion)) { if (config.minVersion && lt(MACRO.VERSION, config.minVersion)) {
return `Your version of Claude Code (${MACRO.VERSION}) is too old for Remote Control.\nVersion ${config.minVersion} or higher is required. Run \`claude update\` to update.` return `Your version of OpenClaude (${MACRO.VERSION}) is too old for Remote Control.\nVersion ${config.minVersion} or higher is required. Run \`openclaude update\` to update.`
} }
} }
return null return null
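Both min-version gates above share one shape: compare the running version against a remotely configured floor and return a readable error, or null when the build is new enough. A sketch with a simplified comparator standing in for the `semver` package's `lt` (the real code imports `lt` from semver; no prerelease or build-metadata handling here):

```typescript
// Simplified stand-in for semver's lt(): true when a < b for plain
// MAJOR.MINOR.PATCH version strings.
function ltSimple(a: string, b: string): boolean {
  const pa = a.split('.').map(Number)
  const pb = b.split('.').map(Number)
  for (let i = 0; i < 3; i++) {
    const x = pa[i] ?? 0
    const y = pb[i] ?? 0
    if (x !== y) return x < y
  }
  return false
}

// Gate pattern from the hunks above: null means "new enough, proceed".
function minVersionError(current: string, min: string): string | null {
  return min && ltSimple(current, min)
    ? `Version ${min} or higher is required (you have ${current}).`
    : null
}
```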

View File

@@ -2248,7 +2248,7 @@ export async function bridgeMain(args: string[]): Promise<void> {
}) })
// biome-ignore lint/suspicious/noConsole: intentional dialog output // biome-ignore lint/suspicious/noConsole: intentional dialog output
console.log( console.log(
`\nClaude Remote Control is launching in spawn mode which lets you create new sessions in this project from Claude Code on Web or your Mobile app. Learn more here: https://code.claude.com/docs/en/remote-control\n\n` + `\nClaude Remote Control is launching in spawn mode which lets you create new sessions in this project from OpenClaude on the web or your mobile app. Learn more here: https://code.claude.com/docs/en/remote-control\n\n` +
`Spawn mode for this project:\n` + `Spawn mode for this project:\n` +
` [1] same-dir \u2014 sessions share the current directory (default)\n` + ` [1] same-dir \u2014 sessions share the current directory (default)\n` +
` [2] worktree \u2014 each session gets an isolated git worktree\n\n` + ` [2] worktree \u2014 each session gets an isolated git worktree\n\n` +

View File

@@ -147,7 +147,7 @@ export async function getEnvLessBridgeConfig(): Promise<EnvLessBridgeConfig> {
export async function checkEnvLessBridgeMinVersion(): Promise<string | null> { export async function checkEnvLessBridgeMinVersion(): Promise<string | null> {
const cfg = await getEnvLessBridgeConfig() const cfg = await getEnvLessBridgeConfig()
if (cfg.min_version && lt(MACRO.VERSION, cfg.min_version)) { if (cfg.min_version && lt(MACRO.VERSION, cfg.min_version)) {
return `Your version of Claude Code (${MACRO.VERSION}) is too old for Remote Control.\nVersion ${cfg.min_version} or higher is required. Run \`claude update\` to update.` return `Your version of OpenClaude (${MACRO.VERSION}) is too old for Remote Control.\nVersion ${cfg.min_version} or higher is required. Run \`openclaude update\` to update.`
} }
return null return null
} }

View File

@@ -415,7 +415,7 @@ export async function initReplBridge(
`[bridge:repl] Skipping: ${versionError}`, `[bridge:repl] Skipping: ${versionError}`,
true, true,
) )
onStateChange?.('failed', 'run `claude update` to upgrade') onStateChange?.('failed', 'run `openclaude update` to upgrade')
return null return null
} }
logForDebugging( logForDebugging(
@@ -456,7 +456,7 @@ export async function initReplBridge(
const versionError = checkBridgeMinVersion() const versionError = checkBridgeMinVersion()
if (versionError) { if (versionError) {
logBridgeSkip('version_too_old', `[bridge:repl] Skipping: ${versionError}`) logBridgeSkip('version_too_old', `[bridge:repl] Skipping: ${versionError}`)
onStateChange?.('failed', 'run `claude update` to upgrade') onStateChange?.('failed', 'run `openclaude update` to upgrade')
return null return null
} }

View File

@@ -147,7 +147,7 @@ export async function enrollTrustedDevice(): Promise<void> {
device_id?: string device_id?: string
}>( }>(
`${baseUrl}/api/auth/trusted_devices`, `${baseUrl}/api/auth/trusted_devices`,
{ display_name: `Claude Code on ${hostname()} · ${process.platform}` }, { display_name: `OpenClaude on ${hostname()} · ${process.platform}` },
{ {
headers: { headers: {
Authorization: `Bearer ${accessToken}`, Authorization: `Bearer ${accessToken}`,

View File

@@ -287,7 +287,7 @@ export async function authStatus(opts: {
} }
if (!loggedIn) { if (!loggedIn) {
process.stdout.write( process.stdout.write(
'Not logged in. Run claude auth login to authenticate.\n', 'Not logged in. Run openclaude auth login to authenticate.\n',
) )
} }
} else { } else {

View File

@@ -83,7 +83,7 @@ export async function autoModeCritiqueHandler(options: {
process.stdout.write( process.stdout.write(
'No custom auto mode rules found.\n\n' + 'No custom auto mode rules found.\n\n' +
'Add rules to your settings file under autoMode.{allow, soft_deny, environment}.\n' + 'Add rules to your settings file under autoMode.{allow, soft_deny, environment}.\n' +
'Run `claude auto-mode defaults` to see the default rules for reference.\n', 'Run `openclaude auto-mode defaults` to see the default rules for reference.\n',
) )
return return
} }

View File

@@ -233,7 +233,7 @@ export async function mcpRemoveHandler(name: string, options: {
}); });
process.stderr.write('\nTo remove from a specific scope, use:\n'); process.stderr.write('\nTo remove from a specific scope, use:\n');
scopes.forEach(scope => { scopes.forEach(scope => {
process.stderr.write(` claude mcp remove "${name}" -s ${scope}\n`); process.stderr.write(` openclaude mcp remove "${name}" -s ${scope}\n`);
}); });
cliError(); cliError();
} }
@@ -250,7 +250,7 @@ export async function mcpListHandler(): Promise<void> {
} = await getAllMcpConfigs(); } = await getAllMcpConfigs();
if (Object.keys(configs).length === 0) { if (Object.keys(configs).length === 0) {
// biome-ignore lint/suspicious/noConsole: intentional console output // biome-ignore lint/suspicious/noConsole: intentional console output
console.log('No MCP servers configured. Use `claude mcp add` to add a server.'); console.log('No MCP servers configured. Use `openclaude mcp add` to add a server.');
} else { } else {
// biome-ignore lint/suspicious/noConsole:: intentional console output // biome-ignore lint/suspicious/noConsole:: intentional console output
console.log('Checking MCP server health...\n'); console.log('Checking MCP server health...\n');
@@ -374,7 +374,7 @@ export async function mcpGetHandler(name: string): Promise<void> {
} }
} }
// biome-ignore lint/suspicious/noConsole: intentional console output // biome-ignore lint/suspicious/noConsole: intentional console output
console.log(`\nTo remove this server, run: claude mcp remove "${name}" -s ${server.scope}`); console.log(`\nTo remove this server, run: openclaude mcp remove "${name}" -s ${server.scope}`);
// Use gracefulShutdown to properly clean up MCP server connections // Use gracefulShutdown to properly clean up MCP server connections
// (process.exit bypasses cleanup handlers, leaving child processes orphaned) // (process.exit bypasses cleanup handlers, leaving child processes orphaned)
await gracefulShutdown(0); await gracefulShutdown(0);
@@ -455,5 +455,5 @@ export async function mcpResetChoicesHandler(): Promise<void> {
disabledMcpjsonServers: [], disabledMcpjsonServers: [],
enableAllProjectMcpServers: false enableAllProjectMcpServers: false
})); }));
cliOk('All project-scoped (.mcp.json) server approvals and rejections have been reset.\n' + 'You will be prompted for approval next time you start Claude Code.'); cliOk('All project-scoped (.mcp.json) server approvals and rejections have been reset.\n' + 'You will be prompted for approval next time you start OpenClaude.');
} }

View File

@@ -352,7 +352,7 @@ export async function pluginListHandler(options: {
// through to the session section so the failure is visible. // through to the session section so the failure is visible.
if (inlineLoadErrors.length === 0) { if (inlineLoadErrors.length === 0) {
cliOk( cliOk(
'No plugins installed. Use `claude plugin install` to install a plugin.', 'No plugins installed. Use `openclaude plugin install` to install a plugin.',
) )
} }
} }

View File

@@ -5026,7 +5026,7 @@ async function loadInitialMessages(
) )
if (!parsedSessionId) { if (!parsedSessionId) {
let errorMessage = let errorMessage =
'Error: --resume requires a valid session ID when used with --print. Usage: claude -p --resume <session-id>' 'Error: --resume requires a valid session ID when used with --print. Usage: openclaude -p --resume <session-id>'
if (typeof options.resume === 'string') { if (typeof options.resume === 'string') {
errorMessage += `. Session IDs must be in UUID format (e.g., 550e8400-e29b-41d4-a716-446655440000). Provided value "${options.resume}" is not a valid UUID` errorMessage += `. Session IDs must be in UUID format (e.g., 550e8400-e29b-41d4-a716-446655440000). Provided value "${options.resume}" is not a valid UUID`
} }

View File

@@ -35,15 +35,20 @@ export async function update() {
// binary (without it). // binary (without it).
if (getAPIProvider() !== 'firstParty') { if (getAPIProvider() !== 'firstParty') {
writeToStdout( writeToStdout(
chalk.yellow('Auto-update is not available for third-party provider builds.\n') + chalk.yellow(
'To update, pull the latest source from the repository and rebuild:\n' + `Auto-update is not available for third-party provider builds.\n`,
' git pull && bun install && bun run build\n', ) +
`Current version: ${MACRO.DISPLAY_VERSION}\n\n` +
`To update, reinstall from npm:\n` +
chalk.bold(` npm install -g ${MACRO.PACKAGE_URL}@latest`) + '\n\n' +
`Or, if you built from source, pull and rebuild:\n` +
chalk.bold(' git pull && bun install && bun run build') + '\n',
) )
return await gracefulShutdown(0)
} }
logEvent('tengu_update_check', {}) logEvent('tengu_update_check', {})
writeToStdout(`Current version: ${MACRO.VERSION}\n`) writeToStdout(`Current version: ${MACRO.DISPLAY_VERSION}\n`)
const channel = getInitialSettings()?.autoUpdatesChannel ?? 'latest' const channel = getInitialSettings()?.autoUpdatesChannel ?? 'latest'
writeToStdout(`Checking for updates to ${channel} version...\n`) writeToStdout(`Checking for updates to ${channel} version...\n`)
@@ -123,9 +128,14 @@ export async function update() {
if (diagnostic.installationType === 'development') { if (diagnostic.installationType === 'development') {
writeToStdout('\n') writeToStdout('\n')
writeToStdout( writeToStdout(
chalk.yellow('Warning: Cannot update development build') + '\n', chalk.yellow('You are running a development build — auto-update is unavailable.') + '\n',
) )
await gracefulShutdown(1) writeToStdout('To update, pull the latest source and rebuild:\n')
writeToStdout(chalk.bold(' git pull && bun install && bun run build') + '\n')
writeToStdout('\n')
writeToStdout('Or reinstall from npm:\n')
writeToStdout(chalk.bold(` npm install -g ${MACRO.PACKAGE_URL}@latest`) + '\n')
await gracefulShutdown(0)
} }
// Check if running from a package manager // Check if running from a package manager
@@ -136,8 +146,8 @@ export async function update() {
if (packageManager === 'homebrew') { if (packageManager === 'homebrew') {
writeToStdout('Claude is managed by Homebrew.\n') writeToStdout('Claude is managed by Homebrew.\n')
const latest = await getLatestVersion(channel) const latest = await getLatestVersion(channel)
if (latest && !gte(MACRO.VERSION, latest)) { if (latest && !gte(MACRO.DISPLAY_VERSION, latest)) {
writeToStdout(`Update available: ${MACRO.VERSION} → ${latest}\n`) writeToStdout(`Update available: ${MACRO.DISPLAY_VERSION} → ${latest}\n`)
writeToStdout('\n') writeToStdout('\n')
writeToStdout('To update, run:\n') writeToStdout('To update, run:\n')
writeToStdout(chalk.bold(' brew upgrade claude-code') + '\n') writeToStdout(chalk.bold(' brew upgrade claude-code') + '\n')
@@ -147,8 +157,8 @@ export async function update() {
} else if (packageManager === 'winget') { } else if (packageManager === 'winget') {
writeToStdout('Claude is managed by winget.\n') writeToStdout('Claude is managed by winget.\n')
const latest = await getLatestVersion(channel) const latest = await getLatestVersion(channel)
if (latest && !gte(MACRO.VERSION, latest)) { if (latest && !gte(MACRO.DISPLAY_VERSION, latest)) {
writeToStdout(`Update available: ${MACRO.VERSION} → ${latest}\n`) writeToStdout(`Update available: ${MACRO.DISPLAY_VERSION} → ${latest}\n`)
writeToStdout('\n') writeToStdout('\n')
writeToStdout('To update, run:\n') writeToStdout('To update, run:\n')
writeToStdout( writeToStdout(
@@ -160,8 +170,8 @@ export async function update() {
} else if (packageManager === 'apk') { } else if (packageManager === 'apk') {
writeToStdout('Claude is managed by apk.\n') writeToStdout('Claude is managed by apk.\n')
const latest = await getLatestVersion(channel) const latest = await getLatestVersion(channel)
if (latest && !gte(MACRO.VERSION, latest)) { if (latest && !gte(MACRO.DISPLAY_VERSION, latest)) {
writeToStdout(`Update available: ${MACRO.VERSION} → ${latest}\n`) writeToStdout(`Update available: ${MACRO.DISPLAY_VERSION} → ${latest}\n`)
writeToStdout('\n') writeToStdout('\n')
writeToStdout('To update, run:\n') writeToStdout('To update, run:\n')
writeToStdout(chalk.bold(' apk upgrade claude-code') + '\n') writeToStdout(chalk.bold(' apk upgrade claude-code') + '\n')
@@ -250,14 +260,14 @@ export async function update() {
await gracefulShutdown(1) await gracefulShutdown(1)
} }
if (result.latestVersion === MACRO.VERSION) { if (result.latestVersion === MACRO.DISPLAY_VERSION) {
writeToStdout( writeToStdout(
chalk.green(`Claude Code is up to date (${MACRO.VERSION})`) + '\n', chalk.green(`OpenClaude is up to date (${MACRO.DISPLAY_VERSION})`) + '\n',
) )
} else { } else {
writeToStdout( writeToStdout(
chalk.green( chalk.green(
`Successfully updated from ${MACRO.VERSION} to version ${result.latestVersion}`, `Successfully updated from ${MACRO.DISPLAY_VERSION} to version ${result.latestVersion}`,
) + '\n', ) + '\n',
) )
await regenerateCompletionCache() await regenerateCompletionCache()
@@ -266,7 +276,7 @@ export async function update() {
} catch (error) { } catch (error) {
process.stderr.write('Error: Failed to install native update\n') process.stderr.write('Error: Failed to install native update\n')
process.stderr.write(String(error) + '\n') process.stderr.write(String(error) + '\n')
process.stderr.write('Try running "claude doctor" for diagnostics\n') process.stderr.write('Try running "openclaude doctor" for diagnostics\n')
await gracefulShutdown(1) await gracefulShutdown(1)
} }
} }
@@ -320,15 +330,15 @@ export async function update() {
} }
// Check if versions match exactly, including any build metadata (like SHA) // Check if versions match exactly, including any build metadata (like SHA)
if (latestVersion === MACRO.VERSION) { if (latestVersion === MACRO.DISPLAY_VERSION) {
writeToStdout( writeToStdout(
chalk.green(`Claude Code is up to date (${MACRO.VERSION})`) + '\n', chalk.green(`OpenClaude is up to date (${MACRO.DISPLAY_VERSION})`) + '\n',
) )
await gracefulShutdown(0) await gracefulShutdown(0)
} }
writeToStdout( writeToStdout(
`New version available: ${latestVersion} (current: ${MACRO.VERSION})\n`, `New version available: ${latestVersion} (current: ${MACRO.DISPLAY_VERSION})\n`,
) )
writeToStdout('Installing update...\n') writeToStdout('Installing update...\n')
@@ -388,7 +398,7 @@ export async function update() {
case 'success': case 'success':
writeToStdout( writeToStdout(
chalk.green( chalk.green(
`Successfully updated from ${MACRO.VERSION} to version ${latestVersion}`, `Successfully updated from ${MACRO.DISPLAY_VERSION} to version ${latestVersion}`,
) + '\n', ) + '\n',
) )
await regenerateCompletionCache() await regenerateCompletionCache()

View File

@@ -21,6 +21,7 @@ import dream from './commands/dream/index.js'
import ctx_viz from './commands/ctx_viz/index.js' import ctx_viz from './commands/ctx_viz/index.js'
import doctor from './commands/doctor/index.js' import doctor from './commands/doctor/index.js'
import onboardGithub from './commands/onboard-github/index.js' import onboardGithub from './commands/onboard-github/index.js'
import knowledge from './commands/knowledge/index.js'
import memory from './commands/memory/index.js' import memory from './commands/memory/index.js'
import help from './commands/help/index.js' import help from './commands/help/index.js'
import ide from './commands/ide/index.js' import ide from './commands/ide/index.js'
@@ -33,6 +34,7 @@ import installGitHubApp from './commands/install-github-app/index.js'
import installSlackApp from './commands/install-slack-app/index.js' import installSlackApp from './commands/install-slack-app/index.js'
import breakCache from './commands/break-cache/index.js' import breakCache from './commands/break-cache/index.js'
import cacheProbe from './commands/cache-probe/index.js' import cacheProbe from './commands/cache-probe/index.js'
import cacheStats from './commands/cacheStats/index.js'
import mcp from './commands/mcp/index.js' import mcp from './commands/mcp/index.js'
import mobile from './commands/mobile/index.js' import mobile from './commands/mobile/index.js'
import onboarding from './commands/onboarding/index.js' import onboarding from './commands/onboarding/index.js'
@@ -197,7 +199,7 @@ import stats from './commands/stats/index.js'
const usageReport: Command = { const usageReport: Command = {
type: 'prompt', type: 'prompt',
name: 'insights', name: 'insights',
description: 'Generate a report analyzing your Claude Code sessions', description: 'Generate a report analyzing your OpenClaude sessions',
contentLength: 0, contentLength: 0,
progressMessage: 'analyzing your sessions', progressMessage: 'analyzing your sessions',
source: 'builtin', source: 'builtin',
@@ -270,6 +272,7 @@ const COMMANDS = memoize((): Command[] => [
branch, branch,
btw, btw,
cacheProbe, cacheProbe,
cacheStats,
chrome, chrome,
clear, clear,
color, color,
@@ -292,6 +295,7 @@ const COMMANDS = memoize((): Command[] => [
ide, ide,
init, init,
keybindings, keybindings,
knowledge,
installGitHubApp, installGitHubApp,
installSlackApp, installSlackApp,
mcp, mcp,

View File

@@ -3,7 +3,7 @@ import type { Command } from '../../commands.js'
const buddy = { const buddy = {
type: 'local-jsx', type: 'local-jsx',
name: 'buddy', name: 'buddy',
description: 'Hatch, pet, and manage your Open Claude companion', description: 'Hatch, pet, and manage your OpenClaude companion',
immediate: true, immediate: true,
argumentHint: '[status|mute|unmute|help]', argumentHint: '[status|mute|unmute|help]',
load: () => import('./buddy.js'), load: () => import('./buddy.js'),

View File

@@ -0,0 +1,157 @@
/**
* Tests for `/cache-stats` command rendering.
*
* The command has non-trivial string formatting (timestamp slicing, model
* label padding, conditional N/A footnote, recent-rows cap) which can
* silently regress — these snapshot tests keep it honest.
*/
import { beforeEach, describe, expect, test } from 'bun:test'
import type { CacheMetrics } from '../../services/api/cacheMetrics.js'
import {
_setHistoryCapForTesting,
recordRequest,
resetSessionCacheStats,
} from '../../services/api/cacheStatsTracker.js'
import { call } from './cacheStats.js'
function supported(partial: Partial<CacheMetrics>): CacheMetrics {
return {
read: 0,
created: 0,
total: 0,
hitRate: null,
supported: true,
...partial,
}
}
const UNSUPPORTED: CacheMetrics = {
read: 0,
created: 0,
total: 0,
hitRate: null,
supported: false,
}
// The command signature requires a LocalJSXCommandContext. Our command
// doesn't actually read it — we pass an empty stand-in so the test can
// invoke call() without dragging the whole REPL context in.
const EMPTY_CTX = {} as Parameters<typeof call>[1]
// /cache-stats always returns a text result. Narrow the union here so
// the assertions don't need to redo the discriminant check every call.
async function runCommand(): Promise<string> {
const result = await call('', EMPTY_CTX)
if (result.type !== 'text') {
throw new Error(
`cacheStats command must return type:'text', got ${result.type}`,
)
}
return result.value
}
beforeEach(() => {
resetSessionCacheStats()
_setHistoryCapForTesting(500)
})
describe('/cache-stats — empty session', () => {
test('shows friendly "no requests yet" message', async () => {
const value = await runCommand()
expect(value).toContain('No API requests yet this session')
expect(value).toContain('/cache-stats')
})
})
describe('/cache-stats — supported-only session', () => {
test('renders Cache stats header, turn and session summaries', async () => {
recordRequest(
supported({ read: 500, total: 1_000, hitRate: 0.5 }),
'claude-sonnet-4',
)
const value = await runCommand()
expect(value).toContain('Cache stats')
expect(value).toContain('Current turn:')
expect(value).toContain('Session total:')
// Compact metric line should appear in the recent-requests table.
expect(value).toContain('claude-sonnet-4')
expect(value).toContain('read')
})
test('omits the N/A footnote when every row is supported', async () => {
recordRequest(supported({ read: 200, total: 400, hitRate: 0.5 }), 'model-A')
const value = await runCommand()
expect(value).not.toContain('N/A rows')
})
})
describe('/cache-stats — mixed supported + unsupported', () => {
test('renders N/A footnote when any row is unsupported', async () => {
recordRequest(UNSUPPORTED, 'gpt-4-copilot')
recordRequest(
supported({ read: 100, total: 500, hitRate: 0.2 }),
'claude-sonnet-4',
)
const value = await runCommand()
expect(value).toContain(
'N/A rows: provider API does not expose cache usage',
)
expect(value).toContain('GitHub Copilot')
expect(value).toContain('Ollama')
})
})
describe('/cache-stats — recent-rows cap', () => {
test('caps the breakdown at 20 rows and reports omitted count', async () => {
for (let i = 0; i < 25; i++) {
recordRequest(
supported({ read: i, total: 100, hitRate: i / 100 }),
`model-${i}`,
)
}
const value = await runCommand()
// 20 shown, 5 omitted from the oldest end.
expect(value).toContain('(20 of 25, 5 older omitted)')
// Oldest rows (model-0..model-4) should not appear; newest must.
expect(value).toContain('model-24')
expect(value).not.toContain('model-0 ')
})
test('does not mention "older omitted" when all rows fit', async () => {
for (let i = 0; i < 5; i++) {
recordRequest(supported({ read: i, total: 10 }), `m${i}`)
}
const value = await runCommand()
expect(value).not.toContain('older omitted')
expect(value).toContain('(5)')
})
})
describe('/cache-stats — model label rendering', () => {
test('truncates long model labels to fit the column width', async () => {
// cacheStats.ts pads+slices the label to 28 chars for alignment.
const longLabel = 'some-extremely-long-model-identifier-that-wraps'
recordRequest(supported({ read: 10, total: 100, hitRate: 0.1 }), longLabel)
const value = await runCommand()
// Sliced to 28 chars.
expect(value).toContain(longLabel.slice(0, 28))
// And the full string should NOT appear (would mean no truncation).
expect(value).not.toContain(longLabel)
})
})
describe('/cache-stats — timestamp rendering', () => {
test('renders each row with full date and time (YYYY-MM-DD HH:MM:SS)', async () => {
recordRequest(supported({ read: 5, total: 10, hitRate: 0.5 }), 'claude-x')
const value = await runCommand()
// Match the full ISO-ish date + time the row uses. We assert the shape,
// not a specific timestamp — real clock is used, so a regex on the
// format is the right assertion.
expect(value).toMatch(/\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}/)
// Bare time-of-day alone (no date) should NOT appear in isolation — it
// must always be preceded by the date. Guards against regression if
// someone shortens the formatter again.
const timeOnlyInRow = /\n\s*#\s*\d+\s+\d{2}:\d{2}:\d{2}\s/.test(value)
expect(timeOnlyInRow).toBe(false)
})
})

View File

@@ -0,0 +1,74 @@
import {
getCacheStatsHistory,
getCurrentTurnCacheMetrics,
getSessionCacheMetrics,
type CacheStatsEntry,
} from '../../services/api/cacheStatsTracker.js'
import {
formatCacheMetricsCompact,
formatCacheMetricsFull,
type CacheMetrics,
} from '../../services/api/cacheMetrics.js'
import type { LocalCommandCall } from '../../types/command.js'
// Cap the per-request breakdown to keep output readable. Users wanting
// the full history can rely on OPENCLAUDE_LOG_TOKEN_USAGE=verbose for
// structured per-request stderr output.
const MAX_RECENT_ROWS = 20
function formatRow(entry: CacheStatsEntry, idx: number): string {
// `YYYY-MM-DD HH:MM:SS` — long-running sessions can span midnight and a
// bare time-of-day makes the wrong row look "most recent" when two
// entries on different days share the same HH:MM:SS.
const iso = new Date(entry.timestamp).toISOString()
const ts = `${iso.slice(0, 10)} ${iso.slice(11, 19)}`
const line = formatCacheMetricsCompact(entry.metrics)
return ` #${String(idx + 1).padStart(3)} ${ts} ${entry.label.padEnd(28).slice(0, 28)} ${line}`
}
function summarize(label: string, m: CacheMetrics): string {
return `${label.padEnd(18)}${formatCacheMetricsFull(m)}`
}
export const call: LocalCommandCall = async () => {
const history = getCacheStatsHistory()
const session = getSessionCacheMetrics()
const turn = getCurrentTurnCacheMetrics()
if (history.length === 0) {
return {
type: 'text',
value:
'Cache stats\n No API requests yet this session.\n Start a turn and re-run /cache-stats to see results.',
}
}
const recent = history.slice(-MAX_RECENT_ROWS)
const omitted = history.length - recent.length
const lines: string[] = ['Cache stats', '']
lines.push(summarize('Current turn:', turn))
lines.push(summarize('Session total:', session))
lines.push('')
lines.push(`Recent requests (${recent.length}${omitted > 0 ? ` of ${history.length}, ${omitted} older omitted` : ''}):`)
lines.push(` # time model cache`)
for (const [i, entry] of recent.entries()) {
lines.push(formatRow(entry, history.length - recent.length + i))
}
// Honesty footnote — providers without cache reporting (vanilla Copilot,
// Ollama) show [Cache: N/A] rather than a fake 0%. Tell the user so they
// don't read "N/A" as "broken".
const hasUnsupported = recent.some((e) => !e.metrics.supported)
if (hasUnsupported) {
lines.push('')
lines.push(
' N/A rows: provider API does not expose cache usage (GitHub Copilot, Ollama).',
)
lines.push(
' The request still ran normally — only the metric is unavailable.',
)
}
return { type: 'text', value: lines.join('\n') }
}
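The timestamp in `formatRow` above is built by slicing `Date.prototype.toISOString()` output. A minimal standalone sketch of just that step (helper name hypothetical, not from the diff):

```typescript
// Hypothetical helper isolating formatRow's timestamp step: toISOString()
// yields "YYYY-MM-DDTHH:MM:SS.sssZ", so chars 0-10 are the date and
// chars 11-19 are the time; the slices drop the "T" separator and the
// millisecond/"Z" tail.
function formatTimestamp(epochMs: number): string {
  const iso = new Date(epochMs).toISOString()
  return `${iso.slice(0, 10)} ${iso.slice(11, 19)}`
}

console.log(formatTimestamp(0)) // 1970-01-01 00:00:00
```

Note the output is always UTC, which keeps rows comparable regardless of the machine's local timezone.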


@@ -0,0 +1,24 @@
/**
* /cache-stats — per-session cache diagnostics.
*
* Always-on diagnostic command (no toggle) that surfaces the metrics
* tracked in `cacheStatsTracker.ts`. Breaks cache usage down by request
* and also reports the session-wide aggregate — useful when the user
* suspects a cache bust (e.g. after /reload-plugins) and wants to see
* whether recent turns still hit the cache.
*
* Lazy-loaded (implementation in cacheStats.ts) to keep startup time
* minimal — same pattern used by /cost and /cache-probe.
*/
import type { Command } from '../../commands.js'
const cacheStats = {
type: 'local',
name: 'cache-stats',
description:
'Show per-turn and session cache hit/miss stats (works across all providers)',
supportsNonInteractive: true,
load: () => import('./cacheStats.js'),
} satisfies Command
export default cacheStats
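The doc comment above names the lazy-load pattern shared with /cost and /cache-probe. A minimal sketch of how a `load` thunk defers the implementation module — all names hypothetical, and the inline stub stands in for the real dynamic `import()`:

```typescript
// Hypothetical sketch: the command registry stores only a thunk, so the
// (potentially heavy) implementation module is loaded on first invocation,
// not at startup.
type Impl = { run: () => string }

const lazyCommand = {
  name: 'cache-stats',
  // In the real code this is `() => import('./cacheStats.js')`.
  load: async (): Promise<Impl> => ({ run: () => 'cache stats output' }),
}

async function invoke(cmd: typeof lazyCommand): Promise<string> {
  const impl = await cmd.load() // module resolved only here, on demand
  return impl.run()
}
```

Startup only pays for the small registry object; the cost of parsing the implementation is deferred until the command actually runs.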


@@ -197,7 +197,7 @@ function ClaudeInChromeMenu(t0) {
  }
  let t6;
  if ($[20] === Symbol.for("react.memo_cache_sentinel")) {
-    t6 = <Text>Claude in Chrome works with the Chrome extension to let you control your browser directly from Claude Code. Navigate websites, fill forms, capture screenshots, record GIFs, and debug with console logs and network requests.</Text>;
+    t6 = <Text>Claude in Chrome works with the Chrome extension to let you control your browser directly from OpenClaude. Navigate websites, fill forms, capture screenshots, record GIFs, and debug with console logs and network requests.</Text>;
    $[20] = t6;
  } else {
    t6 = $[20];


@@ -48,7 +48,7 @@ export function createMovedToPluginCommand({
  text: `This command has been moved to a plugin. Tell the user:
1. To install the plugin, run:
-   claude plugin install ${pluginName}@claude-code-marketplace
+   openclaude plugin install ${pluginName}@claude-code-marketplace
2. After installation, use /${pluginName}:${pluginCommand} to run this command


@@ -3,7 +3,7 @@ import { isEnvTruthy } from '../../utils/envUtils.js'
const doctor: Command = {
  name: 'doctor',
-  description: 'Diagnose and verify your Claude Code installation and settings',
+  description: 'Diagnose and verify your OpenClaude installation and settings',
  isEnabled: () => !isEnvTruthy(process.env.DISABLE_DOCTOR_COMMAND),
  type: 'local-jsx',
  load: () => import('./doctor.js'),


@@ -7,7 +7,7 @@ const feedback = {
  aliases: ['bug'],
  type: 'local-jsx',
  name: 'feedback',
-  description: `Submit feedback about Claude Code`,
+  description: `Submit feedback about OpenClaude`,
  argumentHint: '[report]',
  isEnabled: () =>
    !(


@@ -247,7 +247,7 @@ function getSessionMetaDir(): string {
  return join(getDataDir(), 'session-meta')
}
-const FACET_EXTRACTION_PROMPT = `Analyze this Claude Code session and extract structured facets.
+const FACET_EXTRACTION_PROMPT = `Analyze this OpenClaude session and extract structured facets.
CRITICAL GUIDELINES:
@@ -687,7 +687,7 @@ function formatTranscriptForFacets(log: LogOption): string {
  return lines.join('\n')
}
-const SUMMARIZE_CHUNK_PROMPT = `Summarize this portion of a Claude Code session transcript. Focus on:
+const SUMMARIZE_CHUNK_PROMPT = `Summarize this portion of a OpenClaude session transcript. Focus on:
1. What the user asked for
2. What Claude did (tools used, files modified)
3. Any friction or issues
@@ -1156,12 +1156,12 @@ type InsightSection = {
const INSIGHT_SECTIONS: InsightSection[] = [
  {
    name: 'project_areas',
-    prompt: `Analyze this Claude Code usage data and identify project areas.
+    prompt: `Analyze this OpenClaude usage data and identify project areas.
RESPOND WITH ONLY A VALID JSON OBJECT:
{
  "areas": [
-    {"name": "Area name", "session_count": N, "description": "2-3 sentences about what was worked on and how Claude Code was used."}
+    {"name": "Area name", "session_count": N, "description": "2-3 sentences about what was worked on and how OpenClaude was used."}
  ]
}
@@ -1170,18 +1170,18 @@ Include 4-5 areas. Skip internal CC operations.`,
  },
  {
    name: 'interaction_style',
-    prompt: `Analyze this Claude Code usage data and describe the user's interaction style.
+    prompt: `Analyze this OpenClaude usage data and describe the user's interaction style.
RESPOND WITH ONLY A VALID JSON OBJECT:
{
-  "narrative": "2-3 paragraphs analyzing HOW the user interacts with Claude Code. Use second person 'you'. Describe patterns: iterate quickly vs detailed upfront specs? Interrupt often or let Claude run? Include specific examples. Use **bold** for key insights.",
+  "narrative": "2-3 paragraphs analyzing HOW the user interacts with OpenClaude. Use second person 'you'. Describe patterns: iterate quickly vs detailed upfront specs? Interrupt often or let Claude run? Include specific examples. Use **bold** for key insights.",
  "key_pattern": "One sentence summary of most distinctive interaction style"
}`,
    maxTokens: 8192,
  },
  {
    name: 'what_works',
-    prompt: `Analyze this Claude Code usage data and identify what's working well for this user. Use second person ("you").
+    prompt: `Analyze this OpenClaude usage data and identify what's working well for this user. Use second person ("you").
RESPOND WITH ONLY A VALID JSON OBJECT:
{
@@ -1196,7 +1196,7 @@ Include 3 impressive workflows.`,
  },
  {
    name: 'friction_analysis',
-    prompt: `Analyze this Claude Code usage data and identify friction points for this user. Use second person ("you").
+    prompt: `Analyze this OpenClaude usage data and identify friction points for this user. Use second person ("you").
RESPOND WITH ONLY A VALID JSON OBJECT:
{
@@ -1211,7 +1211,7 @@ Include 3 friction categories with 2 examples each.`,
  },
  {
    name: 'suggestions',
-    prompt: `Analyze this Claude Code usage data and suggest improvements.
+    prompt: `Analyze this OpenClaude usage data and suggest improvements.
## CC FEATURES REFERENCE (pick from these for features_to_try):
1. **MCP Servers**: Connect Claude to external tools, databases, and APIs via Model Context Protocol.
@@ -1254,7 +1254,7 @@ IMPORTANT for features_to_try: Pick 2-3 from the CC FEATURES REFERENCE above. In
  },
  {
    name: 'on_the_horizon',
-    prompt: `Analyze this Claude Code usage data and identify future opportunities.
+    prompt: `Analyze this OpenClaude usage data and identify future opportunities.
RESPOND WITH ONLY A VALID JSON OBJECT:
{
@@ -1271,7 +1271,7 @@ Include 3 opportunities. Think BIG - autonomous workflows, parallel agents, iter
    ? [
        {
          name: 'cc_team_improvements',
-          prompt: `Analyze this Claude Code usage data and suggest product improvements for the CC team.
+          prompt: `Analyze this OpenClaude usage data and suggest product improvements for the CC team.
RESPOND WITH ONLY A VALID JSON OBJECT:
{
@@ -1285,7 +1285,7 @@ Include 2-3 improvements based on friction patterns observed.`,
        },
        {
          name: 'model_behavior_improvements',
-          prompt: `Analyze this Claude Code usage data and suggest model behavior improvements.
+          prompt: `Analyze this OpenClaude usage data and suggest model behavior improvements.
RESPOND WITH ONLY A VALID JSON OBJECT:
{
@@ -1301,7 +1301,7 @@ Include 2-3 improvements based on friction patterns observed.`,
    : []),
  {
    name: 'fun_ending',
-    prompt: `Analyze this Claude Code usage data and find a memorable moment.
+    prompt: `Analyze this OpenClaude usage data and find a memorable moment.
RESPOND WITH ONLY A VALID JSON OBJECT:
{
@@ -1555,7 +1555,7 @@ async function generateParallelInsights(
      .join('\n') || ''
  // Now generate "At a Glance" with access to other sections' outputs
-  const atAGlancePrompt = `You're writing an "At a Glance" summary for a Claude Code usage insights report for Claude Code users. The goal is to help them understand their usage and improve how they can use Claude better, especially as models improve.
+  const atAGlancePrompt = `You're writing an "At a Glance" summary for a OpenClaude usage insights report for OpenClaude users. The goal is to help them understand their usage and improve how they can use Claude better, especially as models improve.
Use this 4-part structure:
@@ -1563,7 +1563,7 @@ Use this 4-part structure:
2. **What's hindering you** - Split into (a) Claude's fault (misunderstandings, wrong approaches, bugs) and (b) user-side friction (not providing enough context, environment issues -- ideally more general than just one project). Be honest but constructive.
-3. **Quick wins to try** - Specific Claude Code features they could try from the examples below, or a workflow technique if you think it's really compelling. (Avoid stuff like "Ask Claude to confirm before taking actions" or "Type out more context up front" which are less compelling.)
+3. **Quick wins to try** - Specific OpenClaude features they could try from the examples below, or a workflow technique if you think it's really compelling. (Avoid stuff like "Ask Claude to confirm before taking actions" or "Type out more context up front" which are less compelling.)
4. **Ambitious workflows for better models** - As we move to much more capable models over the next 3-6 months, what should they prepare for? What workflows that seem impossible now will become possible? Draw from the appropriate section below.
@@ -1826,7 +1826,7 @@ function generateHtmlReport(
  const interactionStyle = insights.interaction_style
  const interactionHtml = interactionStyle?.narrative
    ? `
-<h2 id="section-usage">How You Use Claude Code</h2>
+<h2 id="section-usage">How You Use OpenClaude</h2>
<div class="narrative">
${markdownToHtml(interactionStyle.narrative)}
${interactionStyle.key_pattern ? `<div class="key-insight"><strong>Key pattern:</strong> ${escapeHtml(interactionStyle.key_pattern)}</div>` : ''}
@@ -1890,7 +1890,7 @@ function generateHtmlReport(
<h2 id="section-features">Existing CC Features to Try</h2>
<div class="claude-md-section">
<h3>Suggested CLAUDE.md Additions</h3>
-<p style="font-size: 12px; color: #64748b; margin-bottom: 12px;">Just copy this into Claude Code to add it to your CLAUDE.md.</p>
+<p style="font-size: 12px; color: #64748b; margin-bottom: 12px;">Just copy this into OpenClaude to add it to your CLAUDE.md.</p>
<div class="claude-md-actions">
<button class="copy-all-btn" onclick="copyAllCheckedClaudeMd()">Copy All Checked</button>
</div>
@@ -1915,7 +1915,7 @@ function generateHtmlReport(
${
  suggestions.features_to_try && suggestions.features_to_try.length > 0
    ? `
-<p style="font-size: 13px; color: #64748b; margin-bottom: 12px;">Just copy this into Claude Code and it'll set it up for you.</p>
+<p style="font-size: 13px; color: #64748b; margin-bottom: 12px;">Just copy this into OpenClaude and it'll set it up for you.</p>
<div class="features-section">
${suggestions.features_to_try
  .map(
@@ -1949,8 +1949,8 @@ function generateHtmlReport(
${
  suggestions.usage_patterns && suggestions.usage_patterns.length > 0
    ? `
-<h2 id="section-patterns">New Ways to Use Claude Code</h2>
-<p style="font-size: 13px; color: #64748b; margin-bottom: 12px;">Just copy this into Claude Code and it'll walk you through it.</p>
+<h2 id="section-patterns">New Ways to Use OpenClaude</h2>
+<p style="font-size: 13px; color: #64748b; margin-bottom: 12px;">Just copy this into OpenClaude and it'll walk you through it.</p>
<div class="patterns-section">
${suggestions.usage_patterns
  .map(
@@ -1963,7 +1963,7 @@ function generateHtmlReport(
pat.copyable_prompt
  ? `
<div class="copyable-prompt-section">
-<div class="prompt-label">Paste into Claude Code:</div>
+<div class="prompt-label">Paste into OpenClaude:</div>
<div class="copyable-prompt-row">
<code class="copyable-prompt">${escapeHtml(pat.copyable_prompt)}</code>
<button class="copy-btn" onclick="copyText(this)">Copy</button>
@@ -1998,7 +1998,7 @@ function generateHtmlReport(
<div class="horizon-title">${escapeHtml(opp.title || '')}</div>
<div class="horizon-possible">${escapeHtml(opp.whats_possible || '')}</div>
${opp.how_to_try ? `<div class="horizon-tip"><strong>Getting started:</strong> ${escapeHtml(opp.how_to_try)}</div>` : ''}
-${opp.copyable_prompt ? `<div class="pattern-prompt"><div class="prompt-label">Paste into Claude Code:</div><code>${escapeHtml(opp.copyable_prompt)}</code><button class="copy-btn" onclick="copyText(this)">Copy</button></div>` : ''}
+${opp.copyable_prompt ? `<div class="pattern-prompt"><div class="prompt-label">Paste into OpenClaude:</div><code>${escapeHtml(opp.copyable_prompt)}</code><button class="copy-btn" onclick="copyText(this)">Copy</button></div>` : ''}
</div>
`,
)
@@ -2305,13 +2305,13 @@ function generateHtmlReport(
<html>
<head>
<meta charset="utf-8">
-<title>Claude Code Insights</title>
+<title>OpenClaude Insights</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap" rel="stylesheet">
<style>${css}</style>
</head>
<body>
<div class="container">
-<h1>Claude Code Insights</h1>
+<h1>OpenClaude Insights</h1>
<p class="subtitle">${data.total_messages.toLocaleString()} messages across ${data.total_sessions} sessions${data.total_sessions_scanned && data.total_sessions_scanned > data.total_sessions ? ` (${data.total_sessions_scanned.toLocaleString()} total)` : ''} | ${data.date_range.start} to ${data.date_range.end}</p>
${atAGlanceHtml}
@@ -2377,7 +2377,7 @@ function generateHtmlReport(
data.multi_clauding.overlap_events === 0
  ? `
<p style="font-size: 14px; color: #64748b; padding: 8px 0;">
-No parallel session usage detected. You typically work with one Claude Code session at a time.
+No parallel session usage detected. You typically work with one OpenClaude session at a time.
</p>
`
  : `
@@ -2396,7 +2396,7 @@ function generateHtmlReport(
</div>
</div>
<p style="font-size: 13px; color: #475569; margin-top: 12px;">
-You run multiple Claude Code sessions simultaneously. Multi-clauding is detected when sessions
+You run multiple OpenClaude sessions simultaneously. Multi-clauding is detected when sessions
overlap in time, suggesting parallel workflows.
</p>
`
@@ -2836,7 +2836,7 @@ function safeKeys(obj: Record<string, unknown> | undefined | null): string[] {
const usageReport: Command = {
  type: 'prompt',
  name: 'insights',
-  description: 'Generate a report analyzing your Claude Code sessions',
+  description: 'Generate a report analyzing your OpenClaude sessions',
  contentLength: 0, // Dynamic content
  progressMessage: 'analyzing your sessions',
  source: 'builtin',
@@ -2874,7 +2874,7 @@ ${atAGlance.quick_wins ? `**Quick wins to try:** ${atAGlance.quick_wins} See _Fe
${atAGlance.ambitious_workflows ? `**Ambitious workflows:** ${atAGlance.ambitious_workflows} See _On the Horizon_.` : ''}`
  : '_No insights generated_'
-const header = `# Claude Code Insights
+const header = `# OpenClaude Insights
${stats}
${data.date_range.start} to ${data.date_range.end}
@@ -2888,7 +2888,7 @@ Your full shareable insights report is ready: ${reportUrl}${uploadHint}`
  return [
    {
      type: 'text',
-      text: `The user just ran /insights to generate a usage report analyzing their Claude Code sessions.
+      text: `The user just ran /insights to generate a usage report analyzing their OpenClaude sessions.
Here is the full insights data:
${jsonStringify(insights, null, 2)}


@@ -210,12 +210,12 @@ function Install({
  useEffect(() => {
    if (state.type === 'success') {
      // Give success message time to render before exiting
-      setTimeout(onDone, 2000, 'Claude Code installation completed successfully', {
+      setTimeout(onDone, 2000, 'OpenClaude installation completed successfully', {
        display: 'system' as const
      });
    } else if (state.type === 'error') {
      // Give error message time to render before exiting
-      setTimeout(onDone, 3000, 'Claude Code installation failed', {
+      setTimeout(onDone, 3000, 'OpenClaude installation failed', {
        display: 'system' as const
      });
    }
@@ -226,7 +226,7 @@ function Install({
  {state.type === 'cleaning-npm' && <Text color="warning">Cleaning up old npm installations...</Text>}
  {state.type === 'installing' && <Text color="claude">
-    Installing Claude Code native build {state.version}...
+    Installing OpenClaude native build {state.version}...
  </Text>}
  {state.type === 'setting-up' && <Text color="claude">Setting up launcher and shell integration...</Text>}
@@ -237,7 +237,7 @@ function Install({
  <Box>
    <StatusIcon status="success" withSpace />
    <Text color="success" bold>
-      Claude Code successfully installed!
+      OpenClaude successfully installed!
    </Text>
  </Box>
  <Box marginLeft={2} flexDirection="column" gap={1}>
@@ -254,7 +254,7 @@ function Install({
  <Box marginTop={1}>
    <Text dimColor>Next: Run </Text>
    <Text color="claude" bold>
-      claude --help
+      openclaude --help
    </Text>
    <Text dimColor> to get started</Text>
  </Box>
@@ -279,7 +279,7 @@ function Install({
export const install = {
  type: 'local-jsx' as const,
  name: 'install',
-  description: 'Install Claude Code native build',
+  description: 'Install OpenClaude native build',
  argumentHint: '[options]',
  async call(onDone: (result: string, options?: {
    display?: CommandResultDisplay;


@@ -0,0 +1,12 @@
import type { Command } from '../../commands.js'
const knowledge: Command = {
type: 'local',
name: 'knowledge',
description: 'Manage native Knowledge Graph',
supportsNonInteractive: true,
argumentHint: 'enable <yes|no> | clear | status | list',
load: () => import('./knowledge.js'),
}
export default knowledge


@@ -0,0 +1,74 @@
import { describe, expect, it, beforeEach } from 'bun:test'
import { call as knowledgeCall } from './knowledge.js'
import { getGlobalConfig, saveGlobalConfig } from '../../utils/config.js'
import { getArc, addEntity, resetArc } from '../../utils/conversationArc.js'
import { getGlobalGraph, resetGlobalGraph } from '../../utils/knowledgeGraph.js'
describe('knowledge command', () => {
const mockContext = {} as any
beforeEach(() => {
resetArc()
resetGlobalGraph()
})
const knowledgeCallWithCapture = async (args: string) => {
const result = await knowledgeCall(args, mockContext)
if (result.type === 'text') {
return result.value
}
return ''
}
beforeEach(() => {
// Attempt to reset config - even if mocked, we try to set our key
try {
saveGlobalConfig(current => ({
...current,
knowledgeGraphEnabled: true
}))
} catch {
// Ignore if config is heavily mocked
}
resetArc()
})
it('enables and disables knowledge graph engine', async () => {
// Test Disable
const res1 = await knowledgeCallWithCapture('enable no')
expect(res1.toLowerCase()).toContain('disabled')
// Safety check: only verify state if property is actually present (avoid CI mock interference)
const config1 = getGlobalConfig()
if (config1 && 'knowledgeGraphEnabled' in config1) {
expect(config1.knowledgeGraphEnabled).toBe(false)
}
// Test Enable
const res2 = await knowledgeCallWithCapture('enable yes')
expect(res2.toLowerCase()).toContain('enabled')
const config2 = getGlobalConfig()
if (config2 && 'knowledgeGraphEnabled' in config2) {
expect(config2.knowledgeGraphEnabled).toBe(true)
}
})
it('clears the knowledge graph', async () => {
// Add a fact first
addEntity('test', 'fact')
const graph = getGlobalGraph()
expect(Object.keys(graph.entities).length).toBe(1)
// Clear it
const res = await knowledgeCallWithCapture('clear')
const graphAfter = getGlobalGraph()
expect(Object.keys(graphAfter.entities).length).toBe(0)
expect(res.toLowerCase()).toContain('cleared')
})
it('shows error on unknown subcommand', async () => {
const res = await knowledgeCallWithCapture('invalid')
expect(res.toLowerCase()).toContain('unknown subcommand')
})
})


@@ -0,0 +1,63 @@
import type { LocalCommandCall } from '../../types/command.js';
import { getArcSummary, resetArc, getArcStats } from '../../utils/conversationArc.js';
import { getGlobalGraph, resetGlobalGraph } from '../../utils/knowledgeGraph.js';
import { getGlobalConfig, saveGlobalConfig } from '../../utils/config.js';
import chalk from 'chalk';
export const call: LocalCommandCall = async (args, _context) => {
const arg = (args ? String(args) : '').trim().toLowerCase();
const splitArgs = arg.split(/\s+/).filter(Boolean);
const subCommand = splitArgs[0];
if (!subCommand || subCommand === 'status') {
const config = getGlobalConfig();
const stats = getArcStats();
const graph = getGlobalGraph();
const entityCount = Object.keys(graph.entities).length;
const statusText = (config.knowledgeGraphEnabled !== false)
? chalk.green('ENABLED')
: chalk.red('DISABLED');
let output = `${chalk.bold('Knowledge Graph Engine')}: ${statusText}\n`;
if (stats) {
output += `• Stats: ${stats.goalCount} goals, ${stats.milestoneCount} milestones, ${entityCount} technical facts learned`;
}
return { type: 'text', value: output };
}
if (subCommand === 'enable') {
const val = splitArgs[1];
const isEnabled = val === 'yes' || val === 'true';
const isDisabled = val === 'no' || val === 'false';
if (!isEnabled && !isDisabled) {
return { type: 'text', value: 'Usage: /knowledge enable <yes|no>' };
}
saveGlobalConfig(current => ({ ...current, knowledgeGraphEnabled: isEnabled }));
return {
type: 'text',
value: `✨ Knowledge Graph engine ${isEnabled ? chalk.green('enabled') : chalk.red('disabled')}.`
};
}
if (subCommand === 'clear') {
resetArc();
resetGlobalGraph();
return {
type: 'text',
value: '🗑️ Knowledge graph memory has been cleared for this session.'
};
}
if (subCommand === 'list') {
return { type: 'text', value: getArcSummary() };
}
return {
type: 'text',
value: `Unknown subcommand: ${subCommand}. Available: enable, clear, status, list`
};
};
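The dispatcher above normalizes its raw argument string before matching subcommands. That parsing step in isolation (standalone helper, name hypothetical):

```typescript
// Hypothetical standalone version of the argument normalization used by
// /knowledge: trim, lowercase, split on runs of whitespace, drop empty
// tokens so that undefined or blank input yields an empty array.
function parseSubcommand(raw: string | undefined): string[] {
  return (raw ? String(raw) : '').trim().toLowerCase().split(/\s+/).filter(Boolean)
}

console.log(parseSubcommand('  Enable   YES ')) // [ 'enable', 'yes' ]
console.log(parseSubcommand(undefined)) // []
```

The `filter(Boolean)` matters: splitting an empty string on `/\s+/` yields `['']`, so without it a bare `/knowledge` would appear to have a subcommand.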


@@ -34,16 +34,16 @@ export function registerMcpAddCommand(mcp: Command): void {
  mcp
    .command('add <name> <commandOrUrl> [args...]')
    .description(
-      'Add an MCP server to Claude Code.\n\n' +
+      'Add an MCP server to OpenClaude.\n\n' +
        'Examples:\n' +
        ' # Add HTTP server:\n' +
-        ' claude mcp add --transport http sentry https://mcp.sentry.dev/mcp\n\n' +
+        ' openclaude mcp add --transport http sentry https://mcp.sentry.dev/mcp\n\n' +
        ' # Add HTTP server with headers:\n' +
-        ' claude mcp add --transport http corridor https://app.corridor.dev/api/mcp --header "Authorization: Bearer ..."\n\n' +
+        ' openclaude mcp add --transport http corridor https://app.corridor.dev/api/mcp --header "Authorization: Bearer ..."\n\n' +
        ' # Add stdio server with environment variables:\n' +
-        ' claude mcp add -e API_KEY=xxx my-server -- npx my-mcp-server\n\n' +
+        ' openclaude mcp add -e API_KEY=xxx my-server -- npx my-mcp-server\n\n' +
        ' # Add stdio server with subprocess flags:\n' +
-        ' claude mcp add my-server -- my-command --some-flag arg1',
+        ' openclaude mcp add my-server -- my-command --some-flag arg1',
    )
    .option(
      '-s, --scope <scope>',
@@ -75,7 +75,7 @@ export function registerMcpAddCommand(mcp: Command): void {
    .addOption(
      new Option(
        '--xaa',
-        "Enable XAA (SEP-990) for this server. Requires 'claude mcp xaa setup' first. Also requires --client-id and --client-secret (for the MCP server's AS).",
+        "Enable XAA (SEP-990) for this server. Requires 'openclaude mcp xaa setup' first. Also requires --client-id and --client-secret (for the MCP server's AS).",
      ).hideHelp(!isXaaEnabled()),
    )
    .action(async (name, commandOrUrl, args, options) => {
@@ -87,12 +87,12 @@ export function registerMcpAddCommand(mcp: Command): void {
  if (!name) {
    cliError(
      'Error: Server name is required.\n' +
-        'Usage: claude mcp add <name> <command> [args...]',
+        'Usage: openclaude mcp add <name> <command> [args...]',
    )
  } else if (!actualCommand) {
    cliError(
      'Error: Command is required when server name is provided.\n' +
-        'Usage: claude mcp add <name> <command> [args...]',
+        'Usage: openclaude mcp add <name> <command> [args...]',
    )
  }
@@ -113,7 +113,7 @@ export function registerMcpAddCommand(mcp: Command): void {
  if (!options.clientSecret) missing.push('--client-secret')
  if (!getXaaIdpSettings()) {
    missing.push(
-      "'claude mcp xaa setup' (settings.xaaIdp not configured)",
+      "'openclaude mcp xaa setup' (settings.xaaIdp not configured)",
    )
  }
  if (missing.length) {
@@ -254,10 +254,10 @@ export function registerMcpAddCommand(mcp: Command): void {
    `\nWarning: The command "${actualCommand}" looks like a URL, but is being interpreted as a stdio server as --transport was not specified.\n`,
  )
  process.stderr.write(
-    `If this is an HTTP server, use: claude mcp add --transport http ${name} ${actualCommand}\n`,
+    `If this is an HTTP server, use: openclaude mcp add --transport http ${name} ${actualCommand}\n`,
  )
  process.stderr.write(
-    `If this is an SSE server, use: claude mcp add --transport sse ${name} ${actualCommand}\n`,
+    `If this is an SSE server, use: openclaude mcp add --transport sse ${name} ${actualCommand}\n`,
  )
}

View File

@@ -170,7 +170,7 @@ export function registerMcpXaaIdpCommand(mcp: Command): void {
 const idp = getXaaIdpSettings()
 if (!idp) {
 return cliError(
-"Error: no XAA IdP connection. Run 'claude mcp xaa setup' first.",
+"Error: no XAA IdP connection. Run 'openclaude mcp xaa setup' first.",
 )
 }
@@ -235,7 +235,7 @@ export function registerMcpXaaIdpCommand(mcp: Command): void {
 `Client secret: ${hasSecret ? '(stored in keychain)' : '(not set — PKCE-only)'}\n`,
 )
 process.stdout.write(
-`Logged in: ${hasIdToken ? 'yes (id_token cached)' : "no — run 'claude mcp xaa login'"}\n`,
+`Logged in: ${hasIdToken ? 'yes (id_token cached)' : "no — run 'openclaude mcp xaa login'"}\n`,
 )
 cliOk()
 })

View File

@@ -6,7 +6,7 @@ export default {
 type: 'local-jsx',
 name: 'model',
 get description() {
-return `Set the AI model for Claude Code (currently ${renderModelName(getMainLoopModel())})`
+return `Set the AI model for OpenClaude (currently ${renderModelName(getMainLoopModel())})`
 },
 argumentHint: '[model]',
 get immediate() {

View File

@@ -713,7 +713,7 @@ function EmptyStateMessage(t0) {
 {
 let t1;
 if ($[0] === Symbol.for("react.memo_cache_sentinel")) {
-t1 = <><Text dimColor={true}>Git is required to install marketplaces.</Text><Text dimColor={true}>Please install git and restart Claude Code.</Text></>;
+t1 = <><Text dimColor={true}>Git is required to install marketplaces.</Text><Text dimColor={true}>Please install git and restart OpenClaude.</Text></>;
 $[0] = t1;
 } else {
 t1 = $[0];

View File

@@ -3,7 +3,7 @@ const plugin = {
 type: 'local-jsx',
 name: 'plugin',
 aliases: ['plugins', 'marketplace'],
-description: 'Manage Claude Code plugins',
+description: 'Manage OpenClaude plugins',
 immediate: true,
 load: () => import('./plugin.js')
 } satisfies Command;

View File

@@ -11,6 +11,7 @@ import {
 buildCodexOAuthProfileEnv,
 buildCurrentProviderSummary,
 buildProfileSaveMessage,
+buildProviderManagerCompletion,
 getProviderWizardDefaults,
 ProviderWizard,
 TextEntryDialog,
@@ -264,6 +265,32 @@ test('wizard step remount prevents a typed API key from leaking into the next fi
 expect(output).not.toContain('sk-secret-12345678')
 })
+test('buildProviderManagerCompletion records provider switch event and model-visible reminder', () => {
+const completion = buildProviderManagerCompletion({
+action: 'activated',
+activeProviderName: 'Sadaf Provider',
+activeProviderModel: 'sadaf-model',
+message: 'Provider switched to Sadaf Provider (sadaf-model)',
+})
+expect(completion.message).toBe(
+'Provider switched to Sadaf Provider (sadaf-model)',
+)
+expect(completion.metaMessages).toEqual([
+'<system-reminder>Provider switched mid-session to Sadaf Provider using model sadaf-model. Use this provider/model for subsequent requests unless the user switches again.</system-reminder>',
+])
+})
+test('buildProviderManagerCompletion skips provider reminder when manager is cancelled', () => {
+const completion = buildProviderManagerCompletion({
+action: 'cancelled',
+message: 'Provider manager closed',
+})
+expect(completion.message).toBe('Provider manager closed')
+expect(completion.metaMessages).toBeUndefined()
+})
 test('buildProfileSaveMessage maps provider fields without echoing secrets', () => {
 const message = buildProfileSaveMessage(
 'openai',

View File

@@ -2,7 +2,10 @@ import * as React from 'react'
 import type { LocalJSXCommandCall, LocalJSXCommandOnDone } from '../../types/command.js'
 import { COMMON_HELP_ARGS, COMMON_INFO_ARGS } from '../../constants/xml.js'
-import { ProviderManager } from '../../components/ProviderManager.js'
+import {
+ProviderManager,
+type ProviderManagerResult,
+} from '../../components/ProviderManager.js'
 import TextInput from '../../components/TextInput.js'
 import {
 Select,
@@ -70,6 +73,29 @@ import {
 type OllamaGenerationReadiness,
 } from '../../utils/providerDiscovery.js'
+export function buildProviderManagerCompletion(result?: ProviderManagerResult): {
+message: string
+metaMessages?: string[]
+} {
+const message =
+result?.message ??
+(result?.action === 'saved'
+? 'Provider profile updated'
+: 'Provider manager closed')
+const metaMessages =
+result?.action === 'activated' && result.activeProviderName
+? [
+`<system-reminder>Provider switched mid-session to ${result.activeProviderName}${
+result.activeProviderModel
+? ` using model ${result.activeProviderModel}`
+: ''
+}. Use this provider/model for subsequent requests unless the user switches again.</system-reminder>`,
+]
+: undefined
+return { message, metaMessages }
+}
 function describeOllamaReadinessIssue(
 readiness: OllamaGenerationReadiness,
 options?: {
@@ -1703,13 +1729,8 @@ export const call: LocalJSXCommandCall = async (onDone, _context, args) => {
 <ProviderManager
 mode="manage"
 onDone={result => {
-const message =
-result?.message ??
-(result?.action === 'saved'
-? 'Provider profile updated'
-: 'Provider manager closed')
-onDone(message, { display: 'system' })
+const { message, metaMessages } = buildProviderManagerCompletion(result)
+onDone(message, { display: 'system', metaMessages })
 }}
 />
 )
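The hunk above centralizes the completion contract: the fallback message and the model-visible reminder now come from one helper instead of inline logic in `onDone`. A minimal standalone sketch of that contract — the `ProviderManagerResult` shape here is an assumption modeling only the fields the helper reads, not an import from the repo:

```typescript
// Hypothetical standalone model of ProviderManagerResult — only the fields
// buildProviderManagerCompletion reads, per the diff above.
type ProviderManagerResult = {
  action?: 'activated' | 'saved' | 'cancelled'
  message?: string
  activeProviderName?: string
  activeProviderModel?: string
}

function buildProviderManagerCompletion(result?: ProviderManagerResult): {
  message: string
  metaMessages?: string[]
} {
  // Fall back to a generic message when the manager closed without one.
  const message =
    result?.message ??
    (result?.action === 'saved'
      ? 'Provider profile updated'
      : 'Provider manager closed')
  // Only an activation with a named provider emits the model-visible reminder.
  const metaMessages =
    result?.action === 'activated' && result.activeProviderName
      ? [
          `<system-reminder>Provider switched mid-session to ${result.activeProviderName}${
            result.activeProviderModel
              ? ` using model ${result.activeProviderModel}`
              : ''
          }. Use this provider/model for subsequent requests unless the user switches again.</system-reminder>`,
        ]
      : undefined
  return { message, metaMessages }
}

// 'Acme'/'acme-1' are illustrative values, not a real provider profile.
const activated = buildProviderManagerCompletion({
  action: 'activated',
  activeProviderName: 'Acme',
  activeProviderModel: 'acme-1',
  message: 'Provider switched to Acme (acme-1)',
})
const cancelled = buildProviderManagerCompletion({ action: 'cancelled' })
```

Routing both paths through one helper keeps the UI callback a one-liner and makes the reminder behavior testable without rendering `ProviderManager`.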

View File

@@ -6,7 +6,7 @@ const web = {
 type: 'local-jsx',
 name: 'web-setup',
 description:
-'Setup Claude Code on the web (requires connecting your GitHub account)',
+'Setup OpenClaude on the web (requires connecting your GitHub account)',
 availability: ['claude-ai'],
 isEnabled: () =>
 getFeatureValue_CACHED_MAY_BE_STALE('tengu_cobalt_lantern', false) &&

View File

@@ -48,7 +48,7 @@ const review: Command = {
 const ultrareview: Command = {
 type: 'local-jsx',
 name: 'ultrareview',
-description: `~10–20 min · Finds and verifies bugs in your branch. Runs in Claude Code on the web. See ${CCR_TERMS_URL}`,
+description: `~10–20 min · Finds and verifies bugs in your branch. Runs in OpenClaude on the web. See ${CCR_TERMS_URL}`,
 isEnabled: () => isUltrareviewEnabled(),
 load: () => import('./review/ultrareviewCommand.js'),
 }

View File

@@ -57,7 +57,7 @@ function SessionInfo(t0) {
 if (!remoteSessionUrl) {
 let t4;
 if ($[4] === Symbol.for("react.memo_cache_sentinel")) {
-t4 = <Pane><Text color="warning">Not in remote mode. Start with `claude --remote` to use this command.</Text><Text dimColor={true}>(press esc to close)</Text></Pane>;
+t4 = <Pane><Text color="warning">Not in remote mode. Start with `openclaude --remote` to use this command.</Text><Text dimColor={true}>(press esc to close)</Text></Pane>;
 $[4] = t4;
 } else {
 t4 = $[4];

View File

@@ -3,7 +3,7 @@ import type { Command } from '../../commands.js'
 const stats = {
 type: 'local-jsx',
 name: 'stats',
-description: 'Show your Claude Code usage statistics and activity',
+description: 'Show your OpenClaude usage statistics and activity',
 load: () => import('./stats.js'),
 } satisfies Command

View File

@@ -4,7 +4,7 @@ const status = {
 type: 'local-jsx',
 name: 'status',
 description:
-'Show Claude Code status including version, model, account, API connectivity, and tool statuses',
+'Show OpenClaude status including version, model, account, API connectivity, and tool statuses',
 immediate: true,
 load: () => import('./status.js'),
 } satisfies Command

View File

@@ -3,7 +3,7 @@ import type { Command } from '../commands.js';
 import { AGENT_TOOL_NAME } from '../tools/AgentTool/constants.js';
 const statusline = {
 type: 'prompt',
-description: "Set up Claude Code's status line UI",
+description: "Set up OpenClaude's status line UI",
 contentLength: 0,
 // Dynamic content
 aliases: [],

View File

@@ -3,7 +3,7 @@ import type { Command } from '../../commands.js'
 const stickers = {
 type: 'local',
 name: 'stickers',
-description: 'Order Claude Code stickers',
+description: 'Order OpenClaude stickers',
 supportsNonInteractive: false,
 load: () => import('./stickers.js'),
 } satisfies Command

View File

@@ -4,7 +4,7 @@ import { checkStatsigFeatureGate_CACHED_MAY_BE_STALE } from '../../services/anal
 const thinkback = {
 type: 'local-jsx',
 name: 'think-back',
-description: 'Your 2025 Claude Code Year in Review',
+description: 'Your 2025 OpenClaude Year in Review',
 isEnabled: () =>
 checkStatsigFeatureGate_CACHED_MAY_BE_STALE('tengu_thinkback'),
 load: () => import('./thinkback.js'),

View File

@@ -115,7 +115,7 @@ function startDetachedPoll(taskId: string, sessionId: string, url: string, getAp
 ultraplanSessionUrl: undefined
 } : prev);
 enqueuePendingNotification({
-value: [`Ultraplan approved — executing in Claude Code on the web. Follow along at: ${url}`, '', 'Results will land as a pull request when the remote session finishes. There is nothing to do here.'].join('\n'),
+value: [`Ultraplan approved — executing in OpenClaude on the web. Follow along at: ${url}`, '', 'Results will land as a pull request when the remote session finishes. There is nothing to do here.'].join('\n'),
 mode: 'task-notification'
 });
 } else {
@@ -184,10 +184,10 @@ function startDetachedPoll(taskId: string, sessionId: string, url: string, getAp
 // multi-second teleportToRemote round-trip.
 function buildLaunchMessage(disconnectedBridge?: boolean): string {
 const prefix = disconnectedBridge ? `${REMOTE_CONTROL_DISCONNECTED_MSG} ` : '';
-return `${DIAMOND_OPEN} ultraplan\n${prefix}Starting Claude Code on the web…`;
+return `${DIAMOND_OPEN} ultraplan\n${prefix}Starting OpenClaude on the web…`;
 }
 function buildSessionReadyMessage(url: string): string {
-return `${DIAMOND_OPEN} ultraplan · Monitor progress in Claude Code on the web ${url}\nYou can continue working — when the ${DIAMOND_OPEN} fills, press ↓ to view results`;
+return `${DIAMOND_OPEN} ultraplan · Monitor progress in OpenClaude on the web ${url}\nYou can continue working — when the ${DIAMOND_OPEN} fills, press ↓ to view results`;
 }
 function buildAlreadyActiveMessage(url: string | undefined): string {
 return url ? `ultraplan: already polling. Open ${url} to check status, or wait for the plan to land here.` : 'ultraplan: already launching. Please wait for the session to start.';
@@ -272,7 +272,7 @@ export async function launchUltraplan(opts: {
 return [
 // Rendered via <Markdown>; raw <message> is tokenized as HTML
 // and dropped. Backslash-escape the brackets.
-'Usage: /ultraplan \\<prompt\\>, or include "ultraplan" anywhere', 'in your prompt', '', 'Advanced multi-agent plan mode with our most powerful model', '(Opus). Runs in Claude Code on the web. When the plan is ready,', 'you can execute it in the web session or send it back here.', 'Terminal stays free while the remote plans.', 'Requires /login.', '', `Terms: ${CCR_TERMS_URL}`].join('\n');
+'Usage: /ultraplan \\<prompt\\>, or include "ultraplan" anywhere', 'in your prompt', '', 'Advanced multi-agent plan mode with our most powerful model', '(Opus). Runs in OpenClaude on the web. When the plan is ready,', 'you can execute it in the web session or send it back here.', 'Terminal stays free while the remote plans.', 'Requires /login.', '', `Terms: ${CCR_TERMS_URL}`].join('\n');
 }
 // Set synchronously before the detached flow to prevent duplicate launches
@@ -461,7 +461,7 @@ const call: LocalJSXCommandCall = async (onDone, context, args) => {
 export default {
 type: 'local-jsx',
 name: 'ultraplan',
-description: `~10–30 min · Claude Code on the web drafts an advanced plan you can edit and approve. See ${CCR_TERMS_URL}`,
+description: `~10–30 min · OpenClaude on the web drafts an advanced plan you can edit and approve. See ${CCR_TERMS_URL}`,
 argumentHint: '<prompt>',
 isEnabled: () => "external" === 'ant',
 load: () => Promise.resolve({

View File

@@ -4,6 +4,5 @@ export default {
 type: 'local-jsx',
 name: 'usage',
 description: 'Show plan usage limits',
-availability: ['claude-ai'],
 load: () => import('./usage.js'),
 } satisfies Command

View File

@@ -56,7 +56,7 @@ export function ClaudeInChromeOnboarding(t0) {
 }
 let t5;
 if ($[6] !== t4) {
-t5 = <Text>Claude in Chrome works with the Chrome extension to let you control your browser directly from Claude Code. You can navigate websites, fill forms, capture screenshots, record GIFs, and debug with console logs and network requests.{t4}</Text>;
+t5 = <Text>Claude in Chrome works with the Chrome extension to let you control your browser directly from OpenClaude. You can navigate websites, fill forms, capture screenshots, record GIFs, and debug with console logs and network requests.{t4}</Text>;
 $[6] = t4;
 $[7] = t5;
 } else {

View File

@@ -262,7 +262,7 @@ export function ConsoleOAuthFlow({
 state: 'success'
 });
 void sendNotification({
-message: 'Claude Code login successful',
+message: 'OpenClaude login successful',
 notificationType: 'auth_success'
 }, terminal);
 }
@@ -384,7 +384,7 @@ function OAuthStatusMessage({
 case 'idle': {
 const promptText =
 startingMessage ||
-'Claude Code can be used with your Claude subscription or billed based on API usage through your Console account.'
+'OpenClaude can be used with your Claude subscription or billed based on API usage through your Console account.'
 const loginOptions = [
 {
@@ -512,7 +512,7 @@ function OAuthStatusMessage({
 <Box flexDirection="column" gap={1}>
 <Box>
 <Spinner />
-<Text>Creating API key for Claude Code</Text>
+<Text>Creating API key for OpenClaude</Text>
 </Box>
 </Box>
 )

View File

@@ -90,7 +90,7 @@ export function DesktopUpsellStartup(t0) {
 let t3;
 if ($[5] === Symbol.for("react.memo_cache_sentinel")) {
 t3 = {
-label: "Open in Claude Code Desktop",
+label: "Open in Claude desktop app",
 value: "try" as const
 };
 $[5] = t3;
@@ -120,7 +120,7 @@ export function DesktopUpsellStartup(t0) {
 const options = t5;
 let t6;
 if ($[8] === Symbol.for("react.memo_cache_sentinel")) {
-t6 = <Box marginBottom={1}><Text>Same Claude Code with visual diffs, live app preview, parallel sessions, and more.</Text></Box>;
+t6 = <Box marginBottom={1}><Text>Use OpenClaude in the Claude desktop app for visual diffs, live app preview, parallel sessions, and more.</Text></Box>;
 $[8] = t6;
 } else {
 t6 = $[8];
@@ -135,7 +135,7 @@ export function DesktopUpsellStartup(t0) {
 }
 let t8;
 if ($[11] !== handleSelect || $[12] !== t7) {
-t8 = <PermissionDialog title="Try Claude Code Desktop"><Box flexDirection="column" paddingX={2} paddingY={1}>{t6}<Select options={options} onChange={handleSelect} onCancel={t7} /></Box></PermissionDialog>;
+t8 = <PermissionDialog title="Try the Claude desktop app"><Box flexDirection="column" paddingX={2} paddingY={1}>{t6}<Select options={options} onChange={handleSelect} onCancel={t7} /></Box></PermissionDialog>;
 $[11] = handleSelect;
 $[12] = t7;
 $[13] = t8;

View File

@@ -138,7 +138,7 @@ export function HelpV2(t0) {
 const t5 = insideModal ? undefined : maxHeight;
 let t6;
 if ($[31] !== tabs) {
-t6 = <Tabs title={false ? "/help" : `Claude Code v${MACRO.VERSION}`} color="professionalBlue" defaultTab="general">{tabs}</Tabs>;
+t6 = <Tabs title={false ? "/help" : `OpenClaude v${MACRO.VERSION}`} color="professionalBlue" defaultTab="general">{tabs}</Tabs>;
 $[31] = tabs;
 $[32] = t6;
 } else {
@@ -146,7 +146,7 @@ export function HelpV2(t0) {
 }
 let t7;
 if ($[33] === Symbol.for("react.memo_cache_sentinel")) {
-t7 = <Box marginTop={1}><Text>For more help:{" "}<Link url="https://code.claude.com/docs/en/overview" /></Text></Box>;
+t7 = <Box marginTop={1}><Text>For more help:{" "}<Link url="https://github.com/Gitlawb/openclaude" /></Text></Box>;
 $[33] = t7;
 } else {
 t7 = $[33];

View File

@@ -70,7 +70,7 @@ export function IdeOnboardingDialog(t0) {
 }
 let t6;
 if ($[8] !== ideName) {
-t6 = <>{t5}<Text>Welcome to Claude Code for {ideName}</Text></>;
+t6 = <>{t5}<Text>Welcome to OpenClaude for {ideName}</Text></>;
 $[8] = ideName;
 $[9] = t6;
 } else {

View File

@@ -135,7 +135,7 @@ export function ChannelsNotice() {
 }
 let t2;
 if ($[24] !== flag) {
-t2 = <Text dimColor={true}>Experimental · inbound messages will be pushed into this session, this carries prompt injection risks. Restart Claude Code without {flag} to disable.</Text>;
+t2 = <Text dimColor={true}>Experimental · inbound messages will be pushed into this session, this carries prompt injection risks. Restart OpenClaude without {flag} to disable.</Text>;
 $[24] = flag;
 $[25] = t2;
 } else {

View File

@@ -250,8 +250,8 @@ export function LogoV2() {
 }
 const layoutMode = getLayoutMode(columns);
 const userTheme = resolveThemeSetting(getGlobalConfig().theme);
-const borderTitle = ` ${color("text", userTheme)("Open Claude")} ${color("inactive", userTheme)(`v${version}`)} `;
-const compactBorderTitle = color("text", userTheme)(" Open Claude ");
+const borderTitle = ` ${color("text", userTheme)("OpenClaude")} ${color("inactive", userTheme)(`v${version}`)} `;
+const compactBorderTitle = color("text", userTheme)(" OpenClaude ");
 if (layoutMode === "compact") {
 let welcomeMessage = formatWelcomeMessage(username);
 if (stringWidth(welcomeMessage) > columns - 4) {

View File

@@ -9,7 +9,7 @@ export function WelcomeV2() {
if (env.terminal === "Apple_Terminal") { if (env.terminal === "Apple_Terminal") {
let t0; let t0;
if ($[0] !== theme) { if ($[0] !== theme) {
t0 = <AppleTerminalWelcomeV2 theme={theme} welcomeMessage="Welcome to Claude Code" />; t0 = <AppleTerminalWelcomeV2 theme={theme} welcomeMessage="Welcome to OpenClaude" />;
$[0] = theme; $[0] = theme;
$[1] = t0; $[1] = t0;
} else { } else {
@@ -28,7 +28,7 @@ export function WelcomeV2() {
let t7; let t7;
let t8; let t8;
if ($[2] === Symbol.for("react.memo_cache_sentinel")) { if ($[2] === Symbol.for("react.memo_cache_sentinel")) {
t0 = <Text><Text color="claude">{"Welcome to Open Claude"} </Text><Text dimColor={true}>v{MACRO.DISPLAY_VERSION ?? MACRO.VERSION} </Text></Text>; t0 = <Text><Text color="claude">{"Welcome to OpenClaude"} </Text><Text dimColor={true}>v{MACRO.DISPLAY_VERSION ?? MACRO.VERSION} </Text></Text>;
t1 = <Text>{"\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026"}</Text>; t1 = <Text>{"\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026"}</Text>;
t2 = <Text>{" "}</Text>; t2 = <Text>{" "}</Text>;
t3 = <Text>{" "}</Text>; t3 = <Text>{" "}</Text>;
@@ -113,7 +113,7 @@ export function WelcomeV2() {
let t5; let t5;
let t6; let t6;
if ($[18] === Symbol.for("react.memo_cache_sentinel")) { if ($[18] === Symbol.for("react.memo_cache_sentinel")) {
t0 = <Text><Text color="claude">{"Welcome to Open Claude"} </Text><Text dimColor={true}>v{MACRO.DISPLAY_VERSION ?? MACRO.VERSION} </Text></Text>; t0 = <Text><Text color="claude">{"Welcome to OpenClaude"} </Text><Text dimColor={true}>v{MACRO.DISPLAY_VERSION ?? MACRO.VERSION} </Text></Text>;
t1 = <Text>{"\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026"}</Text>; t1 = <Text>{"\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026\u2026"}</Text>;
t2 = <Text>{" "}</Text>; t2 = <Text>{" "}</Text>;
t3 = <Text>{" * \u2588\u2588\u2588\u2588\u2588\u2593\u2593\u2591 "}</Text>; t3 = <Text>{" * \u2588\u2588\u2588\u2588\u2588\u2593\u2593\u2591 "}</Text>;

View File

@@ -41,7 +41,7 @@ export function createWhatsNewFeed(releaseNotes: string[]): FeedConfig {
 });
 const emptyMessage = "external" === 'ant' ? 'Unable to fetch latest claude-cli-internal commits' : 'Check /release-notes for recent updates';
 return {
-title: "external" === 'ant' ? "Open Claude Updates [internal-only: Latest CC commits]" : "Open Claude Updates",
+title: "external" === 'ant' ? "OpenClaude Updates [internal-only: Latest CC commits]" : "OpenClaude Updates",
 lines,
 footer: lines.length > 0 ? '/release-notes for more' : undefined,
 emptyMessage
@@ -60,7 +60,7 @@ export function createProjectOnboardingFeed(steps: Step[]): FeedConfig {
 text: `${checkmark}${text}`
 };
 });
-const warningText = getCwd() === homedir() ? 'Note: You have launched claude in your home directory. For the best experience, launch it in a project directory instead.' : undefined;
+const warningText = getCwd() === homedir() ? 'Note: You have launched openclaude in your home directory. For the best experience, launch it in a project directory instead.' : undefined;
 if (warningText) {
 lines.push({
 text: warningText
@@ -73,7 +73,7 @@ export function createProjectOnboardingFeed(steps: Step[]): FeedConfig {
 }
 export function createGuestPassesFeed(): FeedConfig {
 const reward = getCachedReferrerReward();
-const subtitle = reward ? `Share Open Claude and earn ${formatCreditAmount(reward)} of extra usage` : 'Share Open Claude with friends';
+const subtitle = reward ? `Share OpenClaude and earn ${formatCreditAmount(reward)} of extra usage` : 'Share OpenClaude with friends';
 return {
 title: '3 guest passes',
 lines: [],

View File

@@ -265,7 +265,7 @@ export function ModelPicker(t0) {
 } else {
 t15 = $[41];
 }
-const t16 = headerText ?? "Switch between Claude models. Applies to this session and future Claude Code sessions. For other/previous model names, specify with --model.";
+const t16 = headerText ?? "Switch between Claude models. Applies to this session and future OpenClaude sessions. For other/previous model names, specify with --model.";
 let t17;
 if ($[42] !== t16) {
 t17 = <Text dimColor={true}>{t16}</Text>;

View File

@@ -146,7 +146,7 @@ export function Onboarding({
 steps.push({
 id: 'terminal-setup',
 component: <Box flexDirection="column" gap={1} paddingLeft={1}>
-<Text bold>Use Claude Code&apos;s terminal setup?</Text>
+<Text bold>Use OpenClaude&apos;s terminal setup?</Text>
 <Box flexDirection="column" width={70} gap={1}>
 <Text>
 For the optimal coding experience, enable the recommended settings

View File

@@ -80,7 +80,7 @@ export function OutputStylePicker(t0) {
 const t6 = !isStandaloneCommand;
 let t7;
 if ($[5] === Symbol.for("react.memo_cache_sentinel")) {
-t7 = <Box marginTop={1}><Text dimColor={true}>This changes how Claude Code communicates with you</Text></Box>;
+t7 = <Box marginTop={1}><Text dimColor={true}>This changes how OpenClaude communicates with you</Text></Box>;
 $[5] = t7;
 } else {
 t7 = $[5];

View File

@@ -111,7 +111,7 @@ import { BackgroundTasksDialog } from '../tasks/BackgroundTasksDialog.js';
 import { shouldHideTasksFooter } from '../tasks/taskStatusUtils.js';
 import { TeamsDialog } from '../teams/TeamsDialog.js';
 import VimTextInput from '../VimTextInput.js';
-import { getModeFromInput, getValueFromInput } from './inputModes.js';
+import { detectModeEntry, getModeFromInput, getValueFromInput } from './inputModes.js';
 import { FOOTER_TEMPORARY_STATUS_TIMEOUT, Notifications } from './Notifications.js';
 import PromptInputFooter from './PromptInputFooter.js';
 import type { SuggestionItem } from './PromptInputFooterSuggestions.js';
@@ -773,7 +773,7 @@ function PromptInput({
 if (feature('ULTRAPLAN') && ultraplanTriggers.length) {
 addNotification({
 key: 'ultraplan-active',
-text: 'This prompt will launch an ultraplan session in Claude Code on the web',
+text: 'This prompt will launch an ultraplan session in OpenClaude on the web',
 priority: 'immediate',
 timeoutMs: 5000
 });
@@ -878,24 +878,22 @@ function PromptInput({
 abortPromptSuggestion();
 abortSpeculation(setAppState);
-// Check if this is a single character insertion at the start
-const isSingleCharInsertion = value.length === input.length + 1;
-const insertedAtStart = cursorOffset === 0;
-const mode = getModeFromInput(value);
-if (insertedAtStart && mode !== 'prompt') {
-if (isSingleCharInsertion) {
-onModeChange(mode);
-return;
-}
-// Multi-char insertion into empty input (e.g. tab-accepting "! gcloud auth login")
-if (input.length === 0) {
-onModeChange(mode);
-const valueWithoutMode = getValueFromInput(value).replaceAll('\t', ' ');
-pushToBuffer(input, cursorOffset, pastedContents);
-trackAndSetInput(valueWithoutMode);
-setCursorOffset(valueWithoutMode.length);
-return;
-}
+// Strip the mode character from the buffer when entering bash mode — the
+// mode itself is shown via the prompt prefix in the UI. Without this,
+// typing `!` into empty input would enter bash mode but leave the literal
+// `!` in the buffer (issue #662).
+const modeEntry = detectModeEntry({
+value,
+prevInputLength: input.length,
+cursorOffset,
+});
+if (modeEntry) {
+onModeChange(modeEntry.mode);
+const cleaned = modeEntry.strippedValue.replaceAll('\t', ' ');
+pushToBuffer(input, cursorOffset, pastedContents);
+trackAndSetInput(cleaned);
+setCursorOffset(cleaned.length);
+return;
 }
 const processedValue = value.replaceAll('\t', ' ');
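The hunk above replaces two ad-hoc branches (single-char typing vs. multi-char paste) with one `detectModeEntry` call that always returns the stripped value. A minimal sketch of what that helper plausibly looks like, reconstructed from the call site and the tests below — the repo's actual implementation may differ in details:

```typescript
// Hypothetical standalone reconstruction of the inputModes helpers, inferred
// from detectModeEntry's call site and its test expectations.
type InputMode = 'bash' | 'prompt'

const getModeFromInput = (value: string): InputMode =>
  value.startsWith('!') ? 'bash' : 'prompt'

// Strip the leading mode character once bash mode is detected.
const getValueFromInput = (value: string): string =>
  value.startsWith('!') ? value.slice(1) : value

function detectModeEntry(args: {
  value: string
  prevInputLength: number
  cursorOffset: number
}): { mode: InputMode; strippedValue: string } | null {
  const { value, prevInputLength, cursorOffset } = args
  const mode = getModeFromInput(value)
  // Only a `!` inserted at the start of the input triggers a mode switch.
  if (mode === 'prompt' || cursorOffset !== 0) return null
  const isSingleCharInsertion = value.length === prevInputLength + 1
  // Both paths return the stripped value, so the literal `!` never
  // lingers in the buffer (the bug the hunk above fixes).
  if (isSingleCharInsertion || prevInputLength === 0) {
    return { mode, strippedValue: getValueFromInput(value) }
  }
  return null
}
```

Returning `strippedValue` from the detector itself means the caller cannot forget to strip on one path, which is exactly how the single-char branch regressed before.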

View File

@@ -0,0 +1,104 @@
import { describe, expect, it } from 'bun:test'
import {
detectModeEntry,
getModeFromInput,
getValueFromInput,
isInputModeCharacter,
prependModeCharacterToInput,
} from './inputModes.js'
describe('inputModes', () => {
describe('getModeFromInput', () => {
it('returns bash mode for input starting with !', () => {
expect(getModeFromInput('!')).toBe('bash')
expect(getModeFromInput('!ls')).toBe('bash')
})
it('returns prompt mode for non-bash input', () => {
expect(getModeFromInput('')).toBe('prompt')
expect(getModeFromInput('hello')).toBe('prompt')
expect(getModeFromInput(' !')).toBe('prompt')
})
})
describe('getValueFromInput', () => {
it('strips the leading ! when entering bash mode', () => {
expect(getValueFromInput('!')).toBe('')
expect(getValueFromInput('!ls -la')).toBe('ls -la')
})
it('returns input unchanged in prompt mode', () => {
expect(getValueFromInput('')).toBe('')
expect(getValueFromInput('hello')).toBe('hello')
})
})
describe('isInputModeCharacter', () => {
it('returns true only for the bare ! character', () => {
expect(isInputModeCharacter('!')).toBe(true)
expect(isInputModeCharacter('!ls')).toBe(false)
expect(isInputModeCharacter('')).toBe(false)
})
})
describe('prependModeCharacterToInput', () => {
it('prepends ! when mode is bash', () => {
expect(prependModeCharacterToInput('ls', 'bash')).toBe('!ls')
expect(prependModeCharacterToInput('', 'bash')).toBe('!')
})
it('returns input unchanged in prompt mode', () => {
expect(prependModeCharacterToInput('hello', 'prompt')).toBe('hello')
})
})
describe('detectModeEntry', () => {
// Regression for #662 — typing `!` into empty input must switch to bash
// mode AND yield an empty stripped buffer. Before the fix the single-char
// path returned without stripping, leaving `!` visible in the buffer.
it('strips the mode character when typing ! into empty input', () => {
expect(
detectModeEntry({ value: '!', prevInputLength: 0, cursorOffset: 0 }),
).toEqual({ mode: 'bash', strippedValue: '' })
})
it('strips the mode character when pasting !cmd into empty input', () => {
expect(
detectModeEntry({ value: '!ls -la', prevInputLength: 0, cursorOffset: 0 }),
).toEqual({ mode: 'bash', strippedValue: 'ls -la' })
})
it('returns null when the cursor is not at the start', () => {
expect(
detectModeEntry({ value: '!', prevInputLength: 0, cursorOffset: 1 }),
).toBeNull()
})
it('returns null when the value does not start with !', () => {
expect(
detectModeEntry({ value: 'hello', prevInputLength: 0, cursorOffset: 0 }),
).toBeNull()
})
it('returns null when typing ! after existing text', () => {
// value="ab!" with prevInputLength=2 is a single-char insertion but does
// not start with ! — getModeFromInput returns 'prompt'.
expect(
detectModeEntry({ value: 'ab!', prevInputLength: 2, cursorOffset: 0 }),
).toBeNull()
})
it('returns null when prepending ! to non-empty existing text', () => {
// Single-char insertion at start that produces "!ab" from "ab" — value
// length is 3, prevInputLength is 2, so isSingleCharInsertion is true
// and isMultiCharIntoEmpty is false. We accept the mode change here so
// that typing ! at the start of existing text still toggles mode.
const result = detectModeEntry({
value: '!ab',
prevInputLength: 2,
cursorOffset: 0,
})
expect(result).toEqual({ mode: 'bash', strippedValue: 'ab' })
})
})
})

@@ -31,3 +31,30 @@ export function getValueFromInput(input: string): string {
export function isInputModeCharacter(input: string): boolean {
  return input === '!'
}
export type ModeEntryDecision = {
mode: HistoryMode
strippedValue: string
}
/**
* Decide whether an onChange `value` should switch the input mode (e.g.
* `prompt` → `bash`) and what the stripped buffer value should be.
*
* Returns null when no mode change applies. Returns a decision otherwise so
* callers run a single update path — no separate single-char vs multi-char
* branches that can drift apart.
*/
export function detectModeEntry(args: {
value: string
prevInputLength: number
cursorOffset: number
}): ModeEntryDecision | null {
if (args.cursorOffset !== 0) return null
const mode = getModeFromInput(args.value)
if (mode === 'prompt') return null
const isSingleCharInsertion = args.value.length === args.prevInputLength + 1
const isMultiCharIntoEmpty = args.prevInputLength === 0
if (!isSingleCharInsertion && !isMultiCharIntoEmpty) return null
return { mode, strippedValue: getValueFromInput(args.value) }
}
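The new `detectModeEntry` helper above can be exercised in isolation. Below is a minimal standalone sketch of the same decision logic; the `getModeFromInput` and `getValueFromInput` bodies here are simplified re-declarations for illustration only, not the real implementations from `inputModes.ts`.

```typescript
// Standalone sketch of the detectModeEntry decision logic.
type HistoryMode = 'prompt' | 'bash'

type ModeEntryDecision = { mode: HistoryMode; strippedValue: string }

// Simplified stand-ins for the real helpers in inputModes.ts.
function getModeFromInput(input: string): HistoryMode {
  return input.startsWith('!') ? 'bash' : 'prompt'
}

function getValueFromInput(input: string): string {
  return input.startsWith('!') ? input.slice(1) : input
}

function detectModeEntry(args: {
  value: string
  prevInputLength: number
  cursorOffset: number
}): ModeEntryDecision | null {
  // Only a change anchored at the start of the buffer can enter a mode.
  if (args.cursorOffset !== 0) return null
  const mode = getModeFromInput(args.value)
  if (mode === 'prompt') return null
  // Accept either a single typed character or a multi-char paste into
  // empty input; anything else (e.g. edits mid-buffer) is ignored.
  const isSingleCharInsertion = args.value.length === args.prevInputLength + 1
  const isMultiCharIntoEmpty = args.prevInputLength === 0
  if (!isSingleCharInsertion && !isMultiCharIntoEmpty) return null
  return { mode, strippedValue: getValueFromInput(args.value) }
}

// Typing `!` into empty input: bash mode with an empty buffer (#662).
console.log(detectModeEntry({ value: '!', prevInputLength: 0, cursorOffset: 0 }))
// Pasting a command: the `!` is stripped from the stored value.
console.log(detectModeEntry({ value: '!ls -la', prevInputLength: 0, cursorOffset: 0 }))
// Cursor not at the start: no mode change.
console.log(detectModeEntry({ value: '!', prevInputLength: 0, cursorOffset: 1 }))
```

Because the decision is a pure function of `value`, `prevInputLength`, and `cursorOffset`, the single-char and paste paths cannot drift apart the way the two inline branches in the old `PromptInput` code could.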

@@ -110,6 +110,7 @@ const PRESET_ORDER = [
   'Anthropic',
   'Atomic Chat',
   'Azure OpenAI',
+  'Bankr',
   'Codex OAuth',
   'DeepSeek',
   'Google Gemini',
@@ -117,12 +118,15 @@
   'LM Studio',
   'MiniMax',
   'Mistral',
-  'Moonshot AI',
+  'Moonshot AI - API',
+  'Moonshot AI - Kimi Code',
   'NVIDIA NIM',
   'Ollama',
   'OpenAI',
   'OpenRouter',
   'Together AI',
+  'xAI',
+  'Z.AI - GLM Coding Plan',
   'Custom',
 ] as const
@@ -151,6 +155,7 @@ function createDeferred<T>(): {
 function mockProviderProfilesModule(options?: {
   addProviderProfile?: (...args: unknown[]) => unknown
+  getActiveProviderProfile?: () => unknown
   getProviderProfiles?: () => unknown[]
   updateProviderProfile?: (...args: unknown[]) => unknown
   setActiveProviderProfile?: (...args: unknown[]) => unknown
@@ -159,7 +164,7 @@ function mockProviderProfilesModule(options?: {
   addProviderProfile: options?.addProviderProfile ?? (() => null),
   applyActiveProviderProfileFromConfig: () => {},
   deleteProviderProfile: () => ({ removed: false, activeProfileId: null }),
-  getActiveProviderProfile: () => null,
+  getActiveProviderProfile: options?.getActiveProviderProfile ?? (() => null),
   getProviderPresetDefaults: (preset: string) =>
     preset === 'ollama'
       ? {
@@ -189,6 +194,7 @@ function mockProviderManagerDependencies(
   addProviderProfile?: (...args: unknown[]) => unknown
   applySavedProfileToCurrentSession?: (...args: unknown[]) => Promise<string | null>
   clearCodexCredentials?: () => { success: boolean; warning?: string }
+  getActiveProviderProfile?: () => unknown
   getProviderProfiles?: () => unknown[]
   probeOllamaGenerationReadiness?: () => Promise<{
     state: 'ready' | 'unreachable' | 'no_models' | 'generation_failed'
@@ -228,6 +234,7 @@ function mockProviderManagerDependencies(
 ): void {
   mockProviderProfilesModule({
     addProviderProfile: options?.addProviderProfile,
+    getActiveProviderProfile: options?.getActiveProviderProfile,
     getProviderProfiles: options?.getProviderProfiles,
     updateProviderProfile: options?.updateProviderProfile,
     setActiveProviderProfile: options?.setActiveProviderProfile,
@@ -330,6 +337,10 @@ async function mountProviderManager(
   options?: {
     mode?: 'first-run' | 'manage'
     onDone?: (result?: unknown) => void
+    onChangeAppState?: (args: {
+      newState: unknown
+      oldState: unknown
+    }) => void
   },
 ): Promise<{
   stdin: PassThrough
@@ -344,7 +355,7 @@ async function mountProviderManager(
   })
   root.render(
-    <AppStateProvider>
+    <AppStateProvider onChangeAppState={options?.onChangeAppState}>
       <KeybindingSetup>
         <ProviderManager
           mode={options?.mode ?? 'manage'}
@@ -906,6 +917,223 @@ test('ProviderManager keeps Codex OAuth as next-startup only when activating the
   await mounted.dispose()
 })
test('ProviderManager activating a multi-model provider sets the session model to the primary model', async () => {
delete process.env.CLAUDE_CODE_SIMPLE
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const multiModelProfile = {
id: 'provider_multi_model',
provider: 'openai',
name: 'Multi Model Provider',
baseUrl: 'https://api.openai.com/v1',
model: 'gpt-5.4; gpt-5.4-mini',
apiKey: 'sk-test',
}
const setActiveProviderProfile = mock(() => multiModelProfile)
const appStateChanges: Array<{ newState: any; oldState: any }> = []
mockProviderManagerDependencies(
() => undefined,
async () => undefined,
{
getProviderProfiles: () => [multiModelProfile],
setActiveProviderProfile,
},
)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const mounted = await mountProviderManager(ProviderManager, {
onChangeAppState: args => {
appStateChanges.push(args as { newState: any; oldState: any })
},
})
await waitForFrameOutput(
mounted.getOutput,
frame =>
frame.includes('Provider manager') &&
frame.includes('Set active provider'),
)
mounted.stdin.write('j')
await Bun.sleep(25)
mounted.stdin.write('\r')
await waitForFrameOutput(
mounted.getOutput,
frame =>
frame.includes('Set active provider') &&
frame.includes('Multi Model Provider'),
)
await Bun.sleep(25)
mounted.stdin.write('\r')
await waitForCondition(() => setActiveProviderProfile.mock.calls.length > 0)
await waitForCondition(() =>
appStateChanges.some(
({ newState, oldState }) =>
newState.mainLoopModel === 'gpt-5.4' &&
oldState.mainLoopModel !== newState.mainLoopModel,
),
)
expect(setActiveProviderProfile).toHaveBeenCalledWith('provider_multi_model')
expect(
appStateChanges.some(
({ newState }) =>
newState.mainLoopModel === 'gpt-5.4' &&
newState.mainLoopModelForSession === null,
),
).toBe(true)
expect(
appStateChanges.some(
({ newState }) => newState.mainLoopModel === 'gpt-5.4; gpt-5.4-mini',
),
).toBe(false)
await mounted.dispose()
})
test('ProviderManager editing an active multi-model provider keeps app state on the primary model', async () => {
delete process.env.CLAUDE_CODE_SIMPLE
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const multiModelProfile = {
id: 'provider_multi_model',
provider: 'openai',
name: 'Multi Model Provider',
baseUrl: 'https://api.openai.com/v1',
model: 'gpt-5.4; gpt-5.4-mini',
apiKey: 'sk-test',
}
const updateProviderProfile = mock(() => multiModelProfile)
const appStateChanges: Array<{ newState: any; oldState: any }> = []
mockProviderManagerDependencies(
() => undefined,
async () => undefined,
{
getActiveProviderProfile: () => multiModelProfile,
getProviderProfiles: () => [multiModelProfile],
updateProviderProfile,
},
)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const mounted = await mountProviderManager(ProviderManager, {
onChangeAppState: args => {
appStateChanges.push(args as { newState: any; oldState: any })
},
})
await waitForFrameOutput(
mounted.getOutput,
frame =>
frame.includes('Provider manager') &&
frame.includes('Edit provider'),
)
mounted.stdin.write('j')
await Bun.sleep(25)
mounted.stdin.write('j')
await Bun.sleep(25)
mounted.stdin.write('\r')
await waitForFrameOutput(
mounted.getOutput,
frame =>
frame.includes('Edit provider') &&
frame.includes('Multi Model Provider'),
)
await Bun.sleep(25)
mounted.stdin.write('\r')
await waitForFrameOutput(
mounted.getOutput,
frame =>
frame.includes('Edit provider profile') &&
frame.includes('Step 1 of 7'),
)
mounted.stdin.write('\r')
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Step 2 of 7'),
)
mounted.stdin.write('\r')
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Step 3 of 7'),
)
mounted.stdin.write('\r')
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Step 4 of 7'),
)
mounted.stdin.write('\r')
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Step 5 of 7'),
)
mounted.stdin.write('\r')
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Step 6 of 7'),
)
mounted.stdin.write('\r')
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Step 7 of 7'),
)
mounted.stdin.write('\r')
await waitForCondition(() => updateProviderProfile.mock.calls.length > 0)
await waitForCondition(() =>
appStateChanges.some(
({ newState, oldState }) =>
newState.mainLoopModel === 'gpt-5.4' &&
oldState.mainLoopModel !== newState.mainLoopModel,
),
)
expect(updateProviderProfile).toHaveBeenCalledWith(
'provider_multi_model',
expect.objectContaining({
model: 'gpt-5.4; gpt-5.4-mini',
}),
)
expect(
appStateChanges.some(
({ newState }) =>
newState.mainLoopModel === 'gpt-5.4' &&
newState.mainLoopModelForSession === null,
),
).toBe(true)
expect(
appStateChanges.some(
({ newState }) => newState.mainLoopModel === 'gpt-5.4; gpt-5.4-mini',
),
).toBe(false)
await mounted.dispose()
})
 test('ProviderManager resolves Codex OAuth state from async storage without sync reads in render flow', async () => {
   delete process.env.CLAUDE_CODE_SIMPLE
   delete process.env.CLAUDE_CODE_USE_GITHUB

@@ -46,6 +46,7 @@ import {
   rankOllamaModels,
   recommendOllamaModel,
 } from '../utils/providerRecommendation.js'
+import { clearStartupProviderOverrides } from '../utils/providerStartupOverrides.js'
 import { redactUrlForDisplay } from '../utils/urlRedaction.js'
 import { updateSettingsForSource } from '../utils/settings/settings.js'
 import {
@@ -57,8 +58,10 @@ import TextInput from './TextInput.js'
 import { useCodexOAuthFlow } from './useCodexOAuthFlow.js'
 export type ProviderManagerResult = {
-  action: 'saved' | 'cancelled'
+  action: 'saved' | 'cancelled' | 'activated'
   activeProfileId?: string
+  activeProviderName?: string
+  activeProviderModel?: string
   message?: string
 }
@@ -78,7 +81,14 @@ type Screen =
   | 'select-edit'
   | 'select-delete'
-type DraftField = 'name' | 'baseUrl' | 'model' | 'apiKey'
+type DraftField =
+  | 'name'
+  | 'baseUrl'
+  | 'model'
+  | 'apiKey'
+  | 'apiFormat'
+  | 'authHeader'
+  | 'authHeaderValue'
 type ProviderDraft = Record<DraftField, string>
@@ -124,8 +134,29 @@ const FORM_STEPS: Array<{
   {
     key: 'model',
     label: 'Default model',
-    placeholder: 'e.g. llama3.1:8b or glm-4.7, glm-4.7-flash',
-    helpText: 'Model name(s) to use. Separate multiple with commas; first is default.',
+    placeholder: 'e.g. llama3.1:8b or glm-4.7; glm-4.7-flash',
+    helpText: 'Model name(s) to use. Separate multiple with ";" or ","; first is default.',
+  },
+  {
+    key: 'apiFormat',
+    label: 'API mode',
+    placeholder: 'chat_completions',
+    helpText: 'Choose the OpenAI-compatible API surface for this provider.',
+    optional: true,
+  },
+  {
+    key: 'authHeader',
+    label: 'Auth header',
+    placeholder: 'e.g. api-key or X-API-Key',
+    helpText: 'Optional. Header name used for a custom provider key.',
+    optional: true,
+  },
+  {
+    key: 'authHeaderValue',
+    label: 'Auth header value',
+    placeholder: 'Leave empty to use the API key value',
+    helpText: 'Optional. Value sent in the custom auth header.',
+    optional: true,
   },
   {
     key: 'apiKey',
@@ -151,6 +182,9 @@ function toDraft(profile: ProviderProfile): ProviderDraft {
     baseUrl: profile.baseUrl,
     model: profile.model,
     apiKey: profile.apiKey ?? '',
+    apiFormat: profile.apiFormat ?? 'chat_completions',
+    authHeader: profile.authHeader ?? '',
+    authHeaderValue: profile.authHeaderValue ?? '',
   }
 }
@@ -161,6 +195,9 @@ function presetToDraft(preset: ProviderPreset): ProviderDraft {
     baseUrl: defaults.baseUrl,
     model: defaults.model,
     apiKey: defaults.apiKey ?? '',
+    apiFormat: 'chat_completions',
+    authHeader: '',
+    authHeaderValue: '',
   }
 }
@@ -174,7 +211,15 @@ function profileSummary(profile: ProviderProfile, isActive: boolean): string {
     models.length <= 3
       ? models.join(', ')
       : `${models[0]}, ${models[1]} + ${models.length - 2} more`
-  return `${providerKind} · ${profile.baseUrl} · ${modelDisplay} · ${keyInfo}${activeSuffix}`
+  const modeInfo =
+    profile.provider === 'openai'
+      ? ` · ${profile.apiFormat === 'responses' ? 'responses' : 'chat/completions'}`
+      : ''
+  const authInfo =
+    profile.provider === 'openai' && profile.authHeader
+      ? ` · ${profile.authHeader} auth`
+      : ''
+  return `${providerKind} · ${profile.baseUrl} · ${modelDisplay}${modeInfo}${authInfo} · ${keyInfo}${activeSuffix}`
 }
 function getGithubCredentialSourceFromEnv(
@@ -453,7 +498,18 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
     })
   }, [])
-  const currentStep = FORM_STEPS[formStepIndex] ?? FORM_STEPS[0]
+  const formSteps = React.useMemo(
+    () =>
+      draftProvider === 'openai'
+        ? FORM_STEPS
+        : FORM_STEPS.filter(step =>
+            step.key !== 'apiFormat' &&
+            step.key !== 'authHeader' &&
+            step.key !== 'authHeaderValue'
+          ),
+    [draftProvider],
+  )
+  const currentStep = formSteps[formStepIndex] ?? formSteps[0] ?? FORM_STEPS[0]
   const currentStepKey = currentStep.key
   const currentValue = draft[currentStepKey]
@@ -671,17 +727,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
 }
 function clearStartupProviderOverrideFromUserSettings(): string | null {
-  const { error } = updateSettingsForSource('userSettings', {
-    env: {
-      CLAUDE_CODE_USE_OPENAI: undefined as any,
-      CLAUDE_CODE_USE_GEMINI: undefined as any,
-      CLAUDE_CODE_USE_GITHUB: undefined as any,
-      CLAUDE_CODE_USE_BEDROCK: undefined as any,
-      CLAUDE_CODE_USE_VERTEX: undefined as any,
-      CLAUDE_CODE_USE_FOUNDRY: undefined as any,
-    },
-  })
-  return error ? error.message : null
+  return clearStartupProviderOverrides()
 }
 function buildCodexOAuthActivationMessage(options: {
@@ -768,12 +814,14 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
       mainLoopModelForSession: null,
     }))
     refreshProfiles()
-    setAppState(prev => ({
-      ...prev,
-      mainLoopModel: GITHUB_PROVIDER_DEFAULT_MODEL,
-    }))
     setStatusMessage(`Active provider: ${GITHUB_PROVIDER_LABEL}`)
     setIsActivating(false)
+    onDone({
+      action: 'activated',
+      activeProviderName: GITHUB_PROVIDER_LABEL,
+      activeProviderModel: GITHUB_PROVIDER_DEFAULT_MODEL,
+      message: `Provider switched to ${GITHUB_PROVIDER_LABEL} (${GITHUB_PROVIDER_DEFAULT_MODEL})`,
+    })
     returnToMenu()
     return
   }
@@ -789,19 +837,14 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
     // Update the session model to the new provider's first model.
     // persistActiveProviderProfileModel (called by onChangeAppState) will
     // not overwrite the multi-model list because it checks if the model
-    // is already in the profile's comma-separated model list.
+    // is already in the provider's configured model list.
     const newModel = getPrimaryModel(active.model)
     setAppState(prev => ({
       ...prev,
       mainLoopModel: newModel,
-    }))
-    providerLabel = active.name
-    setAppState(prev => ({
-      ...prev,
-      mainLoopModel: active.model,
       mainLoopModelForSession: null,
     }))
     providerLabel = active.name
     const settingsOverrideError =
       clearStartupProviderOverrideFromUserSettings()
     const isActiveCodexOAuth = isCodexOAuthProfile(
@@ -813,23 +856,29 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
       : null
     refreshProfiles()
-    setStatusMessage(
-      isActiveCodexOAuth
-        ? buildCodexOAuthActivationMessage({
-            prefix: `Active provider: ${active.name}`,
-            activationWarning,
-            warnings: [
-              activationWarning,
-              settingsOverrideError
-                ? `could not clear startup provider override (${settingsOverrideError})`
-                : null,
-            ].filter((warning): warning is string => Boolean(warning)),
-          })
-        : settingsOverrideError
-          ? `Active provider: ${active.name}. Warning: could not clear startup provider override (${settingsOverrideError}).`
-          : `Active provider: ${active.name}`,
-    )
+    const activationMessage = isActiveCodexOAuth
+      ? buildCodexOAuthActivationMessage({
+          prefix: `Active provider: ${active.name}`,
+          activationWarning,
+          warnings: [
+            activationWarning,
+            settingsOverrideError
+              ? `could not clear startup provider override (${settingsOverrideError})`
+              : null,
+          ].filter((warning): warning is string => Boolean(warning)),
+        })
+      : settingsOverrideError
+        ? `Active provider: ${active.name}. Warning: could not clear startup provider override (${settingsOverrideError}).`
+        : `Active provider: ${active.name}`
+    setStatusMessage(activationMessage)
     setIsActivating(false)
+    onDone({
+      action: 'activated',
+      activeProfileId: active.id,
+      activeProviderName: active.name,
+      activeProviderModel: newModel,
+      message: `Provider switched to ${active.name} (${newModel})`,
+    })
     returnToMenu()
   } catch (error) {
     refreshProfiles()
@@ -944,6 +993,9 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
     baseUrl: defaults.baseUrl,
     model: defaults.model,
     apiKey: defaults.apiKey ?? '',
+    apiFormat: 'chat_completions',
+    authHeader: '',
+    authHeaderValue: '',
   }
   setEditingProfileId(null)
   setDraftProvider(defaults.provider ?? 'openai')
@@ -990,6 +1042,22 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
     baseUrl: nextDraft.baseUrl,
     model: nextDraft.model,
     apiKey: nextDraft.apiKey,
+    apiFormat:
+      draftProvider === 'openai' && nextDraft.apiFormat === 'responses'
+        ? 'responses'
+        : 'chat_completions',
+    authHeader:
+      draftProvider === 'openai' && nextDraft.authHeader
+        ? nextDraft.authHeader
+        : undefined,
+    authScheme:
+      draftProvider === 'openai' && nextDraft.authHeader
+        ? (nextDraft.authHeader.toLowerCase() === 'authorization' ? 'bearer' : 'raw')
+        : undefined,
+    authHeaderValue:
+      draftProvider === 'openai' && nextDraft.authHeaderValue
+        ? nextDraft.authHeaderValue
+        : undefined,
   }
   const saved = editingProfileId
@@ -1005,7 +1073,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
   if (isActiveSavedProfile) {
     setAppState(prev => ({
       ...prev,
-      mainLoopModel: saved.model,
+      mainLoopModel: getPrimaryModel(saved.model),
       mainLoopModelForSession: null,
     }))
   }
@@ -1212,9 +1280,9 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
     setDraft(nextDraft)
     setErrorMessage(undefined)
-    if (formStepIndex < FORM_STEPS.length - 1) {
+    if (formStepIndex < formSteps.length - 1) {
       const nextIndex = formStepIndex + 1
-      const nextKey = FORM_STEPS[nextIndex]?.key ?? 'name'
+      const nextKey = formSteps[nextIndex]?.key ?? 'name'
       setFormStepIndex(nextIndex)
       setCursorOffset(nextDraft[nextKey].length)
       return
@@ -1228,7 +1296,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
     if (formStepIndex > 0) {
       const nextIndex = formStepIndex - 1
-      const nextKey = FORM_STEPS[nextIndex]?.key ?? 'name'
+      const nextKey = formSteps[nextIndex]?.key ?? 'name'
       setFormStepIndex(nextIndex)
       setCursorOffset(draft[nextKey].length)
       return
@@ -1279,6 +1347,11 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
       label: 'Azure OpenAI',
       description: 'Azure OpenAI endpoint (model=deployment name)',
     },
+    {
+      value: 'bankr',
+      label: 'Bankr',
+      description: 'Bankr LLM Gateway (OpenAI-compatible)',
+    },
     ...(canUseCodexOAuth
       ? [
           {
@@ -1321,8 +1394,13 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
     },
     {
       value: 'moonshotai',
-      label: 'Moonshot AI',
-      description: 'Kimi OpenAI-compatible endpoint',
+      label: 'Moonshot AI - API',
+      description: 'Moonshot AI - API endpoint',
+    },
+    {
+      value: 'kimi-code',
+      label: 'Moonshot AI - Kimi Code',
+      description: 'Moonshot AI - Kimi Code Subscription endpoint',
     },
     {
       value: 'nvidia-nim',
@@ -1349,6 +1427,16 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
       label: 'Together AI',
       description: 'Together chat/completions endpoint',
     },
+    {
+      value: 'xai',
+      label: 'xAI',
+      description: 'xAI Grok OpenAI-compatible endpoint',
+    },
+    {
+      value: 'zai',
+      label: 'Z.AI - GLM Coding Plan',
+      description: 'Z.AI GLM coding subscription endpoint',
+    },
     {
       value: 'custom',
       label: 'Custom',
@@ -1413,28 +1501,59 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
             : 'OpenAI-compatible API'}
         </Text>
         <Text dimColor>
-          Step {formStepIndex + 1} of {FORM_STEPS.length}: {currentStep.label}
+          Step {formStepIndex + 1} of {formSteps.length}: {currentStep.label}
         </Text>
-        <Box flexDirection="row" gap={1}>
-          <Text>{figures.pointer}</Text>
-          <TextInput
-            value={currentValue}
-            onChange={value =>
-              setDraft(prev => ({
-                ...prev,
-                [currentStepKey]: value,
-              }))
-            }
-            onSubmit={handleFormSubmit}
-            focus={true}
-            showCursor={true}
-            placeholder={`${currentStep.placeholder}${figures.ellipsis}`}
-            mask={currentStepKey === 'apiKey' ? '*' : undefined}
-            columns={80}
-            cursorOffset={cursorOffset}
-            onChangeCursorOffset={setCursorOffset}
-          />
-        </Box>
+        {currentStepKey === 'apiFormat' ? (
+          <Select
+            options={[
+              {
+                value: 'chat_completions',
+                label: 'Chat Completions',
+                description: 'Use /chat/completions for broad OpenAI-compatible support',
+              },
+              {
+                value: 'responses',
+                label: 'Responses',
+                description: 'Use /responses for providers that support the Responses API',
+              },
+            ]}
+            defaultValue={
+              currentValue === 'responses' ? 'responses' : 'chat_completions'
+            }
+            defaultFocusValue={
+              currentValue === 'responses' ? 'responses' : 'chat_completions'
+            }
+            onChange={value => handleFormSubmit(value)}
+            onCancel={handleBackFromForm}
+            visibleOptionCount={2}
+          />
+        ) : (
+          <Box flexDirection="row" gap={1}>
+            <Text>{figures.pointer}</Text>
+            <TextInput
+              value={currentValue}
+              onChange={value =>
+                setDraft(prev => ({
+                  ...prev,
+                  [currentStepKey]: value,
+                }))
+              }
+              onSubmit={handleFormSubmit}
+              focus={true}
+              showCursor={true}
+              placeholder={`${currentStep.placeholder}${figures.ellipsis}`}
+              mask={
+                currentStepKey === 'apiKey' ||
+                currentStepKey === 'authHeaderValue'
+                  ? '*'
+                  : undefined
+              }
+              columns={80}
+              cursorOffset={cursorOffset}
+              onChangeCursorOffset={setCursorOffset}
+            />
+          </Box>
+        )}
         {errorMessage && <Text color="error">{errorMessage}</Text>}
         <Text dimColor>
           Press Enter to continue. Press Esc to go back.

@@ -119,17 +119,17 @@ export function ResumeTask({
     return <Box flexDirection="column" padding={1}>
       <Box flexDirection="row">
         <Spinner />
-        <Text bold>Loading Claude Code sessions</Text>
+        <Text bold>Loading OpenClaude sessions</Text>
       </Box>
       <Text dimColor>
-        {retrying ? 'Retrying…' : 'Fetching your Claude Code sessions…'}
+        {retrying ? 'Retrying…' : 'Fetching your OpenClaude sessions…'}
       </Text>
     </Box>;
   }
   if (loadErrorType) {
     return <Box flexDirection="column" padding={1}>
       <Text bold color="error">
-        Error loading Claude Code sessions
+        Error loading OpenClaude sessions
       </Text>
       {renderErrorSpecificGuidance(loadErrorType)}
@@ -143,7 +143,7 @@ export function ResumeTask({
   if (sessions.length === 0) {
     return <Box flexDirection="column" padding={1}>
       <Text bold>
-        No Claude Code sessions found
+        No OpenClaude sessions found
         {currentRepo && <Text> for {currentRepo}</Text>}
       </Text>
       <Box marginTop={1}>
@@ -261,7 +261,7 @@ function renderErrorSpecificGuidance(errorType: LoadErrorType): React.ReactNode
       </Box>;
     case 'other':
       return <Box marginY={1} flexDirection="row">
-        <Text dimColor>Sorry, Claude Code encountered an error</Text>
+        <Text dimColor>Sorry, OpenClaude encountered an error</Text>
       </Box>;
   }
 }

@@ -299,6 +299,26 @@ export function Config({
       enabled: toolHistoryCompressionEnabled
     });
   }
+}, {
+  id: 'showCacheStats',
+  label: 'Cache stats display',
+  value: globalConfig.showCacheStats,
+  options: ['off', 'compact', 'full'],
+  type: 'enum' as const,
+  onChange(mode: string) {
+    const showCacheStats = (mode === 'off' || mode === 'compact' || mode === 'full' ? mode : 'compact') as 'off' | 'compact' | 'full';
+    saveGlobalConfig(current_cs => ({
+      ...current_cs,
+      showCacheStats
+    }));
+    setGlobalConfig({
+      ...getGlobalConfig(),
+      showCacheStats
+    });
+    logEvent('tengu_show_cache_stats_setting_changed', {
+      mode: showCacheStats
+    });
+  }
 }, {
   id: 'spinnerTipsEnabled',
   label: 'Show tips',
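The `showCacheStats` onChange handler above narrows an arbitrary string to the `'off' | 'compact' | 'full'` union, falling back to `'compact'`. A minimal standalone sketch of that normalization pattern; the `CacheStatsMode` and `normalizeCacheStatsMode` names here are illustrative, not from the codebase:

```typescript
type CacheStatsMode = 'off' | 'compact' | 'full'

// Narrow an untrusted string to the union; literal comparisons let
// TypeScript prove `mode` is a CacheStatsMode in the true branch,
// so no cast is needed.
function normalizeCacheStatsMode(mode: string): CacheStatsMode {
  return mode === 'off' || mode === 'compact' || mode === 'full'
    ? mode
    : 'compact'
}

console.log(normalizeCacheStatsMode('full'))    // 'full'
console.log(normalizeCacheStatsMode('verbose')) // falls back: 'compact'
```

Compared with the `as` cast in the handler, expressing the check as a small function keeps the fallback in one place and gives the compiler the narrowing for free.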

@@ -0,0 +1,249 @@
import * as React from 'react'
import { useEffect, useState } from 'react'
import { useTerminalSize } from '../../hooks/useTerminalSize.js'
import { Box, Text } from '../../ink.js'
import { useKeybinding } from '../../keybindings/useKeybinding.js'
import {
buildMiniMaxUsageRows,
fetchMiniMaxUsage,
type MiniMaxUsageData,
type MiniMaxUsageRow,
} from '../../services/api/minimaxUsage.js'
import { logError } from '../../utils/log.js'
import { ConfigurableShortcutHint } from '../ConfigurableShortcutHint.js'
import { Byline } from '../design-system/Byline.js'
import { ProgressBar } from '../design-system/ProgressBar.js'
const RESET_COUNTDOWN_REFRESH_MS = 30_000
const PROGRESS_BAR_WIDTH = 18
type MiniMaxUsageLimitBarProps = {
label: string
usedPercent: number
resetsAt?: string
extraSubtext?: string
maxWidth: number
nowMs: number
}
function formatCountdownDuration(ms: number): string {
const totalMinutes = Math.max(1, Math.ceil(ms / 60_000))
const days = Math.floor(totalMinutes / 1_440)
const hours = Math.floor((totalMinutes % 1_440) / 60)
const minutes = totalMinutes % 60
if (days > 0) {
return hours > 0 ? `${days}d ${hours}h` : `${days}d`
}
if (hours > 0) {
return minutes > 0 ? `${hours}h ${minutes}m` : `${hours}h`
}
return `${minutes}m`
}
function formatResetCountdown(
resetsAt: string | undefined,
nowMs: number,
): string | undefined {
if (!resetsAt) return undefined
const resetMs = Date.parse(resetsAt)
if (!Number.isFinite(resetMs)) return undefined
const remainingMs = resetMs - nowMs
if (remainingMs <= 0) {
return 'Resetting now'
}
return `Resets in ${formatCountdownDuration(remainingMs)}`
}
function MiniMaxUsageLimitBar({
label,
usedPercent,
resetsAt,
extraSubtext,
maxWidth,
nowMs,
}: MiniMaxUsageLimitBarProps): React.ReactNode {
const normalizedUsedPercent = Math.max(0, Math.min(100, usedPercent))
const usedText = `${Math.floor(normalizedUsedPercent)}% used`
const resetText = formatResetCountdown(resetsAt, nowMs)
const details = [usedText, extraSubtext].filter(
(part): part is string => Boolean(part),
)
return (
<Box flexDirection="column">
<Text>
<Text bold>{label}</Text>
{resetText ? <Text dimColor> · {resetText}</Text> : null}
</Text>
<Box flexDirection="row" gap={1}>
<ProgressBar
ratio={normalizedUsedPercent / 100}
width={Math.min(PROGRESS_BAR_WIDTH, Math.max(1, maxWidth))}
fillColor="rate_limit_fill"
emptyColor="rate_limit_empty"
/>
{details.length > 0 ? <Text dimColor>{details.join(' · ')}</Text> : null}
</Box>
</Box>
)
}
function MiniMaxUsageTextRow({
label,
value,
}: Extract<MiniMaxUsageRow, { kind: 'text' }>): React.ReactNode {
if (!value) {
return <Text bold>{label}</Text>
}
return (
<Text>
<Text bold>{label}</Text>
<Text dimColor> · {value}</Text>
</Text>
)
}
export function MiniMaxUsage(): React.ReactNode {
const [usage, setUsage] = useState<MiniMaxUsageData | null>(null)
const [error, setError] = useState<string | null>(null)
const [isLoading, setIsLoading] = useState(true)
const [nowMs, setNowMs] = useState(() => Date.now())
const { columns } = useTerminalSize()
const availableWidth = columns - 2
const maxWidth = Math.min(availableWidth, 80)
const loadUsage = React.useCallback(async () => {
setIsLoading(true)
setError(null)
try {
setUsage(await fetchMiniMaxUsage())
} catch (err) {
logError(err as Error)
setError(
err instanceof Error ? err.message : 'Failed to load MiniMax usage',
)
} finally {
setIsLoading(false)
}
}, [])
useEffect(() => {
void loadUsage()
}, [loadUsage])
useEffect(() => {
const interval = setInterval(() => {
setNowMs(Date.now())
}, RESET_COUNTDOWN_REFRESH_MS)
return () => clearInterval(interval)
}, [])
useKeybinding(
'settings:retry',
() => {
void loadUsage()
},
{
context: 'Settings',
isActive: !!error && !isLoading,
},
)
if (error) {
return (
<Box flexDirection="column" gap={1}>
<Text color="error">Error: {error}</Text>
<Text dimColor>
<Byline>
<ConfigurableShortcutHint
action="settings:retry"
context="Settings"
fallback="r"
description="retry"
/>
<ConfigurableShortcutHint
action="confirm:no"
context="Settings"
fallback="Esc"
description="cancel"
/>
</Byline>
</Text>
</Box>
)
}
if (!usage) {
return (
<Box flexDirection="column" gap={1}>
<Text dimColor>Loading MiniMax usage data</Text>
<Text dimColor>
<ConfigurableShortcutHint
action="confirm:no"
context="Settings"
fallback="Esc"
description="cancel"
/>
</Text>
</Box>
)
}
const rows =
usage.availability === 'available'
? buildMiniMaxUsageRows(usage.snapshots)
: []
return (
<Box flexDirection="column" gap={1} width="100%">
{usage.planType ? <Text dimColor>Plan: {usage.planType}</Text> : null}
{usage.availability === 'unknown' ? (
<Text dimColor>{usage.message}</Text>
) : rows.length === 0 ? (
<Text dimColor>
No MiniMax usage windows were returned for this account.
</Text>
) : null}
{rows.map((row, index) =>
row.kind === 'window' ? (
<MiniMaxUsageLimitBar
key={`${row.label}-${index}`}
label={row.label}
usedPercent={row.usedPercent}
resetsAt={row.resetsAt}
extraSubtext={row.extraSubtext}
maxWidth={maxWidth}
nowMs={nowMs}
/>
) : (
<MiniMaxUsageTextRow
key={`${row.label}-${index}`}
label={row.label}
value={row.value}
/>
),
)}
<Text dimColor>
<ConfigurableShortcutHint
action="confirm:no"
context="Settings"
fallback="Esc"
description="cancel"
/>
</Text>
</Box>
)
}
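The countdown formatting in the new component can be exercised standalone. This re-states `formatCountdownDuration` from the file above: it rounds up to whole minutes, then prints the two largest units (`Nd Nh` or `Nh Nm`):

```typescript
// Copy of formatCountdownDuration from MiniMaxUsage.tsx above.
function formatCountdownDuration(ms: number): string {
  // Round up, and never show less than one minute remaining.
  const totalMinutes = Math.max(1, Math.ceil(ms / 60_000))
  const days = Math.floor(totalMinutes / 1_440)
  const hours = Math.floor((totalMinutes % 1_440) / 60)
  const minutes = totalMinutes % 60
  if (days > 0) {
    return hours > 0 ? `${days}d ${hours}h` : `${days}d`
  }
  if (hours > 0) {
    return minutes > 0 ? `${hours}h ${minutes}m` : `${hours}h`
  }
  return `${minutes}m`
}

console.log(formatCountdownDuration(30_000))         // sub-minute rounds up: '1m'
console.log(formatCountdownDuration(90 * 60_000))    // '1h 30m'
console.log(formatCountdownDuration(25 * 3_600_000)) // '1d 1h'
```

Dropping the third unit (minutes are omitted once days appear) keeps the reset hint short enough to sit beside the progress bar at `PROGRESS_BAR_WIDTH = 18`.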

@@ -0,0 +1,28 @@
import * as React from 'react'
import { Box, Text } from '../../ink.js'
import { ConfigurableShortcutHint } from '../ConfigurableShortcutHint.js'
type UnsupportedUsageProps = {
providerLabel: string
}
export function UnsupportedUsage({
providerLabel,
}: UnsupportedUsageProps): React.ReactNode {
return (
<Box flexDirection="column" gap={1}>
<Text dimColor>
Usage details are not currently available for {providerLabel}.
</Text>
<Text dimColor>
<ConfigurableShortcutHint
action="confirm:no"
context="Settings"
fallback="Esc"
description="cancel"
/>
</Text>
</Box>
)
}

@@ -17,6 +17,8 @@ import { Byline } from '../design-system/Byline.js';
import { ProgressBar } from '../design-system/ProgressBar.js';
import { isEligibleForOverageCreditGrant, OverageCreditUpsell } from '../LogoV2/OverageCreditUpsell.js';
import { CodexUsage } from './CodexUsage.js';
import { MiniMaxUsage } from './MiniMaxUsage.js';
import { UnsupportedUsage } from './UnsupportedUsage.js';
type LimitBarProps = {
title: string;
limit: RateLimit;
@@ -266,9 +268,26 @@ function AnthropicUsage(): React.ReactNode {
</Box>;
}
export function Usage(): React.ReactNode {
- if (getAPIProvider() === 'codex') {
+ const provider = getAPIProvider();
+ if (provider === 'codex') {
return <CodexUsage />;
}
if (provider === 'minimax') {
return <MiniMaxUsage />;
}
if (provider !== 'firstParty') {
const providerLabel = {
openai: 'this OpenAI-compatible provider',
gemini: 'Google Gemini',
github: 'GitHub Models',
mistral: 'Mistral',
'nvidia-nim': 'NVIDIA NIM',
bedrock: 'AWS Bedrock',
vertex: 'Google Vertex AI',
foundry: 'Microsoft Foundry'
}[provider] ?? 'this provider';
return <UnsupportedUsage providerLabel={providerLabel} />;
}
return <AnthropicUsage />;
}
type ExtraUsageSectionProps = {

@@ -14,6 +14,7 @@ const ENV_KEYS = [
'GEMINI_MODEL',
'MISTRAL_MODEL',
'ANTHROPIC_MODEL',
'CLAUDE_MODEL',
'NVIDIA_NIM',
'MINIMAX_API_KEY',
]
@@ -101,9 +102,14 @@ describe('detectProvider — direct vendor endpoints', () => {
expect(detectProvider().name).toBe('DeepSeek')
})
- test('api.moonshot.cn labels as Moonshot (Kimi)', () => {
+ test('api.kimi.com labels as Moonshot AI - Kimi Code', () => {
setupOpenAIMode('https://api.kimi.com/coding/v1', 'kimi-for-coding')
expect(detectProvider().name).toBe('Moonshot AI - Kimi Code')
})
test('api.moonshot.cn labels as Moonshot AI - API', () => {
setupOpenAIMode('https://api.moonshot.cn/v1', 'moonshot-v1-8k')
- expect(detectProvider().name).toBe('Moonshot (Kimi)')
+ expect(detectProvider().name).toBe('Moonshot AI - API')
})
test('api.mistral.ai labels as Mistral', () => {
@@ -111,6 +117,11 @@ describe('detectProvider — direct vendor endpoints', () => {
expect(detectProvider().name).toBe('Mistral')
})
test('api.z.ai labels as Z.AI GLM', () => {
setupOpenAIMode('https://api.z.ai/api/coding/paas/v4', 'GLM-5.1')
expect(detectProvider().name).toBe('Z.AI - GLM')
})
test('default OpenAI URL + gpt-4o labels as OpenAI', () => {
setupOpenAIMode('https://api.openai.com/v1', 'gpt-4o')
expect(detectProvider().name).toBe('OpenAI')
@@ -125,9 +136,14 @@ describe('detectProvider — rawModel fallback when URL is generic', () => {
expect(detectProvider().name).toBe('DeepSeek')
})
- test('custom proxy + kimi-k2 falls back to Moonshot (Kimi)', () => {
+ test('custom proxy + kimi-for-coding falls back to Moonshot AI - Kimi Code', () => {
setupOpenAIMode('https://my-proxy.internal/v1', 'kimi-for-coding')
expect(detectProvider().name).toBe('Moonshot AI - Kimi Code')
})
test('custom proxy + kimi-k2 falls back to Moonshot AI - API', () => {
setupOpenAIMode('https://my-proxy.internal/v1', 'kimi-k2-instruct')
- expect(detectProvider().name).toBe('Moonshot (Kimi)')
+ expect(detectProvider().name).toBe('Moonshot AI - API')
})
test('custom proxy + llama-3.3 falls back to Meta Llama', () => {
@@ -139,6 +155,21 @@ describe('detectProvider — rawModel fallback when URL is generic', () => {
setupOpenAIMode('https://my-proxy.internal/v1', 'mistral-large-latest')
expect(detectProvider().name).toBe('Mistral')
})
test('custom proxy + exact uppercase GLM ID falls back to Z.AI GLM', () => {
setupOpenAIMode('https://my-proxy.internal/v1', 'GLM-5.1')
expect(detectProvider().name).toBe('Z.AI - GLM')
})
test('custom proxy + lowercase glm ID stays generic OpenAI', () => {
setupOpenAIMode('https://my-proxy.internal/v1', 'glm-5.1')
expect(detectProvider().name).toBe('OpenAI')
})
test('DashScope lowercase glm ID is not mislabeled as Z.AI', () => {
setupOpenAIMode('https://dashscope.aliyuncs.com/compatible-mode/v1', 'glm-5.1')
expect(detectProvider().name).toBe('OpenAI')
})
})
// --- Explicit env flags win over URL heuristics ---
@@ -156,3 +187,71 @@ describe('detectProvider — explicit dedicated-provider env flags', () => {
expect(detectProvider().name).toBe('MiniMax')
})
})
// --- modelOverride from --model flag ---
describe('detectProvider — modelOverride from --model flag', () => {
test('modelOverride overrides default Anthropic model', () => {
const result = detectProvider('claude-opus-4-6')
expect(result.name).toBe('Anthropic')
expect(result.model).toContain('opus')
})
test('modelOverride alias is resolved for Anthropic', () => {
const result = detectProvider('opus')
expect(result.name).toBe('Anthropic')
expect(result.model).toContain('opus')
})
test('modelOverride takes priority over ANTHROPIC_MODEL env var', () => {
process.env.ANTHROPIC_MODEL = 'claude-haiku-4-5-20251001'
const result = detectProvider('claude-opus-4-6')
expect(result.name).toBe('Anthropic')
expect(result.model).toContain('opus')
})
test('modelOverride takes priority over CLAUDE_MODEL env var', () => {
process.env.CLAUDE_MODEL = 'claude-haiku-4-5-20251001'
const result = detectProvider('claude-opus-4-6')
expect(result.name).toBe('Anthropic')
expect(result.model).toContain('opus')
})
test('modelOverride works for OpenAI provider', () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_API_KEY = 'test-key'
process.env.OPENAI_MODEL = 'gpt-4o'
const result = detectProvider('gpt-4-turbo')
expect(result.model).toContain('gpt-4-turbo')
})
test('modelOverride works for Gemini provider', () => {
process.env.CLAUDE_CODE_USE_GEMINI = '1'
const result = detectProvider('gemini-2.5-pro')
expect(result.model).toBe('gemini-2.5-pro')
})
test('modelOverride works for Mistral provider', () => {
process.env.CLAUDE_CODE_USE_MISTRAL = '1'
const result = detectProvider('mistral-large-latest')
expect(result.model).toBe('mistral-large-latest')
})
test('modelOverride works for GitHub provider', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
const result = detectProvider('gpt-4o')
expect(result.model).toContain('gpt-4o')
})
test('undefined modelOverride preserves default behavior', () => {
const result = detectProvider(undefined)
expect(result.name).toBe('Anthropic')
expect(result.model).toContain('sonnet')
})
test('no argument preserves default behavior', () => {
const result = detectProvider()
expect(result.name).toBe('Anthropic')
expect(result.model).toContain('sonnet')
})
})
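The new Z.AI tests above pin down case-sensitive matching: `GLM-5.1` is claimed, `glm-5.1` is not. The real matcher lives in `src/utils/zaiProvider.ts` and is not shown in this diff, so the following is only a hypothetical sketch consistent with those test expectations:

```typescript
// Hypothetical stand-in for containsExactZaiGlmModelId — the shipped
// implementation may differ. Deliberately no /i flag: only the exact
// uppercase "GLM-<digit>" form matches, so lowercase IDs served by
// other vendors (e.g. DashScope) stay labeled as generic OpenAI.
function looksLikeExactZaiGlmModelId(rawModel: string): boolean {
  return /(?:^|[^A-Za-z0-9])GLM-\d/.test(rawModel)
}

console.log(looksLikeExactZaiGlmModelId('GLM-5.1')) // true
console.log(looksLikeExactZaiGlmModelId('glm-5.1')) // false
```

Case sensitivity is the whole point of the DashScope test: a case-insensitive check would mislabel every lowercase `glm-*` model on generic proxies as Z.AI.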

@@ -9,6 +9,7 @@ import { isLocalProviderUrl, resolveProviderRequest } from '../services/api/prov
import { getLocalOpenAICompatibleProviderLabel } from '../utils/providerDiscovery.js'
import { getSettings_DEPRECATED } from '../utils/settings/settings.js'
import { parseUserSpecifiedModel } from '../utils/model/model.js'
import { containsExactZaiGlmModelId, isZaiBaseUrl } from '../utils/zaiProvider.js'
declare const MACRO: { VERSION: string; DISPLAY_VERSION?: string }
@@ -83,33 +84,33 @@ const LOGO_CLAUDE = [
// ─── Provider detection ───────────────────────────────────────────────────────
- export function detectProvider(): { name: string; model: string; baseUrl: string; isLocal: boolean } {
+ export function detectProvider(modelOverride?: string): { name: string; model: string; baseUrl: string; isLocal: boolean } {
const useGemini = process.env.CLAUDE_CODE_USE_GEMINI === '1' || process.env.CLAUDE_CODE_USE_GEMINI === 'true'
const useGithub = process.env.CLAUDE_CODE_USE_GITHUB === '1' || process.env.CLAUDE_CODE_USE_GITHUB === 'true'
const useOpenAI = process.env.CLAUDE_CODE_USE_OPENAI === '1' || process.env.CLAUDE_CODE_USE_OPENAI === 'true'
const useMistral = process.env.CLAUDE_CODE_USE_MISTRAL === '1' || process.env.CLAUDE_CODE_USE_MISTRAL === 'true'
if (useGemini) {
- const model = process.env.GEMINI_MODEL || 'gemini-2.0-flash'
+ const model = modelOverride || process.env.GEMINI_MODEL || 'gemini-2.0-flash'
const baseUrl = process.env.GEMINI_BASE_URL || 'https://generativelanguage.googleapis.com/v1beta/openai'
return { name: 'Google Gemini', model, baseUrl, isLocal: false }
}
if (useMistral) {
- const model = process.env.MISTRAL_MODEL || 'devstral-latest'
+ const model = modelOverride || process.env.MISTRAL_MODEL || 'devstral-latest'
const baseUrl = process.env.MISTRAL_BASE_URL || 'https://api.mistral.ai/v1'
return { name: 'Mistral', model, baseUrl, isLocal: false }
}
if (useGithub) {
- const model = process.env.OPENAI_MODEL || 'github:copilot'
+ const model = modelOverride || process.env.OPENAI_MODEL || 'github:copilot'
const baseUrl =
process.env.OPENAI_BASE_URL || 'https://api.githubcopilot.com'
return { name: 'GitHub Copilot', model, baseUrl, isLocal: false }
}
if (useOpenAI) {
- const rawModel = process.env.OPENAI_MODEL || 'gpt-4o'
+ const rawModel = modelOverride || process.env.OPENAI_MODEL || 'gpt-4o'
const resolvedRequest = resolveProviderRequest({
model: rawModel,
baseUrl: process.env.OPENAI_BASE_URL,
@@ -134,16 +135,26 @@ export function detectProvider(): { name: string; model: string; baseUrl: string
else if (/azure/i.test(baseUrl)) name = 'Azure OpenAI'
else if (/nvidia/i.test(baseUrl)) name = 'NVIDIA NIM'
else if (/minimax/i.test(baseUrl)) name = 'MiniMax'
- else if (/moonshot/i.test(baseUrl)) name = 'Moonshot (Kimi)'
+ else if (/api\.kimi\.com/i.test(baseUrl)) name = 'Moonshot AI - Kimi Code'
+ else if (/moonshot/i.test(baseUrl)) name = 'Moonshot AI - API'
else if (/deepseek/i.test(baseUrl)) name = 'DeepSeek'
else if (/x\.ai/i.test(baseUrl)) name = 'xAI'
else if (isZaiBaseUrl(baseUrl)) name = 'Z.AI - GLM'
else if (/mistral/i.test(baseUrl)) name = 'Mistral'
// rawModel fallback — fires only when base URL is generic/custom.
else if (/nvidia/i.test(rawModel)) name = 'NVIDIA NIM'
else if (/minimax/i.test(rawModel)) name = 'MiniMax'
- else if (/kimi/i.test(rawModel)) name = 'Moonshot (Kimi)'
+ else if (/\bkimi-for-coding\b/i.test(rawModel))
+   name = 'Moonshot AI - Kimi Code'
+ else if (/\bkimi-k/i.test(rawModel) || /moonshot/i.test(rawModel))
+   name = 'Moonshot AI - API'
else if (/deepseek/i.test(rawModel)) name = 'DeepSeek'
else if (/grok/i.test(rawModel)) name = 'xAI'
else if (containsExactZaiGlmModelId(rawModel)) name = 'Z.AI - GLM'
else if (/mistral/i.test(rawModel)) name = 'Mistral'
else if (/llama/i.test(rawModel)) name = 'Meta Llama'
else if (/bankr/i.test(baseUrl)) name = 'Bankr'
else if (/bankr/i.test(rawModel)) name = 'Bankr'
else if (isLocal) name = getLocalOpenAICompatibleProviderLabel(baseUrl)
// Resolve model alias to actual model name + reasoning effort
@@ -157,7 +168,7 @@ export function detectProvider(): { name: string; model: string; baseUrl: string
// Default: Anthropic - check settings.model first, then env vars
const settings = getSettings_DEPRECATED() || {}
- const modelSetting = settings.model || process.env.ANTHROPIC_MODEL || process.env.CLAUDE_MODEL || 'claude-sonnet-4-6'
+ const modelSetting = modelOverride || settings.model || process.env.ANTHROPIC_MODEL || process.env.CLAUDE_MODEL || 'claude-sonnet-4-6'
const resolvedModel = parseUserSpecifiedModel(modelSetting)
const baseUrl = process.env.ANTHROPIC_BASE_URL ?? 'https://api.anthropic.com'
const isLocal = isLocalProviderUrl(baseUrl)
@@ -173,11 +184,11 @@ function boxRow(content: string, width: number, rawLen: number): string {
// ─── Main ─────────────────────────────────────────────────────────────────────
- export function printStartupScreen(): void {
+ export function printStartupScreen(modelOverride?: string): void {
// Skip in non-interactive / CI / print mode
if (process.env.CI || !process.stdout.isTTY) return
- const p = detectProvider()
+ const p = detectProvider(modelOverride)
const W = 62
const out: string[] = []

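The thread running through the `detectProvider` changes above is a single precedence rule: an explicit `--model` override wins, then the provider-specific env vars, then the built-in default. A minimal standalone sketch of that chain, using the Anthropic branch's names (the `resolveModel` helper itself is illustrative, not from the repo):

```typescript
// Short-circuiting || implements the precedence: the first truthy
// source wins, matching `modelOverride || env || default` in the diff.
function resolveModel(
  modelOverride: string | undefined,
  env: Record<string, string | undefined>,
): string {
  return (
    modelOverride ||
    env.ANTHROPIC_MODEL ||
    env.CLAUDE_MODEL ||
    'claude-sonnet-4-6'
  )
}

console.log(resolveModel('claude-opus-4-6', { ANTHROPIC_MODEL: 'claude-haiku-4-5' })) // override wins
console.log(resolveModel(undefined, { CLAUDE_MODEL: 'claude-haiku-4-5' }))            // env fallback
console.log(resolveModel(undefined, {}))                                              // built-in default
```

One caveat of `||` chains: an empty-string override falls through to the env vars, which is usually the intent for CLI flags that were passed but left blank.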
@@ -94,7 +94,7 @@ export function Stats(t0) {
const allTimePromise = t1;
let t2;
if ($[1] === Symbol.for("react.memo_cache_sentinel")) {
- t2 = <Box marginTop={1}><Spinner /><Text> Loading your Claude Code stats</Text></Box>;
+ t2 = <Box marginTop={1}><Spinner /><Text> Loading your OpenClaude stats</Text></Box>;
$[1] = t2;
} else {
t2 = $[1];
@@ -242,7 +242,7 @@ function StatsContent(t0) {
if (allTimeResult.type === "empty") {
let t7;
if ($[15] === Symbol.for("react.memo_cache_sentinel")) {
- t7 = <Box marginTop={1}><Text color="warning">No stats available yet. Start using Claude Code!</Text></Box>;
+ t7 = <Box marginTop={1}><Text color="warning">No stats available yet. Start using OpenClaude!</Text></Box>;
$[15] = t7;
} else {
t7 = $[15];

@@ -73,7 +73,7 @@ export function TeleportRepoMismatchDialog(t0) {
const options = t2;
let t3;
if ($[8] !== availablePaths.length || $[9] !== errorMessage || $[10] !== handleChange || $[11] !== options || $[12] !== targetRepo || $[13] !== validating) {
- t3 = availablePaths.length > 0 ? <><Box flexDirection="column" gap={1}>{errorMessage && <Text color="error">{errorMessage}</Text>}<Text>Open Claude Code in <Text bold={true}>{targetRepo}</Text>:</Text></Box>{validating ? <Box><Spinner /><Text> Validating repository</Text></Box> : <Select options={options} onChange={value_0 => void handleChange(value_0)} />}</> : <Box flexDirection="column" gap={1}>{errorMessage && <Text color="error">{errorMessage}</Text>}<Text dimColor={true}>Run claude --teleport from a checkout of {targetRepo}</Text></Box>;
+ t3 = availablePaths.length > 0 ? <><Box flexDirection="column" gap={1}>{errorMessage && <Text color="error">{errorMessage}</Text>}<Text>Open OpenClaude in <Text bold={true}>{targetRepo}</Text>:</Text></Box>{validating ? <Box><Spinner /><Text> Validating repository</Text></Box> : <Select options={options} onChange={value_0 => void handleChange(value_0)} />}</> : <Box flexDirection="column" gap={1}>{errorMessage && <Text color="error">{errorMessage}</Text>}<Text dimColor={true}>Run openclaude --teleport from a checkout of {targetRepo}</Text></Box>;
$[8] = availablePaths.length;
$[9] = errorMessage;
$[10] = handleChange;

@@ -206,7 +206,7 @@ export function TrustDialog(t0) {
if ($[20] === Symbol.for("react.memo_cache_sentinel")) {
t16 = <Text bold={true}>{getFsImplementation().cwd()}</Text>;
t17 = <Text>Quick safety check: Is this a project you created or one you trust? (Like your own code, a well-known open source project, or work from your team). If not, take a moment to review what{"'"}s in this folder first.</Text>;
- t18 = <Text>Claude Code{"'"}ll be able to read, edit, and execute files here.</Text>;
+ t18 = <Text>OpenClaude{"'"}ll be able to read, edit, and execute files here.</Text>;
$[20] = t16;
$[21] = t17;
$[22] = t18;

@@ -254,7 +254,7 @@ function ElicitationFormDialog({
// Text fields are always in edit mode when focused — no Enter-to-edit step.
const isEditingTextField = currentFieldIsText && !focusedButton;
useRegisterOverlay('elicitation');
- useNotifyAfterTimeout('Claude Code needs your input', 'elicitation_dialog');
+ useNotifyAfterTimeout('OpenClaude needs your input', 'elicitation_dialog');
// Sync textInputValue when the focused field changes
const syncTextInput = useCallback((fieldIndex: number | undefined) => {
@@ -1004,7 +1004,7 @@ function ElicitationURLDialog({
const phaseRef = useRef<'prompt' | 'waiting'>('prompt');
const [focusedButton, setFocusedButton] = useState<'accept' | 'decline' | 'open' | 'action' | 'cancel'>('accept');
const showCancel = waitingState?.showCancel ?? false;
- useNotifyAfterTimeout('Claude Code needs your input', 'elicitation_url_dialog');
+ useNotifyAfterTimeout('OpenClaude needs your input', 'elicitation_url_dialog');
useRegisterOverlay('elicitation-url');
// Keep refs in sync for use in abort handler (avoids re-registering listener)

@@ -102,9 +102,9 @@ export function MCPRemoteServerMenu({
if (success) {
onComplete?.(`Authentication successful. Connected to ${server.name}.`);
} else if (result.client.type === 'needs-auth') {
- onComplete?.('Authentication successful, but server still requires authentication. You may need to manually restart Claude Code.');
+ onComplete?.('Authentication successful, but server still requires authentication. You may need to manually restart OpenClaude.');
} else {
- onComplete?.('Authentication successful, but server reconnection failed. You may need to manually restart Claude Code for the changes to take effect.');
+ onComplete?.('Authentication successful, but server reconnection failed. You may need to manually restart OpenClaude for the changes to take effect.');
}
} catch (err) {
logEvent('tengu_claudeai_mcp_auth_completed', {
@@ -281,11 +281,11 @@ export function MCPRemoteServerMenu({
const message = isEffectivelyAuthenticated ? `Authentication successful. Reconnected to ${server.name}.` : `Authentication successful. Connected to ${server.name}.`;
onComplete?.(message);
} else if (result_0.client.type === 'needs-auth') {
- onComplete?.('Authentication successful, but server still requires authentication. You may need to manually restart Claude Code.');
+ onComplete?.('Authentication successful, but server still requires authentication. You may need to manually restart OpenClaude.');
} else {
// result.client.type === 'failed'
logMCPDebug(server.name, `Reconnection failed after authentication`);
- onComplete?.('Authentication successful, but server reconnection failed. You may need to manually restart Claude Code for the changes to take effect.');
+ onComplete?.('Authentication successful, but server reconnection failed. You may need to manually restart OpenClaude for the changes to take effect.');
}
}
} catch (err_1) {

@@ -147,7 +147,7 @@ export function MCPSettings(t0) {
return;
}
if (servers.length === 0 && agentMcpServers.length === 0) {
- onComplete("No MCP servers configured. Please run /doctor if this is unexpected. Otherwise, run `claude mcp --help` or visit https://code.claude.com/docs/en/mcp to learn more.");
+ onComplete("No MCP servers configured. Please run /doctor if this is unexpected. Otherwise, run `openclaude mcp --help` or visit https://github.com/Gitlawb/openclaude to learn more.");
}
};
t8 = [servers.length, filteredClients.length, agentMcpServers.length, onComplete];

@@ -161,7 +161,7 @@ function ComputerUseTccPanel(t0) {
}
let t7;
if ($[15] === Symbol.for("react.memo_cache_sentinel")) {
- t7 = <Text dimColor={true}>Grant the missing permissions in System Settings, then select "Try again". macOS may require you to restart Claude Code after granting Screen Recording.</Text>;
+ t7 = <Text dimColor={true}>Grant the missing permissions in System Settings, then select "Try again". macOS may require you to restart OpenClaude after granting Screen Recording.</Text>;
$[15] = t7;
} else {
t7 = $[15];

@@ -730,7 +730,7 @@ export function buildPlanApprovalOptions({
});
if (showUltraplan) {
options.push({
- label: 'No, refine with Ultraplan on Claude Code on the web',
+ label: 'No, refine with Ultraplan on OpenClaude on the web',
value: 'ultraplan'
});
}

@@ -128,18 +128,18 @@ export type ToolUseConfirm<Input extends AnyObject = AnyObject> = {
function getNotificationMessage(toolUseConfirm: ToolUseConfirm): string {
const toolName = toolUseConfirm.tool.userFacingName(toolUseConfirm.input as never);
if (toolUseConfirm.tool === ExitPlanModeV2Tool) {
- return 'Claude Code needs your approval for the plan';
+ return 'OpenClaude needs your approval for the plan';
}
if (toolUseConfirm.tool === EnterPlanModeTool) {
- return 'Claude Code wants to enter plan mode';
+ return 'OpenClaude wants to enter plan mode';
}
if (feature('REVIEW_ARTIFACT') && toolUseConfirm.tool === ReviewArtifactTool) {
- return 'Claude needs your approval for a review artifact';
+ return 'OpenClaude needs your approval for a review artifact';
}
if (!toolName || toolName.trim() === '') {
- return 'Claude Code needs your attention';
+ return 'OpenClaude needs your attention';
}
- return `Claude needs your permission to use ${toolName}`;
+ return `OpenClaude needs your permission to use ${toolName}`;
}
// TODO: Move this to Tool.renderPermissionRequest

@@ -40,7 +40,7 @@ function PermissionDescription() {
const $ = _c(1);
let t0;
if ($[0] === Symbol.for("react.memo_cache_sentinel")) {
- t0 = <Text dimColor={true}>Claude Code will be able to read files in this directory and make edits when auto-accept edits is on.</Text>;
+ t0 = <Text dimColor={true}>OpenClaude will be able to read files in this directory and make edits when auto-accept edits is on.</Text>;
$[0] = t0;
} else {
t0 = $[0];

View File

@@ -388,9 +388,9 @@ function PermissionRulesTab(t0) {
  let t8;
  if ($[10] === Symbol.for("react.memo_cache_sentinel")) {
    t8 = {
-     allow: "Claude Code won't ask before using allowed tools.",
-     ask: "Claude Code will always ask for confirmation before using these tools.",
-     deny: "Claude Code will always reject requests to use denied tools."
+     allow: "OpenClaude won't ask before using allowed tools.",
+     ask: "OpenClaude will always ask for confirmation before using these tools.",
+     deny: "OpenClaude will always reject requests to use denied tools."
    };
    $[10] = t8;
  } else {
@@ -1098,7 +1098,7 @@ export function PermissionRuleList(t0) {
  }
  let t28;
  if ($[89] === Symbol.for("react.memo_cache_sentinel")) {
-   t28 = <Text>Claude Code can read files in the workspace, and make edits when auto-accept edits is on.</Text>;
+   t28 = <Text>OpenClaude can read files in the workspace, and make edits when auto-accept edits is on.</Text>;
    $[89] = t28;
  } else {
    t28 = $[89];

View File

@@ -68,7 +68,7 @@ export function RemoveWorkspaceDirectory(t0) {
  }
  let t4;
  if ($[10] === Symbol.for("react.memo_cache_sentinel")) {
-   t4 = <Text>Claude Code will no longer have access to files in this directory.</Text>;
+   t4 = <Text>OpenClaude will no longer have access to files in this directory.</Text>;
    $[10] = t4;
  } else {
    t4 = $[10];

View File

@@ -44,7 +44,7 @@ type Props = {
export function formatToolUseSummary(name: string, input: unknown): string {
  // plan_ready phase is only reached via ExitPlanMode tool
  if (name === EXIT_PLAN_MODE_V2_TOOL_NAME) {
-   return 'Review the plan in Claude Code on the web';
+   return 'Review the plan in OpenClaude on the web';
  }
  if (!input || typeof input !== 'object') return name;
  // AskUserQuestion: show the question text as a CTA, not the tool name.
@@ -168,7 +168,7 @@ function UltraplanSessionDetail(t0) {
  }
  let t7;
  if ($[12] === Symbol.for("react.memo_cache_sentinel")) {
-   t7 = <Text dimColor={true}>This will terminate the Claude Code on the web session.</Text>;
+   t7 = <Text dimColor={true}>This will terminate the OpenClaude on the web session.</Text>;
    $[12] = t7;
  } else {
    t7 = $[12];
@@ -311,7 +311,7 @@ function UltraplanSessionDetail(t0) {
  let t19;
  if ($[47] === Symbol.for("react.memo_cache_sentinel")) {
    t19 = {
-     label: "Review in Claude Code on the web",
+     label: "Review in OpenClaude on the web",
      value: "open" as const
    };
    $[47] = t19;
@@ -595,13 +595,13 @@ function ReviewSessionDetail(t0) {
  let t3;
  if ($[11] !== completed || $[12] !== onKill || $[13] !== running) {
    t3 = completed ? [{
-     label: "Open in Claude Code on the web",
+     label: "Open in OpenClaude on the web",
      value: "open"
    }, {
      label: "Dismiss",
      value: "dismiss"
    }] : [{
-     label: "Open in Claude Code on the web",
+     label: "Open in OpenClaude on the web",
      value: "open"
    }, ...(onKill && running ? [{
      label: "Stop ultrareview",
View File

@@ -11,6 +11,7 @@ import { afterEach, expect, test } from 'bun:test'
  NATIVE_PACKAGE_URL: undefined,
}
import { clearSystemPromptSections } from './systemPromptSections.js'
import { getSystemPrompt, DEFAULT_AGENT_PROMPT } from './prompts.js'
import { CLI_SYSPROMPT_PREFIXES, getCLISyspromptPrefix } from './system.js'
import { CLAUDE_CODE_GUIDE_AGENT } from '../tools/AgentTool/built-in/claudeCodeGuideAgent.js'
@@ -23,6 +24,7 @@ const originalSimpleEnv = process.env.CLAUDE_CODE_SIMPLE
afterEach(() => {
  process.env.CLAUDE_CODE_SIMPLE = originalSimpleEnv
  clearSystemPromptSections()
})
test('CLI identity prefixes describe OpenClaude instead of Claude Code', () => {
@@ -47,6 +49,21 @@ test('simple mode identity describes OpenClaude instead of Claude Code', async (
  expect(prompt[0]).not.toContain("Anthropic's official CLI for Claude")
})
test('system prompt model identity updates when model changes mid-session', async () => {
  delete process.env.CLAUDE_CODE_SIMPLE
  clearSystemPromptSections()
  const firstPrompt = await getSystemPrompt([], 'old-test-model')
  const secondPrompt = await getSystemPrompt([], 'new-test-model')
  const firstText = firstPrompt.join('\n')
  const secondText = secondPrompt.join('\n')
  expect(firstText).toContain('You are powered by the model old-test-model.')
  expect(secondText).toContain('You are powered by the model new-test-model.')
  expect(secondText).not.toContain('You are powered by the model old-test-model.')
})
test('built-in agent prompts describe OpenClaude instead of Claude Code', () => {
  expect(DEFAULT_AGENT_PROMPT).toContain('OpenClaude')
  expect(DEFAULT_AGENT_PROMPT).not.toContain('Claude Code')

View File

@@ -496,7 +496,7 @@ ${CYBER_RISK_INSTRUCTION}`,
systemPromptSection('ant_model_override', () =>
  getAntModelOverrideSection(),
),
- systemPromptSection('env_info_simple', () =>
+ systemPromptSection(`env_info_simple:${model}`, () =>
  computeSimpleEnvInfo(model, additionalWorkingDirectories),
),
systemPromptSection('language', () =>
@@ -519,7 +519,7 @@
  'MCP servers connect/disconnect between turns',
),
systemPromptSection('scratchpad', () => getScratchpadInstructions()),
- systemPromptSection('frc', () => getFunctionResultClearingSection(model)),
+ systemPromptSection(`frc:${model}`, () => getFunctionResultClearingSection(model)),
systemPromptSection(
  'summarize_tool_results',
  () => SUMMARIZE_TOOL_RESULTS_SECTION,
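The two key changes above embed the model name in the section cache key. A minimal sketch of why that matters, assuming systemPromptSection memoizes by its string key (the real implementation is in systemPromptSections.ts and may differ):

```typescript
// Hypothetical memo cache keyed by section name, mirroring the assumed
// behavior of systemPromptSection.
const cache = new Map<string, string>()

function section(key: string, compute: () => string): string {
  if (!cache.has(key)) cache.set(key, compute())
  return cache.get(key)!
}

// Static key: the first model's text sticks for the whole session.
function envInfoStale(model: string): string {
  return section('env_info_simple', () => `You are powered by the model ${model}.`)
}

// Model-embedded key: a mid-session model switch recomputes the section.
function envInfoFresh(model: string): string {
  return section(`env_info_simple:${model}`, () => `You are powered by the model ${model}.`)
}
```

With the static key, a call for a new model still returns the first model's cached text; keying on the model is what the new mid-session identity test exercises.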

View File

@@ -0,0 +1,7 @@
/**
* Stub — query source enum not included in source snapshot. See
* src/types/message.ts for the same scoping caveat (issue #473).
*/
/* eslint-disable @typescript-eslint/no-explicit-any */
export type QuerySource = any

View File

@@ -0,0 +1,128 @@
/**
* Integration test for cost-tracker → cacheStatsTracker wiring.
*
* The unit tests in services/api/cacheMetrics.test.ts and
* services/api/cacheStatsTracker.test.ts verify that each piece works
* in isolation. This file verifies that they're ACTUALLY CONNECTED —
* that `addToTotalSessionCost` resolves the provider, extracts metrics,
* and records them on the tracker on every call. Without this test, a
* future refactor could silently unwire the call chain (wrong param
* order, renamed symbol, removed call) and every individual unit test
* would still pass while `/cache-stats` showed empty data.
*
* We use real state — `resetCostState` + `getCurrentTurnCacheMetrics` —
* rather than mocking the tracker module. Fewer moving parts, and the
* test fails for the right reason if anyone breaks the wrapping.
*/
import { beforeEach, describe, expect, test } from 'bun:test'
import { addToTotalSessionCost, resetCostState } from './cost-tracker.js'
import {
getCurrentTurnCacheMetrics,
getSessionCacheMetrics,
} from './services/api/cacheStatsTracker.js'
// BetaUsage-compatible shape — minimum fields addToTotalSessionCost
// needs to run without throwing. Cache fields are the ones we care
// about here; input/output go into model cost calc.
function anthropicUsage(partial: {
input?: number
output?: number
cacheRead?: number
cacheCreation?: number
}): Parameters<typeof addToTotalSessionCost>[1] {
return {
input_tokens: partial.input ?? 0,
output_tokens: partial.output ?? 0,
cache_read_input_tokens: partial.cacheRead ?? 0,
cache_creation_input_tokens: partial.cacheCreation ?? 0,
// BetaUsage has several other optional fields; they're not read by
// the cache-tracking path so we leave them undefined.
} as Parameters<typeof addToTotalSessionCost>[1]
}
beforeEach(() => {
// resetCostState is the wrapped version that ALSO clears the cache
// tracker — this line is itself part of what we're verifying.
resetCostState()
})
describe('addToTotalSessionCost → cacheStatsTracker wiring', () => {
test('records normalized cache metrics on the tracker for each call', () => {
addToTotalSessionCost(
0.01,
anthropicUsage({
input: 200,
output: 50,
cacheRead: 800,
cacheCreation: 100,
}),
'claude-sonnet-4',
)
const turn = getCurrentTurnCacheMetrics()
expect(turn.supported).toBe(true)
expect(turn.read).toBe(800)
expect(turn.created).toBe(100)
// total = fresh(200) + read(800) + created(100) = 1100
expect(turn.total).toBe(1_100)
// hitRate = read / total = 800 / 1100 ≈ 0.727
expect(turn.hitRate).toBeCloseTo(800 / 1_100, 4)
})
test('session aggregate accumulates across multiple API calls', () => {
addToTotalSessionCost(
0.01,
anthropicUsage({ input: 100, cacheRead: 400 }),
'claude-sonnet-4',
)
addToTotalSessionCost(
0.02,
anthropicUsage({ input: 200, cacheRead: 600 }),
'claude-sonnet-4',
)
const session = getSessionCacheMetrics()
expect(session.read).toBe(1_000)
// total = (100+400) + (200+600) = 1300
expect(session.total).toBe(1_300)
expect(session.hitRate).toBeCloseTo(1_000 / 1_300, 4)
})
test('cold turn (no cache read/created) still records as supported', () => {
addToTotalSessionCost(
0.005,
anthropicUsage({ input: 500, output: 100 }),
'claude-sonnet-4',
)
const turn = getCurrentTurnCacheMetrics()
expect(turn.supported).toBe(true)
expect(turn.read).toBe(0)
expect(turn.created).toBe(0)
expect(turn.total).toBe(500)
// hitRate computed against a non-zero total is 0, not null — empty
// cache on a cacheable provider is a legitimate "no-hit" signal.
expect(turn.hitRate).toBe(0)
})
})
describe('resetCostState wrapper also clears cache tracker', () => {
test('resetCostState() zeros both cost counters and cache stats', () => {
// Populate both systems
addToTotalSessionCost(
0.01,
anthropicUsage({ input: 100, cacheRead: 500 }),
'claude-sonnet-4',
)
expect(getSessionCacheMetrics().read).toBe(500)
// resetCostState is the WRAPPED version — bootstrap's
// resetCostState cleared cost state historically but not cache
// stats. The wrapper in cost-tracker.ts adds the second call.
resetCostState()
const session = getSessionCacheMetrics()
expect(session.read).toBe(0)
expect(session.supported).toBe(false)
})
})
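The arithmetic the assertions above rely on can be restated as a standalone helper. This is an assumed shape; the real extraction lives in services/api/cacheMetrics.ts, which is not shown in this diff:

```typescript
type CacheMetrics = { read: number; created: number; total: number; hitRate: number }

// total = fresh input + cache reads + cache writes; hitRate = reads / total.
// A cold turn on a cacheable provider yields hitRate 0, not null.
function normalize(input: number, read: number, created: number): CacheMetrics {
  const total = input + read + created
  return { read, created, total, hitRate: total > 0 ? read / total : 0 }
}
```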

View File

@@ -1,5 +1,14 @@
import type { BetaUsage as Usage } from '@anthropic-ai/sdk/resources/beta/messages/messages.mjs'
import chalk from 'chalk'
import {
  extractCacheMetrics,
  resolveCacheProvider,
} from './services/api/cacheMetrics.js'
import {
  recordRequest as recordCacheRequest,
  resetSessionCacheStats,
} from './services/api/cacheStatsTracker.js'
import { getAPIProvider, isGithubNativeAnthropicMode } from './utils/model/providers.js'
import {
  addToTotalCostState,
  addToTotalLinesChanged,
@@ -22,7 +31,7 @@ import {
  getTotalWebSearchRequests,
  getUsageForModel,
  hasUnknownModelCost,
- resetCostState,
+ resetCostState as baseResetCostState,
  resetStateForTests,
  setCostStateForRestore,
  setHasUnknownModelCost,
@@ -62,12 +71,22 @@ export {
  formatCost,
  hasUnknownModelCost,
  resetStateForTests,
- resetCostState,
  setHasUnknownModelCost,
  getModelUsage,
  getUsageForModel,
}
/**
* Wraps bootstrap's resetCostState() so /clear, /compact and session
* switches zero the cache-stats tracker alongside the cost counters.
* Exported under the same name so existing callers pick up the cache
* reset without any call-site changes.
*/
export function resetCostState(): void {
baseResetCostState()
resetSessionCacheStats()
}
type StoredCostState = {
  totalCostUSD: number
  totalAPIDuration: number
@@ -251,6 +270,16 @@ function round(number: number, precision: number): number {
  return Math.round(number * precision) / precision
}
// Env-gated verbose token usage log. Treated as a boolean regardless of
// value specifics — any truthy-ish string switches it on. `verbose` is the
// documented keyword but we accept `1`/`true` for ergonomic parity with
// other OPENCLAUDE_* flags.
function shouldLogTokenUsageVerbose(): boolean {
const v = (process.env.OPENCLAUDE_LOG_TOKEN_USAGE ?? '').trim().toLowerCase()
if (!v) return false
return v !== '0' && v !== 'false' && v !== 'off'
}
function addToTotalModelUsage(
  cost: number,
  usage: Usage,
@@ -287,6 +316,43 @@ export function addToTotalSessionCost(
  const modelUsage = addToTotalModelUsage(cost, usage, model)
  addToTotalCostState(cost, modelUsage, model)
// Record normalized cache metrics for REPL display + /cache-stats.
// Resolved from the current process provider — at this point `usage` has
// already been Anthropic-shaped by the shim layer, so we feed the
// corresponding bucket (anthropic / copilot-claude / openai-like) to the
// extractor. For providers that genuinely don't report cache data
// (vanilla Copilot, Ollama), resolveCacheProvider steers us to
// supported:false so the UI shows "N/A" instead of lying with "0%".
const cacheProvider = resolveCacheProvider(getAPIProvider(), {
githubNativeAnthropic: isGithubNativeAnthropicMode(model),
openAiBaseUrl: process.env.OPENAI_BASE_URL ?? process.env.OPENAI_API_BASE,
})
const cacheMetrics = extractCacheMetrics(
usage as unknown as Record<string, unknown>,
cacheProvider,
)
recordCacheRequest(cacheMetrics, model)
// Opt-in structured per-request debug log on stderr. Power-user knob, not
// shown in the REPL — complements CLAUDE_CODE_ENABLE_TOKEN_USAGE_ATTACHMENT
// (which is model-facing). Any truthy value except "0"/"false" enables it.
if (shouldLogTokenUsageVerbose()) {
process.stderr.write(
JSON.stringify({
tag: 'openclaude.tokenUsage',
model,
provider: cacheProvider,
input_tokens: usage.input_tokens,
output_tokens: usage.output_tokens,
cache_read_input_tokens: usage.cache_read_input_tokens ?? 0,
cache_creation_input_tokens: usage.cache_creation_input_tokens ?? 0,
cache_supported: cacheMetrics.supported,
cache_hit_rate: cacheMetrics.hitRate,
cost_usd: cost,
}) + '\n',
)
}
  const attrs =
    isFastModeEnabled() && usage.speed === 'fast'
      ? { model, speed: 'fast' }
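The env-flag parsing added in this file treats the variable as a boolean. Restated standalone so the accepted values are easy to check (same logic as shouldLogTokenUsageVerbose; the function name here is illustrative):

```typescript
// Unset/empty, "0", "false", and "off" (any casing, surrounding spaces
// ignored) disable the log; anything else ("1", "true", "verbose") enables it.
function flagEnabled(raw: string | undefined): boolean {
  const v = (raw ?? '').trim().toLowerCase()
  if (!v) return false
  return v !== '0' && v !== 'false' && v !== 'off'
}
```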

View File

@@ -442,7 +442,84 @@ export async function connectRemoteControl(
  throw new Error('not implemented')
}
// add exit reason types for removing the error within gracefulShutdown file
export type ExitReason = {
}
// ============================================================================
// Stub re-exports — types not included in source snapshot.
//
// The upstream Anthropic SDK defines these in sub-files (sdk/coreTypes,
// sdk/runtimeTypes, sdk/controlTypes, sdk/toolTypes) that are stubbed
// in this open repo. Until the real definitions are restored, alias the
// names to `any` so callers can resolve their imports and `tsc` becomes
// actionable. See issue #473 for the typecheck-foundation effort.
// ============================================================================
/* eslint-disable @typescript-eslint/no-explicit-any */
export type AnyZodRawShape = any
export type ApiKeySource = any
export type AsyncHookJSONOutput = any
export type ConfigChangeHookInput = any
export type CwdChangedHookInput = any
export type ElicitationHookInput = any
export type ElicitationResultHookInput = any
export type FileChangedHookInput = any
export type ForkSessionOptions = any
export type ForkSessionResult = any
export type GetSessionInfoOptions = any
export type GetSessionMessagesOptions = any
export type HookEvent = any
export type HookInput = any
export type HookJSONOutput = any
export type InferShape<_T> = any
export type InstructionsLoadedHookInput = any
export type InternalOptions = any
export type InternalQuery = any
export type ListSessionsOptions = any
export type McpSdkServerConfigWithInstance = any
export type McpServerConfigForProcessTransport = any
export type McpServerStatus = any
export type ModelInfo = any
export type ModelUsage = any
export type NotificationHookInput = any
export type Options = any
export type PermissionDeniedHookInput = any
export type PermissionMode = any
export type PermissionRequestHookInput = any
export type PermissionResult = any
export type PermissionUpdate = any
export type PostCompactHookInput = any
export type PostToolUseFailureHookInput = any
export type PostToolUseHookInput = any
export type PreCompactHookInput = any
export type PreToolUseHookInput = any
export type Query = any
export type RewindFilesResult = any
export type SDKAssistantMessage = any
export type SDKAssistantMessageError = any
export type SDKCompactBoundaryMessage = any
export type SdkMcpToolDefinition = any
export type SDKPartialAssistantMessage = any
export type SDKPermissionDenial = any
export type SDKRateLimitInfo = any
export type SDKStatus = any
export type SDKStatusMessage = any
export type SDKSystemMessage = any
export type SDKToolProgressMessage = any
export type SDKUserMessageReplay = any
export type SessionEndHookInput = any
export type SessionMessage = any
export type SessionMutationOptions = any
export type SessionStartHookInput = any
export type SetupHookInput = any
export type StopFailureHookInput = any
export type StopHookInput = any
export type SubagentStartHookInput = any
export type SubagentStopHookInput = any
export type SyncHookJSONOutput = any
export type TaskCompletedHookInput = any
export type TaskCreatedHookInput = any
export type TeammateIdleHookInput = any
export type UserPromptSubmitHookInput = any

View File

@@ -80,7 +80,7 @@ async function main(): Promise<void> {
if (args.length === 1 && (args[0] === '--version' || args[0] === '-v' || args[0] === '-V')) {
  // MACRO.VERSION is inlined at build time
  // biome-ignore lint/suspicious/noConsole:: intentional console output
- console.log(`${MACRO.DISPLAY_VERSION ?? MACRO.VERSION} (Open Claude)`);
+ console.log(`${MACRO.DISPLAY_VERSION ?? MACRO.VERSION} (OpenClaude)`);
  return;
}
@@ -134,9 +134,13 @@ async function main(): Promise<void> {
await validateProviderEnvForStartupOrExit()
// Parse --model early so the startup screen can display the override
const { eagerParseCliFlag } = await import('../utils/cliArgs.js')
const earlyModelFlag = eagerParseCliFlag('--model')
// Print the gradient startup screen before the Ink UI loads
const { printStartupScreen } = await import('../components/StartupScreen.js')
- printStartupScreen()
+ printStartupScreen(earlyModelFlag)
// For all other paths, load the startup profiler
const {
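eagerParseCliFlag runs before the full CLI parser loads. A plausible sketch of such an eager scan (an assumption for illustration; the real implementation is in src/utils/cliArgs.ts and may differ):

```typescript
// Scan argv for "--model value" or "--model=value" without a full parse,
// so the startup screen can show the override before the Ink UI loads.
function eagerParse(argv: string[], flag: string): string | undefined {
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i]
    if (arg === flag) return argv[i + 1]
    if (arg.startsWith(flag + '=')) return arg.slice(flag.length + 1)
  }
  return undefined
}
```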

src/entrypoints/sdk.d.ts vendored Normal file
View File

@@ -0,0 +1,518 @@
// Type declarations for @gitlawb/openclaude SDK
// Manually maintained — keep in sync with src/entrypoints/sdk/index.ts
// Drift is caught by validate-externals.ts (runs in CI)
// ============================================================================
// Error
// ============================================================================
export class AbortError extends Error {
override readonly name: 'AbortError'
}
export class ClaudeError extends Error {
constructor(message: string)
}
export class SDKError extends ClaudeError {
constructor(message: string)
}
export class SDKAuthenticationError extends SDKError {
constructor(message?: string)
}
export class SDKBillingError extends SDKError {
constructor(message?: string)
}
export class SDKRateLimitError extends SDKError {
constructor(
message?: string,
readonly resetsAt?: number,
readonly rateLimitType?: string,
)
}
export class SDKInvalidRequestError extends SDKError {
constructor(message?: string)
}
export class SDKServerError extends SDKError {
constructor(message?: string)
}
export class SDKMaxOutputTokensError extends SDKError {
constructor(message?: string)
}
export type SDKAssistantMessageError =
| 'authentication_failed'
| 'billing_error'
| 'rate_limit'
| 'invalid_request'
| 'server_error'
| 'unknown'
| 'max_output_tokens'
export function sdkErrorFromType(
errorType: SDKAssistantMessageError,
message?: string,
): SDKError | ClaudeError
// ============================================================================
// Types
// ============================================================================
export type ApiKeySource = 'user' | 'project' | 'org' | 'temporary' | 'oauth' | 'none'
export type RewindFilesResult = {
canRewind: boolean
error?: string
filesChanged?: string[]
insertions?: number
deletions?: number
}
export type McpServerStatus = {
name: string
status: 'connected' | 'failed' | 'needs-auth' | 'pending' | 'disabled'
serverInfo?: { name: string; version: string }
error?: string
scope?: string
tools?: {
name: string
description?: string
annotations?: {
readOnly?: boolean
destructive?: boolean
openWorld?: boolean
}
}[]
}
export type PermissionResult = ({
behavior: 'allow'
updatedInput?: Record<string, unknown>
updatedPermissions?: ({
type: 'addRules'
rules: { toolName: string; ruleContent?: string }[]
behavior: 'allow' | 'deny' | 'ask'
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
}) | ({
type: 'replaceRules'
rules: { toolName: string; ruleContent?: string }[]
behavior: 'allow' | 'deny' | 'ask'
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
}) | ({
type: 'removeRules'
rules: { toolName: string; ruleContent?: string }[]
behavior: 'allow' | 'deny' | 'ask'
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
}) | ({
type: 'setMode'
mode: 'default' | 'acceptEdits' | 'bypassPermissions' | 'plan' | 'dontAsk'
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
}) | ({
type: 'addDirectories'
directories: string[]
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
}) | ({
type: 'removeDirectories'
directories: string[]
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
})[]
toolUseID?: string
decisionClassification?: 'user_temporary' | 'user_permanent' | 'user_reject'
}) | ({
behavior: 'deny'
message: string
interrupt?: boolean
toolUseID?: string
decisionClassification?: 'user_temporary' | 'user_permanent' | 'user_reject'
})
export type SDKSessionInfo = {
sessionId: string
summary: string
lastModified: number
fileSize?: number
customTitle?: string
firstPrompt?: string
gitBranch?: string
cwd?: string
tag?: string
createdAt?: number
}
export type ListSessionsOptions = {
dir?: string
limit?: number
offset?: number
includeWorktrees?: boolean
}
export type GetSessionInfoOptions = {
dir?: string
}
export type GetSessionMessagesOptions = {
dir?: string
limit?: number
offset?: number
includeSystemMessages?: boolean
}
export type SessionMutationOptions = {
dir?: string
}
export type ForkSessionOptions = {
dir?: string
upToMessageId?: string
title?: string
}
export type ForkSessionResult = {
sessionId: string
}
export type SessionMessage = {
role: 'user' | 'assistant' | 'system'
content: unknown
timestamp?: string
uuid?: string
parentUuid?: string | null
[key: string]: unknown
}
// Re-export precise SDK message types from generated types
// These use camelCase field names and discriminated unions for full IntelliSense
export type { SDKMessage as SDKMessage } from './sdk/coreTypes.generated.js'
export type { SDKUserMessage as SDKUserMessage } from './sdk/coreTypes.generated.js'
export type { SDKResultMessage as SDKResultMessage } from './sdk/coreTypes.generated.js'
// ============================================================================
// Query types
// ============================================================================
export type QueryPermissionMode =
| 'default'
| 'plan'
| 'auto-accept'
| 'bypass-permissions'
| 'bypassPermissions'
| 'acceptEdits'
export type QueryOptions = {
cwd: string
additionalDirectories?: string[]
model?: string
sessionId?: string
/** Fork the session before resuming (requires sessionId). */
fork?: boolean
/** Alias for fork. When true, resumed session forks to a new session ID. */
forkSession?: boolean
/** Resume the most recent session for this cwd (no sessionId needed). */
continue?: boolean
resume?: string
/** When resuming, resume messages up to and including this message UUID. */
resumeSessionAt?: string
permissionMode?: QueryPermissionMode
abortController?: AbortController
executable?: string
allowDangerouslySkipPermissions?: boolean
disallowedTools?: string[]
hooks?: Record<string, unknown[]>
mcpServers?: Record<string, unknown>
settings?: {
env?: Record<string, string>
attribution?: { commit: string; pr: string }
}
/** Environment variables to apply during query execution. Overrides process.env. Takes precedence over settings.env. */
env?: Record<string, string | undefined>
/**
* Callback invoked before each tool use. Return `{ behavior: 'allow' }` to
* permit the call or `{ behavior: 'deny', message?: string }` to reject it.
*
* **Secure-by-default**: If neither `canUseTool` nor `onPermissionRequest`
* is provided, ALL tool uses are denied. You MUST provide at least one of
* these callbacks to allow tool execution.
*/
canUseTool?: (
name: string,
input: unknown,
options?: { toolUseID?: string },
) => Promise<{ behavior: 'allow' | 'deny'; message?: string; updatedInput?: unknown }>
/**
* Callback invoked when a tool needs permission approval. The host receives
* the request immediately and can resolve it by calling
* `query.respondToPermission(toolUseId, decision)` before the timeout.
* If omitted, tools that require permission fall through to the default
* permission logic immediately (no timeout).
*/
onPermissionRequest?: (message: SDKPermissionRequestMessage) => void
systemPrompt?:
| string
| { type: 'preset'; preset: string; append?: string }
| { type: 'custom'; content: string }
/** Agent definitions to register with the query engine. */
agents?: Record<string, {
description: string
prompt: string
tools?: string[]
disallowedTools?: string[]
model?: string
maxTurns?: number
}>
settingSources?: string[]
/** When true, yields stream_event messages for token-by-token streaming. */
includePartialMessages?: boolean
/** @internal Timeout in ms for permission request resolution. Default 30000. */
_permissionTimeoutMs?: number
stderr?: (data: string) => void
}
export interface Query {
readonly sessionId: string
[Symbol.asyncIterator](): AsyncIterator<SDKMessage>
setModel(model: string): Promise<void>
setPermissionMode(mode: QueryPermissionMode): Promise<void>
close(): void
interrupt(): void
respondToPermission(toolUseId: string, decision: PermissionResult): void
/** Check if file rewind is possible. */
rewindFiles(): RewindFilesResult
/** Actually perform the file rewind. Returns files changed and diff stats. */
rewindFilesAsync(): Promise<RewindFilesResult>
supportedCommands(): string[]
supportedModels(): string[]
supportedAgents(): string[]
mcpServerStatus(): McpServerStatus[]
accountInfo(): Promise<{ apiKeySource: ApiKeySource; [key: string]: unknown }>
setMaxThinkingTokens(tokens: number): void
}
/**
* Permission request message emitted when a tool needs permission approval.
* Hosts can respond via respondToPermission() using the request_id.
*/
export type SDKPermissionRequestMessage = {
type: 'permission_request'
request_id: string
tool_name: string
tool_use_id: string
input: Record<string, unknown>
uuid: string
session_id: string
}
export type SDKPermissionTimeoutMessage = {
type: 'permission_timeout'
tool_name: string
tool_use_id: string
timed_out_after_ms: number
uuid: string
session_id: string
}
// ============================================================================
// V2 API types
// ============================================================================
export type SDKSessionOptions = {
cwd: string
model?: string
permissionMode?: QueryPermissionMode
abortController?: AbortController
/**
* Callback invoked before each tool use. Return `{ behavior: 'allow' }` to
* permit the call or `{ behavior: 'deny', message?: string }` to reject it.
*
* **Secure-by-default**: If neither `canUseTool` nor `onPermissionRequest`
* is provided, ALL tool uses are denied. You MUST provide at least one of
* these callbacks to allow tool execution.
*/
canUseTool?: (
name: string,
input: unknown,
options?: { toolUseID?: string },
) => Promise<{ behavior: 'allow' | 'deny'; message?: string; updatedInput?: unknown }>
/** MCP server configurations for this session. */
mcpServers?: Record<string, unknown>
/**
* Callback invoked when a tool needs permission approval. The host receives
* the request immediately and can resolve it via respondToPermission().
*/
onPermissionRequest?: (message: SDKPermissionRequestMessage) => void
}
export interface SDKSession {
sessionId: string
sendMessage(content: string): AsyncIterable<SDKMessage>
getMessages(): SDKMessage[]
interrupt(): void
/** Respond to a pending permission prompt. */
respondToPermission(toolUseId: string, decision: PermissionResult): void
}
// ============================================================================
// MCP tool types
// ============================================================================
export interface SdkMcpToolDefinition<Schema = any> {
name: string
description: string
inputSchema: Schema
handler: (args: any, extra: unknown) => Promise<any>
annotations?: any
searchHint?: string
alwaysLoad?: boolean
}
// ============================================================================
// Session functions
// ============================================================================
export function listSessions(
options?: ListSessionsOptions,
): Promise<SDKSessionInfo[]>
export function getSessionInfo(
sessionId: string,
options?: GetSessionInfoOptions,
): Promise<SDKSessionInfo | undefined>
export function getSessionMessages(
sessionId: string,
options?: GetSessionMessagesOptions,
): Promise<SessionMessage[]>
export function renameSession(
sessionId: string,
title: string,
options?: SessionMutationOptions,
): Promise<void>
export function tagSession(
sessionId: string,
tag: string | null,
options?: SessionMutationOptions,
): Promise<void>
export function forkSession(
sessionId: string,
options?: ForkSessionOptions,
): Promise<ForkSessionResult>
export function deleteSession(
sessionId: string,
options?: SessionMutationOptions,
): Promise<void>
// ============================================================================
// Query functions
// ============================================================================
export function query(params: {
prompt: string | AsyncIterable<SDKUserMessage>
options?: QueryOptions
}): Query
export function queryAsync(params: {
prompt: string | AsyncIterable<SDKUserMessage>
options?: QueryOptions
}): Promise<Query>
// ============================================================================
// V2 API functions
// ============================================================================
export function unstable_v2_createSession(options: SDKSessionOptions): SDKSession
export function unstable_v2_resumeSession(
sessionId: string,
options: SDKSessionOptions,
): Promise<SDKSession>
export function unstable_v2_prompt(
message: string,
options: SDKSessionOptions,
): Promise<SDKResultMessage>
// ============================================================================
// MCP tool functions
// ============================================================================
export function tool<Schema = any>(
name: string,
description: string,
inputSchema: Schema,
handler: (args: any, extra: unknown) => Promise<any>,
extras?: {
annotations?: any
searchHint?: string
alwaysLoad?: boolean
},
): SdkMcpToolDefinition<Schema>
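This snapshot only declares `tool()`'s signature; given the declared return type, a faithful minimal implementation just assembles the definition object. The sketch below inlines that (with the types repeated for self-containment) and defines a toy echo tool; the schema shape is an assumption, as the declaration leaves `Schema` open.

```typescript
// Minimal sketch of tool() consistent with the declared signature.
// The real SDK may do more (validation, schema conversion); this
// only shows the shape of the resulting SdkMcpToolDefinition.
type SdkMcpToolDefinition<Schema = any> = {
  name: string
  description: string
  inputSchema: Schema
  handler: (args: any, extra: unknown) => Promise<any>
  annotations?: any
  searchHint?: string
  alwaysLoad?: boolean
}

function tool<Schema = any>(
  name: string,
  description: string,
  inputSchema: Schema,
  handler: (args: any, extra: unknown) => Promise<any>,
  extras?: { annotations?: any; searchHint?: string; alwaysLoad?: boolean },
): SdkMcpToolDefinition<Schema> {
  return { name, description, inputSchema, ...extras, handler }
}

// Usage: a toy echo tool with a hypothetical JSON-schema-style input.
const echo = tool(
  'echo',
  'Echo the input back',
  { type: 'object', properties: { text: { type: 'string' } } },
  async args => ({ content: [{ type: 'text', text: args.text }] }),
)
```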
/**
* MCP server transport configuration types.
* Matches McpServerConfigForProcessTransport from coreTypes.generated.ts.
*/
export type SdkMcpStdioConfig = {
type?: "stdio"
command: string
args?: string[]
env?: Record<string, string>
}
export type SdkMcpSSEConfig = {
type: "sse"
url: string
headers?: Record<string, string>
}
export type SdkMcpHttpConfig = {
type: "http"
url: string
headers?: Record<string, string>
}
export type SdkMcpSdkConfig = {
type: "sdk"
name: string
}
export type SdkMcpServerConfig = SdkMcpStdioConfig | SdkMcpSSEConfig | SdkMcpHttpConfig | SdkMcpSdkConfig
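The union is discriminated on `type`, with one wrinkle: for stdio the tag is optional, so a config with no `type` must fall through to the stdio case. A narrowing sketch (types repeated for self-containment; the formatting of the returned strings is my own choice, not SDK behavior):

```typescript
// The config union narrows on the `type` tag. Note type?: 'stdio' is
// optional, so `undefined` must be handled as stdio as well.
type SdkMcpStdioConfig = { type?: 'stdio'; command: string; args?: string[]; env?: Record<string, string> }
type SdkMcpSSEConfig = { type: 'sse'; url: string; headers?: Record<string, string> }
type SdkMcpHttpConfig = { type: 'http'; url: string; headers?: Record<string, string> }
type SdkMcpSdkConfig = { type: 'sdk'; name: string }
type SdkMcpServerConfig = SdkMcpStdioConfig | SdkMcpSSEConfig | SdkMcpHttpConfig | SdkMcpSdkConfig

function describeTransport(config: SdkMcpServerConfig): string {
  switch (config.type) {
    case 'sse':
    case 'http':
      return `${config.type} -> ${config.url}`
    case 'sdk':
      return `sdk server ${config.name}`
    case 'stdio':
    case undefined:
      // Absent tag defaults to the process (stdio) transport.
      return ['stdio:', config.command, ...(config.args ?? [])].join(' ')
  }
}
```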
/**
* Scoped MCP server config with session scope.
* Returned by createSdkMcpServer() for use with mcpServers option.
*/
export type SdkScopedMcpServerConfig = SdkMcpServerConfig & {
scope: "session"
}
/**
* Wraps an MCP server configuration for use with the SDK.
* Adds the 'session' scope marker so the SDK knows this server
* should be connected per-session (not globally).
*
* @param config - MCP server config (stdio, sse, http, or sdk type)
* @returns Scoped config with scope: 'session' added
*
* @example
* ```typescript
* const server = createSdkMcpServer({
* type: 'stdio',
* command: 'npx',
* args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
* })
* const session = unstable_v2_createSession({
* cwd: '/my/project',
* mcpServers: { 'fs': server },
* })
* ```
*/
export function createSdkMcpServer(config: SdkMcpServerConfig): SdkScopedMcpServerConfig
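The body of `createSdkMcpServer` is not in this snapshot, but per the declared types it only needs to stamp `scope: 'session'` onto the config. A sketch under that assumption (abbreviating the union to the stdio variant for brevity):

```typescript
// Sketch of createSdkMcpServer consistent with the declared types:
// it returns the config with the 'session' scope marker added.
// The union is abbreviated here; the real one also covers sse/http/sdk.
type SdkMcpServerConfig = { type?: 'stdio'; command: string; args?: string[] }
type SdkScopedMcpServerConfig = SdkMcpServerConfig & { scope: 'session' }

function createSdkMcpServer(config: SdkMcpServerConfig): SdkScopedMcpServerConfig {
  return { ...config, scope: 'session' }
}

// Mirrors the JSDoc example above: wrap a stdio filesystem server.
const server = createSdkMcpServer({
  type: 'stdio',
  command: 'npx',
  args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
})
```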


@@ -0,0 +1,10 @@
/**
* Stub — control protocol types not included in source snapshot. See
* src/types/message.ts for the same scoping caveat (issue #473).
*/
/* eslint-disable @typescript-eslint/no-explicit-any */
export type SDKControlRequest = any
export type SDKControlResponse = any
export type SDKControlPermissionRequest = any
export type StdoutMessage = any
