Compare commits


10 Commits

Author SHA1 Message Date
github-actions[bot]
e6af990375 chore(main): release 0.8.0 2026-04-29 16:59:26 +00:00
KRATOS
ee0d930093 fix(ripgrep): use @vscode/ripgrep package as the builtin source (#911) (#932)
The vendored-binary lookup at vendor/ripgrep/<arch>-<platform>/rg never
resolved in this fork — that directory does not ship — so users without
a system rg had no working fallback. Switch to the @vscode/ripgrep
package so Microsoft maintains the platform/arch matrix and the binary
is delivered via npm.

- src/utils/ripgrep.ts: replace hand-rolled vendor-path resolution with
  rgPath from @vscode/ripgrep. Lazy require so a missing package falls
  through to the system rg branch instead of throwing at import.
  Drop builtinExists from the config args; builtinCommand is now a
  string-or-null. The system override (USE_BUILTIN_RIPGREP=0), the
  Bun-compiled standalone embedded mode, the macOS codesign hook, and
  all retry/timeout/error logic are preserved untouched.
- scripts/build.ts: mark @vscode/ripgrep as external. The package
  resolves rgPath via __dirname at runtime, so bundling would freeze
  the build host's absolute path into dist/cli.mjs.
- src/utils/ripgrep.test.ts: update for the new config shape and add
  tests covering USE_BUILTIN_RIPGREP=0, embedded mode, last-resort
  fallback, and null builtin path.

Tested locally on Linux (Bun 1.3.13). macOS (codesign hook) and
Windows (rg.exe extension) need contributor verification.
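The lazy-require fallthrough described above can be sketched as follows. This is a minimal illustration, not the fork's actual code: the loader is injected so the sketch stays self-contained, where the real `src/utils/ripgrep.ts` would pass `() => require("@vscode/ripgrep")` and read `rgPath` from it; `pickRipgrepCommand` and its parameters are hypothetical names.

```typescript
// Shape of the @vscode/ripgrep module: it exports an absolute rgPath string.
type RipgrepModule = { rgPath?: unknown };

// Lazy resolution: a missing or broken package yields null instead of
// throwing at import time, so callers can fall through to a system rg.
function resolveBuiltinRipgrep(load: () => RipgrepModule): string | null {
  try {
    const { rgPath } = load();
    return typeof rgPath === "string" ? rgPath : null;
  } catch {
    // Package not installed (e.g. marked external and absent at runtime).
    return null;
  }
}

// Command selection: honor the system override, then the builtin
// (string-or-null), then any detected system rg, then a bare PATH lookup.
function pickRipgrepCommand(
  builtin: string | null,
  systemRg: string | null,
  useBuiltin: boolean, // derived from USE_BUILTIN_RIPGREP !== "0"
): string {
  if (!useBuiltin && systemRg) return systemRg;
  return builtin ?? systemRg ?? "rg"; // last-resort fallback
}
```

The key property is that `resolveBuiltinRipgrep` converts a throwing `require` into a `null` builtin path, which the selection logic already handles.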
2026-04-30 00:58:46 +08:00
ArkhAngelLifeJiggy
0ca4333537 feat: add streaming token counter (#797)
* feat: add streaming token counter

- Add StreamingTokenCounter for real-time token counting during generation
- Tracks output tokens as they arrive from stream
- Calculates tokens per second rate
- Add tests (4 passing)

PR 4A: Streaming Token Counter (Features 1.2, 1.7)

* refactor: move StreamingTokenCounter to separate file

- Extract StreamingTokenCounter from tokens.ts to streamingTokenCounter.ts
- Add getEstimatedRemainingTokens() method
- Update test import

* fix: word-boundary token counting for stable stream totals

- Accumulate raw content, count only at word boundaries
- Eliminates instability from arbitrary chunk boundaries
- Add finalize() to flush remaining content on stream end
- Add characterCount getter for raw content tracking
- Rename getEstimatedRemainingTokens -> getEstimatedGenerationTimeMs
- Add comprehensive tests

* fix: update streamingTokens test for word-boundary API

- Add finalize() call before checking output tokens
- Use characterCount for interim checks
- Add spaces to trigger word boundary counting

* fix: add estimateRemainingTokens/Time methods

- Add estimateRemainingTokens(target) method
- Add estimateRemainingTimeMs(target) method
- Addresses non-blocking feedback: remaining tokens are now properly estimated

* fix: PR 797 - fix word boundary counting, consolidate tests

Blockers (Vasanthdev2004):
- recountAtWordBoundary now searches forward from lastCountedIndex+1
- Finds NEXT space after already-counted region, not before it
- Provides accurate live token counts during streaming, not just finalize()

Non-blocking (gnanam1990):
- Delete streamingTokens.test.ts, merge tests into streamingTokenCounter.test.ts
- Added interim-counting test to verify counting updates during streaming

* fix: PR 797 - fix word boundary advancement after space

Blocking:
- Fix recountAtWordBoundary to skip past space when searching for next boundary
- After counting at a space, indexOf(' ') returns 0 (the space itself)
- Now starts search from index 1 to find the NEXT word boundary
- Short chunks now properly trigger count advancement

Non-blocking:
- Add test verifying count increases after each word boundary
- Add test for space-skipping behavior
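The word-boundary scheme the commits above converge on can be sketched like this. The class and method names mirror the commit text, but the internals (including the crude chars/4 token estimate) are assumptions, not the PR's actual implementation: count only up to the last completed word so totals do not jitter when chunks split mid-word, and flush the tail in `finalize()`.

```typescript
// Minimal sketch of a word-boundary streaming token counter.
class StreamingTokenCounter {
  private buffer = "";
  private lastCountedIndex = 0;
  private tokens = 0;

  get outputTokens(): number {
    return this.tokens;
  }

  // Raw content length, usable for interim checks before a boundary lands.
  get characterCount(): number {
    return this.buffer.length;
  }

  addChunk(chunk: string): void {
    this.buffer += chunk;
    this.recountAtWordBoundary();
  }

  // Only advance the count when a new space appears past the
  // already-counted region, so arbitrary chunk splits stay stable.
  private recountAtWordBoundary(): void {
    const boundary = this.buffer.lastIndexOf(" ");
    if (boundary <= this.lastCountedIndex) return;
    const newText = this.buffer.slice(this.lastCountedIndex, boundary);
    this.tokens += Math.ceil(newText.length / 4); // crude chars/4 estimate
    this.lastCountedIndex = boundary;
  }

  // Flush whatever trails the last counted boundary at stream end.
  finalize(): number {
    const tail = this.buffer.slice(this.lastCountedIndex);
    if (tail.trim().length > 0) this.tokens += Math.ceil(tail.length / 4);
    this.lastCountedIndex = this.buffer.length;
    return this.tokens;
  }
}
```

The trade-off is that short final words are only counted at `finalize()`, which is exactly why the tests above call it before asserting output totals.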
2026-04-29 16:17:00 +08:00
ArkhAngelLifeJiggy
92d297e50e feat: context preloading and hybrid context strategy (#860)
* feat: context preloading and hybrid context strategy

PR 2D - Section 2.7, 2.8:
- Add contextPreload.ts with pattern-based prediction
- Add hybridContextStrategy.ts with cache/fresh balancing
- Optimize for cost vs accuracy
- Add comprehensive tests (13 passing)

* feat: wire hybrid context strategy into API path

- Apply hybrid strategy after normalizeMessagesForAPI
- Feature-flag controlled (HYBRID_CONTEXT_STRATEGY)
- Optimizes cache/fresh balance for API requests

* fix: resolve PR 2D blocking issues

- Fix predictContextNeeds self-assign bug (matchedCategory = category)
- Add test for non-empty predictedNeed
- Preserve conversation tail in hybridStrategy (never drop last 3 messages)
- Add comment for hardcoded 200k cap in claude.ts

Fixes reviewer feedback from gnanam1990 and Vasanthdev2004

* fix: preserve tool_use/tool_result chains in hybridStrategy

- Increase MIN_TAIL to 5 (tool_use -> tool_result -> assistant -> user -> next)
- Add getMessageChain() to preserve paired messages
- Chains kept together in final selection

* fix: PR 860 - tool_use/tool_result pairing and safe token counting

Blocking:
- getMessageChain() now pairs by tool_use.id (block ID) not msg.message.id
- Find tool_use blocks by id, pair with tool_result having matching tool_use_id
- Fixes tool_result surviving while paired tool_use dropped

- Token counting now includes array content (tool_use, tool_result, thinking)
- Not just string content, prevents undercounting prompt size

- Deduplicate messages by UUID when combining chains + split + tail
- Prevents duplicate messages in final request

Non-blocking:
- Add regression test for tool_use/tool_result pairing

* fix: PR 860 - account for actual structured payload size in token counting

Blocking:
- getMessageTokenCount now calculates actual token count for structured blocks
- tool_use: uses JSON.stringify(input).length / 4 + base
- tool_result: counts actual content (string or array of text blocks)
- thinking: counts actual thinking text length / 4
- is_error flag adds small overhead

Non-blocking:
- Add tests for large tool_use input and large thinking blocks
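The structured-payload estimate described in the last commit can be sketched as below. The block shapes are simplified stand-ins for the repo's real message types, and `BASE_OVERHEAD`, `estimateBlockTokens`, and `estimateMessageTokens` are illustrative names; the grounded part is the approach from the commit text: serialize `tool_use.input`, count actual `tool_result` and `thinking` text at roughly chars/4, and add a small surcharge for `is_error`.

```typescript
// Simplified content-block shapes for illustration.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "thinking"; thinking: string }
  | { type: "tool_use"; id: string; input: unknown }
  | { type: "tool_result"; tool_use_id: string; content: string; is_error?: boolean };

const BASE_OVERHEAD = 3; // assumed small per-block structural overhead

function estimateBlockTokens(block: ContentBlock): number {
  switch (block.type) {
    case "text":
      return Math.ceil(block.text.length / 4);
    case "thinking":
      // Count the actual thinking text, not zero.
      return Math.ceil(block.thinking.length / 4);
    case "tool_use":
      // Count the serialized input so large inputs are not undercounted.
      return Math.ceil(JSON.stringify(block.input).length / 4) + BASE_OVERHEAD;
    case "tool_result":
      return (
        Math.ceil(block.content.length / 4) +
        (block.is_error ? 2 : 0) + // small overhead for the error flag
        BASE_OVERHEAD
      );
  }
}

// Handles both plain-string messages and structured block arrays,
// which is the undercounting gap the blocking review item called out.
function estimateMessageTokens(content: string | ContentBlock[]): number {
  if (typeof content === "string") return Math.ceil(content.length / 4);
  return content.reduce((sum, block) => sum + estimateBlockTokens(block), 0);
}
```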
2026-04-29 15:49:46 +08:00
emsanakhchivan
91f93ce615 feat: SDK Foundation — Type Declarations, Errors, and Utilities (#866)
* feat(sdk): add SDK foundation — type declarations, errors, and utilities

Adds standalone SDK building blocks with no SDK source dependencies:
- sdk.d.ts: ambient type declarations for SDK bundle
- coreSchemas.ts + coreTypes.generated.ts: Zod schemas and generated types
- errors.ts: SDK-specific error classes
- validation.ts: input validation utilities
- messageFilters.ts: extracted message filter logic
- handlePromptSubmit.ts: imports from messageFilters
- 16 generated-types tests

* fix(sdk): narrow assertFunction type from broad Function to callable signature

Code review finding: assertFunction used `asserts value is Function` which
accepts any function-like value without narrowing. Changed to
`(...args: any[]) => any` for better type safety.
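The narrowed assertion reads roughly as follows; the runtime check and error message are illustrative, but the signature change is the one the review finding describes: asserting a callable signature rather than the broad `Function` type.

```typescript
// After a passing call, TypeScript narrows `value` to a callable
// signature, so `value(...)` type-checks at the call site.
function assertFunction(
  value: unknown,
  name = "value",
): asserts value is (...args: any[]) => any {
  if (typeof value !== "function") {
    throw new TypeError(`${name} must be a function, got ${typeof value}`);
  }
}
```

A usage note: with `asserts value is Function`, the narrowed value could not be invoked without a further cast; the callable signature removes that friction while keeping the same runtime behavior.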

* fix(sdk): update sdk.d.ts header — manually maintained, not generated

Reviewer noted the header said "Generated from index.ts" but no generator
produces this file. Updated to "Manually maintained — keep in sync with
index.ts". Drift detection added in validate-externals.ts (PR 3).

* fix(sdk): align sdk.d.ts types with canonical coreTypes.generated.ts

Tighten SDK public type contract to resolve reviewer blockers:

- PermissionResult: unknown[] → precise 6-shape discriminated union
  (addRules/replaceRules/removeRules/setMode/addDirectories/removeDirectories)
- SDKSessionInfo: snake_case → camelCase (sessionId, lastModified, etc.)
- ForkSessionResult: session_id → sessionId
- SDKPermissionRequestMessage: uuid + session_id now required
- SDKPermissionTimeoutMessage: added uuid + session_id
- SessionMessage: parent_uuid → parentUuid
- SDKMessage/SDKUserMessage/SDKResultMessage: replaced loose inline
  definitions with re-exports from coreTypes.generated.ts

---------

Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
2026-04-29 14:53:01 +08:00
KRATOS
5943c5c269 fix(input): strip leading ! when entering bash mode (#947)
The PromptInput onChange handler had two branches for entering bash
mode: a single-char path that just toggled the mode and a multi-char
paste path that also stripped the leading `!` from the buffer. The
single-char path returned without stripping, so typing a bare `!` into
empty input switched modes but left the literal `!` visible.

Consolidated both paths through a new pure helper `detectModeEntry`
that returns the new mode plus the stripped buffer value, so there is
no longer a branch where the mode character can leak into the buffer.

Fixes #662
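The pure helper's shape can be sketched from the commit text; the exact signature and mode names here are assumptions, not the PR's source. The point is that both the single-char and paste paths flow through one function that returns the mode *and* the stripped buffer together, so the `!` cannot leak.

```typescript
type InputMode = "prompt" | "bash";

// One path for both "typed a bare !" and "pasted !ls -la": the mode
// character is always stripped from the returned buffer value.
function detectModeEntry(
  nextValue: string,
  currentMode: InputMode,
): { mode: InputMode; value: string } {
  if (currentMode === "prompt" && nextValue.startsWith("!")) {
    return { mode: "bash", value: nextValue.slice(1) };
  }
  return { mode: currentMode, value: nextValue };
}
```

The onChange handler then applies both fields unconditionally, eliminating the early-return branch that previously skipped the strip.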
2026-04-29 10:29:59 +08:00
Kevin Codex
c0b5535d86 docs: add Atomic Chat partner (#942)
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-28 23:35:25 +08:00
Vasanth T
d321c8fc6a fix: avoid legacy Windows PasswordVault reads by default (#941)
* fix: avoid legacy Windows PasswordVault reads by default

* fix: isolate model capability override cache

---------

Co-authored-by: OpenClaude Worker 3 <worker-3@openclaude.local>
2026-04-28 23:30:48 +08:00
KRATOS
8106880855 fix(typecheck): make bun run typecheck actionable on main (#473) (#938)
Issue #473 reported that `bun run typecheck` fails on main with ~4400
errors due to repo-foundation drift, masking branch-specific
regressions. Per kevincodex1's guidance ("lets narrow the typecheck
scope for now and then we expand step by step") this PR addresses the
foundational root causes and brings the error count down 60% so the
gate is actionable for branch reviews.

Changes:

- tsconfig.json: bump target to ES2023 + add lib ["ES2023", "DOM"]
  so Array.findLast / findLastIndex resolve (kills 41 TS2550 errors).
  Add `noEmit: true` for typecheck-only mode and
  `allowImportingTsExtensions: true` (kills 40 TS5097 errors). Set
  `noImplicitAny: false` because cleaning up TSX-component implicit
  any is explicitly out of scope per the issue.

- src/global.d.ts: ambient declaration for the build-time MACRO
  global injected by scripts/build.ts via Bun's `define` option
  (kills 9 TS2304 'Cannot find name MACRO' errors).

- src/types/{message,utils,tools}.ts: stubs for the highest-impact
  missing modules from the partial source snapshot (~21 importers
  for message alone). Document the snapshot caveat at the top of each
  stub and reference issue #473 so future readers know they're
  placeholders.

- src/entrypoints/sdk/controlTypes.ts and src/constants/querySource.ts:
  similar one-file stubs unblocking 18 + 19 importers respectively.

- src/entrypoints/agentSdkTypes.ts: append `any`-typed aliases for
  ~70 SDK names that callers expect on the public surface but that
  live in stubbed sub-files (PermissionMode, SDKCompactBoundaryMessage,
  HookEvent, ModelUsage, ModelInfo, etc. — exactly the list from
  auriti's bug-report enumeration).
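Taken together, the tsconfig.json settings called out above amount to roughly this fragment (only the options named in the commit; the real file carries more):

```json
{
  "compilerOptions": {
    "target": "ES2023",
    "lib": ["ES2023", "DOM"],
    "noEmit": true,
    "allowImportingTsExtensions": true,
    "noImplicitAny": false
  }
}
```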

Verified locally on Linux:
- baseline `bunx tsc --noEmit` on stashed main: 4434 errors
- with PR applied:                              1782 errors (60% drop)
- `bun run build`:                              passes (v0.7.0)
- `bun test`:                                   1632 pass; the 4
   remaining failures (StartupScreen, thinking) reproduce on main
   and are unrelated.
- TS2550 (lib): 41 → 0
- TS5097 (.ts imports): 40 → 0
- TS2304 'MACRO': 9 → 0
- TS2307 missing modules: 587 → 325

Remaining errors are localized to specific stubbed modules and can
be addressed in smaller follow-up issues, matching the issue's
"Definition of done" criterion.
2026-04-28 17:44:26 +08:00
Kevin Codex
4c93a9f9f1 feat: add Opus 4.7 as default model and fix alias/thinking bugs (#928)
- Add CLAUDE_OPUS_4_7_CONFIG and register it in ALL_MODEL_CONFIGS
- Set Opus 4.7 as default for firstParty in getDefaultOpusModel() (3P stays on 4.6 until rollout)
- Fix sonnet[1m] → 404 bug: query.ts was passing raw alias to API without resolving via parseUserSpecifiedModel
- Add opus-4-7 to modelSupportsAdaptiveThinking so it uses { type: 'adaptive' } not { type: 'enabled' }
- Fix duplicate opus47 case and wrong opus46[1m] fallthrough in getPublicModelDisplayName switch
- Update user-facing display strings (picker labels, plan mode description) to reference Opus 4.7
- Add 3P fallback suggestion chain for opus-4-7 → opus-4-6 in validateModel

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-28 17:31:06 +08:00
78 changed files with 5101 additions and 2464 deletions

View File

@@ -1,3 +1,3 @@
 {
-  ".": "0.7.0"
+  ".": "0.8.0"
 }

View File

@@ -1,5 +1,24 @@
 # Changelog
+## [0.8.0](https://github.com/Gitlawb/openclaude/compare/v0.7.0...v0.8.0) (2026-04-29)
+### Features
+* add Opus 4.7 as default model and fix alias/thinking bugs ([#928](https://github.com/Gitlawb/openclaude/issues/928)) ([4c93a9f](https://github.com/Gitlawb/openclaude/commit/4c93a9f9f168217d4bdd53d103337e43f28be074))
+* add streaming token counter ([#797](https://github.com/Gitlawb/openclaude/issues/797)) ([0ca4333](https://github.com/Gitlawb/openclaude/commit/0ca43335375beec6e58711b797d5b0c4bb5019b8))
+* **api:** deterministic request-body serialization via stableStringify ([#882](https://github.com/Gitlawb/openclaude/issues/882)) ([6ea3eb6](https://github.com/Gitlawb/openclaude/commit/6ea3eb64830ccfec1436bcebe2406158e14a7e81))
+* context preloading and hybrid context strategy ([#860](https://github.com/Gitlawb/openclaude/issues/860)) ([92d297e](https://github.com/Gitlawb/openclaude/commit/92d297e50efcc7225f57f0d3cb0ba989dc40d624))
+* SDK Foundation — Type Declarations, Errors, and Utilities ([#866](https://github.com/Gitlawb/openclaude/issues/866)) ([91f93ce](https://github.com/Gitlawb/openclaude/commit/91f93ce61533a9cadd1d107e09a442451c09f5db))
+### Bug Fixes
+* avoid legacy Windows PasswordVault reads by default ([#941](https://github.com/Gitlawb/openclaude/issues/941)) ([d321c8f](https://github.com/Gitlawb/openclaude/commit/d321c8fc6a0be6731c1ccfec0fca8023b1a8b67e))
+* **input:** strip leading ! when entering bash mode ([#947](https://github.com/Gitlawb/openclaude/issues/947)) ([5943c5c](https://github.com/Gitlawb/openclaude/commit/5943c5c269cdeba45879dac0d8da0082e28cc2a2)), closes [#662](https://github.com/Gitlawb/openclaude/issues/662)
+* **ripgrep:** use @vscode/ripgrep package as the builtin source ([#911](https://github.com/Gitlawb/openclaude/issues/911)) ([#932](https://github.com/Gitlawb/openclaude/issues/932)) ([ee0d930](https://github.com/Gitlawb/openclaude/commit/ee0d9300939db0c6178bfad4707a0be45f126d1f))
+* **typecheck:** make `bun run typecheck` actionable on main ([#473](https://github.com/Gitlawb/openclaude/issues/473)) ([#938](https://github.com/Gitlawb/openclaude/issues/938)) ([8106880](https://github.com/Gitlawb/openclaude/commit/8106880855ee0bb4b5bbca8827cfe97fe99558b8))
 ## [0.7.0](https://github.com/Gitlawb/openclaude/compare/v0.6.0...v0.7.0) (2026-04-26)

View File

@@ -25,12 +25,18 @@ OpenClaude is also mirrored to GitLawb:
 <a href="https://bankr.bot">
 <img src="https://bankr.bot/favicon.svg" alt="Bankr.bot logo" width="96">
 </a>
+&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://atomic.chat/">
+<img src="docs/assets/atomic-chat-logo.png" alt="Atomic Chat logo" width="96">
+</a>
 </p>
 <p align="center">
 <a href="https://gitlawb.com"><strong>GitLawb</strong></a>
 &nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://bankr.bot"><strong>Bankr.bot</strong></a>
+&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://atomic.chat/"><strong>Atomic Chat</strong></a>
 </p>
 ## Star History
@@ -154,7 +160,6 @@ Advanced and source-build guides:
 - **Images**: URL and base64 image inputs for providers that support vision
 - **Provider profiles**: Guided setup plus saved `.openclaude-profile.json` support
 - **Local and remote model backends**: Cloud APIs, local servers, and Apple Silicon local inference
-- **Codebase intelligence (repo map)**: Structural map of the repository ranked by PageRank importance, auto-injected into context when the `REPO_MAP` flag is enabled. Inspect with `/repomap`. See [docs/repo-map.md](docs/repo-map.md) for details.
 ## Provider Notes

View File

@@ -28,6 +28,7 @@
 "@opentelemetry/sdk-trace-base": "2.6.1",
 "@opentelemetry/sdk-trace-node": "2.6.1",
 "@opentelemetry/semantic-conventions": "1.40.0",
+"@vscode/ripgrep": "^1.17.1",
 "ajv": "8.18.0",
 "auto-bind": "5.0.1",
 "axios": "1.15.0",
@@ -49,13 +50,9 @@
 "fuse.js": "7.1.0",
 "get-east-asian-width": "1.5.0",
 "google-auth-library": "9.15.1",
-"graphology": "^0.26.0",
-"graphology-operators": "^1.6.0",
-"graphology-pagerank": "^1.1.0",
 "https-proxy-agent": "7.0.6",
 "ignore": "7.0.5",
 "indent-string": "5.0.0",
-"js-tiktoken": "^1.0.16",
 "jsonc-parser": "3.3.1",
 "lodash-es": "4.18.1",
 "lru-cache": "11.2.7",
@@ -75,13 +72,11 @@
 "strip-ansi": "7.2.0",
 "supports-hyperlinks": "3.2.0",
 "tree-kill": "1.2.2",
-"tree-sitter-wasms": "^0.1.12",
 "turndown": "7.2.2",
 "type-fest": "4.41.0",
 "undici": "7.24.6",
 "usehooks-ts": "3.1.1",
 "vscode-languageserver-protocol": "3.17.5",
-"web-tree-sitter": "^0.25.0",
 "wrap-ansi": "9.0.2",
 "ws": "8.20.0",
 "xss": "1.0.15",
@@ -467,6 +462,8 @@
 "@types/react": ["@types/react@19.2.14", "", { "dependencies": { "csstype": "^3.2.2" } }, "sha512-ilcTH/UniCkMdtexkoCN0bI7pMcJDvmQFPvuPvmEaYA/NSfFTAgdUSLAoVjaRJm7+6PvcM+q1zYOwS4wTYMF9w=="],
+"@vscode/ripgrep": ["@vscode/ripgrep@1.17.1", "", { "dependencies": { "https-proxy-agent": "^7.0.2", "proxy-from-env": "^1.1.0", "yauzl": "^2.9.2" } }, "sha512-xTs7DGyAO3IsJYOCTBP8LnTvPiYVKEuyv8s0xyJDBXfs8rhBfqnZPvb6xDT+RnwWzcXqW27xLS/aGrkjX7lNWw=="],
 "accepts": ["accepts@2.0.0", "", { "dependencies": { "mime-types": "^3.0.0", "negotiator": "^1.0.0" } }, "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng=="],
 "agent-base": ["agent-base@7.1.4", "", {}, "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ=="],
@@ -497,6 +494,8 @@
 "bowser": ["bowser@2.14.1", "", {}, "sha512-tzPjzCxygAKWFOJP011oxFHs57HzIhOEracIgAePE4pqB3LikALKnSzUyU4MGs9/iCEUuHlAJTjTc5M+u7YEGg=="],
+"buffer-crc32": ["buffer-crc32@0.2.13", "", {}, "sha512-VO9Ht/+p3SN7SKWqcrgEzjGbRSJYTx+Q1pTQC0wrWqHx0vpJraQ6GtHx8tvcg1rlK1byhU5gccxgOgj7B0TDkQ=="],
 "buffer-equal-constant-time": ["buffer-equal-constant-time@1.0.1", "", {}, "sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA=="],
 "bun-types": ["bun-types@1.3.11", "", { "dependencies": { "@types/node": "*" } }, "sha512-1KGPpoxQWl9f6wcZh57LvrPIInQMn2TQ7jsgxqpRzg+l0QPOFvJVH7HmvHo/AiPgwXy+/Thf6Ov3EdVn1vOabg=="],
@@ -595,8 +594,6 @@
 "etag": ["etag@1.8.1", "", {}, "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg=="],
-"events": ["events@3.3.0", "", {}, "sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q=="],
 "eventsource": ["eventsource@3.0.7", "", { "dependencies": { "eventsource-parser": "^3.0.1" } }, "sha512-CRT1WTyuQoD771GW56XEZFQ/ZoSfWid1alKGDYMmkt2yl8UXrVR4pspqWNEcqKvVIzg6PAltWjxcSSPrboA4iA=="],
 "eventsource-parser": ["eventsource-parser@3.0.6", "", {}, "sha512-Vo1ab+QXPzZ4tCa8SwIHJFaSzy4R6SHf7BY79rFBDf0idraZWAkYrDjDj8uWaSm3S2TK+hJ7/t1CEmZ7jXw+pg=="],
@@ -617,6 +614,8 @@
 "fast-xml-parser": ["fast-xml-parser@5.5.8", "", { "dependencies": { "fast-xml-builder": "^1.1.4", "path-expression-matcher": "^1.2.0", "strnum": "^2.2.0" }, "bin": { "fxparser": "src/cli/cli.js" } }, "sha512-Z7Fh2nVQSb2d+poDViM063ix2ZGt9jmY1nWhPfHBOK2Hgnb/OW3P4Et3P/81SEej0J7QbWtJqxO05h8QYfK7LQ=="],
+"fd-slicer": ["fd-slicer@1.1.0", "", { "dependencies": { "pend": "~1.2.0" } }, "sha512-cE1qsB/VwyQozZ+q1dGxR8LBYNZeofhEdUNGSMbQD3Gw2lAzX9Zb3uIU6Ebc/Fmyjo9AWWfnn0AUCHqtevs/8g=="],
 "fflate": ["fflate@0.8.2", "", {}, "sha512-cPJU47OaAoCbg0pBvzsgpTPhmhqI5eJjh/JIu8tPj5q+T7iLvW/JAYUqmE7KOB4R1ZyEhzBaIQpQpardBF5z8A=="],
 "figures": ["figures@6.1.0", "", { "dependencies": { "is-unicode-supported": "^2.0.0" } }, "sha512-d+l3qxjSesT4V7v2fh+QnmFnUWv9lSpjarhShNTgBOfA0ttejbQUAlHLitbjkoRiDulW0OPoQPYIGhIC8ohejg=="],
@@ -665,16 +664,6 @@
 "graceful-fs": ["graceful-fs@4.2.11", "", {}, "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ=="],
-"graphology": ["graphology@0.26.0", "", { "dependencies": { "events": "^3.3.0" }, "peerDependencies": { "graphology-types": ">=0.24.0" } }, "sha512-8SSImzgUUYC89Z042s+0r/vMibY7GX/Emz4LDO5e7jYXhuoWfHISPFJYjpRLUSJGq6UQ6xlenvX1p/hJdfXuXg=="],
-"graphology-operators": ["graphology-operators@1.6.1", "", { "dependencies": { "graphology-utils": "^2.0.0" }, "peerDependencies": { "graphology-types": ">=0.20.0" } }, "sha512-ZKGcaN+6L5hv0VelrDgkZ2IQL1c7nrqkTRiHDwBCjmbkS56vWh/iQNDnvd/c9YIpoygtEK0mgGOr/m4i7BOYrw=="],
-"graphology-pagerank": ["graphology-pagerank@1.1.0", "", { "dependencies": { "graphology-utils": "^1.3.0", "lodash": "^4.17.5" } }, "sha512-ubhzN7HDKYSaFFvzqQsqQp14LIgCPNGaioWVZgc5E49NEKUOtCVehWEDF/9QXDUiK+4cMzj/yRoneJbYR0Rc3A=="],
-"graphology-types": ["graphology-types@0.24.8", "", {}, "sha512-hDRKYXa8TsoZHjgEaysSRyPdT6uB78Ci8WnjgbStlQysz7xR52PInxNsmnB7IBOM1BhikxkNyCVEFgmPKnpx3Q=="],
-"graphology-utils": ["graphology-utils@2.5.2", "", { "peerDependencies": { "graphology-types": ">=0.23.0" } }, "sha512-ckHg8MXrXJkOARk56ZaSCM1g1Wihe2d6iTmz1enGOz4W/l831MBCKSayeFQfowgF8wd+PQ4rlch/56Vs/VZLDQ=="],
 "gtoken": ["gtoken@7.1.0", "", { "dependencies": { "gaxios": "^6.0.0", "jws": "^4.0.0" } }, "sha512-pCcEwRi+TKpMlxAQObHDQ56KawURgyAf6jtIY046fJ5tIv3zDe/LEIubckAO8fj6JnAxLdmWkUfNyulQ2iKdEw=="],
 "has-flag": ["has-flag@4.0.0", "", {}, "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ=="],
@@ -723,8 +712,6 @@
 "jose": ["jose@6.2.2", "", {}, "sha512-d7kPDd34KO/YnzaDOlikGpOurfF0ByC2sEV4cANCtdqLlTfBlw2p14O/5d/zv40gJPbIQxfES3nSx1/oYNyuZQ=="],
-"js-tiktoken": ["js-tiktoken@1.0.21", "", { "dependencies": { "base64-js": "^1.5.1" } }, "sha512-biOj/6M5qdgx5TKjDnFT1ymSpM5tbd3ylwDtrQvFQSu0Z7bBYko2dF+W/aUkXUPuk6IVpRxk/3Q2sHOzGlS36g=="],
 "json-bigint": ["json-bigint@1.0.0", "", { "dependencies": { "bignumber.js": "^9.0.0" } }, "sha512-SiPv/8VpZuWbvLSMtTDU8hEfrZWg/mH/nV/b4o0CYbSxu1UIQPLdwKOCIyLQX+VIPO5vrLX3i8qtqFyhdPSUSQ=="],
 "json-schema-to-ts": ["json-schema-to-ts@3.1.1", "", { "dependencies": { "@babel/runtime": "^7.18.3", "ts-algebra": "^2.0.0" } }, "sha512-+DWg8jCJG2TEnpy7kOm/7/AxaYoaRbjVB4LFZLySZlWn8exGs3A4OLJR966cVvU26N7X9TWxl+Jsw7dzAqKT6g=="],
@@ -741,8 +728,6 @@
 "locate-path": ["locate-path@5.0.0", "", { "dependencies": { "p-locate": "^4.1.0" } }, "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g=="],
-"lodash": ["lodash@4.18.1", "", {}, "sha512-dMInicTPVE8d1e5otfwmmjlxkZoUpiVLwyeTdUsi/Caj/gfzzblBcCE5sRHV/AsjuCmxWrte2TNGSYuCeCq+0Q=="],
 "lodash-es": ["lodash-es@4.18.1", "", {}, "sha512-J8xewKD/Gk22OZbhpOVSwcs60zhd95ESDwezOFuA3/099925PdHJ7OFHNTGtajL3AlZkykD32HykiMo+BIBI8A=="],
 "lodash.camelcase": ["lodash.camelcase@4.3.0", "", {}, "sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA=="],
@@ -809,6 +794,8 @@
 "path-to-regexp": ["path-to-regexp@8.4.1", "", {}, "sha512-fvU78fIjZ+SBM9YwCknCvKOUKkLVqtWDVctl0s7xIqfmfb38t2TT4ZU2gHm+Z8xGwgW+QWEU3oQSAzIbo89Ggw=="],
+"pend": ["pend@1.2.0", "", {}, "sha512-F3asv42UuXchdzt+xXqfW1OGlVBe+mxa2mqI0pg5yAHZPvFmY3Y6drSf/GQ1A86WgWEN9Kzh/WrgKa6iGcHXLg=="],
 "picomatch": ["picomatch@4.0.4", "", {}, "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A=="],
 "pkce-challenge": ["pkce-challenge@5.0.1", "", {}, "sha512-wQ0b/W4Fr01qtpHlqSqspcj3EhBvimsdh0KlHhH8HRZnMsEa0ea2fTULOXOS9ccQr3om+GcGRk4e+isrZWV8qQ=="],
@@ -823,7 +810,7 @@
 "proxy-addr": ["proxy-addr@2.0.7", "", { "dependencies": { "forwarded": "0.2.0", "ipaddr.js": "1.9.1" } }, "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg=="],
-"proxy-from-env": ["proxy-from-env@2.1.0", "", {}, "sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA=="],
+"proxy-from-env": ["proxy-from-env@1.1.0", "", {}, "sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg=="],
 "qrcode": ["qrcode@1.5.4", "", { "dependencies": { "dijkstrajs": "^1.0.1", "pngjs": "^5.0.0", "yargs": "^15.3.1" }, "bin": { "qrcode": "bin/qrcode" } }, "sha512-1ca71Zgiu6ORjHqFBDpnSMTR2ReToX4l1Au1VFLyVeBTFavzQnv5JxMFr3ukHVKpSrSA2MCk0lNJSykjUfz7Zg=="],
@@ -915,8 +902,6 @@
 "tree-kill": ["tree-kill@1.2.2", "", { "bin": { "tree-kill": "cli.js" } }, "sha512-L0Orpi8qGpRG//Nd+H90vFB+3iHnue1zSSGmNOOCh1GLJ7rUKVwV2HvijphGQS2UmhUZewS9VgvxYIdgr+fG1A=="],
-"tree-sitter-wasms": ["tree-sitter-wasms@0.1.13", "", { "dependencies": { "tree-sitter-wasms": "^0.1.11" } }, "sha512-wT+cR6DwaIz80/vho3AvSF0N4txuNx/5bcRKoXouOfClpxh/qqrF4URNLQXbbt8MaAxeksZcZd1j8gcGjc+QxQ=="],
 "ts-algebra": ["ts-algebra@2.0.0", "", {}, "sha512-FPAhNPFMrkwz76P7cdjdmiShwMynZYN6SgOujD1urY4oNm80Ou9oMdmbR45LotcKOXoy7wSmHkRFE6Mxbrhefw=="],
 "tslib": ["tslib@1.14.1", "", {}, "sha512-Xni35NKzjgMrwevysHTCArtLDpPvye8zV/0E4EyYn43P7/7qvQwPh9BGkHewbMulVntbigmcT7rdX3BNo9wRJg=="],
@@ -953,8 +938,6 @@
 "vscode-languageserver-types": ["vscode-languageserver-types@3.17.5", "", {}, "sha512-Ld1VelNuX9pdF39h2Hgaeb5hEZM2Z3jUrrMgWQAu82jMtZp7p3vJT3BzToKtZI7NgQssZje5o0zryOrhQvzQAg=="],
-"web-tree-sitter": ["web-tree-sitter@0.25.10", "", { "peerDependencies": { "@types/emscripten": "^1.40.0" }, "optionalPeers": ["@types/emscripten"] }, "sha512-Y09sF44/13XvgVKgO2cNDw5rGk6s26MgoZPXLESvMXeefBf7i6/73eFurre0IsTW6E14Y0ArIzhUMmjoc7xyzA=="],
 "webidl-conversions": ["webidl-conversions@3.0.1", "", {}, "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ=="],
 "whatwg-url": ["whatwg-url@5.0.0", "", { "dependencies": { "tr46": "~0.0.3", "webidl-conversions": "^3.0.0" } }, "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw=="],
@@ -979,6 +962,8 @@
"yargs-parser": ["yargs-parser@21.1.1", "", {}, "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw=="], "yargs-parser": ["yargs-parser@21.1.1", "", {}, "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw=="],
"yauzl": ["yauzl@2.10.0", "", { "dependencies": { "buffer-crc32": "~0.2.3", "fd-slicer": "~1.1.0" } }, "sha512-p4a9I6X6nu6IhoGmBqAcbJy1mlC4j27vEPZX9F4L4/vZT3Lyq1VkFHw/V/PUcB9Buo+DG3iHkT0x3Qya58zc3g=="],
"yoctocolors": ["yoctocolors@2.1.2", "", {}, "sha512-CzhO+pFNo8ajLM2d2IW/R93ipy99LWjtwblvC1RsoSUMZgyLbYFr221TnSNT7GjGdYui6P459mw9JH/g/zW2ug=="], "yoctocolors": ["yoctocolors@2.1.2", "", {}, "sha512-CzhO+pFNo8ajLM2d2IW/R93ipy99LWjtwblvC1RsoSUMZgyLbYFr221TnSNT7GjGdYui6P459mw9JH/g/zW2ug=="],
"zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="], "zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="],
@@ -1395,6 +1380,8 @@
"@smithy/uuid/tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="], "@smithy/uuid/tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="],
"axios/proxy-from-env": ["proxy-from-env@2.1.0", "", {}, "sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA=="],
"cli-highlight/chalk": ["chalk@4.1.2", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA=="], "cli-highlight/chalk": ["chalk@4.1.2", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA=="],
"cli-highlight/yargs": ["yargs@16.2.0", "", { "dependencies": { "cliui": "^7.0.2", "escalade": "^3.1.1", "get-caller-file": "^2.0.5", "require-directory": "^2.1.1", "string-width": "^4.2.0", "y18n": "^5.0.5", "yargs-parser": "^20.2.2" } }, "sha512-D1mvvtDG0L5ft/jGWkLpG1+m0eQxOfaBvTNELraWj22wSVUMWxZUvYgJYcKh6jGGIkJFhH4IZPQhR4TKpc8mBw=="], "cli-highlight/yargs": ["yargs@16.2.0", "", { "dependencies": { "cliui": "^7.0.2", "escalade": "^3.1.1", "get-caller-file": "^2.0.5", "require-directory": "^2.1.1", "string-width": "^4.2.0", "y18n": "^5.0.5", "yargs-parser": "^20.2.2" } }, "sha512-D1mvvtDG0L5ft/jGWkLpG1+m0eQxOfaBvTNELraWj22wSVUMWxZUvYgJYcKh6jGGIkJFhH4IZPQhR4TKpc8mBw=="],
@@ -1411,8 +1398,6 @@
"gaxios/is-stream": ["is-stream@2.0.1", "", {}, "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg=="], "gaxios/is-stream": ["is-stream@2.0.1", "", {}, "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg=="],
"graphology-pagerank/graphology-utils": ["graphology-utils@1.8.0", "", { "peerDependencies": { "graphology-types": ">=0.19.0" } }, "sha512-Pa7SW30OMm8fVtyH49b3GJ/uxlMHGfXly50wIhlcc7ZoX9ahZa7sPBz+obo4WZClrRV6wh3tIu0GJoI42eao1A=="],
"needle/iconv-lite": ["iconv-lite@0.6.3", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" } }, "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw=="], "needle/iconv-lite": ["iconv-lite@0.6.3", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" } }, "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw=="],
"npm-run-path/path-key": ["path-key@4.0.0", "", {}, "sha512-haREypq7xkM7ErfgIyA0z+Bj4AGKlMSdlQE2jvJo6huWD1EdkKYV+G/T4nq0YEF2vgTT8kqMFKo1uHn950r4SQ=="], "npm-run-path/path-key": ["path-key@4.0.0", "", {}, "sha512-haREypq7xkM7ErfgIyA0z+Bj4AGKlMSdlQE2jvJo6huWD1EdkKYV+G/T4nq0YEF2vgTT8kqMFKo1uHn950r4SQ=="],
@@ -1457,6 +1442,8 @@
"@aws-sdk/nested-clients/@smithy/util-base64/@smithy/util-buffer-from": ["@smithy/util-buffer-from@4.2.2", "", { "dependencies": { "@smithy/is-array-buffer": "^4.2.2", "tslib": "^2.6.2" } }, "sha512-FDXD7cvUoFWwN6vtQfEta540Y/YBe5JneK3SoZg9bThSoOAC/eGeYEua6RkBgKjGa/sz6Y+DuBZj3+YEY21y4Q=="], "@aws-sdk/nested-clients/@smithy/util-base64/@smithy/util-buffer-from": ["@smithy/util-buffer-from@4.2.2", "", { "dependencies": { "@smithy/is-array-buffer": "^4.2.2", "tslib": "^2.6.2" } }, "sha512-FDXD7cvUoFWwN6vtQfEta540Y/YBe5JneK3SoZg9bThSoOAC/eGeYEua6RkBgKjGa/sz6Y+DuBZj3+YEY21y4Q=="],
"@mendable/firecrawl-js/axios/proxy-from-env": ["proxy-from-env@2.1.0", "", {}, "sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA=="],
"@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/core/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="], "@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/core/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="],
"@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/otlp-transformer/@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.57.2", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-uIX52NnTM0iBh84MShlpouI7UKqkZ7MrUszTmaypHBu4r7NofznSnQRfJ+uUeDtQDj6w8eFGg5KBLDAwAPz1+A=="], "@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/otlp-transformer/@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.57.2", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-uIX52NnTM0iBh84MShlpouI7UKqkZ7MrUszTmaypHBu4r7NofznSnQRfJ+uUeDtQDj6w8eFGg5KBLDAwAPz1+A=="],
@@ -1537,6 +1524,8 @@
"cliui/wrap-ansi/ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="], "cliui/wrap-ansi/ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="],
"firecrawl/axios/proxy-from-env": ["proxy-from-env@2.1.0", "", {}, "sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA=="],
"form-data/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="], "form-data/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="],
"qrcode/yargs/cliui": ["cliui@6.0.0", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.0", "wrap-ansi": "^6.2.0" } }, "sha512-t6wbgtoCXvAzst7QgXxJYqPt0usEfbgQdftEPbLL/cvv6HPE5VgvqCuAIDR0NgU52ds6rFwqrgakNLrHEjCbrQ=="], "qrcode/yargs/cliui": ["cliui@6.0.0", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.0", "wrap-ansi": "^6.2.0" } }, "sha512-t6wbgtoCXvAzst7QgXxJYqPt0usEfbgQdftEPbLL/cvv6HPE5VgvqCuAIDR0NgU52ds6rFwqrgakNLrHEjCbrQ=="],

Binary file not shown (image, 52 KiB).

@@ -1,67 +0,0 @@
# Codebase Intelligence — Repo Map
The repo map feature gives the AI model structural awareness of your codebase at the start of each session. Instead of the model needing to explore the repository with `Grep`, `Glob`, and `Read` calls, it starts with a ranked summary of the most important files and their key signatures.
## How it works
1. **File enumeration** — Lists all tracked files via `git ls-files` (falls back to a manual directory walk when not in a git repo)
2. **Symbol extraction** — Parses each supported source file with tree-sitter to extract function, class, type, and interface definitions, plus cross-file references
3. **Reference graph** — Builds a directed graph where an edge from file A to file B means A references a symbol defined in B. Edges are weighted by reference count multiplied by the IDF (inverse document frequency) of the symbol name — common names like `get`, `set`, `value` contribute less
4. **PageRank** — Ranks files by structural importance using PageRank. Files imported by many others rank highest
5. **Rendering** — Walks ranked files top-down, emitting file paths and definition signatures, stopping when the token budget is reached
Results are cached to disk (`~/.openclaude/repomap-cache/`) keyed by file path, mtime, and size. Only changed files are re-parsed on subsequent runs.
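Steps 3 and 4 can be sketched as follows. This is a minimal, self-contained illustration with made-up names (`Ref`, `rankFiles`), not the shipped implementation — the actual code used the graphology/graphology-pagerank packages, and the exact weighting details may differ. It assumes plain power-iteration PageRank with damping 0.85:

```typescript
type Ref = { from: string; to: string; symbol: string };

function rankFiles(files: string[], refs: Ref[]): Map<string, number> {
  // Step 3a: document frequency per symbol — how many files reference it.
  const df = new Map<string, Set<string>>();
  for (const r of refs) {
    if (!df.has(r.symbol)) df.set(r.symbol, new Set());
    df.get(r.symbol)!.add(r.from);
  }
  const n = files.length;
  const idf = (sym: string) => Math.log(1 + n / (df.get(sym)?.size ?? 1));

  // Step 3b: weighted adjacency. Edge A -> B accumulates count * IDF(symbol),
  // so ubiquitous names like `get` or `value` contribute little.
  const out = new Map<string, Map<string, number>>();
  for (const f of files) out.set(f, new Map());
  for (const r of refs) {
    const row = out.get(r.from)!;
    row.set(r.to, (row.get(r.to) ?? 0) + idf(r.symbol));
  }

  // Step 4: power-iteration PageRank with damping 0.85.
  const d = 0.85;
  let rank = new Map<string, number>(files.map((f) => [f, 1 / n] as [string, number]));
  for (let iter = 0; iter < 50; iter++) {
    const next = new Map<string, number>(files.map((f) => [f, (1 - d) / n] as [string, number]));
    for (const [from, row] of out) {
      let total = 0;
      for (const w of row.values()) total += w;
      if (total === 0) continue; // file with no outgoing references
      for (const [to, w] of row) {
        next.set(to, (next.get(to) ?? 0) + d * (rank.get(from) ?? 0) * (w / total));
      }
    }
    rank = next;
  }
  return rank; // higher = more structurally important
}
```

A file referenced by many others (a "hub") accumulates rank from each referrer, which is why widely imported modules surface at the top of the map.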
## Supported languages
- TypeScript (`.ts`, `.tsx`)
- JavaScript (`.js`, `.jsx`, `.mjs`, `.cjs`)
- Python (`.py`)
Additional language grammars will be added in future releases.
## Enabling auto-injection
The repo map is gated behind the `REPO_MAP` feature flag, **off by default**. To enable auto-injection into the session context:
Set the environment variable before launching:
```bash
REPO_MAP=1 openclaude
```
Or add it to your shell profile for persistent use.
When enabled, the map is built once per session and prepended to the system context alongside git status and CLAUDE.md content. The default budget is 1024 tokens.
Auto-injection is skipped in:
- Bare mode (`--bare`)
- Remote sessions (`CLAUDE_CODE_REMOTE`)
## The /repomap slash command
The `/repomap` command is always available regardless of the feature flag. It lets you inspect and tune the map interactively.
```
/repomap # Show the map with default settings (2048 tokens)
/repomap --tokens 4096 # Increase the token budget for a larger map
/repomap --focus src/tools/ # Boost specific paths in the ranking
/repomap --focus src/context.ts # Can use multiple --focus flags
/repomap --stats # Show cache statistics
/repomap --invalidate # Clear cache and rebuild from scratch
```
## The RepoMap tool
The model can also call the `RepoMap` tool on demand during a session. This is useful when:
- The model needs structural context mid-conversation
- The user asks about specific areas (the model can pass `focus_files` or `focus_symbols`)
- A larger token budget is needed than the auto-injected default
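A call might look like the following sketch. The `focus_files` and `focus_symbols` parameter names come from the bullets above; the exact input schema and the `max_tokens` field name are assumptions, not the real tool API:

```typescript
// Hypothetical shape of a RepoMap tool invocation (illustrative only).
type RepoMapToolInput = {
  focus_files?: string[];   // paths to boost in the ranking
  focus_symbols?: string[]; // symbols whose defining files get boosted
  max_tokens?: number;      // request a larger budget than the injected default
};

const exampleCall: RepoMapToolInput = {
  focus_files: ['src/tools/'],
  focus_symbols: ['buildRepoMap'],
  max_tokens: 4096,
};
```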
## Known limitations
- **Signatures only** — The map shows function/class/type declarations, not implementations. The model still needs `Read` to see function bodies.
- **Cold build time** — First build on large repos (2000+ files) can take 20-30 seconds due to WASM-based parsing. Subsequent builds use the disk cache and complete in under 100ms.
- **Language coverage** — Only TypeScript, JavaScript, and Python are supported. Files in other languages are skipped.
- **TypeScript references** — The TypeScript tree-sitter query captures type annotations and `new` expressions as references, but not plain function calls. This means the ranking slightly favors type-heavy hub files.
- **Git dependency** — File enumeration uses `git ls-files` by default. Non-git repos fall back to a directory walk with hardcoded exclusions.


@@ -1,6 +1,6 @@
 {
   "name": "@gitlawb/openclaude",
-  "version": "0.7.0",
+  "version": "0.8.0",
   "description": "OpenClaude opens coding-agent workflows to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models",
   "type": "module",
   "bin": {
@@ -74,6 +74,7 @@
"@opentelemetry/sdk-trace-base": "2.6.1", "@opentelemetry/sdk-trace-base": "2.6.1",
"@opentelemetry/sdk-trace-node": "2.6.1", "@opentelemetry/sdk-trace-node": "2.6.1",
"@opentelemetry/semantic-conventions": "1.40.0", "@opentelemetry/semantic-conventions": "1.40.0",
"@vscode/ripgrep": "^1.17.1",
"ajv": "8.18.0", "ajv": "8.18.0",
"auto-bind": "5.0.1", "auto-bind": "5.0.1",
"axios": "1.15.0", "axios": "1.15.0",
@@ -93,15 +94,11 @@
"fflate": "0.8.2", "fflate": "0.8.2",
"figures": "6.1.0", "figures": "6.1.0",
"fuse.js": "7.1.0", "fuse.js": "7.1.0",
"graphology": "^0.26.0",
"graphology-operators": "^1.6.0",
"get-east-asian-width": "1.5.0", "get-east-asian-width": "1.5.0",
"google-auth-library": "9.15.1", "google-auth-library": "9.15.1",
"https-proxy-agent": "7.0.6", "https-proxy-agent": "7.0.6",
"ignore": "7.0.5", "ignore": "7.0.5",
"graphology-pagerank": "^1.1.0",
"indent-string": "5.0.0", "indent-string": "5.0.0",
"js-tiktoken": "^1.0.16",
"jsonc-parser": "3.3.1", "jsonc-parser": "3.3.1",
"lodash-es": "4.18.1", "lodash-es": "4.18.1",
"lru-cache": "11.2.7", "lru-cache": "11.2.7",
@@ -121,12 +118,10 @@
"strip-ansi": "7.2.0", "strip-ansi": "7.2.0",
"supports-hyperlinks": "3.2.0", "supports-hyperlinks": "3.2.0",
"tree-kill": "1.2.2", "tree-kill": "1.2.2",
"tree-sitter-wasms": "^0.1.12",
"turndown": "7.2.2", "turndown": "7.2.2",
"type-fest": "4.41.0", "type-fest": "4.41.0",
"undici": "7.24.6", "undici": "7.24.6",
"usehooks-ts": "3.1.1", "usehooks-ts": "3.1.1",
"web-tree-sitter": "^0.25.0",
"vscode-languageserver-protocol": "3.17.5", "vscode-languageserver-protocol": "3.17.5",
"wrap-ansi": "9.0.2", "wrap-ansi": "9.0.2",
"ws": "8.20.0", "ws": "8.20.0",


@@ -36,9 +36,6 @@ const featureFlags: Record<string, boolean> = {
   COWORKER_TYPE_TELEMETRY: false, // Telemetry for agent/coworker type classification
   MCP_SKILLS: false, // Dynamic MCP skill discovery (src/skills/mcpSkills.ts not mirrored; enabling this causes "fetchMcpSkillsForClient is not a function" when MCP servers with resources connect — see #856)
-  // ── Disabled by default, opt-in via runtime env var ─────────────────
-  REPO_MAP: false, // Auto-injected codebase intelligence repo-map; users opt in with REPO_MAP=1 (the runtime gate in src/context.ts honors the env var even when this flag is false)
   // ── Enabled: upstream defaults ──────────────────────────────────────
   COORDINATOR_MODE: true, // Multi-agent coordinator with worker delegation
   BUILTIN_EXPLORE_PLAN_AGENTS: true, // Built-in Explore/Plan specialized subagents
@@ -475,6 +472,11 @@ ${exports}
   '@aws-sdk/credential-providers',
   '@azure/identity',
   'google-auth-library',
+  // @vscode/ripgrep ships a platform-specific binary alongside its
+  // index.js and resolves the path via __dirname at runtime. Bundling
+  // would freeze the build host's absolute path into dist/cli.mjs, so we
+  // keep it external and rely on the npm package being installed.
+  '@vscode/ripgrep',
 ],
 })


@@ -23,7 +23,6 @@ import doctor from './commands/doctor/index.js'
 import onboardGithub from './commands/onboard-github/index.js'
 import knowledge from './commands/knowledge/index.js'
 import memory from './commands/memory/index.js'
-import repomap from './commands/repomap/index.js'
 import help from './commands/help/index.js'
 import ide from './commands/ide/index.js'
 import init from './commands/init.js'
@@ -312,7 +311,6 @@ const COMMANDS = memoize((): Command[] => [
   releaseNotes,
   reloadPlugins,
   rename,
-  repomap,
   resume,
   session,
   skills,


@@ -1,17 +0,0 @@
/**
* /repomap command - minimal metadata only.
* Implementation is lazy-loaded from repomap.ts to reduce startup time.
*/
import type { Command } from '../../commands.js'
const repomap = {
type: 'local',
name: 'repomap',
description:
'Show or configure the repository structural map (codebase intelligence)',
isHidden: false,
supportsNonInteractive: true,
load: () => import('./repomap.js'),
} satisfies Command
export default repomap


@@ -1,56 +0,0 @@
import { describe, expect, test } from 'bun:test'
import { parseArgs } from './repomap.js'
describe('/repomap argument parsing', () => {
test('defaults to 2048 tokens with no flags', () => {
const result = parseArgs('')
expect(result.tokens).toBe(2048)
expect(result.focus).toEqual([])
expect(result.invalidate).toBe(false)
expect(result.stats).toBe(false)
})
test('parses --tokens flag', () => {
const result = parseArgs('--tokens 4096')
expect(result.tokens).toBe(4096)
})
test('rejects --tokens below 256', () => {
const result = parseArgs('--tokens 100')
expect(result.tokens).toBe(2048) // falls back to default
})
test('rejects --tokens above 16384', () => {
const result = parseArgs('--tokens 20000')
expect(result.tokens).toBe(2048) // falls back to default
})
test('parses --focus flag', () => {
const result = parseArgs('--focus src/tools/')
expect(result.focus).toEqual(['src/tools/'])
})
test('parses multiple --focus flags', () => {
const result = parseArgs('--focus src/tools/ --focus src/context.ts')
expect(result.focus).toEqual(['src/tools/', 'src/context.ts'])
})
test('parses --invalidate flag', () => {
const result = parseArgs('--invalidate')
expect(result.invalidate).toBe(true)
expect(result.stats).toBe(false)
})
test('parses --stats flag', () => {
const result = parseArgs('--stats')
expect(result.stats).toBe(true)
expect(result.invalidate).toBe(false)
})
test('parses combined flags', () => {
const result = parseArgs('--tokens 2048 --focus src/tools/ --invalidate')
expect(result.tokens).toBe(2048)
expect(result.focus).toEqual(['src/tools/'])
expect(result.invalidate).toBe(true)
})
})


@@ -1,93 +0,0 @@
import type { LocalCommandCall } from '../../types/command.js'
import { getCwd } from '../../utils/cwd.js'
/** Parse CLI-style arguments from the command string. */
export function parseArgs(args: string): {
tokens: number
focus: string[]
invalidate: boolean
stats: boolean
} {
const parts = args.trim().split(/\s+/).filter(Boolean)
let tokens = 2048
const focus: string[] = []
let invalidate = false
let stats = false
for (let i = 0; i < parts.length; i++) {
const part = parts[i]!
if (part === '--tokens' && i + 1 < parts.length) {
const n = parseInt(parts[i + 1]!, 10)
if (!isNaN(n) && n >= 256 && n <= 16384) {
tokens = n
}
i++
} else if (part === '--focus' && i + 1 < parts.length) {
focus.push(parts[i + 1]!)
i++
} else if (part === '--invalidate') {
invalidate = true
} else if (part === '--stats') {
stats = true
}
}
return { tokens, focus, invalidate, stats }
}
export const call: LocalCommandCall = async (args) => {
const root = getCwd()
const { tokens, focus, invalidate, stats } = parseArgs(args ?? '')
// Lazy import to avoid loading tree-sitter at startup
const {
buildRepoMap,
invalidateCache,
getCacheStats,
} = await import('../../context/repoMap/index.js')
if (stats) {
const cacheStats = getCacheStats(root)
const lines = [
`Repository map cache stats:`,
` Cache directory: ${cacheStats.cacheDir}`,
` Cache file: ${cacheStats.cacheFile ?? '(none)'}`,
` Cached entries: ${cacheStats.entryCount}`,
` Cache exists: ${cacheStats.exists}`,
]
return { type: 'text', value: lines.join('\n') }
}
if (invalidate) {
invalidateCache(root)
const result = await buildRepoMap({
root,
maxTokens: tokens,
focusFiles: focus.length > 0 ? focus : undefined,
})
return {
type: 'text',
value: [
`Cache invalidated and rebuilt.`,
`Files: ${result.fileCount} ranked (${result.totalFileCount} total) | Tokens: ${result.tokenCount} | Time: ${result.buildTimeMs}ms | Cache hit: ${result.cacheHit}`,
'',
result.map,
].join('\n'),
}
}
const result = await buildRepoMap({
root,
maxTokens: tokens,
focusFiles: focus.length > 0 ? focus : undefined,
})
return {
type: 'text',
value: [
`Repository map: ${result.fileCount} files ranked (${result.totalFileCount} total) | Tokens: ${result.tokenCount} | Time: ${result.buildTimeMs}ms | Cache hit: ${result.cacheHit}`,
'',
result.map,
].join('\n'),
}
}


@@ -111,7 +111,7 @@ import { BackgroundTasksDialog } from '../tasks/BackgroundTasksDialog.js';
 import { shouldHideTasksFooter } from '../tasks/taskStatusUtils.js';
 import { TeamsDialog } from '../teams/TeamsDialog.js';
 import VimTextInput from '../VimTextInput.js';
-import { getModeFromInput, getValueFromInput } from './inputModes.js';
+import { detectModeEntry, getModeFromInput, getValueFromInput } from './inputModes.js';
 import { FOOTER_TEMPORARY_STATUS_TIMEOUT, Notifications } from './Notifications.js';
 import PromptInputFooter from './PromptInputFooter.js';
 import type { SuggestionItem } from './PromptInputFooterSuggestions.js';
@@ -878,24 +878,22 @@ function PromptInput({
       abortPromptSuggestion();
       abortSpeculation(setAppState);
-      // Check if this is a single character insertion at the start
-      const isSingleCharInsertion = value.length === input.length + 1;
-      const insertedAtStart = cursorOffset === 0;
-      const mode = getModeFromInput(value);
-      if (insertedAtStart && mode !== 'prompt') {
-        if (isSingleCharInsertion) {
-          onModeChange(mode);
-          return;
-        }
-        // Multi-char insertion into empty input (e.g. tab-accepting "! gcloud auth login")
-        if (input.length === 0) {
-          onModeChange(mode);
-          const valueWithoutMode = getValueFromInput(value).replaceAll('\t', ' ');
-          pushToBuffer(input, cursorOffset, pastedContents);
-          trackAndSetInput(valueWithoutMode);
-          setCursorOffset(valueWithoutMode.length);
-          return;
-        }
-      }
+      // Strip the mode character from the buffer when entering bash mode — the
+      // mode itself is shown via the prompt prefix in the UI. Without this,
+      // typing `!` into empty input would enter bash mode but leave the literal
+      // `!` in the buffer (issue #662).
+      const modeEntry = detectModeEntry({
+        value,
+        prevInputLength: input.length,
+        cursorOffset,
+      });
+      if (modeEntry) {
+        onModeChange(modeEntry.mode);
+        const cleaned = modeEntry.strippedValue.replaceAll('\t', ' ');
+        pushToBuffer(input, cursorOffset, pastedContents);
+        trackAndSetInput(cleaned);
+        setCursorOffset(cleaned.length);
+        return;
+      }
       const processedValue = value.replaceAll('\t', ' ');


@@ -0,0 +1,104 @@
import { describe, expect, it } from 'bun:test'
import {
detectModeEntry,
getModeFromInput,
getValueFromInput,
isInputModeCharacter,
prependModeCharacterToInput,
} from './inputModes.js'
describe('inputModes', () => {
describe('getModeFromInput', () => {
it('returns bash mode for input starting with !', () => {
expect(getModeFromInput('!')).toBe('bash')
expect(getModeFromInput('!ls')).toBe('bash')
})
it('returns prompt mode for non-bash input', () => {
expect(getModeFromInput('')).toBe('prompt')
expect(getModeFromInput('hello')).toBe('prompt')
expect(getModeFromInput(' !')).toBe('prompt')
})
})
describe('getValueFromInput', () => {
it('strips the leading ! when entering bash mode', () => {
expect(getValueFromInput('!')).toBe('')
expect(getValueFromInput('!ls -la')).toBe('ls -la')
})
it('returns input unchanged in prompt mode', () => {
expect(getValueFromInput('')).toBe('')
expect(getValueFromInput('hello')).toBe('hello')
})
})
describe('isInputModeCharacter', () => {
it('returns true only for the bare ! character', () => {
expect(isInputModeCharacter('!')).toBe(true)
expect(isInputModeCharacter('!ls')).toBe(false)
expect(isInputModeCharacter('')).toBe(false)
})
})
describe('prependModeCharacterToInput', () => {
it('prepends ! when mode is bash', () => {
expect(prependModeCharacterToInput('ls', 'bash')).toBe('!ls')
expect(prependModeCharacterToInput('', 'bash')).toBe('!')
})
it('returns input unchanged in prompt mode', () => {
expect(prependModeCharacterToInput('hello', 'prompt')).toBe('hello')
})
})
describe('detectModeEntry', () => {
// Regression for #662 — typing `!` into empty input must switch to bash
// mode AND yield an empty stripped buffer. Before the fix the single-char
// path returned without stripping, leaving `!` visible in the buffer.
it('strips the mode character when typing ! into empty input', () => {
expect(
detectModeEntry({ value: '!', prevInputLength: 0, cursorOffset: 0 }),
).toEqual({ mode: 'bash', strippedValue: '' })
})
it('strips the mode character when pasting !cmd into empty input', () => {
expect(
detectModeEntry({ value: '!ls -la', prevInputLength: 0, cursorOffset: 0 }),
).toEqual({ mode: 'bash', strippedValue: 'ls -la' })
})
it('returns null when the cursor is not at the start', () => {
expect(
detectModeEntry({ value: '!', prevInputLength: 0, cursorOffset: 1 }),
).toBeNull()
})
it('returns null when the value does not start with !', () => {
expect(
detectModeEntry({ value: 'hello', prevInputLength: 0, cursorOffset: 0 }),
).toBeNull()
})
it('returns null when typing ! after existing text', () => {
// value="ab!" with prevInputLength=2 is a single-char insertion but does
// not start with ! — getModeFromInput returns 'prompt'.
expect(
detectModeEntry({ value: 'ab!', prevInputLength: 2, cursorOffset: 0 }),
).toBeNull()
})
it('enters bash mode when prepending ! to non-empty existing text', () => {
// Single-char insertion at start that produces "!ab" from "ab" — value
// length is 3, prevInputLength is 2, so isSingleCharInsertion is true
// and isMultiCharIntoEmpty is false. We accept the mode change here so
// that typing ! at the start of existing text still toggles mode.
const result = detectModeEntry({
value: '!ab',
prevInputLength: 2,
cursorOffset: 0,
})
expect(result).toEqual({ mode: 'bash', strippedValue: 'ab' })
})
})
})


@@ -31,3 +31,30 @@ export function getValueFromInput(input: string): string {
 export function isInputModeCharacter(input: string): boolean {
   return input === '!'
 }
export type ModeEntryDecision = {
mode: HistoryMode
strippedValue: string
}
/**
* Decide whether an onChange `value` should switch the input mode (e.g.
* `prompt` → `bash`) and what the stripped buffer value should be.
*
* Returns null when no mode change applies. Returns a decision otherwise so
* callers run a single update path — no separate single-char vs multi-char
* branches that can drift apart.
*/
export function detectModeEntry(args: {
value: string
prevInputLength: number
cursorOffset: number
}): ModeEntryDecision | null {
if (args.cursorOffset !== 0) return null
const mode = getModeFromInput(args.value)
if (mode === 'prompt') return null
const isSingleCharInsertion = args.value.length === args.prevInputLength + 1
const isMultiCharIntoEmpty = args.prevInputLength === 0
if (!isSingleCharInsertion && !isMultiCharIntoEmpty) return null
return { mode, strippedValue: getValueFromInput(args.value) }
}


@@ -0,0 +1,7 @@
/**
* Stub — query source enum not included in source snapshot. See
* src/types/message.ts for the same scoping caveat (issue #473).
*/
/* eslint-disable @typescript-eslint/no-explicit-any */
export type QuerySource = any


@@ -1,84 +0,0 @@
import { describe, expect, test } from 'bun:test'
// The feature() function from bun:bundle is shimmed at build time.
// In tests, it's not available, so we test the getRepoMapContext logic
// by importing and calling it directly — the function checks feature('REPO_MAP')
// which in the test environment (no bun:bundle shim) will throw or return false.
// We test the actual logic paths through integration-style tests.
describe('getRepoMapContext', () => {
test('returns null when REPO_MAP flag is off (default)', async () => {
// In the test environment, feature('REPO_MAP') is not shimmed,
// so the function should return null or handle the missing shim gracefully.
// We test this by calling buildRepoMap directly and verifying the context
// integration pattern works.
// The feature flag is off by default (false in scripts/build.ts),
// so in production getRepoMapContext returns null.
// In tests, we verify the module exports correctly.
const { getRepoMapContext } = await import('./context.js')
expect(typeof getRepoMapContext).toBe('function')
})
test('buildRepoMap produces valid output for context injection', async () => {
const { mkdtempSync, writeFileSync, rmSync } = await import('fs')
const { tmpdir } = await import('os')
const { join } = await import('path')
const { buildRepoMap } = await import('./context/repoMap/index.js')
const tempDir = mkdtempSync(join(tmpdir(), 'repomap-ctx-'))
try {
writeFileSync(
join(tempDir, 'main.ts'),
'export function main(): void { console.log("hello") }\n',
)
writeFileSync(
join(tempDir, 'utils.ts'),
'import { main } from "./main"\nexport function helper(): void { main() }\n',
)
const result = await buildRepoMap({
root: tempDir,
maxTokens: 1024,
})
// Valid map that could be injected
expect(result.map.length).toBeGreaterThan(0)
expect(result.tokenCount).toBeGreaterThan(0)
expect(result.tokenCount).toBeLessThanOrEqual(1024)
expect(typeof result.cacheHit).toBe('boolean')
} finally {
rmSync(tempDir, { recursive: true, force: true })
const { invalidateCache } = await import('./context/repoMap/index.js')
invalidateCache(tempDir)
}
})
test('getSystemContext does not include repoMap key when flag is off', async () => {
// In test environment, feature() is not available from bun:bundle,
// which means getRepoMapContext will either return null or throw.
// Either way, repoMap should NOT appear in the system context.
// We verify the structural contract: getSystemContext returns an object
// without a repoMap key when the feature is disabled.
// Since we can't mock bun:bundle in tests, we verify the contract
// by checking that buildRepoMap output is properly gated.
const { buildRepoMap } = await import('./context/repoMap/index.js')
// The function works standalone
const result = await buildRepoMap({ maxTokens: 256 })
expect(typeof result.map).toBe('string')
// But the injection in getSystemContext is gated behind feature('REPO_MAP')
// which is false by default — verified by the feature flag test below
})
})
describe('REPO_MAP feature flag', () => {
test('flag defaults to false in build config', async () => {
const { readFileSync } = await import('fs')
const buildScript = readFileSync('scripts/build.ts', 'utf-8')
// Verify the flag exists and is set to false
expect(buildScript).toContain('REPO_MAP: false')
})
})


@@ -31,7 +31,6 @@ export function setSystemPromptInjection(value: string | null): void {
   // Clear context caches immediately when injection changes
   getUserContext.cache.clear?.()
   getSystemContext.cache.clear?.()
-  getRepoMapContext.cache.clear?.()
 }

 export const getGitStatus = memoize(async (): Promise<string | null> => {
@@ -111,37 +110,6 @@ export const getGitStatus = memoize(async (): Promise<string | null> => {
} }
}) })
export const getRepoMapContext = memoize(
async (): Promise<string | null> => {
// Enable via compile-time feature flag OR runtime env var.
// The runtime env var lets users enable auto-injection without rebuilding.
const runtimeEnabled = isEnvTruthy(process.env.REPO_MAP)
if (!feature('REPO_MAP') && !runtimeEnabled) return null
if (isBareMode()) return null
if (isEnvTruthy(process.env.CLAUDE_CODE_REMOTE)) return null
try {
const startTime = Date.now()
logForDiagnosticsNoPII('info', 'repo_map_started')
const { buildRepoMap } = await import('./context/repoMap/index.js')
const result = await buildRepoMap({ maxTokens: 1024 })
logForDiagnosticsNoPII('info', 'repo_map_completed', {
duration_ms: Date.now() - startTime,
token_count: result.tokenCount,
file_count: result.fileCount,
cache_hit: result.cacheHit,
})
if (!result.map || result.map.length === 0) return null
return `This is a structural map of the repository, ranked by importance. Use it to understand the codebase architecture.\n\n${result.map}`
} catch (err) {
logForDiagnosticsNoPII('warn', 'repo_map_failed', {
error: String(err),
})
return null
}
},
)
/** /**
* This context is prepended to each conversation, and cached for the duration of the conversation. * This context is prepended to each conversation, and cached for the duration of the conversation.
*/ */
@@ -159,9 +127,6 @@ export const getSystemContext = memoize(
? null ? null
: await getGitStatus() : await getGitStatus()
// Build repo map in parallel with other context (memoized, so cheap on repeat)
const repoMap = await getRepoMapContext()
// Include system prompt injection if set (for cache breaking, internal-only) // Include system prompt injection if set (for cache breaking, internal-only)
const injection = feature('BREAK_CACHE_COMMAND') const injection = feature('BREAK_CACHE_COMMAND')
? getSystemPromptInjection() ? getSystemPromptInjection()
@@ -170,13 +135,11 @@ export const getSystemContext = memoize(
logForDiagnosticsNoPII('info', 'system_context_completed', { logForDiagnosticsNoPII('info', 'system_context_completed', {
duration_ms: Date.now() - startTime, duration_ms: Date.now() - startTime,
has_git_status: gitStatus !== null, has_git_status: gitStatus !== null,
has_repo_map: repoMap !== null,
has_injection: injection !== null, has_injection: injection !== null,
}) })
return { return {
...(gitStatus && { gitStatus }), ...(gitStatus && { gitStatus }),
...(repoMap && { repoMap }),
...(feature('BREAK_CACHE_COMMAND') && injection ...(feature('BREAK_CACHE_COMMAND') && injection
? { ? {
cacheBreaker: `[CACHE_BREAKER: ${injection}]`, cacheBreaker: `[CACHE_BREAKER: ${injection}]`,

View File

@@ -1,29 +0,0 @@
// fileA — imports from fileB and fileC
import { CacheLayer, buildCache } from './fileB'
import { createStore, type StoreConfig } from './fileC'
export class AppController {
private cache: CacheLayer
private config: StoreConfig
constructor(config: StoreConfig) {
this.cache = buildCache()
this.config = config
}
initialize(): void {
const store = createStore()
this.cache.cacheSet('primary', store)
}
getFromCache(key: string): unknown {
return this.cache.cacheGet(key)
}
}
export function startApp(config: StoreConfig): AppController {
const app = new AppController(config)
app.initialize()
return app
}

View File

@@ -1,23 +0,0 @@
// fileB — imports from fileC
import { DataStore, createStore } from './fileC'
export class CacheLayer {
private store: DataStore
constructor() {
this.store = createStore()
}
cacheGet(key: string): unknown | undefined {
return this.store.lookup(key)
}
cacheSet(key: string, value: unknown): void {
this.store.add(key, value)
}
}
export function buildCache(): CacheLayer {
return new CacheLayer()
}

View File

@@ -1,22 +0,0 @@
// fileC — the most imported module (imported by fileA and fileB)
export class DataStore {
private items: Map<string, unknown> = new Map()
add(key: string, value: unknown): void {
this.items.set(key, value)
}
lookup(key: string): unknown | undefined {
return this.items.get(key)
}
}
export function createStore(): DataStore {
return new DataStore()
}
export interface StoreConfig {
maxSize: number
ttl: number
}

View File

@@ -1,9 +0,0 @@
// fileD — imports from fileA
import { AppController, startApp } from './fileA'
export function runApp(): void {
const controller: AppController = startApp({ maxSize: 100, ttl: 3600 })
const result = controller.getFromCache('test')
console.log(result)
}

View File

@@ -1,25 +0,0 @@
// fileE — isolated, no imports from other fixture files
export interface Logger {
log(message: string): void
warn(message: string): void
error(message: string): void
}
export class ConsoleLogger implements Logger {
log(message: string): void {
console.log(`[LOG] ${message}`)
}
warn(message: string): void {
console.warn(`[WARN] ${message}`)
}
error(message: string): void {
console.error(`[ERROR] ${message}`)
}
}
export function createLogger(): Logger {
return new ConsoleLogger()
}

View File

@@ -1,139 +0,0 @@
import { createHash } from 'crypto'
import {
existsSync,
mkdirSync,
readFileSync,
statSync,
writeFileSync,
} from 'fs'
import { homedir } from 'os'
import { join } from 'path'
import type { CacheData, CacheEntry, CacheStats, Tag } from './types.js'
const CACHE_VERSION = 1
const CACHE_DIR = join(homedir(), '.openclaude', 'repomap-cache')
function getCacheFilePath(root: string): string {
const hash = createHash('sha1').update(root).digest('hex')
return join(CACHE_DIR, `${hash}.json`)
}
function ensureCacheDir(): void {
if (!existsSync(CACHE_DIR)) {
mkdirSync(CACHE_DIR, { recursive: true })
}
}
/** Load cache from disk. Returns empty cache if not found or invalid. */
export function loadCache(root: string): CacheData {
const path = getCacheFilePath(root)
try {
const raw = readFileSync(path, 'utf-8')
const data = JSON.parse(raw) as CacheData
if (data.version !== CACHE_VERSION) {
return { version: CACHE_VERSION, entries: {} }
}
return data
} catch {
return { version: CACHE_VERSION, entries: {} }
}
}
/** Save cache to disk. */
export function saveCache(root: string, cache: CacheData): void {
ensureCacheDir()
const path = getCacheFilePath(root)
writeFileSync(path, JSON.stringify(cache), 'utf-8')
}
/**
* Check if a file's cached entry is still valid based on mtime and size.
* Returns the cached tags if valid, null otherwise.
*/
export function getCachedTags(
cache: CacheData,
filePath: string,
root: string,
): Tag[] | null {
const entry = cache.entries[filePath]
if (!entry) return null
try {
const absolutePath = join(root, filePath)
const stat = statSync(absolutePath)
if (stat.mtimeMs === entry.mtimeMs && stat.size === entry.size) {
return entry.tags
}
} catch {
// File may have been deleted
}
return null
}
/** Update the cache entry for a file. */
export function setCachedTags(
cache: CacheData,
filePath: string,
root: string,
tags: Tag[],
): void {
try {
const absolutePath = join(root, filePath)
const stat = statSync(absolutePath)
cache.entries[filePath] = {
tags,
mtimeMs: stat.mtimeMs,
size: stat.size,
}
} catch {
// If we can't stat, don't cache
}
}
/**
* Compute a hash of the inputs that affect the rendered map.
* Used to cache the final rendered output.
*/
export function computeMapHash(
files: string[],
maxTokens: number,
focusFiles: string[],
): string {
const sorted = [...files].sort()
const input = JSON.stringify({ files: sorted, maxTokens, focusFiles: [...focusFiles].sort() })
return createHash('sha1').update(input).digest('hex')
}
/** Get cache statistics. */
export function getCacheStats(root: string): CacheStats {
const cacheFile = getCacheFilePath(root)
const exists = existsSync(cacheFile)
let entryCount = 0
if (exists) {
try {
const data = JSON.parse(readFileSync(cacheFile, 'utf-8')) as CacheData
entryCount = Object.keys(data.entries).length
} catch {
// corrupted cache
}
}
return {
cacheDir: CACHE_DIR,
cacheFile: exists ? cacheFile : null,
entryCount,
exists,
}
}
/** Delete the cache for a repo root. */
export function invalidateCache(root: string): void {
const path = getCacheFilePath(root)
try {
const { unlinkSync } = require('fs')
unlinkSync(path)
} catch {
// File may not exist
}
}

View File
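For reference, the cache keying used by the deleted cache.ts above boils down to one JSON file per repository root, named by the sha1 of the root path. A condensed standalone sketch of that `getCacheFilePath` logic:

```typescript
import { createHash } from 'crypto'
import { homedir } from 'os'
import { join } from 'path'

// One cache file per repo root: ~/.openclaude/repomap-cache/<sha1(root)>.json
function cacheFileFor(root: string): string {
  // sha1 of the absolute root path gives a stable, filesystem-safe name
  const hash = createHash('sha1').update(root).digest('hex')
  return join(homedir(), '.openclaude', 'repomap-cache', `${hash}.json`)
}
```

Because the name depends only on the root path, two different checkouts of the same repo at different paths get independent caches.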

@@ -1,109 +0,0 @@
import { execFile } from 'child_process'
import { readdirSync } from 'fs'
import { join, relative } from 'path'
import type { SupportedLanguage } from './types.js'
const SUPPORTED_EXTENSIONS: Record<string, SupportedLanguage> = {
'.ts': 'typescript',
'.tsx': 'typescript',
'.js': 'javascript',
'.jsx': 'javascript',
'.mjs': 'javascript',
'.cjs': 'javascript',
'.py': 'python',
}
const EXCLUDED_DIRS = new Set([
'node_modules',
'dist',
'.git',
'.hg',
'.svn',
'build',
'out',
'coverage',
'__pycache__',
'.next',
'.nuxt',
'vendor',
'.worktrees',
])
const EXCLUDED_FILES = new Set([
'bun.lock',
'bun.lockb',
'package-lock.json',
'yarn.lock',
'pnpm-lock.yaml',
])
export function getLanguageForFile(filePath: string): SupportedLanguage | null {
const ext = filePath.substring(filePath.lastIndexOf('.'))
return SUPPORTED_EXTENSIONS[ext] ?? null
}
export function isSupportedFile(filePath: string): boolean {
return getLanguageForFile(filePath) !== null
}
/** List files using git ls-files. Returns relative paths. */
function gitLsFiles(root: string): Promise<string[]> {
return new Promise((resolve, reject) => {
execFile(
'git',
['ls-files', '--cached', '--others', '--exclude-standard'],
{ cwd: root, maxBuffer: 10 * 1024 * 1024 },
(error, stdout) => {
if (error) {
reject(error)
return
}
const files = stdout
.split('\n')
.map(f => f.trim())
.filter(f => f.length > 0)
resolve(files)
},
)
})
}
/** Walk directory tree manually as fallback when git is unavailable. */
function walkDirectory(root: string, currentDir: string = root): string[] {
const results: string[] = []
let entries: ReturnType<typeof readdirSync>
try {
entries = readdirSync(currentDir, { withFileTypes: true })
} catch {
return results
}
for (const entry of entries) {
const name = entry.name
if (entry.isDirectory()) {
if (!EXCLUDED_DIRS.has(name) && !name.startsWith('.')) {
results.push(...walkDirectory(root, join(currentDir, name)))
}
} else if (entry.isFile()) {
if (!EXCLUDED_FILES.has(name)) {
results.push(relative(root, join(currentDir, name)))
}
}
}
return results
}
/**
* Enumerate all supported source files in the repo.
* Tries git ls-files first, falls back to manual walk.
*/
export async function getRepoFiles(root: string): Promise<string[]> {
let files: string[]
try {
files = await gitLsFiles(root)
} catch {
files = walkDirectory(root)
}
return files.filter(isSupportedFile)
}

View File

@@ -1,88 +0,0 @@
import Graph from 'graphology'
import type { FileTags } from './types.js'
// Common identifiers that should contribute less weight (high IDF penalty).
const COMMON_NAMES = new Set([
'map', 'get', 'set', 'value', 'key', 'data', 'result', 'error',
'name', 'type', 'id', 'index', 'item', 'items', 'list', 'options',
'config', 'args', 'params', 'props', 'state', 'event', 'callback',
'handler', 'fn', 'func', 'self', 'this', 'ctx', 'context', 'req',
'res', 'next', 'err', 'msg', 'obj', 'arr', 'str', 'num', 'val',
'init', 'start', 'stop', 'run', 'main', 'test', 'setup', 'teardown',
'constructor', 'toString', 'valueOf', 'length', 'size', 'count',
'push', 'pop', 'shift', 'filter', 'reduce', 'forEach', 'find',
'log', 'warn', 'info', 'debug', 'trace',
])
/**
* Build a directed graph from file tags.
*
* Nodes are file paths. An edge from A to B means file A references
* a symbol defined in file B. Edge weight = refCount * idf(symbolName).
*/
export function buildGraph(allFileTags: FileTags[]): Graph {
const graph = new Graph({ multi: false, type: 'directed' })
// Build a map from symbol name → files that define it
const defIndex = new Map<string, Set<string>>()
for (const ft of allFileTags) {
for (const tag of ft.tags) {
if (tag.kind === 'def') {
let files = defIndex.get(tag.name)
if (!files) {
files = new Set()
defIndex.set(tag.name, files)
}
files.add(ft.path)
}
}
}
// Compute IDF: log(totalFiles / filesDefiningSymbol)
// Common names get an extra penalty
const totalFiles = allFileTags.length
function idf(symbolName: string): number {
const defFiles = defIndex.get(symbolName)
const docFreq = defFiles ? defFiles.size : 1
const rawIdf = Math.log(totalFiles / docFreq)
return COMMON_NAMES.has(symbolName) ? rawIdf * 0.1 : rawIdf
}
// Add all files as nodes
for (const ft of allFileTags) {
if (!graph.hasNode(ft.path)) {
graph.addNode(ft.path)
}
}
// Build edges: for each ref in a file, find where it's defined
for (const ft of allFileTags) {
// Count refs per target file
const edgeWeights = new Map<string, number>()
for (const tag of ft.tags) {
if (tag.kind !== 'ref') continue
const defFiles = defIndex.get(tag.name)
if (!defFiles) continue
const weight = idf(tag.name)
for (const defFile of defFiles) {
if (defFile === ft.path) continue // skip self-references
const current = edgeWeights.get(defFile) ?? 0
edgeWeights.set(defFile, current + weight)
}
}
for (const [target, weight] of edgeWeights) {
if (graph.hasEdge(ft.path, target)) {
graph.setEdgeAttribute(ft.path, target, 'weight',
graph.getEdgeAttribute(ft.path, target, 'weight') + weight)
} else {
graph.addEdge(ft.path, target, { weight })
}
}
}
return graph
}

View File
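The edge-weighting in the deleted graph.ts can be isolated into a standalone sketch. `COMMON` below is a small stand-in for the full COMMON_NAMES set; the weighting itself mirrors the `idf()` helper inside `buildGraph`:

```typescript
// Rare symbols (defined in few files) carry high weight; common
// identifier names are penalized to 10% of their raw IDF score.
// COMMON is a stand-in subset of the real COMMON_NAMES list.
const COMMON = new Set(['get', 'set', 'map', 'log'])

function idfWeight(symbol: string, totalFiles: number, docFreq: number): number {
  // log(totalFiles / docFreq), guarding against a zero document frequency
  const raw = Math.log(totalFiles / Math.max(docFreq, 1))
  return COMMON.has(symbol) ? raw * 0.1 : raw
}
```

So a symbol defined in exactly one of 100 files contributes log(100) ≈ 4.6 per reference, while a generic name like `get` contributes a tenth of that.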

@@ -1,144 +0,0 @@
import {
computeMapHash,
getCachedTags,
getCacheStats as getCacheStatsImpl,
invalidateCache as invalidateCacheImpl,
loadCache,
saveCache,
setCachedTags,
} from './cache.js'
import { getRepoFiles } from './gitFiles.js'
import { buildGraph } from './graph.js'
import { rankFiles } from './pagerank.js'
import { initParser } from './parser.js'
import { renderMap } from './renderer.js'
import { extractTags } from './symbolExtractor.js'
import type { FileTags, RepoMapOptions, RepoMapResult, CacheStats } from './types.js'
const DEFAULT_MAX_TOKENS = 2048
/**
* Build a structural summary of a code repository.
*
* Walks the repo, extracts symbols via tree-sitter, builds an IDF-weighted
* reference graph, ranks files with PageRank, and renders a token-budgeted
* structural summary.
*/
export async function buildRepoMap(options: RepoMapOptions = {}): Promise<RepoMapResult> {
const startTime = Date.now()
const root = options.root ?? process.cwd()
const maxTokens = options.maxTokens ?? DEFAULT_MAX_TOKENS
const focusFiles = options.focusFiles ?? []
// Initialize tree-sitter
await initParser()
// Get files
const files = options.files ?? await getRepoFiles(root)
const totalFileCount = files.length
// Check if we have a cached rendered map
const mapHash = computeMapHash(files, maxTokens, focusFiles)
const cache = loadCache(root)
// Check if rendered map is cached (stored as a special entry)
const renderedCacheKey = `__rendered__${mapHash}`
const renderedEntry = cache.entries[renderedCacheKey]
if (renderedEntry && renderedEntry.tags.length === 1) {
const cachedResult = renderedEntry.tags[0]!
// The cached "tag" stores the rendered map in the signature field
// and metadata in name/line fields
try {
const meta = JSON.parse(cachedResult.name)
return {
map: cachedResult.signature,
cacheHit: true,
buildTimeMs: Date.now() - startTime,
fileCount: meta.fileCount ?? 0,
totalFileCount,
tokenCount: meta.tokenCount ?? 0,
}
} catch {
// Invalid cached data, continue with full build
}
}
// Extract tags for all files (using per-file cache).
// Separate cached hits from files needing extraction.
const allFileTags: FileTags[] = []
const uncachedFiles: string[] = []
for (const file of files) {
const cachedTags = getCachedTags(cache, file, root)
if (cachedTags) {
allFileTags.push({ path: file, tags: cachedTags })
} else {
uncachedFiles.push(file)
}
}
// Process uncached files in parallel batches
const BATCH_SIZE = 50
for (let i = 0; i < uncachedFiles.length; i += BATCH_SIZE) {
const batch = uncachedFiles.slice(i, i + BATCH_SIZE)
const results = await Promise.all(
batch.map(file => extractTags(file, root).catch(() => null))
)
for (let j = 0; j < results.length; j++) {
const fileTags = results[j]
if (fileTags) {
allFileTags.push(fileTags)
setCachedTags(cache, fileTags.path, root, fileTags.tags)
}
}
}
// Build graph and rank
const graph = buildGraph(allFileTags)
const ranked = rankFiles(graph, focusFiles)
// Build a lookup map
const fileTagsMap = new Map<string, FileTags>()
for (const ft of allFileTags) {
fileTagsMap.set(ft.path, ft)
}
// Render
const { map, tokenCount, fileCount } = renderMap(ranked, fileTagsMap, maxTokens)
// Cache the rendered result
cache.entries[renderedCacheKey] = {
tags: [{
kind: 'def',
name: JSON.stringify({ fileCount, tokenCount }),
line: 0,
signature: map,
}],
mtimeMs: Date.now(),
size: 0,
}
saveCache(root, cache)
return {
map,
cacheHit: false,
buildTimeMs: Date.now() - startTime,
fileCount,
totalFileCount,
tokenCount,
}
}
/** Invalidate the disk cache for a given repo root. */
export function invalidateCache(root?: string): void {
invalidateCacheImpl(root ?? process.cwd())
}
/** Get cache statistics for a given repo root. */
export function getCacheStats(root?: string): CacheStats {
return getCacheStatsImpl(root ?? process.cwd())
}
// Re-export types for convenience
export type { RepoMapOptions, RepoMapResult, CacheStats } from './types.js'

View File

@@ -1,57 +0,0 @@
import type Graph from 'graphology'
import pagerank from 'graphology-pagerank'
export interface RankedFile {
path: string
score: number
}
/**
* Run PageRank on the file reference graph.
*
* focusFiles get a 100x boost in the personalization vector so they
* and their neighbors rank higher.
*
* Returns files sorted by score descending.
*/
export function rankFiles(
graph: Graph,
focusFiles: string[] = [],
): RankedFile[] {
if (graph.order === 0) return []
const hasPersonalization = focusFiles.length > 0
// graphology-pagerank accepts getEdgeWeight option
const scores: Record<string, number> = pagerank(graph, {
alpha: 0.85,
maxIterations: 100,
tolerance: 1e-6,
getEdgeWeight: 'weight',
})
// Apply focus boost post-hoc if focus files are specified
if (hasPersonalization) {
for (const file of focusFiles) {
if (scores[file] !== undefined) {
scores[file] *= 100
}
}
// Also boost direct neighbors of focus files
for (const file of focusFiles) {
if (!graph.hasNode(file)) continue
graph.forEachNeighbor(file, (neighbor) => {
if (scores[neighbor] !== undefined) {
scores[neighbor] *= 10
}
})
}
}
const ranked: RankedFile[] = Object.entries(scores)
.map(([path, score]) => ({ path, score }))
.sort((a, b) => b.score - a.score)
return ranked
}

View File
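The post-hoc focus boost in the deleted pagerank.ts amounts to the following sketch. Neighbor lookup is stubbed here with a plain Map; the real code walks graphology's `forEachNeighbor`:

```typescript
// After PageRank has run, focus files are multiplied by 100 and their
// direct neighbors by 10 (neighbor lookup stubbed with a Map here).
function applyFocusBoost(
  scores: Record<string, number>,
  focusFiles: string[],
  neighbors: Map<string, string[]>,
): Record<string, number> {
  for (const file of focusFiles) {
    if (scores[file] !== undefined) scores[file] *= 100
  }
  for (const file of focusFiles) {
    for (const n of neighbors.get(file) ?? []) {
      if (scores[n] !== undefined) scores[n] *= 10
    }
  }
  return scores
}
```

Boosting after the PageRank pass (rather than via a true personalization vector) keeps the ranking of non-focus files unchanged relative to each other.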

@@ -1,166 +0,0 @@
import { existsSync, readFileSync } from 'fs'
import { join, resolve } from 'path'
import { fileURLToPath } from 'url'
import type { SupportedLanguage } from './types.js'
// Resolve project root in both source and bundled modes.
// In source (bun test/dev): import.meta.url is src/context/repoMap/parser.ts → go up 4 levels
// In bundle (node dist/cli.mjs): import.meta.url is dist/cli.mjs → go up 2 levels
const __filename = fileURLToPath(import.meta.url)
const __projectRoot = join(
__filename,
process.env.NODE_ENV === 'test' ? '../../../../' : '../../',
)
// web-tree-sitter types
type TreeSitterParser = {
parse(input: string): { rootNode: unknown }
setLanguage(lang: unknown): void
delete(): void
}
type TreeSitterLanguage = {
query(source: string): unknown
}
// The actual module exports { Parser, Language } as named exports
let ParserClass: (new () => TreeSitterParser) & {
init(opts?: { locateFile?: (file: string) => string }): Promise<void>
} | null = null
let LanguageLoader: {
load(path: string | Uint8Array): Promise<TreeSitterLanguage>
} | null = null
let initialized = false
const languageCache = new Map<SupportedLanguage, TreeSitterLanguage>()
const queryCache = new Map<SupportedLanguage, string>()
/** Resolve the path to the tree-sitter WASM file. */
function getTreeSitterWasmPath(): string {
// Try require.resolve first (works in source mode with node_modules)
try {
const webTsDir = resolve(
require.resolve('web-tree-sitter/package.json'),
'..',
)
return join(webTsDir, 'tree-sitter.wasm')
} catch {
// Fallback: relative to project root
return join(__projectRoot, 'node_modules', 'web-tree-sitter', 'tree-sitter.wasm')
}
}
/** Resolve the path to a language WASM grammar file. */
function getLanguageWasmPath(language: SupportedLanguage): string {
const wasmName = language === 'typescript' ? 'tree-sitter-typescript' :
language === 'javascript' ? 'tree-sitter-javascript' :
`tree-sitter-${language}`
try {
const wasmDir = resolve(
require.resolve('tree-sitter-wasms/package.json'),
'..',
'out',
)
return join(wasmDir, `${wasmName}.wasm`)
} catch {
return join(__projectRoot, 'node_modules', 'tree-sitter-wasms', 'out', `${wasmName}.wasm`)
}
}
/** Resolve the path to a tag query .scm file for the given language. */
function getQueryPath(language: SupportedLanguage): string {
// Try source location first (works in both source and when queries are alongside the bundle)
const sourcePath = join(__projectRoot, 'src', 'context', 'repoMap', 'queries', `${language}-tags.scm`)
if (existsSync(sourcePath)) {
return sourcePath
}
// Fallback: relative to this file (source mode)
return join(fileURLToPath(import.meta.url), '..', 'queries', `${language}-tags.scm`)
}
/** Initialize the tree-sitter WASM module. */
export async function initParser(): Promise<void> {
if (initialized) return
try {
const mod = await import('web-tree-sitter')
ParserClass = mod.Parser as typeof ParserClass
LanguageLoader = mod.Language as typeof LanguageLoader
const wasmPath = getTreeSitterWasmPath()
await ParserClass!.init({
locateFile: () => wasmPath,
})
initialized = true
} catch (err) {
// eslint-disable-next-line no-console
console.error('[repoMap] Failed to initialize tree-sitter:', err)
throw err
}
}
/** Load a language grammar. Cached after first load. */
export async function loadLanguage(language: SupportedLanguage): Promise<TreeSitterLanguage | null> {
if (languageCache.has(language)) {
return languageCache.get(language)!
}
if (!initialized) {
await initParser()
}
try {
const wasmPath = getLanguageWasmPath(language)
const lang = await LanguageLoader!.load(wasmPath)
languageCache.set(language, lang)
return lang
} catch (err) {
// eslint-disable-next-line no-console
console.error(`[repoMap] Failed to load ${language} grammar:`, err)
return null
}
}
/** Load the tag query for a language. Cached after first load. */
export function loadQuery(language: SupportedLanguage): string | null {
if (queryCache.has(language)) {
return queryCache.get(language)!
}
try {
const queryPath = getQueryPath(language)
const content = readFileSync(queryPath, 'utf-8')
queryCache.set(language, content)
return content
} catch {
return null
}
}
/** Create a new parser instance with the given language set. */
export async function createParser(language: SupportedLanguage): Promise<TreeSitterParser | null> {
if (!initialized) {
await initParser()
}
const lang = await loadLanguage(language)
if (!lang) return null
try {
const parser = new ParserClass!()
parser.setLanguage(lang)
return parser
} catch {
return null
}
}
/** Clear all caches (useful for testing). */
export function clearParserCaches(): void {
languageCache.clear()
queryCache.clear()
initialized = false
ParserClass = null
LanguageLoader = null
}

View File

@@ -1,92 +0,0 @@
; Source: https://github.com/Aider-AI/aider/blob/main/aider/queries/tree-sitter-languages/javascript-tags.scm
; License: MIT (Apache-2.0 dual) — see https://github.com/Aider-AI/aider/blob/main/LICENSE
; Copied for use in openclaude's repo-map feature.
(
(comment)* @doc
.
(method_definition
name: (property_identifier) @name.definition.method) @definition.method
(#not-eq? @name.definition.method "constructor")
(#strip! @doc "^[\\s\\*/]+|^[\\s\\*/]$")
(#select-adjacent! @doc @definition.method)
)
(
(comment)* @doc
.
[
(class
name: (_) @name.definition.class)
(class_declaration
name: (_) @name.definition.class)
] @definition.class
(#strip! @doc "^[\\s\\*/]+|^[\\s\\*/]$")
(#select-adjacent! @doc @definition.class)
)
(
(comment)* @doc
.
[
(function
name: (identifier) @name.definition.function)
(function_declaration
name: (identifier) @name.definition.function)
(generator_function
name: (identifier) @name.definition.function)
(generator_function_declaration
name: (identifier) @name.definition.function)
] @definition.function
(#strip! @doc "^[\\s\\*/]+|^[\\s\\*/]$")
(#select-adjacent! @doc @definition.function)
)
(
(comment)* @doc
.
(lexical_declaration
(variable_declarator
name: (identifier) @name.definition.function
value: [(arrow_function) (function)]) @definition.function)
(#strip! @doc "^[\\s\\*/]+|^[\\s\\*/]$")
(#select-adjacent! @doc @definition.function)
)
(
(comment)* @doc
.
(variable_declaration
(variable_declarator
name: (identifier) @name.definition.function
value: [(arrow_function) (function)]) @definition.function)
(#strip! @doc "^[\\s\\*/]+|^[\\s\\*/]$")
(#select-adjacent! @doc @definition.function)
)
(assignment_expression
left: [
(identifier) @name.definition.function
(member_expression
property: (property_identifier) @name.definition.function)
]
right: [(arrow_function) (function)]
) @definition.function
(pair
key: (property_identifier) @name.definition.function
value: [(arrow_function) (function)]) @definition.function
(
(call_expression
function: (identifier) @name.reference.call) @reference.call
(#not-match? @name.reference.call "^(require)$")
)
(call_expression
function: (member_expression
property: (property_identifier) @name.reference.call)
arguments: (_) @reference.call)
(new_expression
constructor: (_) @name.reference.class) @reference.class

View File

@@ -1,16 +0,0 @@
; Source: https://github.com/Aider-AI/aider/blob/main/aider/queries/tree-sitter-languages/python-tags.scm
; License: MIT (Apache-2.0 dual) — see https://github.com/Aider-AI/aider/blob/main/LICENSE
; Copied for use in openclaude's repo-map feature.
(class_definition
name: (identifier) @name.definition.class) @definition.class
(function_definition
name: (identifier) @name.definition.function) @definition.function
(call
function: [
(identifier) @name.reference.call
(attribute
attribute: (identifier) @name.reference.call)
]) @reference.call

View File

@@ -1,45 +0,0 @@
; Source: https://github.com/Aider-AI/aider/blob/main/aider/queries/tree-sitter-languages/typescript-tags.scm
; License: MIT (Apache-2.0 dual) — see https://github.com/Aider-AI/aider/blob/main/LICENSE
; Copied for use in openclaude's repo-map feature.
(function_signature
name: (identifier) @name.definition.function) @definition.function
(method_signature
name: (property_identifier) @name.definition.method) @definition.method
(abstract_method_signature
name: (property_identifier) @name.definition.method) @definition.method
(abstract_class_declaration
name: (type_identifier) @name.definition.class) @definition.class
(module
name: (identifier) @name.definition.module) @definition.module
(interface_declaration
name: (type_identifier) @name.definition.interface) @definition.interface
(type_annotation
(type_identifier) @name.reference.type) @reference.type
(new_expression
constructor: (identifier) @name.reference.class) @reference.class
(function_declaration
name: (identifier) @name.definition.function) @definition.function
(method_definition
name: (property_identifier) @name.definition.method) @definition.method
(class_declaration
name: (type_identifier) @name.definition.class) @definition.class
(interface_declaration
name: (type_identifier) @name.definition.class) @definition.class
(type_alias_declaration
name: (type_identifier) @name.definition.type) @definition.type
(enum_declaration
name: (identifier) @name.definition.enum) @definition.enum

View File

@@ -1,72 +0,0 @@
import type { FileTags, Tag } from './types.js'
import type { RankedFile } from './pagerank.js'
import { countTokens } from './tokenize.js'
/**
* Render a token-budgeted repo map from ranked files and their tags.
*
* Format per file:
* path/to/file.ts:
* ⋮
* signature line for def 1
* ⋮
* signature line for def 2
* ⋮
*
* Files that don't fit within the budget are dropped entirely.
*/
export function renderMap(
rankedFiles: RankedFile[],
fileTagsMap: Map<string, FileTags>,
maxTokens: number,
): { map: string; tokenCount: number; fileCount: number } {
const sections: string[] = []
let currentTokens = 0
let fileCount = 0
for (const { path } of rankedFiles) {
const ft = fileTagsMap.get(path)
if (!ft) continue
// Only include definitions in the rendered output
const defs = ft.tags
.filter(t => t.kind === 'def')
.sort((a, b) => a.line - b.line)
if (defs.length === 0) continue
const section = renderFileSection(path, defs)
const sectionTokens = countTokens(section)
// Would this section bust the budget?
if (currentTokens + sectionTokens > maxTokens) {
// Don't include partial files — drop entirely
break
}
sections.push(section)
currentTokens += sectionTokens
fileCount++
}
const map = sections.join('\n')
return { map, tokenCount: currentTokens, fileCount }
}
function renderFileSection(path: string, defs: Tag[]): string {
const lines: string[] = [`${path}:`]
let lastLine = 0
for (const def of defs) {
// Add elision marker if there's a gap
if (def.line > lastLine + 1) {
lines.push('⋮')
}
lines.push(` ${def.signature}`)
lastLine = def.line
}
// Trailing elision marker
lines.push('⋮')
return lines.join('\n')
}

View File

@@ -1,275 +0,0 @@
import { afterEach, beforeAll, describe, expect, test } from 'bun:test'
import { cpSync, mkdtempSync, rmSync, utimesSync, writeFileSync } from 'fs'
import { tmpdir } from 'os'
import { join } from 'path'
import { invalidateCache, buildRepoMap } from './index.js'
import { extractTags } from './symbolExtractor.js'
import { buildGraph } from './graph.js'
import { initParser } from './parser.js'
import { countTokens } from './tokenize.js'
const FIXTURE_ROOT = join(import.meta.dir, '__fixtures__', 'mini-repo')
const FIXTURE_FILES = ['fileA.ts', 'fileB.ts', 'fileC.ts', 'fileD.ts', 'fileE.ts']
beforeAll(async () => {
await initParser()
})
// Clean up cache between tests to avoid cross-test interference
afterEach(() => {
invalidateCache(FIXTURE_ROOT)
})
describe('symbol extraction', () => {
test('extracts function and class defs from a TypeScript file', async () => {
const result = await extractTags('fileC.ts', FIXTURE_ROOT)
expect(result).not.toBeNull()
const defs = result!.tags.filter(t => t.kind === 'def')
const defNames = defs.map(t => t.name)
expect(defNames).toContain('DataStore')
expect(defNames).toContain('createStore')
expect(defNames).toContain('StoreConfig')
// All defs should have kind='def'
for (const d of defs) {
expect(d.kind).toBe('def')
}
})
test('extracts references to imported symbols', async () => {
const result = await extractTags('fileA.ts', FIXTURE_ROOT)
expect(result).not.toBeNull()
const refs = result!.tags.filter(t => t.kind === 'ref')
const refNames = refs.map(t => t.name)
// fileA imports CacheLayer from fileB and StoreConfig from fileC
expect(refNames).toContain('CacheLayer')
expect(refNames).toContain('StoreConfig')
})
})
describe('graph', () => {
test('builds edges between files that reference each other\'s symbols', async () => {
const allTags = []
for (const f of FIXTURE_FILES) {
const tags = await extractTags(f, FIXTURE_ROOT)
if (tags) allTags.push(tags)
}
const graph = buildGraph(allTags)
// fileA imports from fileB (references CacheLayer defined in fileB)
expect(graph.hasEdge('fileA.ts', 'fileB.ts')).toBe(true)
// fileA imports from fileC (references StoreConfig, DataStore defined in fileC)
expect(graph.hasEdge('fileA.ts', 'fileC.ts')).toBe(true)
// fileB imports from fileC (references DataStore defined in fileC)
expect(graph.hasEdge('fileB.ts', 'fileC.ts')).toBe(true)
// fileD imports from fileA
expect(graph.hasEdge('fileD.ts', 'fileA.ts')).toBe(true)
// fileE is isolated — no edges to/from it
expect(graph.degree('fileE.ts')).toBe(0)
})
})
describe('pagerank', () => {
test('ranks the most-imported file highest', async () => {
const result = await buildRepoMap({
root: FIXTURE_ROOT,
maxTokens: 2048,
files: FIXTURE_FILES,
})
// The map starts with the highest-ranked file
const firstFile = result.map.split('\n')[0]
expect(firstFile).toBe('fileC.ts:')
// fileE should be ranked lowest (or near last)
const lines = result.map.split('\n')
const filePositions = FIXTURE_FILES.map(f => {
const idx = lines.findIndex(l => l === `${f}:`)
return { file: f, position: idx }
}).filter(x => x.position >= 0)
.sort((a, b) => a.position - b.position)
// fileC should be first
expect(filePositions[0]!.file).toBe('fileC.ts')
// fileE should be last (or among the last)
const lastFile = filePositions[filePositions.length - 1]!.file
expect(['fileD.ts', 'fileE.ts']).toContain(lastFile)
})
})
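The ranking these tests assert can be sketched with a minimal PageRank over the fixture's import graph. This is a hedged illustration of the idea only, not the removed implementation: the edge list mirrors the imports described in the graph tests, and the function names are invented for the example.

```typescript
// Minimal PageRank sketch. Edges point from importer to imported file,
// so rank flows toward heavily-imported files: fileC.ts (imported by
// fileA and fileB) should end up ranked first, as the test expects.
type ImportGraph = Record<string, string[]>

function pageRank(
  graph: ImportGraph,
  damping = 0.85,
  iterations = 50,
): Record<string, number> {
  const nodes = Object.keys(graph)
  const n = nodes.length
  let rank: Record<string, number> = Object.fromEntries(nodes.map(f => [f, 1 / n]))
  for (let iter = 0; iter < iterations; iter++) {
    const next: Record<string, number> = Object.fromEntries(
      nodes.map(f => [f, (1 - damping) / n]),
    )
    for (const node of nodes) {
      const targets = graph[node]
      if (targets.length === 0) {
        // Dangling node: spread its rank evenly across all nodes
        for (const m of nodes) next[m] += (damping * rank[node]) / n
      } else {
        for (const t of targets) next[t] += (damping * rank[node]) / targets.length
      }
    }
    rank = next
  }
  return rank
}

// The fixture's import relationships, as asserted in the graph tests
const fixtureGraph: ImportGraph = {
  'fileA.ts': ['fileB.ts', 'fileC.ts'],
  'fileB.ts': ['fileC.ts'],
  'fileC.ts': [],
  'fileD.ts': ['fileA.ts'],
  'fileE.ts': [],
}
const ranks = pageRank(fixtureGraph)
const ordered = Object.entries(ranks).sort((a, b) => b[1] - a[1]).map(e => e[0])
```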
describe('renderer', () => {
test('respects the token budget within 5%', async () => {
const maxTokens = 500
const result = await buildRepoMap({
root: FIXTURE_ROOT,
maxTokens,
files: FIXTURE_FILES,
})
const actualTokens = countTokens(result.map)
expect(actualTokens).toBeLessThanOrEqual(maxTokens * 1.05)
expect(result.tokenCount).toBeLessThanOrEqual(maxTokens * 1.05)
})
test('drops files that don\'t fit rather than listing their names', async () => {
// Very tight budget — should only fit 1-2 files
const result = await buildRepoMap({
root: FIXTURE_ROOT,
maxTokens: 100,
files: FIXTURE_FILES,
})
// Count how many files appear as headers in the output
const fileHeaders = result.map.split('\n').filter(l => l.endsWith(':') && !l.startsWith(' '))
// Every file header in the output should have its signatures listed
for (const header of fileHeaders) {
// The file must have at least one signature line after it
const headerIdx = result.map.indexOf(header)
const afterHeader = result.map.slice(headerIdx + header.length)
// Should have content (signatures), not just the filename
expect(afterHeader.trim().length).toBeGreaterThan(0)
}
// Should have fewer files than total
expect(fileHeaders.length).toBeLessThan(FIXTURE_FILES.length)
})
})
describe('cache', () => {
test('second build of unchanged fixture uses the cache', async () => {
// First build (cold)
const result1 = await buildRepoMap({
root: FIXTURE_ROOT,
maxTokens: 2048,
files: FIXTURE_FILES,
})
expect(result1.cacheHit).toBe(false)
// Second build (warm)
const result2 = await buildRepoMap({
root: FIXTURE_ROOT,
maxTokens: 2048,
files: FIXTURE_FILES,
})
expect(result2.cacheHit).toBe(true)
expect(result2.buildTimeMs).toBeLessThan(result1.buildTimeMs)
// Output should be identical
expect(result2.map).toBe(result1.map)
})
test('modifying a file invalidates only that file', async () => {
// Create a temp copy of the fixture
const tempDir = mkdtempSync(join(tmpdir(), 'repomap-test-'))
try {
for (const f of FIXTURE_FILES) {
cpSync(join(FIXTURE_ROOT, f), join(tempDir, f))
}
// First build
const result1 = await buildRepoMap({
root: tempDir,
maxTokens: 2048,
files: FIXTURE_FILES,
})
expect(result1.cacheHit).toBe(false)
// Touch one file to change its mtime
const targetFile = join(tempDir, 'fileE.ts')
const now = new Date()
utimesSync(targetFile, now, now)
// Invalidate the rendered-map cache so the rebuild re-consults the
// per-file cache instead of returning the cached map verbatim
invalidateCache(tempDir)
const result2 = await buildRepoMap({
root: tempDir,
maxTokens: 2048,
files: FIXTURE_FILES,
})
// The per-file cache for fileE should miss (mtime changed),
// but other files should still hit the per-file cache
expect(result2.cacheHit).toBe(false)
// Output should still be valid
expect(result2.map.length).toBeGreaterThan(0)
expect(result2.fileCount).toBe(result1.fileCount)
} finally {
rmSync(tempDir, { recursive: true, force: true })
invalidateCache(tempDir)
}
})
})
describe('gitFiles', () => {
test('falls back gracefully when not in a git repo', async () => {
// Create a temp directory with source files but NO .git
const tempDir = mkdtempSync(join(tmpdir(), 'repomap-nogit-'))
try {
writeFileSync(
join(tempDir, 'hello.ts'),
'export function hello(): string { return "world" }\n',
)
writeFileSync(
join(tempDir, 'utils.ts'),
'export function add(a: number, b: number): number { return a + b }\n',
)
const result = await buildRepoMap({
root: tempDir,
maxTokens: 1024,
})
// Should succeed without throwing
expect(result.map.length).toBeGreaterThan(0)
expect(result.totalFileCount).toBeGreaterThan(0)
} finally {
rmSync(tempDir, { recursive: true, force: true })
invalidateCache(tempDir)
}
})
})
describe('error handling', () => {
test('no crash on malformed source file', async () => {
const tempDir = mkdtempSync(join(tmpdir(), 'repomap-malformed-'))
try {
// Valid file
writeFileSync(
join(tempDir, 'good.ts'),
'export function good(): number { return 1 }\n',
)
// Malformed file — severe syntax errors
writeFileSync(
join(tempDir, 'bad.ts'),
'}{}{}{export classclass [[[ function ,,, @@@ ###\n',
)
const result = await buildRepoMap({
root: tempDir,
maxTokens: 1024,
files: ['good.ts', 'bad.ts'],
})
// Should complete successfully
expect(result.map.length).toBeGreaterThan(0)
// The good file should be in the output
expect(result.map).toContain('good.ts')
} finally {
rmSync(tempDir, { recursive: true, force: true })
invalidateCache(tempDir)
}
})
})


@@ -1,108 +0,0 @@
import { readFileSync } from 'fs'
import { join } from 'path'
import { getLanguageForFile } from './gitFiles.js'
import { createParser, loadLanguage, loadQuery } from './parser.js'
import type { FileTags, Tag } from './types.js'
/**
* Extract definition and reference tags from a single source file.
* Returns null if the file can't be parsed (unsupported language, parse error, etc).
*/
export async function extractTags(
filePath: string,
root: string,
): Promise<FileTags | null> {
const language = getLanguageForFile(filePath)
if (!language) return null
const absolutePath = join(root, filePath)
let source: string
try {
source = readFileSync(absolutePath, 'utf-8')
} catch {
return null
}
const lines = source.split('\n')
const parser = await createParser(language)
if (!parser) return null
const querySource = loadQuery(language)
if (!querySource) {
parser.delete()
return null
}
try {
const tree = parser.parse(source) as {
rootNode: unknown
}
const lang = await loadLanguage(language)
if (!lang) {
parser.delete()
return null
}
// Use the non-deprecated Query constructor
const { Query } = await import('web-tree-sitter')
const query = new Query(lang, querySource) as {
matches(rootNode: unknown): Array<{
pattern: number
captures: Array<{
name: string
node: {
text: string
startPosition: { row: number; column: number }
endPosition: { row: number; column: number }
}
}>
}>
}
const matches = query.matches(tree.rootNode)
const tags: Tag[] = []
const seen = new Set<string>() // dedup by kind+name+line
for (const match of matches) {
let name: string | null = null
let kind: 'def' | 'ref' | null = null
let subKind: string | undefined
let lineRow = 0
for (const capture of match.captures) {
const captureName = capture.name
// Name captures: name.definition.X or name.reference.X
if (captureName.startsWith('name.definition.')) {
name = capture.node.text
kind = 'def'
subKind = captureName.slice('name.definition.'.length)
lineRow = capture.node.startPosition.row
} else if (captureName.startsWith('name.reference.')) {
name = capture.node.text
kind = 'ref'
subKind = captureName.slice('name.reference.'.length)
lineRow = capture.node.startPosition.row
}
}
if (name && kind) {
const key = `${kind}:${name}:${lineRow}`
if (!seen.has(key)) {
seen.add(key)
const line = lineRow + 1 // convert 0-based to 1-based
const signature = lines[lineRow]?.trimEnd() ?? ''
tags.push({ kind, name, line, signature, subKind })
}
}
}
parser.delete()
return { path: filePath, tags }
} catch {
parser.delete()
return null
}
}


@@ -1,15 +0,0 @@
import { getEncoding, type Tiktoken } from 'js-tiktoken'
let encoder: Tiktoken | null = null
function getEncoder() {
if (!encoder) {
encoder = getEncoding('cl100k_base')
}
return encoder
}
/** Count the number of tokens in a string using cl100k_base encoding. */
export function countTokens(text: string): number {
return getEncoder().encode(text).length
}


@@ -1,65 +0,0 @@
export interface Tag {
/** 'def' for definitions, 'ref' for references */
kind: 'def' | 'ref'
/** Symbol name (e.g. function name, class name) */
name: string
/** 1-based line number in the source file */
line: number
/** The full line of source code at this position (used as signature for defs) */
signature: string
/** Sub-kind from the query (e.g. 'function', 'class', 'method', 'type') */
subKind?: string
}
export interface FileTags {
/** Relative path from the repo root */
path: string
/** All tags extracted from this file */
tags: Tag[]
}
export interface RepoMapOptions {
/** Root directory of the repo (defaults to cwd) */
root?: string
/** Maximum token budget for the rendered map */
maxTokens?: number
/** Files to boost in PageRank (relative paths) */
focusFiles?: string[]
/** Override the list of files to process (relative paths) */
files?: string[]
}
export interface RepoMapResult {
/** The rendered repo map string */
map: string
/** Whether the result came from cache */
cacheHit: boolean
/** Time in milliseconds to build the map */
buildTimeMs: number
/** Number of files included in the rendered map */
fileCount: number
/** Total number of files processed */
totalFileCount: number
/** Actual token count of the rendered map */
tokenCount: number
}
export interface CacheEntry {
tags: Tag[]
mtimeMs: number
size: number
}
export interface CacheData {
version: number
entries: Record<string, CacheEntry>
}
export interface CacheStats {
cacheDir: string
cacheFile: string | null
entryCount: number
exists: boolean
}
export type SupportedLanguage = 'typescript' | 'javascript' | 'python'


@@ -446,3 +446,80 @@ export async function connectRemoteControl(
export type ExitReason = {
}
// ============================================================================
// Stub re-exports — types not included in source snapshot.
//
// The upstream Anthropic SDK defines these in sub-files (sdk/coreTypes,
// sdk/runtimeTypes, sdk/controlTypes, sdk/toolTypes) that are stubbed
// in this open repo. Until the real definitions are restored, alias the
// names to `any` so callers can resolve their imports and `tsc` becomes
// actionable. See issue #473 for the typecheck-foundation effort.
// ============================================================================
/* eslint-disable @typescript-eslint/no-explicit-any */
export type AnyZodRawShape = any
export type ApiKeySource = any
export type AsyncHookJSONOutput = any
export type ConfigChangeHookInput = any
export type CwdChangedHookInput = any
export type ElicitationHookInput = any
export type ElicitationResultHookInput = any
export type FileChangedHookInput = any
export type ForkSessionOptions = any
export type ForkSessionResult = any
export type GetSessionInfoOptions = any
export type GetSessionMessagesOptions = any
export type HookEvent = any
export type HookInput = any
export type HookJSONOutput = any
export type InferShape<_T> = any
export type InstructionsLoadedHookInput = any
export type InternalOptions = any
export type InternalQuery = any
export type ListSessionsOptions = any
export type McpSdkServerConfigWithInstance = any
export type McpServerConfigForProcessTransport = any
export type McpServerStatus = any
export type ModelInfo = any
export type ModelUsage = any
export type NotificationHookInput = any
export type Options = any
export type PermissionDeniedHookInput = any
export type PermissionMode = any
export type PermissionRequestHookInput = any
export type PermissionResult = any
export type PermissionUpdate = any
export type PostCompactHookInput = any
export type PostToolUseFailureHookInput = any
export type PostToolUseHookInput = any
export type PreCompactHookInput = any
export type PreToolUseHookInput = any
export type Query = any
export type RewindFilesResult = any
export type SDKAssistantMessage = any
export type SDKAssistantMessageError = any
export type SDKCompactBoundaryMessage = any
export type SdkMcpToolDefinition = any
export type SDKPartialAssistantMessage = any
export type SDKPermissionDenial = any
export type SDKRateLimitInfo = any
export type SDKStatus = any
export type SDKStatusMessage = any
export type SDKSystemMessage = any
export type SDKToolProgressMessage = any
export type SDKUserMessageReplay = any
export type SessionEndHookInput = any
export type SessionMessage = any
export type SessionMutationOptions = any
export type SessionStartHookInput = any
export type SetupHookInput = any
export type StopFailureHookInput = any
export type StopHookInput = any
export type SubagentStartHookInput = any
export type SubagentStopHookInput = any
export type SyncHookJSONOutput = any
export type TaskCompletedHookInput = any
export type TaskCreatedHookInput = any
export type TeammateIdleHookInput = any
export type UserPromptSubmitHookInput = any

src/entrypoints/sdk.d.ts (vendored, new file)

@@ -0,0 +1,518 @@
// Type declarations for @gitlawb/openclaude SDK
// Manually maintained — keep in sync with src/entrypoints/sdk/index.ts
// Drift is caught by validate-externals.ts (runs in CI)
// ============================================================================
// Error
// ============================================================================
export class AbortError extends Error {
override readonly name: 'AbortError'
}
export class ClaudeError extends Error {
constructor(message: string)
}
export class SDKError extends ClaudeError {
constructor(message: string)
}
export class SDKAuthenticationError extends SDKError {
constructor(message?: string)
}
export class SDKBillingError extends SDKError {
constructor(message?: string)
}
export class SDKRateLimitError extends SDKError {
readonly resetsAt?: number
readonly rateLimitType?: string
constructor(message?: string, resetsAt?: number, rateLimitType?: string)
}
export class SDKInvalidRequestError extends SDKError {
constructor(message?: string)
}
export class SDKServerError extends SDKError {
constructor(message?: string)
}
export class SDKMaxOutputTokensError extends SDKError {
constructor(message?: string)
}
export type SDKAssistantMessageError =
| 'authentication_failed'
| 'billing_error'
| 'rate_limit'
| 'invalid_request'
| 'server_error'
| 'unknown'
| 'max_output_tokens'
export function sdkErrorFromType(
errorType: SDKAssistantMessageError,
message?: string,
): SDKError | ClaudeError
// ============================================================================
// Types
// ============================================================================
export type ApiKeySource = 'user' | 'project' | 'org' | 'temporary' | 'oauth' | 'none'
export type RewindFilesResult = {
canRewind: boolean
error?: string
filesChanged?: string[]
insertions?: number
deletions?: number
}
export type McpServerStatus = {
name: string
status: 'connected' | 'failed' | 'needs-auth' | 'pending' | 'disabled'
serverInfo?: { name: string; version: string }
error?: string
scope?: string
tools?: {
name: string
description?: string
annotations?: {
readOnly?: boolean
destructive?: boolean
openWorld?: boolean
}
}[]
}
export type PermissionResult = ({
behavior: 'allow'
updatedInput?: Record<string, unknown>
updatedPermissions?: (({
type: 'addRules'
rules: { toolName: string; ruleContent?: string }[]
behavior: 'allow' | 'deny' | 'ask'
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
}) | ({
type: 'replaceRules'
rules: { toolName: string; ruleContent?: string }[]
behavior: 'allow' | 'deny' | 'ask'
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
}) | ({
type: 'removeRules'
rules: { toolName: string; ruleContent?: string }[]
behavior: 'allow' | 'deny' | 'ask'
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
}) | ({
type: 'setMode'
mode: 'default' | 'acceptEdits' | 'bypassPermissions' | 'plan' | 'dontAsk'
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
}) | ({
type: 'addDirectories'
directories: string[]
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
}) | ({
type: 'removeDirectories'
directories: string[]
destination: 'userSettings' | 'projectSettings' | 'localSettings' | 'session' | 'cliArg'
}))[]
toolUseID?: string
decisionClassification?: 'user_temporary' | 'user_permanent' | 'user_reject'
}) | ({
behavior: 'deny'
message: string
interrupt?: boolean
toolUseID?: string
decisionClassification?: 'user_temporary' | 'user_permanent' | 'user_reject'
})
export type SDKSessionInfo = {
sessionId: string
summary: string
lastModified: number
fileSize?: number
customTitle?: string
firstPrompt?: string
gitBranch?: string
cwd?: string
tag?: string
createdAt?: number
}
export type ListSessionsOptions = {
dir?: string
limit?: number
offset?: number
includeWorktrees?: boolean
}
export type GetSessionInfoOptions = {
dir?: string
}
export type GetSessionMessagesOptions = {
dir?: string
limit?: number
offset?: number
includeSystemMessages?: boolean
}
export type SessionMutationOptions = {
dir?: string
}
export type ForkSessionOptions = {
dir?: string
upToMessageId?: string
title?: string
}
export type ForkSessionResult = {
sessionId: string
}
export type SessionMessage = {
role: 'user' | 'assistant' | 'system'
content: unknown
timestamp?: string
uuid?: string
parentUuid?: string | null
[key: string]: unknown
}
// Re-export precise SDK message types from generated types
// These use camelCase field names and discriminated unions for full IntelliSense
export type { SDKMessage, SDKUserMessage, SDKResultMessage } from './sdk/coreTypes.generated.js'
// ============================================================================
// Query types
// ============================================================================
export type QueryPermissionMode =
| 'default'
| 'plan'
| 'auto-accept'
| 'bypass-permissions'
| 'bypassPermissions'
| 'acceptEdits'
export type QueryOptions = {
cwd: string
additionalDirectories?: string[]
model?: string
sessionId?: string
/** Fork the session before resuming (requires sessionId). */
fork?: boolean
/** Alias for fork. When true, resumed session forks to a new session ID. */
forkSession?: boolean
/** Resume the most recent session for this cwd (no sessionId needed). */
continue?: boolean
resume?: string
/** When resuming, resume messages up to and including this message UUID. */
resumeSessionAt?: string
permissionMode?: QueryPermissionMode
abortController?: AbortController
executable?: string
allowDangerouslySkipPermissions?: boolean
disallowedTools?: string[]
hooks?: Record<string, unknown[]>
mcpServers?: Record<string, unknown>
settings?: {
env?: Record<string, string>
attribution?: { commit: string; pr: string }
}
/** Environment variables to apply during query execution. Overrides process.env. Takes precedence over settings.env. */
env?: Record<string, string | undefined>
/**
* Callback invoked before each tool use. Return `{ behavior: 'allow' }` to
* permit the call or `{ behavior: 'deny', message?: string }` to reject it.
*
* **Secure-by-default**: If neither `canUseTool` nor `onPermissionRequest`
* is provided, ALL tool uses are denied. You MUST provide at least one of
* these callbacks to allow tool execution.
*/
canUseTool?: (
name: string,
input: unknown,
options?: { toolUseID?: string },
) => Promise<{ behavior: 'allow' | 'deny'; message?: string; updatedInput?: unknown }>
/**
* Callback invoked when a tool needs permission approval. The host receives
* the request immediately and can resolve it by calling
* `query.respondToPermission(toolUseId, decision)` before the timeout.
* If omitted, tools that require permission fall through to the default
* permission logic immediately (no timeout).
*/
onPermissionRequest?: (message: SDKPermissionRequestMessage) => void
systemPrompt?:
| string
| { type: 'preset'; preset: string; append?: string }
| { type: 'custom'; content: string }
/** Agent definitions to register with the query engine. */
agents?: Record<string, {
description: string
prompt: string
tools?: string[]
disallowedTools?: string[]
model?: string
maxTurns?: number
}>
settingSources?: string[]
/** When true, yields stream_event messages for token-by-token streaming. */
includePartialMessages?: boolean
/** @internal Timeout in ms for permission request resolution. Default 30000. */
_permissionTimeoutMs?: number
stderr?: (data: string) => void
}
export interface Query {
readonly sessionId: string
[Symbol.asyncIterator](): AsyncIterator<SDKMessage>
setModel(model: string): Promise<void>
setPermissionMode(mode: QueryPermissionMode): Promise<void>
close(): void
interrupt(): void
respondToPermission(toolUseId: string, decision: PermissionResult): void
/** Check if file rewind is possible. */
rewindFiles(): RewindFilesResult
/** Actually perform the file rewind. Returns files changed and diff stats. */
rewindFilesAsync(): Promise<RewindFilesResult>
supportedCommands(): string[]
supportedModels(): string[]
supportedAgents(): string[]
mcpServerStatus(): McpServerStatus[]
accountInfo(): Promise<{ apiKeySource: ApiKeySource; [key: string]: unknown }>
setMaxThinkingTokens(tokens: number): void
}
/**
* Permission request message emitted when a tool needs permission approval.
* Hosts can respond via respondToPermission() using the request_id.
*/
export type SDKPermissionRequestMessage = {
type: 'permission_request'
request_id: string
tool_name: string
tool_use_id: string
input: Record<string, unknown>
uuid: string
session_id: string
}
export type SDKPermissionTimeoutMessage = {
type: 'permission_timeout'
tool_name: string
tool_use_id: string
timed_out_after_ms: number
uuid: string
session_id: string
}
// ============================================================================
// V2 API types
// ============================================================================
export type SDKSessionOptions = {
cwd: string
model?: string
permissionMode?: QueryPermissionMode
abortController?: AbortController
/**
* Callback invoked before each tool use. Return `{ behavior: 'allow' }` to
* permit the call or `{ behavior: 'deny', message?: string }` to reject it.
*
* **Secure-by-default**: If neither `canUseTool` nor `onPermissionRequest`
* is provided, ALL tool uses are denied. You MUST provide at least one of
* these callbacks to allow tool execution.
*/
canUseTool?: (
name: string,
input: unknown,
options?: { toolUseID?: string },
) => Promise<{ behavior: 'allow' | 'deny'; message?: string; updatedInput?: unknown }>
/** MCP server configurations for this session. */
mcpServers?: Record<string, unknown>
/**
* Callback invoked when a tool needs permission approval. The host receives
* the request immediately and can resolve it via respondToPermission().
*/
onPermissionRequest?: (message: SDKPermissionRequestMessage) => void
}
export interface SDKSession {
sessionId: string
sendMessage(content: string): AsyncIterable<SDKMessage>
getMessages(): SDKMessage[]
interrupt(): void
/** Respond to a pending permission prompt. */
respondToPermission(toolUseId: string, decision: PermissionResult): void
}
// ============================================================================
// MCP tool types
// ============================================================================
export interface SdkMcpToolDefinition<Schema = any> {
name: string
description: string
inputSchema: Schema
handler: (args: any, extra: unknown) => Promise<any>
annotations?: any
searchHint?: string
alwaysLoad?: boolean
}
// ============================================================================
// Session functions
// ============================================================================
export function listSessions(
options?: ListSessionsOptions,
): Promise<SDKSessionInfo[]>
export function getSessionInfo(
sessionId: string,
options?: GetSessionInfoOptions,
): Promise<SDKSessionInfo | undefined>
export function getSessionMessages(
sessionId: string,
options?: GetSessionMessagesOptions,
): Promise<SessionMessage[]>
export function renameSession(
sessionId: string,
title: string,
options?: SessionMutationOptions,
): Promise<void>
export function tagSession(
sessionId: string,
tag: string | null,
options?: SessionMutationOptions,
): Promise<void>
export function forkSession(
sessionId: string,
options?: ForkSessionOptions,
): Promise<ForkSessionResult>
export function deleteSession(
sessionId: string,
options?: SessionMutationOptions,
): Promise<void>
// ============================================================================
// Query functions
// ============================================================================
export function query(params: {
prompt: string | AsyncIterable<SDKUserMessage>
options?: QueryOptions
}): Query
export function queryAsync(params: {
prompt: string | AsyncIterable<SDKUserMessage>
options?: QueryOptions
}): Promise<Query>
// ============================================================================
// V2 API functions
// ============================================================================
export function unstable_v2_createSession(options: SDKSessionOptions): SDKSession
export function unstable_v2_resumeSession(
sessionId: string,
options: SDKSessionOptions,
): Promise<SDKSession>
export function unstable_v2_prompt(
message: string,
options: SDKSessionOptions,
): Promise<SDKResultMessage>
// ============================================================================
// MCP tool functions
// ============================================================================
export function tool<Schema = any>(
name: string,
description: string,
inputSchema: Schema,
handler: (args: any, extra: unknown) => Promise<any>,
extras?: {
annotations?: any
searchHint?: string
alwaysLoad?: boolean
},
): SdkMcpToolDefinition<Schema>
/**
* MCP server transport configuration types.
* Matches McpServerConfigForProcessTransport from coreTypes.generated.ts.
*/
export type SdkMcpStdioConfig = {
type?: "stdio"
command: string
args?: string[]
env?: Record<string, string>
}
export type SdkMcpSSEConfig = {
type: "sse"
url: string
headers?: Record<string, string>
}
export type SdkMcpHttpConfig = {
type: "http"
url: string
headers?: Record<string, string>
}
export type SdkMcpSdkConfig = {
type: "sdk"
name: string
}
export type SdkMcpServerConfig = SdkMcpStdioConfig | SdkMcpSSEConfig | SdkMcpHttpConfig | SdkMcpSdkConfig
/**
* Scoped MCP server config with session scope.
* Returned by createSdkMcpServer() for use with mcpServers option.
*/
export type SdkScopedMcpServerConfig = SdkMcpServerConfig & {
scope: "session"
}
/**
* Wraps an MCP server configuration for use with the SDK.
* Adds the 'session' scope marker so the SDK knows this server
* should be connected per-session (not globally).
*
* @param config - MCP server config (stdio, sse, http, or sdk type)
* @returns Scoped config with scope: 'session' added
*
* @example
* ```typescript
* const server = createSdkMcpServer({
* type: 'stdio',
* command: 'npx',
* args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
* })
* const session = unstable_v2_createSession({
* cwd: '/my/project',
* mcpServers: { 'fs': server },
* })
* ```
*/
export function createSdkMcpServer(config: SdkMcpServerConfig): SdkScopedMcpServerConfig
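The secure-by-default behavior documented on `QueryOptions.canUseTool` can be sketched as a small standalone gate. This illustrates the documented contract only, not the SDK's internal implementation; `gateToolUse` and the tool names here are invented for the example.

```typescript
// Hedged sketch of the secure-by-default rule: with no permission
// callback configured, every tool use is denied.
type ToolDecision = { behavior: 'allow' | 'deny'; message?: string }
type CanUseTool = (name: string, input: unknown) => Promise<ToolDecision>

async function gateToolUse(
  name: string,
  input: unknown,
  canUseTool?: CanUseTool,
): Promise<ToolDecision> {
  if (!canUseTool) {
    // Secure-by-default: no callback means no tool may run
    return { behavior: 'deny', message: 'No permission callback configured' }
  }
  return canUseTool(name, input)
}

// Example policy: allow read-only file access, deny everything else
const readOnly: CanUseTool = async name =>
  name === 'FileRead'
    ? { behavior: 'allow' }
    : { behavior: 'deny', message: `${name} not permitted` }
```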


@@ -0,0 +1,10 @@
/**
* Stub — control protocol types not included in source snapshot. See
* src/types/message.ts for the same scoping caveat (issue #473).
*/
/* eslint-disable @typescript-eslint/no-explicit-any */
export type SDKControlRequest = any
export type SDKControlResponse = any
export type SDKControlPermissionRequest = any
export type StdoutMessage = any


@@ -55,7 +55,7 @@ export const OutputFormatSchema = lazySchema(() =>
// ============================================================================
export const ApiKeySourceSchema = lazySchema(() =>
-z.enum(['user', 'project', 'org', 'temporary', 'oauth']),
+z.enum(['user', 'project', 'org', 'temporary', 'oauth', 'none']),
)
export const ConfigScopeSchema = lazySchema(() =>
@@ -1851,6 +1851,18 @@ export const SDKSessionInfoSchema = lazySchema(() =>
.describe('Session metadata returned by listSessions and getSessionInfo.'),
)
export const SDKPermissionRequestMessageSchema = lazySchema(() =>
z.object({
type: z.literal('permission_request'),
request_id: z.string().describe('Unique request ID for this permission prompt'),
tool_name: z.string().describe('Name of the tool requesting permission'),
tool_use_id: z.string().describe('Tool use ID for matching with respondToPermission'),
input: z.record(z.string(), z.unknown()).describe('Tool input parameters'),
uuid: UUIDPlaceholder(),
session_id: z.string(),
}),
)
export const SDKMessageSchema = lazySchema(() =>
z.union([
SDKAssistantMessageSchema(),
@@ -1877,6 +1889,7 @@ export const SDKMessageSchema = lazySchema(() =>
SDKRateLimitEventSchema(),
SDKElicitationCompleteMessageSchema(),
SDKPromptSuggestionMessageSchema(),
SDKPermissionRequestMessageSchema(),
]),
)

File diff suppressed because it is too large.

src/global.d.ts (vendored, new file)

@@ -0,0 +1,16 @@
/**
* Build-time globals replaced by the bundler at build time.
*
* `scripts/build.ts` substitutes these via Bun's `define` option, so at
* runtime the references are inlined as string literals. This declaration
* exists only to make `tsc --noEmit` aware of them — without it, every
* `MACRO.*` access fires TS2304 "Cannot find name 'MACRO'".
*/
declare const MACRO: {
VERSION: string
DISPLAY_VERSION: string
BUILD_TIME: string
ISSUES_EXPLAINER: string
PACKAGE_URL: string
NATIVE_PACKAGE_URL: string | undefined
}


@@ -79,6 +79,7 @@ import { headlessProfilerCheckpoint } from './utils/headlessProfiler.js'
import {
getDefaultMainLoopModelSetting,
getRuntimeMainLoopModel,
parseUserSpecifiedModel,
renderModelName,
} from './utils/model/model.js'
import {
@@ -624,7 +625,7 @@ async function* queryLoop(
getDefaultMainLoopModelSetting()
let currentModel = getRuntimeMainLoopModel({
permissionMode,
-mainLoopModel: appStateMainLoopModel,
+mainLoopModel: parseUserSpecifiedModel(appStateMainLoopModel),
exceeds200kTokens:
permissionMode === 'plan' &&
doesMostRecentAssistantMessageExceed200k(messagesForQuery),


@@ -1283,6 +1283,21 @@ async function* queryModel(
let messagesForAPI = normalizeMessagesForAPI(messages, filteredTools)
queryCheckpoint('query_message_normalization_end')
// Apply hybrid context strategy for optimal cache/fresh balance
if (feature('HYBRID_CONTEXT_STRATEGY')) {
const { applyHybridStrategy } = await import('../../utils/hybridContextStrategy.js')
// Cap at 200k to avoid edge case with very large context windows
const strategyResult = applyHybridStrategy(messagesForAPI, {
cacheWeight: 0.4,
freshWeight: 0.6,
maxTotalTokens: Math.min(
getContextWindowForModel(model, getSdkBetas()) - COMPACT_MAX_OUTPUT_TOKENS,
200000
),
})
messagesForAPI = strategyResult.selectedMessages
}
// Model-specific post-processing: strip tool-search-specific fields if the
// selected model doesn't support tool search.
//


@@ -48,7 +48,6 @@ import { TodoWriteTool } from './tools/TodoWriteTool/TodoWriteTool.js'
import { ExitPlanModeV2Tool } from './tools/ExitPlanModeTool/ExitPlanModeV2Tool.js'
import { TestingPermissionTool } from './tools/testing/TestingPermissionTool.js'
import { GrepTool } from './tools/GrepTool/GrepTool.js'
- import { RepoMapTool } from './tools/RepoMapTool/RepoMapTool.js'
// Lazy require to break circular dependency: tools.ts -> TeamCreateTool/TeamDeleteTool -> ... -> tools.ts
/* eslint-disable @typescript-eslint/no-require-imports */
const getTeamCreateTool = () =>
@@ -189,7 +188,6 @@ export function getAllBaseTools(): Tools {
    // trick as ripgrep). When available, find/grep in Claude's shell are aliased
    // to these fast tools, so the dedicated Glob/Grep tools are unnecessary.
    ...(hasEmbeddedSearchTools() ? [] : [GlobTool, GrepTool]),
-   RepoMapTool,
    ExitPlanModeV2Tool,
    FileReadTool,
    FileEditTool,


@@ -1,167 +0,0 @@
import { beforeAll, describe, expect, test } from 'bun:test'
import { cpSync, mkdtempSync, rmSync } from 'fs'
import { tmpdir } from 'os'
import { join } from 'path'
import { initParser } from '../../context/repoMap/parser.js'
import { invalidateCache } from '../../context/repoMap/index.js'
import { RepoMapTool } from './RepoMapTool.js'
import { getToolUseSummary } from './UI.js'
const FIXTURE_ROOT = join(
import.meta.dir,
'..',
'..',
'context',
'repoMap',
'__fixtures__',
'mini-repo',
)
const FIXTURE_FILES = [
'fileA.ts',
'fileB.ts',
'fileC.ts',
'fileD.ts',
'fileE.ts',
]
beforeAll(async () => {
await initParser()
})
describe('RepoMapTool schema', () => {
test('validates a minimal input {}', () => {
const schema = RepoMapTool.inputSchema
const result = schema.safeParse({})
expect(result.success).toBe(true)
})
test('rejects max_tokens below 256', () => {
const schema = RepoMapTool.inputSchema
const result = schema.safeParse({ max_tokens: 100 })
expect(result.success).toBe(false)
})
test('rejects max_tokens above 16384', () => {
const schema = RepoMapTool.inputSchema
const result = schema.safeParse({ max_tokens: 20000 })
expect(result.success).toBe(false)
})
test('accepts focus_files as string[]', () => {
const schema = RepoMapTool.inputSchema
const result = schema.safeParse({
focus_files: ['src/tools/', 'src/context.ts'],
})
expect(result.success).toBe(true)
})
})
describe('RepoMapTool call', () => {
test('returns a rendered map for a directory', async () => {
const tempDir = mkdtempSync(join(tmpdir(), 'repomap-tool-'))
try {
for (const f of FIXTURE_FILES) {
cpSync(join(FIXTURE_ROOT, f), join(tempDir, f))
}
// We need to call buildRepoMap directly since getCwd patching is complex
const { buildRepoMap } = await import(
'../../context/repoMap/index.js'
)
const result = await buildRepoMap({
root: tempDir,
maxTokens: 1024,
})
expect(result.map.length).toBeGreaterThan(0)
expect(result.fileCount).toBeGreaterThan(0)
expect(result.totalFileCount).toBe(5)
expect(result.tokenCount).toBeGreaterThan(0)
expect(result.tokenCount).toBeLessThanOrEqual(1024)
} finally {
rmSync(tempDir, { recursive: true, force: true })
invalidateCache(tempDir)
}
})
test('respects max_tokens parameter', async () => {
const tempDir = mkdtempSync(join(tmpdir(), 'repomap-tool-'))
try {
for (const f of FIXTURE_FILES) {
cpSync(join(FIXTURE_ROOT, f), join(tempDir, f))
}
const { buildRepoMap } = await import(
'../../context/repoMap/index.js'
)
const small = await buildRepoMap({ root: tempDir, maxTokens: 256 })
const large = await buildRepoMap({ root: tempDir, maxTokens: 4096 })
expect(small.tokenCount).toBeLessThanOrEqual(256)
// Large budget should include more or equal content
expect(large.map.length).toBeGreaterThanOrEqual(small.map.length)
} finally {
rmSync(tempDir, { recursive: true, force: true })
invalidateCache(tempDir)
}
})
test('focus_files boosts specified files in the ranking', async () => {
const tempDir = mkdtempSync(join(tmpdir(), 'repomap-tool-'))
try {
for (const f of FIXTURE_FILES) {
cpSync(join(FIXTURE_ROOT, f), join(tempDir, f))
}
const { buildRepoMap } = await import(
'../../context/repoMap/index.js'
)
// Without focus, fileE is ranked last (isolated)
const noFocus = await buildRepoMap({ root: tempDir, maxTokens: 2048 })
const lines = noFocus.map.split('\n')
const fileEPos = lines.findIndex(l => l === 'fileE.ts:')
// With focus on fileE
invalidateCache(tempDir)
const withFocus = await buildRepoMap({
root: tempDir,
maxTokens: 2048,
focusFiles: ['fileE.ts'],
})
const focusLines = withFocus.map.split('\n')
const fileEFocusPos = focusLines.findIndex(l => l === 'fileE.ts:')
// fileE should rank higher (earlier position) with focus
expect(fileEFocusPos).toBeLessThan(fileEPos)
} finally {
rmSync(tempDir, { recursive: true, force: true })
invalidateCache(tempDir)
}
})
})
describe('RepoMapTool properties', () => {
test('is marked read-only and concurrency-safe', () => {
expect(RepoMapTool.isReadOnly({})).toBe(true)
expect(RepoMapTool.isConcurrencySafe({})).toBe(true)
})
})
describe('RepoMapTool UI', () => {
test('getToolUseSummary returns descriptive string including focus', () => {
expect(getToolUseSummary(undefined)).toBe('Repository map')
expect(getToolUseSummary({})).toBe('Repository map')
expect(getToolUseSummary({ focus_files: ['src/tools/'] })).toContain(
'focus:',
)
expect(getToolUseSummary({ focus_files: ['src/tools/'] })).toContain(
'src/tools/',
)
expect(
getToolUseSummary({ focus_symbols: ['buildTool'] }),
).toContain('buildTool')
})
})


@@ -1,176 +0,0 @@
import { z } from 'zod/v4'
import { buildTool, type ToolDef } from '../../Tool.js'
import { getCwd } from '../../utils/cwd.js'
import { lazySchema } from '../../utils/lazySchema.js'
import { checkReadPermissionForTool } from '../../utils/permissions/filesystem.js'
import type { PermissionDecision } from '../../utils/permissions/PermissionResult.js'
import { buildRepoMap } from '../../context/repoMap/index.js'
import { REPO_MAP_TOOL_NAME, getDescription } from './prompt.js'
import {
getToolUseSummary,
renderToolResultMessage,
renderToolUseErrorMessage,
renderToolUseMessage,
} from './UI.js'
const inputSchema = lazySchema(() =>
z.strictObject({
max_tokens: z
.number()
.int()
.min(256)
.max(16384)
.optional()
.describe(
'Maximum token budget for the rendered map. Higher values include more files. Default: 1024.',
),
focus_files: z
.array(z.string())
.optional()
.describe(
'Relative file or directory paths to boost in the ranking (e.g. ["src/tools/", "src/context.ts"]).',
),
focus_symbols: z
.array(z.string())
.optional()
.describe(
'Symbol names to boost — files defining these symbols rank higher (e.g. ["buildTool", "ToolUseContext"]).',
),
}),
)
type InputSchema = ReturnType<typeof inputSchema>
const outputSchema = lazySchema(() =>
z.object({
rendered: z.string(),
token_count: z.number(),
file_count: z.number(),
total_file_count: z.number(),
cache_hit: z.boolean(),
build_time_ms: z.number(),
}),
)
type OutputSchema = ReturnType<typeof outputSchema>
type Output = z.infer<OutputSchema>
export const RepoMapTool = buildTool({
name: REPO_MAP_TOOL_NAME,
searchHint: 'structural map of repository files and symbols',
maxResultSizeChars: 50_000,
async description() {
return getDescription()
},
userFacingName() {
return 'Repository map'
},
getToolUseSummary,
getActivityDescription(input) {
if (input?.focus_files?.length) {
return `Building repository map (focus: ${input.focus_files.join(', ')})`
}
return 'Building repository map'
},
get inputSchema(): InputSchema {
return inputSchema()
},
get outputSchema(): OutputSchema {
return outputSchema()
},
isConcurrencySafe() {
return true
},
isReadOnly() {
return true
},
isSearchOrReadCommand() {
return { isSearch: false, isRead: true }
},
toAutoClassifierInput(input) {
const parts: string[] = ['repomap']
if (input.focus_files?.length) parts.push(`focus: ${input.focus_files.join(',')}`)
return parts.join(' ')
},
async checkPermissions(input, context): Promise<PermissionDecision> {
const appState = context.getAppState()
return checkReadPermissionForTool(
RepoMapTool,
input,
appState.toolPermissionContext,
)
},
async prompt() {
return getDescription()
},
renderToolUseMessage,
renderToolUseErrorMessage,
renderToolResultMessage,
extractSearchText({ rendered }) {
return rendered
},
mapToolResultToToolResultBlockParam(output, toolUseID) {
const summary = [
`Repository map: ${output.file_count} files ranked (${output.total_file_count} total), ${output.token_count} tokens`,
output.cache_hit ? '(cached)' : `(built in ${output.build_time_ms}ms)`,
].join(' ')
return {
tool_use_id: toolUseID,
type: 'tool_result',
content: `${summary}\n\n${output.rendered}`,
}
},
async call(
{ max_tokens = 1024, focus_files, focus_symbols },
{ abortController },
) {
const root = getCwd()
// Resolve focus_symbols to file paths by searching the tag cache
let resolvedFocusFiles = focus_files ?? []
if (focus_symbols?.length) {
// Import the symbol lookup dynamically to avoid circular deps at module load
const { getRepoFiles } = await import('../../context/repoMap/gitFiles.js')
const { extractTags } = await import('../../context/repoMap/symbolExtractor.js')
const { initParser } = await import('../../context/repoMap/parser.js')
await initParser()
const files = await getRepoFiles(root)
const symbolFiles: string[] = []
const symbolSet = new Set(focus_symbols)
// Scan files for matching symbol definitions
for (const file of files) {
if (abortController.signal.aborted) break
const tags = await extractTags(file, root)
if (tags) {
const hasMatch = tags.tags.some(
t => t.kind === 'def' && symbolSet.has(t.name),
)
if (hasMatch) {
symbolFiles.push(file)
}
}
}
resolvedFocusFiles = [...resolvedFocusFiles, ...symbolFiles]
}
const result = await buildRepoMap({
root,
maxTokens: max_tokens,
focusFiles: resolvedFocusFiles.length > 0 ? resolvedFocusFiles : undefined,
})
const output: Output = {
rendered: result.map,
token_count: result.tokenCount,
file_count: result.fileCount,
total_file_count: result.totalFileCount,
cache_hit: result.cacheHit,
build_time_ms: result.buildTimeMs,
}
return { data: output }
},
} satisfies ToolDef<InputSchema, Output>)


@@ -1,96 +0,0 @@
import type { ToolResultBlockParam } from '@anthropic-ai/sdk/resources/index.mjs'
import React from 'react'
import { FallbackToolUseErrorMessage } from '../../components/FallbackToolUseErrorMessage.js'
import { MessageResponse } from '../../components/MessageResponse.js'
import { TOOL_SUMMARY_MAX_LENGTH } from '../../constants/toolLimits.js'
import { Text } from '../../ink.js'
import type { ToolProgressData } from '../../Tool.js'
import type { ProgressMessage } from '../../types/message.js'
import { truncate } from '../../utils/format.js'
type Output = {
rendered: string
token_count: number
file_count: number
total_file_count: number
cache_hit: boolean
build_time_ms: number
}
export function getToolUseSummary(
input:
| Partial<{
max_tokens?: number
focus_files?: string[]
focus_symbols?: string[]
}>
| undefined,
): string | null {
if (!input) return 'Repository map'
const parts: string[] = []
if (input.focus_files?.length) {
parts.push(input.focus_files.join(', '))
}
if (input.focus_symbols?.length) {
parts.push(input.focus_symbols.join(', '))
}
if (parts.length > 0) {
return truncate(`Repository map (focus: ${parts.join('; ')})`, TOOL_SUMMARY_MAX_LENGTH)
}
return 'Repository map'
}
export function renderToolUseMessage(
input: Partial<{
max_tokens?: number
focus_files?: string[]
focus_symbols?: string[]
}>,
): React.ReactNode {
const parts: string[] = []
if (input.max_tokens) {
parts.push(`max_tokens: ${input.max_tokens}`)
}
if (input.focus_files?.length) {
parts.push(`focus: ${input.focus_files.join(', ')}`)
}
if (input.focus_symbols?.length) {
parts.push(`symbols: ${input.focus_symbols.join(', ')}`)
}
return parts.length > 0 ? parts.join(', ') : null
}
export function renderToolResultMessage(
output: Output,
_progressMessages: ProgressMessage<ToolProgressData>[],
{ verbose }: { verbose: boolean },
): React.ReactNode {
const summary = `${output.file_count} files ranked, ${output.token_count} tokens${output.cache_hit ? ' (cached)' : `, ${output.build_time_ms}ms`}`
if (verbose) {
return (
<MessageResponse>
<Text>
Built repository map: {summary}
{'\n'}
({output.total_file_count} total files considered)
</Text>
</MessageResponse>
)
}
return (
<MessageResponse height={1}>
<Text>
Built repository map: {summary}
</Text>
</MessageResponse>
)
}
export function renderToolUseErrorMessage(
result: ToolResultBlockParam['content'],
{ verbose }: { verbose: boolean },
): React.ReactNode {
return <FallbackToolUseErrorMessage result={result} verbose={verbose} />
}


@@ -1,31 +0,0 @@
export const REPO_MAP_TOOL_NAME = 'RepoMap'
export function getDescription(): string {
return `Build a structural map of the repository showing ranked files and their key signatures (functions, classes, types, interfaces).
## When to use
- At the start of a session on an unfamiliar repository to understand the codebase architecture
- Before cross-file refactors to identify which files are structurally connected
- When searching for where a concept or feature lives across the codebase
- When the user asks "how is this repo organized" or "what are the important files"
## When NOT to use
- To read the contents of a specific file — use Read instead
- To search for exact text or patterns — use Grep instead
- To find files by name or glob pattern — use Glob instead
- When you already know which files to examine
## How it works
The tool parses every supported source file (TypeScript, JavaScript, Python) using tree-sitter, extracts symbol definitions and references, builds a cross-file reference graph weighted by symbol importance (IDF), and ranks files using PageRank. The output is a token-budgeted summary showing the highest-ranked files with their key signatures (function/class/type declarations).
## Parameters
- **max_tokens**: Controls how many files fit in the output. Use 1024 for a quick overview, 4096+ for comprehensive maps. Default: 1024.
- **focus_files**: Pass relative paths (e.g. \`["src/tools/"]\`) to boost specific files and their neighbors in the ranking. Use when the user mentions specific directories or files.
- **focus_symbols**: Pass symbol names (e.g. \`["buildTool", "ToolUseContext"]\`) to boost files that define those symbols. Use when the user asks about specific functions or types.
## Important notes
- The map shows **signatures only**, not function bodies. Use Read to see implementations.
- Results are **auto-cached** on disk — repeat calls with the same parameters return instantly.
- Files are ranked by structural importance: files imported by many others rank highest.
`
}
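The ranking pipeline the description above names (cross-file reference graph + PageRank) can be sketched as follows. This is a minimal standalone illustration, not the mirrored implementation in `src/context/repoMap/`; the graph shape, damping factor, and iteration count are assumptions, and rank from dangling files is simply not redistributed.

```typescript
// Edges point from a referencing file to the files it references,
// so heavily-imported files accumulate rank.
type Graph = Map<string, string[]>

function pageRank(graph: Graph, iterations = 20, damping = 0.85): Map<string, number> {
  const nodes = [...graph.keys()]
  const n = nodes.length
  let rank = new Map(nodes.map(f => [f, 1 / n]))
  for (let i = 0; i < iterations; i++) {
    // Every node starts each round with the "random jump" share.
    const next = new Map(nodes.map(f => [f, (1 - damping) / n]))
    for (const [file, refs] of graph) {
      if (refs.length === 0) continue // dangling: rank not redistributed (sketch simplification)
      const share = (damping * (rank.get(file) ?? 0)) / refs.length
      for (const target of refs) {
        if (next.has(target)) next.set(target, (next.get(target) ?? 0) + share)
      }
    }
    rank = next
  }
  return rank
}

// fileA and fileB both reference fileC, so fileC should rank highest.
const graph: Graph = new Map([
  ['fileA.ts', ['fileC.ts']],
  ['fileB.ts', ['fileC.ts']],
  ['fileC.ts', []],
])
const ranks = pageRank(graph)
```

A token-budgeted renderer would then emit signatures for files in descending rank order until `max_tokens` is exhausted.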

src/types/message.ts Normal file

@@ -0,0 +1,25 @@
/**
* Stub — message type definitions not included in source snapshot.
*
* The upstream Anthropic source defines a rich Message discriminated union
* with structured Content blocks, role tags, tool_use payloads, and so on.
* That file is not mirrored to this open snapshot. This stub exists so
* `tsc --noEmit` can resolve `import { Message, ... } from 'src/types/message'`
* across the ~21 callers without fixing every transitive type the call
* sites use.
*
* Once the real definitions are restored upstream-side or reconstructed
* from runtime usage, replace these `any` aliases with proper types and
* delete this comment. See issue #473 for the typecheck-foundation effort.
*/
/* eslint-disable @typescript-eslint/no-explicit-any */
export type Message = any
export type AssistantMessage = any
export type UserMessage = any
export type SystemMessage = any
export type SystemAPIErrorMessage = any
export type AttachmentMessage = any
export type ProgressMessage = any
export type HookResultMessage = any
export type NormalizedUserMessage = any

src/types/tools.ts Normal file

@@ -0,0 +1,7 @@
/**
* Stub — tool type definitions not included in source snapshot. See
* src/types/message.ts for the same scoping caveat (issue #473).
*/
/* eslint-disable @typescript-eslint/no-explicit-any */
export type ShellProgress = any

src/types/utils.ts Normal file

@@ -0,0 +1,15 @@
/**
* Stub — utility type definitions not included in source snapshot. See
* src/types/message.ts for the same scoping caveat (issue #473).
*/
/* eslint-disable @typescript-eslint/no-explicit-any */
export type DeepImmutable<T> = T extends any[]
? readonly DeepImmutable<T[number]>[]
: T extends object
? { readonly [K in keyof T]: DeepImmutable<T[K]> }
: T
export type Permutations<T extends string, U extends string = T> = T extends T
? T | `${T}${Permutations<Exclude<U, T>>}`
: never


@@ -0,0 +1,104 @@
import { describe, expect, it } from 'bun:test'
import {
analyzeConversationPatterns,
predictContextNeeds,
preloadContext,
createPreloadStrategy,
} from './contextPreload.js'
function createMessage(role: string, content: string, createdAt: number = Date.now()): any {
return {
message: { role, content, id: 'test', type: 'message', created_at: createdAt },
sender: role,
}
}
describe('contextPreload', () => {
describe('analyzeConversationPatterns', () => {
it('extracts patterns from messages', () => {
const messages = [
createMessage('user', 'Fix the error in my code', 1000),
createMessage('assistant', 'I found the bug', 2000),
]
const patterns = analyzeConversationPatterns(messages)
expect(patterns.length).toBeGreaterThanOrEqual(0)
})
it('detects debug patterns', () => {
const messages = [
createMessage('user', 'Debug this error please', 1000),
createMessage('assistant', 'Found it', 2000),
]
const patterns = analyzeConversationPatterns(messages)
expect(patterns.some(p => p.userQuery === 'debug')).toBe(true)
})
it('detects code patterns', () => {
const messages = [
createMessage('user', 'Write a function for me', 1000),
createMessage('assistant', 'Here is the code', 2000),
]
const patterns = analyzeConversationPatterns(messages)
expect(patterns.some(p => p.userQuery === 'code')).toBe(true)
})
})
describe('predictContextNeeds', () => {
it('predicts context needs based on query', () => {
const patterns = [{ userQuery: 'debug', neededContext: ['error_history'], frequency: 1 }]
const prediction = predictContextNeeds('Fix the bug', patterns, {
maxPreloadTokens: 10000,
confidenceThreshold: 0.3,
})
expect(prediction.confidence).toBeGreaterThan(0)
expect(prediction.predictedNeed.length).toBeGreaterThan(0)
})
it('returns non-empty predictedNeed when pattern matches', () => {
const patterns = [
{ userQuery: 'debug', neededContext: ['error_history', 'stack_trace'], frequency: 2 },
]
const prediction = predictContextNeeds('debug this error', patterns, {
maxPreloadTokens: 10000,
confidenceThreshold: 0.1,
})
expect(prediction.predictedNeed).toContain('error_history')
})
})
describe('preloadContext', () => {
it('preloads relevant context', () => {
const messages = [
createMessage('system', 'System prompt'),
createMessage('user', 'Debug error'),
createMessage('assistant', 'Fixed'),
]
const prediction = { predictedNeed: ['error'], confidence: 0.8, suggestedMessages: [] }
const result = preloadContext(messages, prediction, { maxPreloadTokens: 5000 })
expect(result.length).toBeGreaterThan(0)
})
})
describe('createPreloadStrategy', () => {
it('creates strategy with all methods', () => {
const strategy = createPreloadStrategy({ maxPreloadTokens: 10000 })
expect(strategy.analyze).toBeDefined()
expect(strategy.predict).toBeDefined()
expect(strategy.preload).toBeDefined()
})
})
})

src/utils/contextPreload.ts Normal file

@@ -0,0 +1,145 @@
/**
* Context Pre-loading - Production Grade
*
 * Proactively loads relevant context before it is needed, predicting likely
 * needs from recent conversation patterns.
*/
import { roughTokenCountEstimation } from '../services/tokenEstimation.js'
import type { Message } from '../types/message.js'
export interface PreloadConfig {
maxPreloadTokens: number
predictionWindow?: number
confidenceThreshold?: number
}
export interface PreloadPrediction {
predictedNeed: string[]
confidence: number
suggestedMessages: Message[]
}
export interface ConversationPattern {
userQuery: string
neededContext: string[]
frequency: number
}
const PATTERN_KEYWORDS: Record<string, string[]> = {
'code': ['code', 'function', 'implement', 'write'],
'debug': ['error', 'bug', 'fix', 'issue', 'debug'],
'refactor': ['refactor', 'improve', 'clean', 'optimize'],
'test': ['test', 'spec', 'coverage', 'verify'],
'explain': ['explain', 'what', 'how', 'why', 'describe'],
'search': ['find', 'search', 'look', 'grep', 'glob'],
}
export function analyzeConversationPatterns(messages: Message[]): ConversationPattern[] {
const patterns: ConversationPattern[] = []
const recentMessages = messages.slice(-10)
for (let i = 0; i < recentMessages.length - 1; i++) {
const userMsg = recentMessages[i]
const assistantMsg = recentMessages[i + 1]
const userContent = typeof userMsg.message?.content === 'string' ? userMsg.message.content : ''
const assistantContent = typeof assistantMsg.message?.content === 'string' ? assistantMsg.message.content : ''
for (const [category, keywords] of Object.entries(PATTERN_KEYWORDS)) {
if (keywords.some(k => userContent.toLowerCase().includes(k))) {
patterns.push({
userQuery: category,
neededContext: extractContextNeeds(assistantContent),
frequency: 1,
})
}
}
}
return patterns
}
function extractContextNeeds(content: string): string[] {
const needs: string[] = []
if (content.includes('file')) needs.push('file_context')
if (content.includes('function')) needs.push('function_defs')
if (content.includes('error')) needs.push('error_history')
if (content.includes('test')) needs.push('test_files')
return needs
}
export function predictContextNeeds(
currentQuery: string,
patterns: ConversationPattern[],
config: PreloadConfig,
): PreloadPrediction {
const threshold = config.confidenceThreshold ?? 0.5
let matchedCategory = ''
let highestConfidence = 0
for (const [category, keywords] of Object.entries(PATTERN_KEYWORDS)) {
const matches = keywords.filter(k => currentQuery.toLowerCase().includes(k)).length
const confidence = matches / keywords.length
if (confidence > highestConfidence && confidence >= threshold) {
highestConfidence = confidence
matchedCategory = category
}
}
const relevantPatterns = patterns.filter(p => p.userQuery === matchedCategory)
const allNeeds = relevantPatterns.flatMap(p => p.neededContext)
return {
predictedNeed: [...new Set(allNeeds)],
confidence: highestConfidence,
suggestedMessages: [],
}
}
export function preloadContext(
availableContext: Message[],
prediction: PreloadPrediction,
config: PreloadConfig,
): Message[] {
const targetTokens = config.maxPreloadTokens ?? 30000
const selected: Message[] = []
let usedTokens = 0
const priorityTypes = prediction.predictedNeed
const sorted = [...availableContext].sort((a, b) => {
const aContent = typeof a.message?.content === 'string' ? a.message.content : ''
const bContent = typeof b.message?.content === 'string' ? b.message.content : ''
const aPriority = priorityTypes.some(t => aContent.includes(t)) ? 1 : 0
const bPriority = priorityTypes.some(t => bContent.includes(t)) ? 1 : 0
if (bPriority !== aPriority) return bPriority - aPriority
return (b.message?.created_at ?? 0) - (a.message?.created_at ?? 0)
})
for (const msg of sorted) {
const tokens = roughTokenCountEstimation(
typeof msg.message?.content === 'string' ? msg.message.content : ''
)
if (usedTokens + tokens > targetTokens) break
selected.push(msg)
usedTokens += tokens
}
return selected
}
export function createPreloadStrategy(config: PreloadConfig) {
return {
analyze: analyzeConversationPatterns,
predict: (query: string, patterns: ConversationPattern[]) =>
predictContextNeeds(query, patterns, config),
preload: (context: Message[], prediction: PreloadPrediction) =>
preloadContext(context, prediction, config),
}
}
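The confidence heuristic inside `predictContextNeeds` above is keyword-hit density: the fraction of a category's keywords that appear in the query. A compressed standalone restatement of just that step (simplified keyword table, same math):

```typescript
// Confidence for a category = matched keywords / total keywords,
// mirroring the loop in predictContextNeeds above.
const KEYWORDS: Record<string, string[]> = {
  debug: ['error', 'bug', 'fix', 'issue', 'debug'],
  test: ['test', 'spec', 'coverage', 'verify'],
}

function classify(query: string): { category: string; confidence: number } {
  const q = query.toLowerCase()
  let best = { category: '', confidence: 0 }
  for (const [category, keywords] of Object.entries(KEYWORDS)) {
    const confidence = keywords.filter(k => q.includes(k)).length / keywords.length
    if (confidence > best.confidence) best = { category, confidence }
  }
  return best
}

// 'fix', 'bug', and 'error' hit 3 of the 5 debug keywords -> confidence 0.6
const r = classify('fix this bug and the error')
```

Note that because confidence is normalized by category size, a category with many keywords needs proportionally more hits to win, which is why the tests above use generous thresholds like 0.1.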


@@ -201,6 +201,95 @@ export type AxiosErrorKind =
  | 'http' // other axios error (may have status)
  | 'other' // not an axios error
// ============================================================================
// SDK-specific error classes
// ============================================================================
/**
* Base class for all SDK errors. Extends ClaudeError so that existing
* `catch (e) { if (e instanceof ClaudeError) … }` checks still work,
* while giving SDK consumers a more specific base to match against.
*/
export class SDKError extends ClaudeError {
constructor(message: string) {
super(message)
this.name = 'SDKError'
}
}
export class SDKAuthenticationError extends SDKError {
constructor(message?: string) {
super(message ?? 'Authentication failed')
this.name = 'SDKAuthenticationError'
}
}
export class SDKBillingError extends SDKError {
constructor(message?: string) {
super(message ?? 'Billing error - check subscription')
this.name = 'SDKBillingError'
}
}
export class SDKRateLimitError extends SDKError {
constructor(
message?: string,
public readonly resetsAt?: number,
public readonly rateLimitType?: string,
) {
super(message ?? 'Rate limit exceeded')
this.name = 'SDKRateLimitError'
}
}
export class SDKInvalidRequestError extends SDKError {
constructor(message?: string) {
super(message ?? 'Invalid request')
this.name = 'SDKInvalidRequestError'
}
}
export class SDKServerError extends SDKError {
constructor(message?: string) {
super(message ?? 'Server error')
this.name = 'SDKServerError'
}
}
export class SDKMaxOutputTokensError extends SDKError {
constructor(message?: string) {
super(message ?? 'Max output tokens reached')
this.name = 'SDKMaxOutputTokensError'
}
}
export type SDKAssistantMessageError =
| 'authentication_failed'
| 'billing_error'
| 'rate_limit'
| 'invalid_request'
| 'server_error'
| 'unknown'
| 'max_output_tokens'
/**
* Convert an SDKAssistantMessageError type string to the proper Error class.
*/
export function sdkErrorFromType(
errorType: SDKAssistantMessageError,
message?: string,
): SDKError | ClaudeError {
switch (errorType) {
case 'authentication_failed': return new SDKAuthenticationError(message)
case 'billing_error': return new SDKBillingError(message)
case 'rate_limit': return new SDKRateLimitError(message)
case 'invalid_request': return new SDKInvalidRequestError(message)
case 'server_error': return new SDKServerError(message)
case 'max_output_tokens': return new SDKMaxOutputTokensError(message)
default: return new ClaudeError(message ?? 'Unknown error')
}
}
/**
 * Classify a caught error from an axios request into one of a few buckets.
 * Replaces the ~20-line isAxiosError → 401/403 → ECONNABORTED → ECONNREFUSED

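Call sites can narrow on the class hierarchy with plain `instanceof` checks, most specific first. The sketch below re-declares minimal versions of the classes so it runs standalone; the retry policy shown is illustrative, not the CLI's actual behavior.

```typescript
// Minimal local re-declarations of the hierarchy, mirroring the diff above.
class ClaudeError extends Error {}
class SDKError extends ClaudeError {}
class SDKRateLimitError extends SDKError {
  constructor(message?: string, public readonly resetsAt?: number) {
    super(message ?? 'Rate limit exceeded')
  }
}
class SDKServerError extends SDKError {
  constructor(message?: string) {
    super(message ?? 'Server error')
  }
}

// Subclasses match first, then the SDK base; because SDKError extends
// ClaudeError, pre-existing `if (e instanceof ClaudeError)` checks still fire.
function isRetryable(e: unknown): boolean {
  if (e instanceof SDKRateLimitError) return true // wait until resetsAt, then retry
  if (e instanceof SDKServerError) return true    // transient 5xx
  if (e instanceof SDKError) return false         // auth / billing / invalid request
  return false
}
```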

@@ -2,7 +2,7 @@ import type { UUID } from 'crypto'
import { logEvent } from 'src/services/analytics/index.js'
import type { AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS } from 'src/services/analytics/metadata.js'
import { type Command, getCommandName, isCommandEnabled } from '../commands.js'
- import { selectableUserMessagesFilter } from '../components/MessageSelector.js'
+ import { selectableUserMessagesFilter } from './messageFilters.js'
import type { SpinnerMode } from '../components/Spinner/types.js'
import type { QuerySource } from '../constants/querySource.js'
import { expandPastedTextRefs, parseReferences } from '../history.js'


@@ -0,0 +1,230 @@
import { describe, expect, it } from 'bun:test'
import {
splitContext,
applyHybridStrategy,
optimizeForCost,
optimizeForAccuracy,
getHybridStats,
} from './hybridContextStrategy.js'
function createMessage(role: string, content: string, createdAt: number = Date.now()): any {
return {
message: { role, content, id: 'test', type: 'message', created_at: createdAt },
sender: role,
}
}
describe('hybridContextStrategy', () => {
describe('splitContext', () => {
it('splits context into cached and fresh', () => {
const messages = [
createMessage('system', 'System prompt', Date.now() - 86400000),
createMessage('user', 'Hello'),
createMessage('assistant', 'Hi there'),
]
const split = splitContext(messages, {
cacheWeight: 0.4,
freshWeight: 0.6,
maxTotalTokens: 10000,
})
expect(split.cachedTokens).toBeGreaterThanOrEqual(0)
expect(split.freshTokens).toBeGreaterThanOrEqual(0)
expect(split.totalTokens).toBeGreaterThan(0)
})
it('respects weight configuration', () => {
const messages = [
createMessage('system', 'Old system', Date.now() - 86400000),
createMessage('user', 'Recent message', Date.now()),
]
const split = splitContext(messages, {
cacheWeight: 0.5,
freshWeight: 0.5,
maxTotalTokens: 10000,
})
expect(split.cached).toBeDefined()
expect(split.fresh).toBeDefined()
})
})
describe('applyHybridStrategy', () => {
it('applies strategy and returns messages', () => {
const messages = [
createMessage('user', 'Message 1'),
createMessage('assistant', 'Response 1'),
]
const result = applyHybridStrategy(messages, {
cacheWeight: 0.5,
freshWeight: 0.5,
maxTotalTokens: 10000,
})
expect(result.selectedMessages.length).toBeGreaterThan(0)
expect(['cache_heavy', 'fresh_heavy', 'balanced']).toContain(result.strategy)
})
it('calculates estimated cost', () => {
const messages = [
createMessage('user', 'Test message'),
]
const result = applyHybridStrategy(messages, {
cacheWeight: 0.5,
freshWeight: 0.5,
maxTotalTokens: 10000,
})
expect(result.estimatedCost).toBeGreaterThanOrEqual(0)
})
})
describe('optimizeForCost', () => {
it('returns messages within budget', () => {
const messages = [
createMessage('user', 'Message 1'),
createMessage('assistant', 'Response 1'),
]
const result = optimizeForCost(messages, 0.001)
expect(result.length).toBeGreaterThanOrEqual(0)
})
})
describe('optimizeForAccuracy', () => {
it('optimizes for accuracy with token limit', () => {
const messages = [
createMessage('user', 'Message 1'),
createMessage('assistant', 'Response 1'),
]
const result = optimizeForAccuracy(messages, 5000)
expect(result.length).toBeGreaterThan(0)
})
})
describe('getHybridStats', () => {
it('returns statistics', () => {
const messages = [
createMessage('system', 'System', Date.now() - 86400000),
createMessage('user', 'Hello'),
]
const split = splitContext(messages, { cacheWeight: 0.5, freshWeight: 0.5, maxTotalTokens: 10000 })
const stats = getHybridStats(split)
expect(stats.cacheRatio).toBeGreaterThanOrEqual(0)
expect(stats.freshRatio).toBeGreaterThanOrEqual(0)
expect(stats.totalTokens).toBeGreaterThan(0)
})
})
describe('tool_use/tool_result pairing', () => {
it('preserves tool_use and tool_result together', () => {
const toolUseId = 'tool-use-123'
const messages = [
{
type: 'assistant',
uuid: 'uuid-1',
message: {
role: 'assistant',
content: [{ type: 'tool_use', id: toolUseId, name: 'Read' }],
id: 'msg-1',
created_at: 1000,
},
},
{
type: 'user',
uuid: 'uuid-2',
message: {
role: 'user',
content: [{ type: 'tool_result', tool_use_id: toolUseId, content: 'file content' }],
id: 'msg-2',
created_at: 2000,
},
},
{
type: 'assistant',
uuid: 'uuid-3',
message: {
role: 'assistant',
content: 'Response after tool',
id: 'msg-3',
created_at: 3000,
},
},
] as any[]
const result = applyHybridStrategy(messages, {
cacheWeight: 0.5,
freshWeight: 0.5,
maxTotalTokens: 10000,
})
const hasToolUse = result.selectedMessages.some(
m => Array.isArray(m.message?.content) && m.message.content.some((b: any) => b.type === 'tool_use')
)
const hasToolResult = result.selectedMessages.some(
m => Array.isArray(m.message?.content) && m.message.content.some((b: any) => b.type === 'tool_result')
)
expect(hasToolUse).toBe(true)
expect(hasToolResult).toBe(true)
})
it('accounts for large tool_use input in token counting', () => {
const largeInput = 'x'.repeat(5000)
const messages = [
{
type: 'assistant',
message: {
role: 'assistant',
content: [
{ type: 'tool_use', id: 'tu1', name: 'Edit', input: { path: 'test.js', content: largeInput } },
],
created_at: 1000,
},
},
] as any[]
const result = applyHybridStrategy(messages, {
cacheWeight: 0.5,
freshWeight: 0.5,
maxTotalTokens: 20000,
})
expect(result.totalTokens).toBeGreaterThan(1000)
})
it('accounts for large thinking blocks in token counting', () => {
const longThinking = 'Thinking '.repeat(1000)
const messages = [
{
type: 'assistant',
message: {
role: 'assistant',
content: [
{ type: 'thinking', thinking: longThinking },
{ type: 'text', text: 'Final response' },
],
created_at: 1000,
},
},
] as any[]
const result = applyHybridStrategy(messages, {
cacheWeight: 0.5,
freshWeight: 0.5,
maxTotalTokens: 20000,
})
expect(result.totalTokens).toBeGreaterThan(500)
})
})
})

View File

@@ -0,0 +1,306 @@
/**
* Hybrid Context Strategy - Production Grade
*
* Combines cached + new tokens intelligently.
* Optimizes for cost vs accuracy.
*/
import { roughTokenCountEstimation } from '../services/tokenEstimation.js'
import type { Message } from '../types/message.js'
export interface HybridConfig {
cacheWeight: number
freshWeight: number
maxTotalTokens: number
costThreshold?: number
}
export interface ContextSplit {
cached: Message[]
fresh: Message[]
cachedTokens: number
freshTokens: number
totalTokens: number
}
export interface HybridStrategyResult {
selectedMessages: Message[]
totalTokens: number
strategy: 'cache_heavy' | 'fresh_heavy' | 'balanced'
estimatedCost: number
}
const DEFAULT_CONFIG: Required<HybridConfig> = {
cacheWeight: 0.4,
freshWeight: 0.6,
maxTotalTokens: 100000,
costThreshold: 0.01,
}
// Keep enough for: tool_use -> tool_result -> assistant -> user -> next
const MIN_TAILMessages = 5
function getMessageChain(
messages: Message[],
): { chains: Message[][]; orphans: Message[] } {
const toolUseIds = new Set<string>()
const toolUseMessages = new Map<string, Message[]>()
const allMessagesByUuid = new Map<string, Message[]>()
for (const msg of messages) {
const uuid = msg.uuid ?? ''
if (uuid) {
const existing = allMessagesByUuid.get(uuid) ?? []
existing.push(msg)
allMessagesByUuid.set(uuid, existing)
}
const content = msg.message?.content
if (Array.isArray(content)) {
for (const block of content) {
if (block?.type === 'tool_use' && block?.id) {
toolUseIds.add(block.id)
const existing = toolUseMessages.get(block.id) ?? []
existing.push(msg)
toolUseMessages.set(block.id, existing)
}
}
}
}
const chains: Message[][] = []
const orphans: Message[] = []
for (const [toolUseId, msgs] of toolUseMessages) {
const chainMessages: Message[] = [...msgs]
for (const msg of messages) {
const content = msg.message?.content
if (Array.isArray(content)) {
for (const block of content) {
if (block?.type === 'tool_result' && block?.tool_use_id === toolUseId) {
chainMessages.push(msg)
}
}
}
}
chains.push(chainMessages)
}
const chainMessageUuids = new Set<string>()
for (const chain of chains) {
for (const msg of chain) {
if (msg.uuid) chainMessageUuids.add(msg.uuid)
}
}
for (const [uuid, msgs] of allMessagesByUuid) {
if (!chainMessageUuids.has(uuid)) {
orphans.push(...msgs)
}
}
return { chains, orphans }
}
function getCacheAge(message: Message): number {
const created = message.message?.created_at ?? 0
if (created === 0) return 1000
return (Date.now() - created) / (1000 * 60 * 60)
}
function getMessageTokenCount(message: Message): number {
const content = message.message?.content
if (typeof content === 'string') {
return roughTokenCountEstimation(content)
}
if (Array.isArray(content)) {
let tokens = 0
for (const block of content) {
if (typeof block !== 'object' || block === null) continue
const b = block as Record<string, unknown>
if (b.type === 'text' && typeof b.text === 'string') {
tokens += roughTokenCountEstimation(b.text)
} else if (b.type === 'tool_use') {
const inputSize = JSON.stringify(b.input ?? {}).length
tokens += Math.ceil(inputSize / 4) + 20
} else if (b.type === 'tool_result') {
if (typeof b.content === 'string') {
tokens += roughTokenCountEstimation(b.content)
} else if (Array.isArray(b.content)) {
for (const rc of b.content) {
if (typeof rc === 'object' && rc !== null && 'text' in rc) {
tokens += roughTokenCountEstimation((rc as { text: string }).text)
}
}
} else {
tokens += 50
}
if (b.is_error === true) tokens += 10
} else if (b.type === 'thinking' && typeof b.thinking === 'string') {
tokens += roughTokenCountEstimation(b.thinking)
}
}
return tokens
}
return 0
}
function calculateCacheValue(message: Message): number {
const content = typeof message.message?.content === 'string' ? message.message.content : ''
const age = getCacheAge(message)
let value = 0.5
if (content.includes('error') || content.includes('fail')) value += 0.3
if (content.includes('function') || content.includes('class')) value += 0.2
if (content.includes('important') || content.includes('key')) value += 0.15
if (age < 1) value += 0.2
else if (age < 6) value += 0.1
else value -= 0.2
if (message.message?.role === 'system') value += 0.1
return Math.max(0, Math.min(1, value))
}
export function splitContext(
messages: Message[],
config: HybridConfig,
): ContextSplit {
const cfg = { ...DEFAULT_CONFIG, ...config }
const sorted = [...messages].sort((a, b) => {
const aValue = calculateCacheValue(a)
const bValue = calculateCacheValue(b)
return bValue - aValue
})
const cached: Message[] = []
const fresh: Message[] = []
let cachedTokens = 0
let freshTokens = 0
const cacheTarget = Math.floor(cfg.maxTotalTokens * cfg.cacheWeight)
const freshTarget = Math.floor(cfg.maxTotalTokens * cfg.freshWeight)
for (const msg of sorted) {
const tokens = getMessageTokenCount(msg)
const age = getCacheAge(msg)
if (age > 24 && cachedTokens < cacheTarget) {
if (cachedTokens + tokens <= cacheTarget) {
cached.push(msg)
cachedTokens += tokens
continue
}
}
if (freshTokens + tokens <= freshTarget) {
fresh.push(msg)
freshTokens += tokens
}
}
return {
cached,
fresh,
cachedTokens,
freshTokens,
totalTokens: cachedTokens + freshTokens,
}
}
export function applyHybridStrategy(
messages: Message[],
config: HybridConfig,
): HybridStrategyResult {
const cfg = { ...DEFAULT_CONFIG, ...config }
// Preserve message chains (tool_use/tool_result pairs)
const { chains, orphans } = getMessageChain(messages)
// Always preserve the conversation tail (last N messages)
const tailMessages = messages.slice(-MIN_TAILMessages)
const coreMessages = messages.slice(0, -MIN_TAILMessages)
const split = splitContext(coreMessages, cfg)
let strategy: HybridStrategyResult['strategy'] = 'balanced'
if (split.cachedTokens > split.freshTokens * 1.5) {
strategy = 'cache_heavy'
} else if (split.freshTokens > split.cachedTokens * 1.5) {
strategy = 'fresh_heavy'
}
const allSelected = [
...chains.flat(),
...split.cached,
...split.fresh,
...tailMessages
]
const seenUuids = new Set<string>()
const selectedMessages: Message[] = []
for (const msg of allSelected) {
const uuid = msg.uuid ?? msg.message?.id ?? ''
if (!seenUuids.has(uuid)) {
seenUuids.add(uuid)
selectedMessages.push(msg)
}
}
selectedMessages.sort(
(a, b) => (a.message?.created_at ?? 0) - (b.message?.created_at ?? 0)
)
let totalTokens = 0
for (const msg of selectedMessages) {
totalTokens += getMessageTokenCount(msg)
}
const estimatedCost = totalTokens * 0.000001 * 0.5
return {
selectedMessages,
totalTokens,
strategy,
estimatedCost,
}
}
export function optimizeForCost(messages: Message[], budget: number): Message[] {
const result = applyHybridStrategy(messages, {
cacheWeight: 0.7,
freshWeight: 0.3,
maxTotalTokens: Math.floor(budget * 1000),
costThreshold: budget,
})
return result.selectedMessages
}
export function optimizeForAccuracy(messages: Message[], maxTokens: number): Message[] {
const result = applyHybridStrategy(messages, {
cacheWeight: 0.3,
freshWeight: 0.7,
maxTotalTokens: maxTokens,
})
return result.selectedMessages
}
export function getHybridStats(split: ContextSplit) {
const cacheRatio = split.totalTokens > 0 ? split.cachedTokens / split.totalTokens : 0
const freshRatio = split.totalTokens > 0 ? split.freshTokens / split.totalTokens : 0
return {
cacheRatio: Math.round(cacheRatio * 100),
freshRatio: Math.round(freshRatio * 100),
totalTokens: split.totalTokens,
messageCount: split.cached.length + split.fresh.length,
efficiency: split.totalTokens / (split.cachedTokens + split.freshTokens + 1),
}
}

View File

@@ -0,0 +1,81 @@
import type { ContentBlockParam, TextBlockParam } from '@anthropic-ai/sdk/resources/index.mjs'
import type { Message, UserMessage } from '../types/message.js'
import {
BASH_STDERR_TAG,
BASH_STDOUT_TAG,
LOCAL_COMMAND_STDERR_TAG,
LOCAL_COMMAND_STDOUT_TAG,
TASK_NOTIFICATION_TAG,
TEAMMATE_MESSAGE_TAG,
TICK_TAG,
} from '../constants/xml.js'
import { isSyntheticMessage, isToolUseResultMessage } from './messages.js'
function isTextBlock(block: ContentBlockParam): block is TextBlockParam {
return block.type === 'text'
}
export function selectableUserMessagesFilter(message: Message): message is UserMessage {
if (message.type !== 'user') {
return false
}
if (Array.isArray(message.message.content) && message.message.content[0]?.type === 'tool_result') {
return false
}
if (isSyntheticMessage(message)) {
return false
}
if (message.isMeta) {
return false
}
if (message.isCompactSummary || message.isVisibleInTranscriptOnly) {
return false
}
const content = message.message.content
const lastBlock = typeof content === 'string' ? null : content[content.length - 1]
const messageText = typeof content === 'string' ? content.trim() : lastBlock && isTextBlock(lastBlock) ? lastBlock.text.trim() : ''
// Filter out non-user-authored messages (command outputs, task notifications, ticks).
if (messageText.indexOf(`<${LOCAL_COMMAND_STDOUT_TAG}>`) !== -1 || messageText.indexOf(`<${LOCAL_COMMAND_STDERR_TAG}>`) !== -1 || messageText.indexOf(`<${BASH_STDOUT_TAG}>`) !== -1 || messageText.indexOf(`<${BASH_STDERR_TAG}>`) !== -1 || messageText.indexOf(`<${TASK_NOTIFICATION_TAG}>`) !== -1 || messageText.indexOf(`<${TICK_TAG}>`) !== -1 || messageText.indexOf(`<${TEAMMATE_MESSAGE_TAG}`) !== -1) {
return false
}
return true
}
/**
* Checks if all messages after the given index are synthetic (interruptions, cancels, etc.)
* or non-meaningful content. Returns true if there's nothing meaningful to confirm -
* for example, if the user hit enter then immediately cancelled.
*/
export function messagesAfterAreOnlySynthetic(messages: Message[], fromIndex: number): boolean {
for (let i = fromIndex + 1; i < messages.length; i++) {
const msg = messages[i]
if (!msg) continue
// Skip known non-meaningful message types
if (isSyntheticMessage(msg)) continue
if (isToolUseResultMessage(msg)) continue
if (msg.type === 'progress') continue
if (msg.type === 'system') continue
if (msg.type === 'attachment') continue
if (msg.type === 'user' && msg.isMeta) continue
// Assistant with actual content = meaningful
if (msg.type === 'assistant') {
const content = msg.message.content
if (Array.isArray(content)) {
const hasMeaningfulContent = content.some(block => block.type === 'text' && block.text.trim() || block.type === 'tool_use')
if (hasMeaningfulContent) return false
}
continue
}
// User messages that aren't synthetic or meta = meaningful
if (msg.type === 'user') {
return false
}
// Other types (e.g., tombstone) are non-meaningful, continue
}
return true
}
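The skip-list scan in messagesAfterAreOnlySynthetic can be sketched against a reduced message shape. The `Msg` type below is a hypothetical stand-in for the real `Message` union, with the `isSyntheticMessage`/`isToolUseResultMessage` checks collapsed into type tags:

```typescript
// Reduced message shape: only the fields the scan inspects.
type Msg =
  | { type: 'progress' | 'system' | 'attachment' }
  | { type: 'user'; isMeta?: boolean }
  | { type: 'assistant'; text: string }

// Returns true when everything after fromIndex is noise: progress,
// system, attachment, and meta user messages are skipped, as is an
// assistant message with no visible text.
function onlyNoiseAfter(messages: Msg[], fromIndex: number): boolean {
  for (let i = fromIndex + 1; i < messages.length; i++) {
    const msg = messages[i]
    if (!msg) continue
    if (msg.type === 'progress' || msg.type === 'system' || msg.type === 'attachment') continue
    if (msg.type === 'user' && msg.isMeta) continue
    if (msg.type === 'assistant' && msg.text.trim() === '') continue
    // A real user message or an assistant message with content is meaningful.
    return false
  }
  return true
}
```

The real function applies the same shape: an ordered list of "skip" conditions, then a default of "meaningful" for whatever survives.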

View File

@@ -158,6 +158,19 @@ export const CLAUDE_OPUS_4_6_CONFIG = {
minimax: 'MiniMax-M2.5',
} as const satisfies ModelConfig
+export const CLAUDE_OPUS_4_7_CONFIG = {
+firstParty: 'claude-opus-4-7',
+bedrock: 'us.anthropic.claude-opus-4-7-v1',
+vertex: 'claude-opus-4-7',
+foundry: 'claude-opus-4-7',
+openai: 'gpt-4o',
+gemini: 'gemini-2.5-pro',
+github: 'github:copilot',
+codex: 'gpt-5.5',
+'nvidia-nim': 'nvidia/llama-3.1-nemotron-70b-instruct',
+minimax: 'MiniMax-M2.5',
+} as const satisfies ModelConfig
export const CLAUDE_SONNET_4_6_CONFIG = {
firstParty: 'claude-sonnet-4-6',
bedrock: 'us.anthropic.claude-sonnet-4-6',
@@ -184,6 +197,7 @@ export const ALL_MODEL_CONFIGS = {
opus41: CLAUDE_OPUS_4_1_CONFIG,
opus45: CLAUDE_OPUS_4_5_CONFIG,
opus46: CLAUDE_OPUS_4_6_CONFIG,
+opus47: CLAUDE_OPUS_4_7_CONFIG,
} as const satisfies Record<string, ModelConfig>
export type ModelKey = keyof typeof ALL_MODEL_CONFIGS

View File

@@ -83,7 +83,8 @@ export function isNonCustomOpusModel(model: ModelName): boolean {
model === getModelStrings().opus40 ||
model === getModelStrings().opus41 ||
model === getModelStrings().opus45 ||
-model === getModelStrings().opus46
+model === getModelStrings().opus46 ||
+model === getModelStrings().opus47
)
}
@@ -204,12 +205,12 @@ export function getDefaultOpusModel(): ModelName {
return process.env.OPENAI_MODEL || 'grok-4'
}
// 3P providers (Bedrock, Vertex, Foundry) — kept as a separate branch
-// even when values match, since 3P availability lags firstParty and
-// these will diverge again at the next model launch.
+// since 3P availability lags firstParty and these will diverge again at
+// the next model launch. Keep 3P on Opus 4.6 until they roll out 4.7.
if (getAPIProvider() !== 'firstParty') {
return getModelStrings().opus46
}
-return getModelStrings().opus46
+return getModelStrings().opus47
}
// @[MODEL LAUNCH]: Update the default Sonnet model (3P providers may lag so keep defaults unchanged).
@@ -407,7 +408,10 @@ export function getDefaultMainLoopModel(): ModelName {
export function firstPartyNameToCanonical(name: ModelName): ModelShortName {
name = name.toLowerCase()
// Special cases for Claude 4+ models to differentiate versions
-// Order matters: check more specific versions first (4-5 before 4)
+// Order matters: check more specific versions first (4-7 before 4-6 before 4-5 before 4)
+if (name.includes('claude-opus-4-7')) {
+return 'claude-opus-4-7'
+}
if (name.includes('claude-opus-4-6')) {
return 'claude-opus-4-6'
}
@@ -478,9 +482,9 @@ export function getClaudeAiUserDefaultModelDescription(
): string {
if (isMaxSubscriber() || isTeamPremiumSubscriber()) {
if (isOpus1mMergeEnabled()) {
-return `Opus 4.6 with 1M context · Most capable for complex work${fastMode ? getOpus46PricingSuffix(true) : ''}`
+return `Opus 4.7 with 1M context · Most capable for complex work${fastMode ? getOpus46PricingSuffix(true) : ''}`
}
-return `Opus 4.6 · Most capable for complex work${fastMode ? getOpus46PricingSuffix(true) : ''}`
+return `Opus 4.7 · Most capable for complex work${fastMode ? getOpus46PricingSuffix(true) : ''}`
}
return 'Sonnet 4.6 · Best for everyday tasks'
}
@@ -489,7 +493,7 @@ export function renderDefaultModelSetting(
setting: ModelName | ModelAlias,
): string {
if (setting === 'opusplan') {
-return 'Opus 4.6 in plan mode, else Sonnet 4.6'
+return 'Opus 4.7 in plan mode, else Sonnet 4.6'
}
return renderModelName(parseUserSpecifiedModel(setting))
}
@@ -582,10 +586,14 @@ export function getPublicModelDisplayName(model: ModelName): string | null {
return 'GPT-5.4'
case 'gpt-5.3-codex-spark':
return 'GPT-5.3 Codex Spark'
-case getModelStrings().opus46:
-return 'Opus 4.6'
+case getModelStrings().opus47 + '[1m]':
+return 'Opus 4.7 (1M context)'
+case getModelStrings().opus47:
+return 'Opus 4.7'
case getModelStrings().opus46 + '[1m]':
return 'Opus 4.6 (1M context)'
+case getModelStrings().opus46:
+return 'Opus 4.6'
case getModelStrings().opus45:
return 'Opus 4.5'
case getModelStrings().opus41:
@@ -825,6 +833,9 @@ export function getMarketingNameForModel(modelId: string): string | undefined {
const has1m = modelId.toLowerCase().includes('[1m]')
const canonical = getCanonicalName(modelId)
+if (canonical.includes('claude-opus-4-7')) {
+return has1m ? 'Opus 4.7 (with 1M context)' : 'Opus 4.7'
+}
if (canonical.includes('claude-opus-4-6')) {
return has1m ? 'Opus 4.6 (with 1M context)' : 'Opus 4.6'
}

View File

@@ -159,6 +159,16 @@ function getOpus41Option(): ModelOption {
}
}
+function getOpus47Option(fastMode = false): ModelOption {
+const is3P = getAPIProvider() !== 'firstParty'
+return {
+value: is3P ? getModelStrings().opus47 : 'opus',
+label: 'Opus',
+description: `Opus 4.7 · Most capable for complex work${getOpus46PricingSuffix(fastMode)}`,
+descriptionForModel: 'Opus 4.7 - most capable for complex work',
+}
+}
function getOpus46Option(fastMode = false): ModelOption {
const is3P = getAPIProvider() !== 'firstParty'
return {
@@ -241,7 +251,7 @@ function getMaxOpusOption(fastMode = false): ModelOption {
return {
value: 'opus',
label: 'Opus',
-description: `Opus 4.6 · Most capable for complex work${fastMode ? getOpus46PricingSuffix(true) : ''}`,
+description: `Opus 4.7 · Most capable for complex work${fastMode ? getOpus46PricingSuffix(true) : ''}`,
}
}
@@ -269,9 +279,9 @@ function getMergedOpus1MOption(fastMode = false): ModelOption {
return {
value: is3P ? getModelStrings().opus46 + '[1m]' : 'opus[1m]',
label: 'Opus (1M context)',
-description: `Opus 4.6 with 1M context · Most capable for complex work${!is3P && fastMode ? getOpus46PricingSuffix(fastMode) : ''}`,
+description: `${is3P ? 'Opus 4.6' : 'Opus 4.7'} with 1M context · Most capable for complex work${!is3P && fastMode ? getOpus46PricingSuffix(fastMode) : ''}`,
descriptionForModel:
-'Opus 4.6 with 1M context - most capable for complex work',
+`${is3P ? 'Opus 4.6' : 'Opus 4.7'} with 1M context - most capable for complex work`,
}
}
@@ -291,7 +301,7 @@ function getOpusPlanOption(): ModelOption {
return {
value: 'opusplan',
label: 'Opus Plan Mode',
-description: 'Use Opus 4.6 in plan mode, Sonnet 4.6 otherwise',
+description: 'Use Opus 4.7 in plan mode, Sonnet 4.6 otherwise',
}
}
@@ -504,7 +514,7 @@ function getModelOptionsBase(fastMode = false): ModelOption[] {
}
}
-// PAYG 1P API: Default (Sonnet) + Sonnet 1M + Opus 4.6 + Opus 1M + Haiku
+// PAYG 1P API: Default (Sonnet) + Sonnet 1M + Opus 4.7 + Opus 4.6 + Opus 1M + Haiku
if (getAPIProvider() === 'firstParty') {
const payg1POptions = [getDefaultOptionForUser(fastMode)]
if (checkSonnet1mAccess()) {
@@ -513,6 +523,7 @@ function getModelOptionsBase(fastMode = false): ModelOption[] {
if (isOpus1mMergeEnabled()) {
payg1POptions.push(getMergedOpus1MOption(fastMode))
} else {
+payg1POptions.push(getOpus47Option(fastMode))
payg1POptions.push(getOpus46Option(fastMode))
if (checkOpus1mAccess()) {
payg1POptions.push(getOpus46_1MOption(fastMode))
@@ -546,8 +557,9 @@ function getModelOptionsBase(fastMode = false): ModelOption[] {
if (customOpus !== undefined) {
payg3pOptions.push(customOpus)
} else {
-// Add Opus 4.1, Opus 4.6 and Opus 4.6 1M
+// Add Opus 4.1, Opus 4.7, Opus 4.6 and Opus 4.6 1M
payg3pOptions.push(getOpus41Option()) // This is the default opus
+payg3pOptions.push(getOpus47Option(fastMode))
payg3pOptions.push(getOpus46Option(fastMode))
if (checkOpus1mAccess()) {
payg3pOptions.push(getOpus46_1MOption(fastMode))

View File

@@ -23,6 +23,23 @@ const TIERS = [
},
] as const
+function buildCapabilityOverrideCacheKey(
+model: string,
+capability: ModelCapabilityOverride,
+): string {
+const envParts = TIERS.flatMap(tier => [
+process.env[tier.modelEnvVar] ?? '',
+process.env[tier.capabilitiesEnvVar] ?? '',
+])
+return [
+model.toLowerCase(),
+capability,
+getAPIProvider(),
+...envParts,
+].join('\0')
+}
/**
* Check whether a 3p model capability override is set for a model that matches one of
* the pinned ANTHROPIC_DEFAULT_*_MODEL env vars.
@@ -46,5 +63,5 @@ export const get3PModelCapabilityOverride = memoize(
}
return undefined
},
-(model, capability) => `${model.toLowerCase()}:${capability}`,
+buildCapabilityOverrideCacheKey,
)

View File

@@ -202,6 +202,9 @@ function get3PFallbackSuggestion(model: string): string | undefined {
return undefined
}
const lowerModel = model.toLowerCase()
+if (lowerModel.includes('opus-4-7') || lowerModel.includes('opus_4_7')) {
+return getModelStrings().opus46
+}
if (lowerModel.includes('opus-4-6') || lowerModel.includes('opus_4_6')) {
return getModelStrings().opus41
}

View File

@@ -11,6 +11,7 @@ import {
CLAUDE_OPUS_4_1_CONFIG,
CLAUDE_OPUS_4_5_CONFIG,
CLAUDE_OPUS_4_6_CONFIG,
+CLAUDE_OPUS_4_7_CONFIG,
CLAUDE_OPUS_4_CONFIG,
CLAUDE_SONNET_4_5_CONFIG,
CLAUDE_SONNET_4_6_CONFIG,
@@ -123,6 +124,8 @@ export const MODEL_COSTS: Record<ModelShortName, ModelCosts> = {
COST_TIER_5_25,
[firstPartyNameToCanonical(CLAUDE_OPUS_4_6_CONFIG.firstParty)]:
COST_TIER_5_25,
+[firstPartyNameToCanonical(CLAUDE_OPUS_4_7_CONFIG.firstParty)]:
+COST_TIER_5_25,
}
/**
/** /**

View File

@@ -5,16 +5,15 @@ import { resolveRipgrepConfig, wrapRipgrepUnavailableError } from './ripgrep.js'
const MOCK_BUILTIN_PATH = path.normalize(
process.platform === 'win32'
-? `vendor/ripgrep/${process.arch}-win32/rg.exe`
-: `vendor/ripgrep/${process.arch}-${process.platform}/rg`,
+? `node_modules/@vscode/ripgrep/bin/rg.exe`
+: `node_modules/@vscode/ripgrep/bin/rg`,
)
-test('ripgrepCommand falls back to system rg when builtin binary is missing', () => {
+test('falls back to system rg when @vscode/ripgrep cannot be resolved', () => {
const config = resolveRipgrepConfig({
userWantsSystemRipgrep: false,
bundledMode: false,
-builtinCommand: MOCK_BUILTIN_PATH,
-builtinExists: false,
+builtinCommand: null,
systemExecutablePath: '/usr/bin/rg',
processExecPath: '/fake/bun',
})
@@ -26,12 +25,11 @@ test('ripgrepCommand falls back to system rg when builtin binary is missing', ()
})
})
-test('ripgrepCommand keeps builtin mode when bundled binary exists', () => {
+test('uses builtin @vscode/ripgrep path when the package resolves', () => {
const config = resolveRipgrepConfig({
userWantsSystemRipgrep: false,
bundledMode: false,
builtinCommand: MOCK_BUILTIN_PATH,
-builtinExists: true,
systemExecutablePath: '/usr/bin/rg',
processExecPath: '/fake/bun',
})
@@ -43,10 +41,59 @@ test('ripgrepCommand keeps builtin mode when bundled binary exists', () => {
})
})
+test('honors USE_BUILTIN_RIPGREP=0 by selecting system rg even when builtin is available', () => {
+const config = resolveRipgrepConfig({
+userWantsSystemRipgrep: true,
+bundledMode: false,
+builtinCommand: MOCK_BUILTIN_PATH,
+systemExecutablePath: '/usr/bin/rg',
+processExecPath: '/fake/bun',
+})
+expect(config).toMatchObject({
+mode: 'system',
+command: 'rg',
+args: [],
+})
+})
+test('keeps embedded mode for Bun-compiled standalone executables', () => {
+const config = resolveRipgrepConfig({
+userWantsSystemRipgrep: false,
+bundledMode: true,
+builtinCommand: null,
+systemExecutablePath: '/usr/bin/rg',
+processExecPath: '/opt/openclaude/bin/openclaude',
+})
+expect(config).toMatchObject({
+mode: 'embedded',
+command: '/opt/openclaude/bin/openclaude',
+args: ['--no-config'],
+argv0: 'rg',
+})
+})
+test('falls through to system rg as a last resort even when not on PATH', () => {
+const config = resolveRipgrepConfig({
+userWantsSystemRipgrep: false,
+bundledMode: false,
+builtinCommand: null,
+systemExecutablePath: 'rg',
+processExecPath: '/fake/bun',
+})
+expect(config).toMatchObject({
+mode: 'system',
+command: 'rg',
+args: [],
+})
+})
test('wrapRipgrepUnavailableError explains missing packaged fallback', () => {
const error = wrapRipgrepUnavailableError(
{ code: 'ENOENT', message: 'spawn rg ENOENT' },
-{ mode: 'builtin', command: 'C:\\fake\\vendor\\ripgrep\\rg.exe', args: [] },
+{ mode: 'builtin', command: 'C:\\fake\\node_modules\\@vscode\\ripgrep\\bin\\rg.exe', args: [] },
'win32',
)

View File

@@ -5,7 +5,6 @@ import memoize from 'lodash-es/memoize.js'
import { homedir } from 'os' import { homedir } from 'os'
import * as path from 'path' import * as path from 'path'
import { logEvent } from 'src/services/analytics/index.js' import { logEvent } from 'src/services/analytics/index.js'
import { fileURLToPath } from 'url'
import { isInBundledMode } from './bundledMode.js' import { isInBundledMode } from './bundledMode.js'
import { logForDebugging } from './debug.js' import { logForDebugging } from './debug.js'
import { isEnvDefinedFalsy } from './envUtils.js' import { isEnvDefinedFalsy } from './envUtils.js'
@@ -15,13 +14,6 @@ import { logError } from './log.js'
import { getPlatform } from './platform.js' import { getPlatform } from './platform.js'
import { countCharInString } from './stringUtils.js' import { countCharInString } from './stringUtils.js'
const __filename = fileURLToPath(import.meta.url)
// we use node:path.join instead of node:url.resolve because the former doesn't encode spaces
const __dirname = path.join(
__filename,
process.env.NODE_ENV === 'test' ? '../../../' : '../',
)
type RipgrepConfig = { type RipgrepConfig = {
mode: 'system' | 'builtin' | 'embedded' mode: 'system' | 'builtin' | 'embedded'
command: string command: string
@@ -35,11 +27,31 @@ function isErrnoException(error: unknown): error is NodeJS.ErrnoException {
return error instanceof Error return error instanceof Error
} }
/**
* Returns the ripgrep binary path provided by the @vscode/ripgrep package.
* The package downloads a platform/arch-specific binary at npm install time
* (cached under the package's bin/ directory). Returns null when the package
* cannot be resolved — for example when running as a Bun-compiled standalone
* executable that doesn't ship node_modules.
*/
function resolveBuiltinRgPath(): string | null {
try {
// Lazy require so the resolution failure path stays graceful at import
// time. The package only exports `rgPath`, so we do not need the rest.
const mod = require('@vscode/ripgrep') as { rgPath?: string }
if (mod.rgPath && existsSync(mod.rgPath)) {
return mod.rgPath
}
} catch {
// Falls through to null — caller decides the fallback.
}
return null
}
type ResolveRipgrepConfigArgs = { type ResolveRipgrepConfigArgs = {
userWantsSystemRipgrep: boolean userWantsSystemRipgrep: boolean
bundledMode: boolean bundledMode: boolean
builtinCommand: string builtinCommand: string | null
builtinExists: boolean
systemExecutablePath: string systemExecutablePath: string
processExecPath?: string processExecPath?: string
} }
@@ -48,7 +60,6 @@ export function resolveRipgrepConfig({
userWantsSystemRipgrep, userWantsSystemRipgrep,
bundledMode, bundledMode,
builtinCommand, builtinCommand,
builtinExists,
systemExecutablePath, systemExecutablePath,
processExecPath = process.execPath, processExecPath = process.execPath,
}: ResolveRipgrepConfigArgs): RipgrepConfig { }: ResolveRipgrepConfigArgs): RipgrepConfig {
@@ -66,7 +77,7 @@ export function resolveRipgrepConfig({
} }
} }
if (builtinExists) { if (builtinCommand) {
return { mode: 'builtin', command: builtinCommand, args: [] } return { mode: 'builtin', command: builtinCommand, args: [] }
} }
@@ -74,7 +85,9 @@ export function resolveRipgrepConfig({
return { mode: 'system', command: 'rg', args: [] } return { mode: 'system', command: 'rg', args: [] }
} }
return { mode: 'builtin', command: builtinCommand, args: [] } // Last resort — leaves error reporting to the executor when no binary
// can be located. wrapRipgrepUnavailableError() surfaces an install hint.
return { mode: 'system', command: 'rg', args: [] }
} }
const getRipgrepConfig = memoize((): RipgrepConfig => { const getRipgrepConfig = memoize((): RipgrepConfig => {
@@ -82,19 +95,13 @@ const getRipgrepConfig = memoize((): RipgrepConfig => {
    process.env.USE_BUILTIN_RIPGREP,
  )
  const bundledMode = isInBundledMode()
-  const rgRoot = path.resolve(__dirname, 'vendor', 'ripgrep')
-  const builtinCommand =
-    process.platform === 'win32'
-      ? path.resolve(rgRoot, `${process.arch}-win32`, 'rg.exe')
-      : path.resolve(rgRoot, `${process.arch}-${process.platform}`, 'rg')
-  const builtinExists = existsSync(builtinCommand)
+  const builtinCommand = resolveBuiltinRgPath()
  const { cmd: systemExecutablePath } = findExecutable('rg', [])
  return resolveRipgrepConfig({
    userWantsSystemRipgrep,
    bundledMode,
    builtinCommand,
-    builtinExists,
    systemExecutablePath,
  })
})
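The commit message above describes the new lookup: `rgPath` is lazily required from `@vscode/ripgrep`, and a missing package falls through to the system `rg` branch instead of throwing at import time. A minimal sketch of that pattern; the injectable `load` argument is this sketch's own device so the fallback is visible without the package installed (the real function lives in src/utils/ripgrep.ts):

```typescript
// Sketch of the builtin binary lookup. The real code lazily require()s
// '@vscode/ripgrep'; here the loader is injected for illustration.
type RipgrepModule = { rgPath?: unknown }

function resolveBuiltinRgPath(load: () => RipgrepModule): string | null {
  try {
    const { rgPath } = load()
    // Only a non-empty string counts as a usable builtin binary path.
    return typeof rgPath === 'string' && rgPath.length > 0 ? rgPath : null
  } catch {
    // Package missing: fall through to null so the caller can try system rg.
    return null
  }
}

// A missing package never throws; resolveRipgrepConfig() then picks system rg.
console.log(resolveBuiltinRgPath(() => { throw new Error('Cannot find module') })) // null
```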


@@ -97,13 +97,22 @@ describe("Secure Storage Platform Implementations", () => {
    expect(options2.input).toContain("token'quote");
  });
-  test("delete() includes assembly load", () => {
+  test("delete() skips legacy PasswordVault by default", () => {
+    windowsCredentialStorage.delete();
+    expect(mockExecaSync).toHaveBeenCalledTimes(1);
+    const script = mockExecaSync.mock.calls[0][1][1];
+    expect(script).not.toContain("System.Runtime.WindowsRuntime");
+  });
+  test("delete() includes legacy assembly load when explicitly enabled", () => {
+    process.env.OPENCLAUDE_ENABLE_LEGACY_WINDOWS_PASSWORDVAULT = "1";
    windowsCredentialStorage.delete();
    const script = mockExecaSync.mock.calls[1][1][1];
    expect(script).toContain("Add-Type -AssemblyName System.Runtime.WindowsRuntime");
  });
  test("escapes double quotes in username", () => {
+    process.env.OPENCLAUDE_ENABLE_LEGACY_WINDOWS_PASSWORDVAULT = "1";
    process.env.USER = 'user"name';
    windowsCredentialStorage.read();
    const script = mockExecaSync.mock.calls[1][1][1];
@@ -111,7 +120,17 @@ describe("Secure Storage Platform Implementations", () => {
    expect(script).not.toContain('user"name');
  });
-  test("read() falls back to legacy PasswordVault when the DPAPI payload is invalid JSON", () => {
+  test("read() does not touch legacy PasswordVault by default", () => {
+    mockExecaSync.mockImplementationOnce(() => ({ exitCode: 1, stdout: "" }));
+    const result = windowsCredentialStorage.read();
+    expect(result).toBeNull();
+    expect(mockExecaSync).toHaveBeenCalledTimes(1);
+  });
+  test("read() falls back to legacy PasswordVault when explicitly enabled", () => {
+    process.env.OPENCLAUDE_ENABLE_LEGACY_WINDOWS_PASSWORDVAULT = "1";
    mockExecaSync
      .mockImplementationOnce(() => ({ exitCode: 0, stdout: "{not-json" }))
      .mockImplementationOnce(() => ({
@@ -126,6 +145,7 @@ describe("Secure Storage Platform Implementations", () => {
  });
  test("read() fails closed when the legacy PasswordVault payload is invalid JSON", () => {
+    process.env.OPENCLAUDE_ENABLE_LEGACY_WINDOWS_PASSWORDVAULT = "1";
    mockExecaSync
      .mockImplementationOnce(() => ({ exitCode: 1, stdout: "" }))
      .mockImplementationOnce(() => ({ exitCode: 0, stdout: "{not-json" }));


@@ -30,6 +30,10 @@ function getWindowsSecureStorageFilePath(): string {
  return join(getClaudeConfigHomeDir(), `${resourceName}.secure.dpapi`)
}
+function shouldUseLegacyPasswordVault(): boolean {
+  return process.env.OPENCLAUDE_ENABLE_LEGACY_WINDOWS_PASSWORDVAULT === '1'
+}
function runPowerShell(
  script: string,
  options?: { input?: string },
@@ -61,6 +65,10 @@ function getFailureWarning(
}
function readLegacyPasswordVault(): SecureStorageData | null {
+  if (!shouldUseLegacyPasswordVault()) {
+    return null
+  }
  const resourceName = getLegacyResourceName().replace(/"/g, '`"')
  const username = getUsername().replace(/"/g, '`"')
  const script = `
@@ -204,21 +212,23 @@ export const windowsCredentialStorage: SecureStorage = {
  `
  const removeDpapiResult = runPowerShell(removeDpapiScript)
-  const resourceName = getLegacyResourceName().replace(/"/g, '`"')
-  const username = getUsername().replace(/"/g, '`"')
-  const removeLegacyScript = `
-    Add-Type -AssemblyName System.Runtime.WindowsRuntime
-    try {
-      $vault = New-Object Windows.Security.Credentials.PasswordVault
-      $cred = $vault.Retrieve("${resourceName}", "${username}")
-      $vault.Remove($cred)
-    } catch {
-      exit 0
-    }
-  `
-  const removeLegacyResult = runPowerShell(removeLegacyScript)
-  void removeLegacyResult
+  if (shouldUseLegacyPasswordVault()) {
+    const resourceName = getLegacyResourceName().replace(/"/g, '`"')
+    const username = getUsername().replace(/"/g, '`"')
+    const removeLegacyScript = `
+      Add-Type -AssemblyName System.Runtime.WindowsRuntime
+      try {
+        $vault = New-Object Windows.Security.Credentials.PasswordVault
+        $cred = $vault.Retrieve("${resourceName}", "${username}")
+        $vault.Remove($cred)
+      } catch {
+        exit 0
+      }
+    `
+    const removeLegacyResult = runPowerShell(removeLegacyScript)
+    void removeLegacyResult
+  }
  return (removeDpapiResult?.exitCode ?? 1) === 0
},


@@ -0,0 +1,165 @@
import { describe, expect, it } from 'bun:test'
import { StreamingTokenCounter } from './streamingTokenCounter.js'
describe('StreamingTokenCounter', () => {
describe('start', () => {
it('resets state and sets input tokens', () => {
const counter = new StreamingTokenCounter()
counter.start(1000)
expect(counter.total).toBe(1000)
})
})
describe('addChunk', () => {
it('accumulates content', () => {
const counter = new StreamingTokenCounter()
counter.start(500)
counter.addChunk('Hello world ')
expect(counter.characterCount).toBe(12)
})
it('accumulates multiple chunks', () => {
const counter = new StreamingTokenCounter()
counter.start(500)
counter.addChunk('Hello ')
counter.addChunk('world ')
expect(counter.characterCount).toBeGreaterThanOrEqual(10)
})
it('handles empty chunks', () => {
const counter = new StreamingTokenCounter()
counter.start(50)
counter.addChunk(undefined)
counter.addChunk('')
expect(counter.output).toBe(0)
expect(counter.total).toBe(50)
})
it('updates cached token count at word boundaries during streaming', () => {
const counter = new StreamingTokenCounter()
counter.start(100)
counter.addChunk('Hello ')
const afterFirst = counter.output
expect(afterFirst).toBeGreaterThan(0)
counter.addChunk('world ')
const afterSecond = counter.output
expect(afterSecond).toBeGreaterThan(afterFirst)
})
it('advances count past space after word boundary', () => {
const counter = new StreamingTokenCounter()
counter.start()
counter.addChunk('Hello ') // counts Hello
const count1 = counter.output
counter.addChunk('world') // short chunk, no space - shouldn't advance
const count2 = counter.output
expect(count2).toBe(count1)
counter.addChunk(' ') // space triggers count
const count3 = counter.output
expect(count3).toBeGreaterThan(count2)
})
})
describe('finalize', () => {
it('counts all content after finalize', () => {
const counter = new StreamingTokenCounter()
counter.start(500)
counter.addChunk('Hello world')
counter.finalize()
expect(counter.output).toBeGreaterThan(0)
})
it('counts tokens after finalize', () => {
const counter = new StreamingTokenCounter()
counter.start(100)
counter.addChunk('Hello ')
counter.addChunk('world ')
counter.finalize()
expect(counter.output).toBeGreaterThan(0)
expect(counter.total).toBe(100 + counter.output)
})
})
describe('total', () => {
it('sums input and output after finalize', () => {
const counter = new StreamingTokenCounter()
counter.start(500)
counter.addChunk('Test content ')
counter.finalize()
expect(counter.total).toBeGreaterThanOrEqual(500)
})
})
describe('tokensPerSecond', () => {
it('calculates tokens per second', () => {
const counter = new StreamingTokenCounter()
counter.start()
counter.addChunk('123456789 ')
expect(typeof counter.tokensPerSecond).toBe('number')
})
})
describe('estimateRemainingTokens', () => {
it('returns positive when under target', () => {
const counter = new StreamingTokenCounter()
counter.start(500)
counter.addChunk('Hello ')
counter.finalize()
expect(counter.estimateRemainingTokens(1000)).toBeGreaterThan(0)
})
it('returns 0 when at or over target', () => {
const counter = new StreamingTokenCounter()
counter.start(500)
counter.addChunk('Hello ')
counter.finalize()
expect(counter.estimateRemainingTokens(1)).toBe(0)
})
})
describe('estimateRemainingTimeMs', () => {
it('returns estimate based on rate', () => {
const counter = new StreamingTokenCounter()
counter.start()
counter.addChunk('Hello world ')
expect(counter.estimateRemainingTimeMs(100)).toBeGreaterThanOrEqual(0)
})
})
describe('characterCount', () => {
it('returns accumulated character count', () => {
const counter = new StreamingTokenCounter()
counter.addChunk('Hello')
expect(counter.characterCount).toBe(5)
})
it('accumulates content from chunks', () => {
const counter = new StreamingTokenCounter()
counter.start(100)
counter.addChunk('Hello ')
counter.addChunk('world ')
expect(counter.characterCount).toBeGreaterThan(0)
})
})
describe('reset', () => {
it('clears all state', () => {
const counter = new StreamingTokenCounter()
counter.start(500)
counter.addChunk('Hello world ')
counter.reset()
expect(counter.characterCount).toBe(0)
})
it('resets correctly', () => {
const counter = new StreamingTokenCounter()
counter.start(100)
counter.addChunk('test ')
counter.reset()
expect(counter.characterCount).toBe(0)
expect(counter.total).toBe(0)
})
})
})


@@ -0,0 +1,133 @@
/**
* Streaming Token Counter - Accurate token counting during generation
*
* Accumulates raw content and counts tokens at consistent boundaries
* to avoid dependency on arbitrary chunk boundaries.
*/
import { roughTokenCountEstimation } from '../services/tokenEstimation.js'
export class StreamingTokenCounter {
private inputTokens = 0
private accumulatedContent = ''
private lastCountedIndex = 0
private cachedOutputTokens = 0
private startTime = 0
/**
* Start tracking a new stream
* @param initialInputTokens - Token count for system prompt + history
*/
start(initialInputTokens?: number): void {
this.reset()
this.startTime = Date.now()
this.inputTokens = initialInputTokens ?? 0
}
/**
* Add content from a streaming chunk
* Accumulates raw content, counting only at word boundaries
* to avoid instability from arbitrary chunk boundaries.
*/
addChunk(deltaContent?: string): void {
if (deltaContent) {
this.accumulatedContent += deltaContent
this.recountAtWordBoundary()
}
}
/**
* Recount tokens at word boundaries for stability.
* Only counts after whitespace to avoid mid-word splits.
*/
private recountAtWordBoundary(): void {
const content = this.accumulatedContent
const unprocessedContent = content.slice(this.lastCountedIndex)
const searchStart = unprocessedContent[0] === ' ' ? 1 : 0
const nextSpaceIndex = unprocessedContent.indexOf(' ', searchStart)
let boundaryIndex: number
if (nextSpaceIndex > 0) {
boundaryIndex = this.lastCountedIndex + nextSpaceIndex
} else if (unprocessedContent.length > 50) {
boundaryIndex = content.length
} else {
return
}
const toCount = content.slice(0, boundaryIndex)
this.cachedOutputTokens = roughTokenCountEstimation(toCount)
this.lastCountedIndex = boundaryIndex
}
/**
* Flush remaining content and finalize count.
* Call this when stream completes.
*/
finalize(): number {
if (this.accumulatedContent.length > this.lastCountedIndex) {
this.cachedOutputTokens = roughTokenCountEstimation(this.accumulatedContent)
this.lastCountedIndex = this.accumulatedContent.length
}
return this.cachedOutputTokens
}
/** Get total tokens (input + output) */
get total(): number {
return this.inputTokens + this.cachedOutputTokens
}
/** Get output tokens only */
get output(): number {
return this.cachedOutputTokens
}
/** Get elapsed time in milliseconds */
get elapsedMs(): number {
return this.startTime > 0 ? Date.now() - this.startTime : 0
}
/** Get tokens per second generation rate */
get tokensPerSecond(): number {
if (this.elapsedMs === 0) return 0
return (this.cachedOutputTokens / this.elapsedMs) * 1000
}
/** Get estimated total generation time based on current rate */
getEstimatedGenerationTimeMs(): number {
if (this.tokensPerSecond === 0) return 0
return Math.round((this.cachedOutputTokens / this.tokensPerSecond) * 1000)
}
/** Estimate remaining tokens until target output size */
estimateRemainingTokens(targetOutputTokens: number): number {
return Math.max(0, targetOutputTokens - this.cachedOutputTokens)
}
/** Estimate remaining time based on target output tokens */
estimateRemainingTimeMs(targetOutputTokens: number): number {
if (this.tokensPerSecond === 0) return 0
const remaining = this.estimateRemainingTokens(targetOutputTokens)
return Math.round((remaining / this.tokensPerSecond) * 1000)
}
/** Get character count for raw content */
get characterCount(): number {
return this.accumulatedContent.length
}
/** Reset counter */
reset(): void {
this.inputTokens = 0
this.accumulatedContent = ''
this.lastCountedIndex = 0
this.cachedOutputTokens = 0
this.startTime = 0
}
}
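The recountAtWordBoundary() logic above can be exercised standalone. A runnable miniature, assuming a rough 4-characters-per-token estimate in place of roughTokenCountEstimation() (the real estimator lives in ../services/tokenEstimation.js):

```typescript
// Stand-in estimator: ~4 characters per token (an assumption for this sketch).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4)

let accumulated = ''
let lastCountedIndex = 0
let cachedOutputTokens = 0

function addChunk(delta: string): number {
  accumulated += delta
  const unprocessed = accumulated.slice(lastCountedIndex)
  // Skip a leading space so we find the NEXT boundary, not the one
  // already counted up to.
  const searchStart = unprocessed[0] === ' ' ? 1 : 0
  const nextSpace = unprocessed.indexOf(' ', searchStart)
  let boundary: number
  if (nextSpace > 0) {
    boundary = lastCountedIndex + nextSpace // count up to the last full word
  } else if (unprocessed.length > 50) {
    boundary = accumulated.length // long run without spaces: count anyway
  } else {
    return cachedOutputTokens // mid-word: keep the cached count stable
  }
  cachedOutputTokens = estimateTokens(accumulated.slice(0, boundary))
  lastCountedIndex = boundary
  return cachedOutputTokens
}

console.log(addChunk('Hello ')) // counts "Hello"
console.log(addChunk('world'))  // no new space yet: count unchanged
console.log(addChunk(' '))      // boundary reached: counts "Hello world"
```

The cached count only moves at whitespace, which is why the tests above assert that a mid-word chunk leaves the count unchanged until the following space arrives.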


@@ -1,5 +1,12 @@
-import { afterEach, beforeEach, describe, expect, test } from 'bun:test'
-import { modelSupportsThinking } from './thinking.js'
+import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'
+import { resetSettingsCache } from './settings/settingsCache.js'
+mock.module('./model/providers.js', () => ({
+  getAPIProvider: () =>
+    process.env.CLAUDE_CODE_USE_OPENAI === '1' ? 'openai' : 'firstParty',
+}))
+const { modelSupportsThinking } = await import('./thinking.js')
const ENV_KEYS = [
  'CLAUDE_CODE_USE_OPENAI',
@@ -14,6 +21,13 @@ const ENV_KEYS = [
  'OPENAI_MODEL',
  'NVIDIA_NIM',
  'MINIMAX_API_KEY',
+  'XAI_API_KEY',
+  'ANTHROPIC_DEFAULT_OPUS_MODEL',
+  'ANTHROPIC_DEFAULT_OPUS_MODEL_SUPPORTED_CAPABILITIES',
+  'ANTHROPIC_DEFAULT_SONNET_MODEL',
+  'ANTHROPIC_DEFAULT_SONNET_MODEL_SUPPORTED_CAPABILITIES',
+  'ANTHROPIC_DEFAULT_HAIKU_MODEL',
+  'ANTHROPIC_DEFAULT_HAIKU_MODEL_SUPPORTED_CAPABILITIES',
  'USER_TYPE',
]
@@ -24,6 +38,7 @@ beforeEach(() => {
    originalEnv[key] = process.env[key]
    delete process.env[key]
  }
+  resetSettingsCache()
})
afterEach(() => {
@@ -34,6 +49,7 @@ afterEach(() => {
      process.env[key] = originalEnv[key]
    }
  }
+  resetSettingsCache()
})
describe('modelSupportsThinking — Z.AI GLM', () => {
@@ -61,4 +77,19 @@ describe('modelSupportsThinking — Z.AI GLM', () => {
    expect(modelSupportsThinking('glm-50')).toBe(false)
  })
+  test('does not reuse stale capability overrides after env changes', () => {
+    process.env.CLAUDE_CODE_USE_OPENAI = '1'
+    process.env.OPENAI_BASE_URL = 'https://dashscope.aliyuncs.com/compatible-mode/v1'
+    process.env.ANTHROPIC_DEFAULT_SONNET_MODEL = 'GLM-5.1'
+    process.env.ANTHROPIC_DEFAULT_SONNET_MODEL_SUPPORTED_CAPABILITIES = ''
+    expect(modelSupportsThinking('GLM-5.1')).toBe(false)
+    delete process.env.ANTHROPIC_DEFAULT_SONNET_MODEL
+    delete process.env.ANTHROPIC_DEFAULT_SONNET_MODEL_SUPPORTED_CAPABILITIES
+    process.env.OPENAI_BASE_URL = 'https://api.z.ai/api/coding/paas/v4'
+    expect(modelSupportsThinking('GLM-5.1')).toBe(true)
+  })
})
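The resetSettingsCache() calls in the hooks above exist because a memoized, env-derived setting goes stale once a test mutates the environment. An illustrative miniature of that failure mode (names and shapes here are this sketch's own, not the project's implementation):

```typescript
// A memoized env-derived setting: computed once, then cached.
type Env = Record<string, string | undefined>

let cachedProvider: string | undefined

function getAPIProvider(env: Env): string {
  if (cachedProvider === undefined) {
    cachedProvider = env.CLAUDE_CODE_USE_OPENAI === '1' ? 'openai' : 'firstParty'
  }
  return cachedProvider
}

function resetSettingsCache(): void {
  cachedProvider = undefined
}

console.log(getAPIProvider({ CLAUDE_CODE_USE_OPENAI: '1' })) // 'openai'
console.log(getAPIProvider({})) // 'openai' again: the cache is stale
resetSettingsCache()
console.log(getAPIProvider({})) // 'firstParty'
```

Without the reset between cases, the first test to populate the cache decides the value every later test sees.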


@@ -131,7 +131,7 @@ export function modelSupportsAdaptiveThinking(model: string): boolean {
  }
  const canonical = getCanonicalName(model)
  // Supported by a subset of Claude 4 models
-  if (canonical.includes('opus-4-6') || canonical.includes('sonnet-4-6')) {
+  if (canonical.includes('opus-4-7') || canonical.includes('opus-4-6') || canonical.includes('sonnet-4-6')) {
    return true
  }
  // Exclude any other known legacy models (allowlist above catches 4-6 variants first)
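The hunk above extends a substring allowlist over canonical model names, so date-stamped variants still match their family marker. A minimal sketch of the check (the model id in the example is hypothetical):

```typescript
// Substring allowlist: canonical names are matched by inclusion, so a
// date-stamped id like 'claude-opus-4-7-20260301' (hypothetical) still
// matches the 'opus-4-7' marker.
const ADAPTIVE_THINKING_MARKERS = ['opus-4-7', 'opus-4-6', 'sonnet-4-6']

function supportsAdaptiveThinking(canonical: string): boolean {
  return ADAPTIVE_THINKING_MARKERS.some((marker) => canonical.includes(marker))
}

console.log(supportsAdaptiveThinking('claude-opus-4-7-20260301')) // true
console.log(supportsAdaptiveThinking('claude-haiku-4-5')) // false
```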


@@ -10,9 +10,12 @@ function installCommonMocks(options?: {
  oauthEmail?: string
  gitEmail?: string
}) {
-  mock.module('../bootstrap/state.js', () => ({
-    getSessionId: () => 'session-test',
-  }))
+  // NOTE: Do NOT mock ../bootstrap/state.js here.
+  // mock.module() is process-global in bun:test and mock.restore() does NOT
+  // undo it. Mocking state.js leaks getSessionId = () => 'session-test' into
+  // every other test file that imports state.js (e.g. SDK CON-1 tests).
+  // The dynamic import (importFreshUserModule) will use the real state.js,
+  // which is fine — these tests only assert email, not sessionId.
  mock.module('./auth.js', () => ({
    getOauthAccountInfo: () =>

src/utils/validation.ts (new file, 54 lines)

@@ -0,0 +1,54 @@
/**
* Shared validation utilities for SDK-facing APIs.
*/
/**
* Validate an array of items using a per-item validator.
* Throws TypeError with the index and missing field if validation fails.
*/
export function validateArrayOf<T>(
items: unknown[],
validator: (item: unknown, index: number) => T,
label: string,
): T[] {
if (!Array.isArray(items)) {
throw new TypeError(`${label}: expected an array, got ${typeof items}`)
}
return items.map((item, i) => {
try {
return validator(item, i)
} catch (err) {
if (err instanceof TypeError) {
throw new TypeError(`${label}: item at index ${i} - ${err.message}`)
}
throw err
}
})
}
/**
* Assert that a value is a non-empty string.
*/
export function assertNonEmptyString(value: unknown, field: string): asserts value is string {
if (typeof value !== 'string' || value.length === 0) {
throw new TypeError(`missing or empty '${field}' (expected non-empty string)`)
}
}
/**
* Assert that a value is a non-null object (but not an array).
*/
export function assertObject(value: unknown, field: string): asserts value is Record<string, unknown> {
if (typeof value !== 'object' || value === null || Array.isArray(value)) {
throw new TypeError(`missing or invalid '${field}' (expected object)`)
}
}
/**
* Assert that a value is a function.
*/
export function assertFunction(value: unknown, field: string): asserts value is (...args: any[]) => any {
if (typeof value !== 'function') {
throw new TypeError(`missing or invalid '${field}' (expected function)`)
}
}
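A usage sketch of these helpers, with the two functions re-declared verbatim so the snippet stands alone:

```typescript
// Re-declared from src/utils/validation.ts so this example is self-contained.
function assertNonEmptyString(value: unknown, field: string): asserts value is string {
  if (typeof value !== 'string' || value.length === 0) {
    throw new TypeError(`missing or empty '${field}' (expected non-empty string)`)
  }
}

function validateArrayOf<T>(
  items: unknown[],
  validator: (item: unknown, index: number) => T,
  label: string,
): T[] {
  if (!Array.isArray(items)) {
    throw new TypeError(`${label}: expected an array, got ${typeof items}`)
  }
  return items.map((item, i) => {
    try {
      return validator(item, i)
    } catch (err) {
      if (err instanceof TypeError) {
        throw new TypeError(`${label}: item at index ${i} - ${err.message}`)
      }
      throw err
    }
  })
}

// Each item must be a non-empty string; failures report the offending index.
const toName = (item: unknown): string => {
  assertNonEmptyString(item, 'name')
  return item
}

console.log(validateArrayOf(['alpha', 'beta'], toName, 'names')) // [ 'alpha', 'beta' ]

try {
  validateArrayOf(['ok', ''], toName, 'names')
} catch (err) {
  console.log((err as Error).message)
  // names: item at index 1 - missing or empty 'name' (expected non-empty string)
}
```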


@@ -0,0 +1,279 @@
import { describe, test, expect } from 'bun:test'
import {
SDKAssistantMessageSchema,
SDKSystemMessageSchema,
SDKCompactBoundaryMessageSchema,
SDKMessageSchema,
SDKUserMessageSchema,
SDKResultMessageSchema,
SDKResultSuccessSchema,
SDKResultErrorSchema,
SDKSessionInfoSchema,
PermissionModeSchema,
ThinkingConfigSchema,
AgentDefinitionSchema,
McpServerStatusSchema,
ModelUsageSchema,
FastModeStateSchema,
HookInputSchema,
ExitReasonSchema,
} from '../../src/entrypoints/sdk/coreSchemas.js'
import { z } from 'zod/v4'
/**
* Tests for generated SDK types from Zod schemas.
*
* These tests verify that:
* 1. All schemas materialize correctly (no lazy errors)
* 2. Schemas can parse valid data
* 3. Key discriminated fields are correct
* 4. The full SDKMessage union accepts all message variants
*/
describe('SDK Zod schemas (type generation source)', () => {
test('SDKAssistantMessageSchema accepts valid data', () => {
const schema = SDKAssistantMessageSchema()
const result = schema.safeParse({
type: 'assistant',
message: { role: 'assistant', content: [{ type: 'text', text: 'hi' }] },
parent_tool_use_id: null,
uuid: '12345678-1234-1234-1234-123456789012',
session_id: '12345678-1234-1234-1234-123456789012',
})
expect(result.success).toBe(true)
})
test('SDKSystemMessageSchema accepts valid data', () => {
const schema = SDKSystemMessageSchema()
const result = schema.safeParse({
type: 'system',
subtype: 'init',
apiKeySource: 'user',
claude_code_version: '0.3.0',
cwd: '/home/user/project',
tools: ['Read', 'Write'],
mcp_servers: [{ name: 'test', status: 'connected' }],
model: 'claude-sonnet-4-6',
permissionMode: 'default',
slash_commands: [],
output_style: 'default',
skills: [],
plugins: [],
uuid: '12345678-1234-1234-1234-123456789012',
session_id: '12345678-1234-1234-1234-123456789012',
})
expect(result.success).toBe(true)
})
test('SDKCompactBoundaryMessageSchema accepts valid data', () => {
const schema = SDKCompactBoundaryMessageSchema()
const result = schema.safeParse({
type: 'system',
subtype: 'compact_boundary',
compact_metadata: {
trigger: 'manual',
pre_tokens: 1000,
},
uuid: '12345678-1234-1234-1234-123456789012',
session_id: '12345678-1234-1234-1234-123456789012',
})
expect(result.success).toBe(true)
})
test('SDKCompactBoundaryMessageSchema accepts preserved_segment', () => {
const schema = SDKCompactBoundaryMessageSchema()
const result = schema.safeParse({
type: 'system',
subtype: 'compact_boundary',
compact_metadata: {
trigger: 'auto',
pre_tokens: 50000,
preserved_segment: {
head_uuid: 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
anchor_uuid: 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb',
tail_uuid: 'cccccccc-cccc-cccc-cccc-cccccccccccc',
},
},
uuid: '12345678-1234-1234-1234-123456789012',
session_id: '12345678-1234-1234-1234-123456789012',
})
expect(result.success).toBe(true)
})
test('SDKUserMessageSchema accepts valid data', () => {
const schema = SDKUserMessageSchema()
const result = schema.safeParse({
type: 'user',
message: { role: 'user', content: 'hello' },
parent_tool_use_id: null,
})
expect(result.success).toBe(true)
})
test('SDKResultSuccessSchema accepts valid data', () => {
const schema = SDKResultSuccessSchema()
const result = schema.safeParse({
type: 'result',
subtype: 'success',
duration_ms: 1500,
duration_api_ms: 1200,
is_error: false,
num_turns: 1,
result: 'Done',
stop_reason: 'end_turn',
total_cost_usd: 0.01,
usage: { input_tokens: 100, output_tokens: 50 },
modelUsage: {},
permission_denials: [],
uuid: '12345678-1234-1234-1234-123456789012',
session_id: '12345678-1234-1234-1234-123456789012',
})
expect(result.success).toBe(true)
})
test('SDKResultErrorSchema accepts valid data', () => {
const schema = SDKResultErrorSchema()
const result = schema.safeParse({
type: 'result',
subtype: 'error_during_execution',
duration_ms: 100,
duration_api_ms: 80,
is_error: true,
num_turns: 1,
stop_reason: null,
total_cost_usd: 0.001,
usage: { input_tokens: 50, output_tokens: 10 },
modelUsage: {},
permission_denials: [],
errors: ['Something went wrong'],
uuid: '12345678-1234-1234-1234-123456789012',
session_id: '12345678-1234-1234-1234-123456789012',
})
expect(result.success).toBe(true)
})
test('SDKMessageSchema accepts all message types', () => {
const schema = SDKMessageSchema()
const messages = [
{
type: 'assistant',
message: {},
parent_tool_use_id: null,
uuid: '12345678-1234-1234-1234-123456789012',
session_id: '12345678-1234-1234-1234-123456789012',
},
{
type: 'user',
message: {},
parent_tool_use_id: null,
},
{
type: 'system',
subtype: 'init',
apiKeySource: 'user',
claude_code_version: '0.3.0',
cwd: '/tmp',
tools: [],
mcp_servers: [],
model: 'sonnet',
permissionMode: 'default',
slash_commands: [],
output_style: 'default',
skills: [],
plugins: [],
uuid: '12345678-1234-1234-1234-123456789012',
session_id: '12345678-1234-1234-1234-123456789012',
},
{
type: 'system',
subtype: 'compact_boundary',
compact_metadata: { trigger: 'manual', pre_tokens: 100 },
uuid: '12345678-1234-1234-1234-123456789012',
session_id: '12345678-1234-1234-1234-123456789012',
},
]
for (const msg of messages) {
const result = schema.safeParse(msg)
expect(result.success).toBe(true)
}
})
test('SDKSessionInfoSchema accepts valid data', () => {
const schema = SDKSessionInfoSchema()
const result = schema.safeParse({
sessionId: '12345678-1234-1234-1234-123456789012',
summary: 'Test session',
lastModified: Date.now(),
})
expect(result.success).toBe(true)
})
test('PermissionModeSchema accepts valid modes', () => {
const schema = PermissionModeSchema()
const modes = ['default', 'acceptEdits', 'bypassPermissions', 'plan', 'dontAsk']
for (const mode of modes) {
expect(schema.safeParse(mode).success).toBe(true)
}
expect(schema.safeParse('invalid').success).toBe(false)
})
test('ThinkingConfigSchema accepts all variants', () => {
const schema = ThinkingConfigSchema()
expect(schema.safeParse({ type: 'adaptive' }).success).toBe(true)
expect(schema.safeParse({ type: 'enabled' }).success).toBe(true)
expect(schema.safeParse({ type: 'enabled', budgetTokens: 10000 }).success).toBe(true)
expect(schema.safeParse({ type: 'disabled' }).success).toBe(true)
expect(schema.safeParse({ type: 'unknown' }).success).toBe(false)
})
test('FastModeStateSchema accepts valid states', () => {
const schema = FastModeStateSchema()
expect(schema.safeParse('off').success).toBe(true)
expect(schema.safeParse('cooldown').success).toBe(true)
expect(schema.safeParse('on').success).toBe(true)
expect(schema.safeParse('unknown').success).toBe(false)
})
test('ExitReasonSchema accepts valid reasons', () => {
const schema = ExitReasonSchema()
const reasons = ['clear', 'resume', 'logout', 'prompt_input_exit', 'other', 'bypass_permissions_disabled']
for (const r of reasons) {
expect(schema.safeParse(r).success).toBe(true)
}
expect(schema.safeParse('invalid').success).toBe(false)
})
test('ModelUsageSchema accepts valid data', () => {
const schema = ModelUsageSchema()
const result = schema.safeParse({
inputTokens: 100,
outputTokens: 50,
cacheReadInputTokens: 200,
cacheCreationInputTokens: 300,
webSearchRequests: 1,
costUSD: 0.01,
contextWindow: 200000,
maxOutputTokens: 8192,
})
expect(result.success).toBe(true)
})
test('AgentDefinitionSchema accepts valid data', () => {
const schema = AgentDefinitionSchema()
const result = schema.safeParse({
description: 'Test agent',
prompt: 'You are a test agent',
})
expect(result.success).toBe(true)
})
test('McpServerStatusSchema accepts valid data', () => {
const schema = McpServerStatusSchema()
const result = schema.safeParse({
name: 'test-server',
status: 'connected',
})
expect(result.success).toBe(true)
})
})


@@ -1,10 +1,14 @@
{
  "compilerOptions": {
-    "target": "ES2022",
+    "target": "ES2023",
+    "lib": ["ES2023", "DOM"],
    "module": "ESNext",
    "moduleResolution": "bundler",
    "jsx": "react-jsx",
    "strict": true,
+    "noImplicitAny": false,
+    "noEmit": true,
+    "allowImportingTsExtensions": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,