Compare commits

...

84 Commits

Author SHA1 Message Date
gnanam1990
149b1eb8fb fix: surface actionable error when DuckDuckGo web search is rate-limited
Non-Anthropic / non-codex providers (minimax, kimi, generic OpenAI-compatible)
fell through to the DDG adapter when no paid search key was configured. DDG's
scraper is blocked on most IPs, so web_search surfaced an opaque "anomaly in
the request" error. Catch that response in the DDG provider and rethrow with
the exact env vars that would unblock the tool, or the option to switch to a
native-search provider.
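
A sketch of the rethrow described above; the adapter function name and the
specific env vars below are illustrative placeholders, not the actual ones
the commit uses:

  async function ddgSearch(query: string): Promise<string> {
    const res = await fetch(
      `https://html.duckduckgo.com/html/?q=${encodeURIComponent(query)}`,
    )
    const body = await res.text()
    if (body.includes('anomaly in the request')) {
      // Rethrow with actionable guidance instead of DDG's opaque scraper error.
      throw new Error(
        'DuckDuckGo blocked this request (rate limit / IP block). ' +
          'Set a paid search key (e.g. BRAVE_API_KEY or TAVILY_API_KEY), ' +
          'or switch to a provider with native web search.',
      )
    }
    return body
  }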

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 22:22:59 +05:30
Kevin Codex
67de6bd2cf fix(openai-shim): echo reasoning_content on assistant tool-call messages for Moonshot (#828)
Kimi / Moonshot's chat completions endpoint requires that every assistant
message carrying tool_calls also carry reasoning_content when the
"thinking" feature is active. When an agent sends prior-turn assistant
history back (standard multi-turn / subagent / Explore patterns), the
shim previously stripped the thinking block:

  case 'thinking':
  case 'redacted_thinking':
    // Strip thinking blocks for OpenAI-compatible providers.
    break

That's correct for providers that would misinterpret serialized
<thinking> tags, but Moonshot validates the schema strictly and rejects
with:

  API Error: 400 {"error":{"message":"thinking is enabled but
  reasoning_content is missing in assistant tool call message at
  index N","type":"invalid_request_error"}}

Reproducer: launch with Kimi profile, run any tool-using command
(Explore, Bash, etc.) — every request after the first returns a 400.

Fix: in convertMessages(), when the per-request flag
preserveReasoningContent is set (only for Moonshot baseUrls today),
attach the original thinking block's text as reasoning_content on the
outgoing OpenAI-shaped assistant message. Other providers continue to
strip (unknown-field rejection risk).

OpenAIMessage type grows a reasoning_content?: string field.
convertMessages() accepts an options object and threads the flag
through; the only call site (_doOpenAIRequest) gates via
isMoonshotBaseUrl(request.baseUrl).
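
A condensed sketch of that gating, assuming simplified block and message
shapes (the real convertMessages handles many more block types):

  interface OpenAIMessage {
    role: 'assistant'
    content: string | null
    tool_calls?: unknown[]
    reasoning_content?: string // new optional field
  }

  function convertAssistantMessage(
    blocks: Array<{ type: string; thinking?: string }>,
    opts: { preserveReasoningContent: boolean },
  ): OpenAIMessage {
    const msg: OpenAIMessage = { role: 'assistant', content: null }
    for (const block of blocks) {
      if (block.type === 'thinking' || block.type === 'redacted_thinking') {
        if (opts.preserveReasoningContent && block.thinking) {
          // Moonshot: echo the thinking text so strict validation passes.
          msg.reasoning_content = block.thinking
        }
        // All other providers keep stripping (unknown-field rejection risk).
      }
    }
    return msg
  }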

Tests (openaiShim.test.ts):
  - Moonshot: echoes reasoning_content on assistant tool-call messages
    (regression for the reported 400)
  - non-Moonshot providers do NOT receive reasoning_content (guards
    against leaking the field to strict-parse endpoints)

Full suite: 1195/1195 pass under --max-concurrency=1. PR scan clean.

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-22 22:47:57 +08:00
0xfandom
4d559c9135 docs(env): document OPENCLAUDE_DISABLE_STRICT_TOOLS in .env.example (#826)
Code support was merged in #770, but the .env.example entry was
missed, leaving users with no discoverable reference for the flag.

Closes #737
2026-04-22 22:16:47 +08:00
JATMN
b7b83eff13 Fix bracketed paste blocking provider form submit (#818) 2026-04-22 19:48:33 +08:00
Urvish L.
44a2c30d5f feat: implement Hook Chains runtime integration for self-healing agent mesh MVP (#711)
* feat: implement Hook Chains runtime integration for self-healing agent mesh MVP

- Add Hook Chains config loader, evaluator, and dispatcher in src/utils/hookChains.ts
- Wire PostToolUseFailure hook dispatch in executePostToolUseFailureHooks()
- Wire TaskCompleted hook dispatch in executeTaskCompletedHooks()
- Integrate fallback-agent launcher with permission preservation (canUseTool threading)
- Add safety hardening for config-read errors (try-catch protection)
- Update docs with MVP runtime trigger explanation
- Add 10 unit tests and 4 integration tests covering config, rules, guards, and actions

This completes the self-healing agent mesh MVP by enabling declarative rule-based
responses to tool failures and task completions, with fallback agent spawning,
team notification, and capacity warming actions.
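
For illustration, a declarative rule for this dispatcher might look like the
sketch below; the field names are guesses, the real schema lives in
src/utils/hookChains.ts:

  // Hypothetical rule: on repeated Bash failures, spawn a fallback agent
  // and notify the team. Shapes are illustrative only.
  const rule = {
    trigger: 'PostToolUseFailure',
    match: { toolName: 'Bash', minConsecutiveFailures: 2 },
    actions: [
      { type: 'spawn_fallback_agent', agent: 'debugger' },
      { type: 'notify_team', message: 'Bash failing repeatedly; fallback launched' },
    ],
  } as const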

* Update docs/hook-chains.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/utils/hookChains.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix: address PR #711 review blockers for Hook Chains

- Gate hook-chain dispatch behind feature('HOOK_CHAINS') and default env gate to off
- Remove committed local artifact (agent.log) and ignore it in .gitignore
- Revert hook dispatcher signature threading changes for canUseTool
- Use ToolUseContext metadata hookChainsCanUseTool for fallback launch permissions
- Make spawn_fallback_agent fail explicitly when launcher context is unavailable
- Add config cache max age and guard map size limits to bound runtime memory
- Update docs and tests for default-off gating and explicit fallback failure

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-22 19:40:23 +08:00
ArkhAngelLifeJiggy
5b9cd21e37 feat: add streaming optimizer and structured request logging (#703)
* Integrate request logging and streaming optimizer

- Add logApiCallStart/End for API request tracking with correlation IDs
- Add streaming state tracking with processStreamChunk
- Flush buffer and log stream stats at stream end
- Resolve merge conflict with main branch

* feat: add streaming optimizer and structured request logging

* fix: address PR review feedback

- Remove buffering from streamingOptimizer - now purely observational
- Use logForDebugging instead of console.log for structured logging
- Remove dead code (streamResponse, bufferedStreamResponse, etc.)
- Use existing logging infrastructure instead of raw console.log
- Keep only used functions: createStreamState, processStreamChunk, getStreamStats
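
How the three kept functions plausibly compose in a pass-through stream loop
(signatures assumed, injected here to keep the sketch self-contained):

  async function* observeStream<S>(
    chunks: AsyncIterable<string>,
    createStreamState: () => S,
    processStreamChunk: (state: S, chunk: string) => void,
    getStreamStats: (state: S) => Record<string, unknown>,
    logForDebugging: (event: string, data: Record<string, unknown>) => void,
  ): AsyncGenerator<string> {
    const state = createStreamState()
    for await (const chunk of chunks) {
      processStreamChunk(state, chunk) // counters/timings only, purely observational
      yield chunk // chunk passes through unmodified (no buffering)
    }
    logForDebugging('stream_stats', getStreamStats(state)) // stats at stream end
  }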

* test: add unit tests for requestLogging and streamingOptimizer

- streamingOptimizer.test.ts: 6 tests for createStreamState, processStreamChunk, getStreamStats
- requestLogging.test.ts: 6 tests for createCorrelationId, logApiCallStart, logApiCallEnd

* fix: correct durationMs test to be >= 0 instead of exactly 0

* fix: address PR #703 blockers and non-blockers

1. BLOCKER FIX: Skip clone() for streaming responses
   - Only call response.clone() + .json() for non-streaming requests
   - For streaming, usage comes via stream chunks anyway

2. NON-BLOCKER: Document dead code in flushStreamBuffer
   - Added comment explaining it's a no-op kept for API compat

3. NON-BLOCKER: vi.mock in tests - left as-is (test framework issue)

* fix: address all remaining non-blockers for PR #703

1. Remove dead code: flushStreamBuffer call and unused import
2. Fix test for Bun: remove vi.mock, use simple no-throw tests
2026-04-22 15:36:07 +08:00
ArkhAngelLifeJiggy
e92e5274b2 feat: add model-specific tokenizers and compression ratio detection (#799)
- ModelTokenizerConfig for different model families
- getTokenizerConfig() / getBytesPerTokenForModel()
- Content type detection (json, code, prose, list, technical)
- COMPRESSION_RATIOS - empirical ratios per content type
- estimateWithBounds() - confidence intervals

Features: 1.1, 1.14, 1.15
Tests: 13 passing
2026-04-22 13:24:12 +08:00
github-actions[bot]
86bce4ae74 chore(main): release 0.6.0 (#786)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-22 09:41:30 +08:00
Kevin Codex
c13842e91c fix(test): autoCompact floor assertion is flag-sensitive (#816)
The test "never returns negative even for unknown 3P models (issue #635)"
asserted that getEffectiveContextWindowSize() returns >= 33_000 for an
unknown 3P model under the OpenAI shim. That specific number assumes
reservedTokensForSummary = 20_000 (MAX_OUTPUT_TOKENS_FOR_SUMMARY), which
holds only when the tengu_otk_slot_v1 GrowthBook flag is disabled.

When the flag is ON — which is the case in CI but not always locally —
getMaxOutputTokensForModel() caps the model's default output at
CAPPED_DEFAULT_MAX_TOKENS (8_000). Then reservedTokensForSummary = 8_000,
floor = 8_000 + 13_000 = 21_000, and the test fails with 21_000 < 33_000.

The test reliably passes locally and reliably fails in CI, manifesting as
the intermittent PR-check failure.

Fix: relax the lower bound to 21_000 (cap-enabled worst case), which is
still well above zero — preserving the anti-regression intent of
issue #635 (no infinite auto-compact from a negative effective window)
without binding the test to GrowthBook flag state.
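
So the relaxed assertion presumably reads, give or take:

  // 8_000 (capped reservedTokensForSummary) + 13_000 (buffer) = 21_000,
  // the worst case when tengu_otk_slot_v1 is ON; still strictly positive.
  expect(getEffectiveContextWindowSize(unknownThirdPartyModel)).toBeGreaterThanOrEqual(21_000)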

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-22 09:37:57 +08:00
Kevin Codex
458120889f fix(model): codex/nvidia-nim/minimax now read OPENAI_MODEL env (#815)
getUserSpecifiedModelSetting() decides which env var to consult based on
the active provider. The check included openai and github but omitted
codex, nvidia-nim, and minimax — even though all three use the OpenAI
shim transport and get their model routing via CLAUDE_CODE_USE_OPENAI=1
+ OPENAI_MODEL (set by applyProviderProfileToProcessEnv).

Concrete failure: user switches from Moonshot profile (which persisted
settings.model='kimi-k2.6') to the Codex profile. The new profile
correctly writes OPENAI_MODEL=codexplan + base URL to
chatgpt.com/backend-api/codex. Startup banner reflects Codex / gpt-5.4
correctly. But at request time getUserSpecifiedModelSetting() returns
early for provider='codex' (not in the env-consult list), falls through
to the stale settings.model='kimi-k2.6', and the Codex API rejects:

  API Error 400: "The 'kimi-k2.6' model is not supported when using
  Codex with a ChatGPT account."

Fix: extract an isOpenAIShimProvider flag covering openai|codex|github|
nvidia-nim|minimax — all providers that set OPENAI_MODEL as their model
env var. The Gemini and Mistral branches stay as-is (they use
GEMINI_MODEL / MISTRAL_MODEL).
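
The extracted flag is presumably along these lines (sketch):

  function isOpenAIShimProvider(provider: string): boolean {
    // All of these route model selection through the OPENAI_MODEL env var.
    return ['openai', 'codex', 'github', 'nvidia-nim', 'minimax'].includes(provider)
  }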

Five regression tests pin the fix for each OpenAI-shim provider plus
guard tests for openai and github that already worked.

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-22 09:01:44 +08:00
Mike
ee19159c17 feat(provider): expose Atomic Chat in /provider picker with autodetect (#810)
Adds Atomic Chat as a first-class preset inside the in-session /provider
slash command, mirroring the Ollama auto-detect flow. Picking it probes
127.0.0.1:1337/v1/models, lists loaded models for direct selection, and
falls back to "Enter manually" / "Back" when the server is unreachable
or no models are loaded. README updated to reflect the new setup path.
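
A sketch of the probe step, assuming the server exposes the standard
OpenAI-style /v1/models listing:

  async function probeAtomicChat(timeoutMs = 1500): Promise<string[]> {
    try {
      const res = await fetch('http://127.0.0.1:1337/v1/models', {
        signal: AbortSignal.timeout(timeoutMs),
      })
      if (!res.ok) return []
      const body = (await res.json()) as { data?: Array<{ id: string }> }
      return (body.data ?? []).map(m => m.id) // loaded models, offered for direct selection
    } catch {
      return [] // unreachable: caller falls back to "Enter manually" / "Back"
    }
  }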

Made-with: Cursor
2026-04-22 07:55:53 +08:00
Kevin Codex
13de4e85df fix(provider): saved profile ignored when stale CLAUDE_CODE_USE_* in shell (#807)
* fix(provider): saved profile ignored when stale CLAUDE_CODE_USE_* in shell

Users reported "my saved /provider profile isn't picked up at startup —
the banner shows gpt-4o / api.openai.com even though I saved Moonshot".

Root cause: applyActiveProviderProfileFromConfig() bailed out whenever
hasProviderSelectionFlags(processEnv) was true — i.e. whenever ANY
CLAUDE_CODE_USE_* flag was present. But a bare `CLAUDE_CODE_USE_OPENAI=1`
with no paired OPENAI_BASE_URL / OPENAI_MODEL is almost always a stale
shell export left over from a prior manual setup, not genuine startup
intent. Respecting it skipped the saved profile and let StartupScreen.ts
fall through to the hardcoded `gpt-4o` / `https://api.openai.com/v1`
defaults — the exact symptom users see.

Fix: narrow the guard from "any flag set" to "flag set AND at least one
concrete config value (BASE_URL, MODEL, or API_KEY)". A bare stale flag
no longer blocks the saved profile. A real shell selection (flag + URL
or flag + model) still wins, preserving the "explicit startup intent
overrides saved profile" contract.

New helper: hasCompleteProviderSelection(env). Per-provider check for a
paired concrete value. Bedrock/Vertex/Foundry keep the flag-alone
semantic since they rely on ambient AWS/GCP credentials rather than env
config.
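
Sketched for one provider family (the real helper runs the equivalent
per-provider check for each CLAUDE_CODE_USE_* flag):

  function hasCompleteProviderSelection(env: NodeJS.ProcessEnv): boolean {
    // Bedrock/Vertex/Foundry keep flag-alone semantics (ambient credentials).
    if (env.CLAUDE_CODE_USE_BEDROCK === '1' || env.CLAUDE_CODE_USE_VERTEX === '1') {
      return true
    }
    if (env.CLAUDE_CODE_USE_OPENAI === '1') {
      // A bare flag is treated as a stale shell export; require a paired value.
      return Boolean(env.OPENAI_BASE_URL || env.OPENAI_MODEL || env.OPENAI_API_KEY)
    }
    return false
  }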

Three new tests cover the bug and the two counter-cases:
  - bare USE flag → profile applies (fixes the bug)
  - USE flag + BASE_URL → profile blocked (preserves explicit intent)
  - USE flag + MODEL → profile blocked (preserves explicit intent)

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

* fix(provider): don't overlay stale legacy profile on plural-managed env

Second half of the "saved profile not picked up in banner" bug. The prior
commit fixed the guard that prevented applyActiveProviderProfileFromConfig()
from firing when a stale CLAUDE_CODE_USE_* flag was in the shell. But even
when the plural system applies correctly, buildStartupEnvFromProfile() was
then loading the legacy .openclaude-profile.json AND overwriting the
plural-managed env with whatever that file contained.

addProviderProfile() (the call path the /provider preset picker uses) does
NOT sync the legacy file, so a user who went:

  manual setup: CLAUDE_CODE_USE_OPENAI=1 + OPENAI_MODEL=gpt-4o
              → writes .openclaude-profile.json as { openai, gpt-4o, ... }
  /provider:   add Moonshot preset, mark active
              → writes plural config; legacy file UNCHANGED

would see startup reliably apply Moonshot env first, then get it clobbered
by the stale legacy file. Banner shows gpt-4o / api.openai.com while
runtime ends up with the correct env via a different code path — exactly
the user-reported symptom.

Fix: in buildStartupEnvFromProfile, when the plural system has already
set env (CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1'), skip the
legacy-file overlay entirely and return processEnv unchanged. Legacy is
now strictly a first-run / fallback path for users who haven't adopted
the plural system.

Also removes the stripped-then-rebuilt env construction that was part of
the old overlay path — no longer needed.
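
In outline, the new control flow (sentinel name from the commit; the legacy
reader is abstracted away to keep the sketch standalone):

  function buildStartupEnvFromProfile(
    processEnv: NodeJS.ProcessEnv,
    // Legacy .openclaude-profile.json reader, injected for the sketch.
    applyLegacyOverlay: (env: NodeJS.ProcessEnv) => NodeJS.ProcessEnv,
  ): NodeJS.ProcessEnv {
    if (processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1') {
      // Plural system already applied a profile: never overlay the legacy file.
      return processEnv
    }
    // First-run / fallback path for users who haven't adopted the plural system.
    return applyLegacyOverlay(processEnv)
  }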

Test updates:
  - Replaced "lets saved startup profile override profile-managed env"
    (encoded the old broken behavior) with a regression test that pins
    the new semantic: plural env survives when legacy is stale.
  - Added "falls back to legacy when plural hasn't applied" to pin the
    first-run path still works.

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

---------

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-22 00:59:32 +08:00
Kevin Codex
a5bfcbbadf feat(provider): zero-config autodetection primitive (#784)
First-run users with a credential already exported (ANTHROPIC_API_KEY,
OPENAI_API_KEY, etc.) currently still have to navigate the provider picker
or set CLAUDE_CODE_USE_* flags manually. Selecting the right provider from
ambient state should be automatic.

New module src/utils/providerAutoDetect.ts:

- detectProviderFromEnv() — synchronous env scan in a deterministic priority
  order (anthropic → codex → github → openai → gemini → mistral → minimax).
  Also detects Codex via ~/.codex/auth.json presence.
- detectLocalService() — parallel probes for Ollama (:11434) and LM Studio
  (:1234), honoring OLLAMA_BASE_URL / LM_STUDIO_BASE_URL overrides. A short
  1.2s default timeout keeps first-run latency low when no local service is
  running.
- detectBestProvider() — orchestrator. Env scan short-circuits the probe;
  only hits the network when env has nothing.

All detection paths are side-effect-free: returns a DetectedProvider
descriptor describing what was found and why. Callers decide whether to
apply it (gated on hasExplicitProviderSelection() / profile file existence)
and how to hydrate the launch env.

Codex auth-file check is injectable (hasCodexAuth option) so tests are
hermetic from the dev machine's ~/.codex/auth.json state.
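
The orchestration presumably reduces to something like this (types condensed,
detectors injected):

  type DetectedProvider = {
    provider: string
    source: 'env' | 'local-probe'
    reason: string
  }

  async function detectBestProvider(
    detectProviderFromEnv: () => DetectedProvider | null,
    detectLocalService: () => Promise<DetectedProvider | null>,
  ): Promise<DetectedProvider | null> {
    const fromEnv = detectProviderFromEnv()
    if (fromEnv) return fromEnv // env scan short-circuits the probe
    return detectLocalService() // network is touched only when env has nothing
  }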

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-21 23:37:04 +08:00
ArkhAngelLifeJiggy
268c0398e4 feat: add thinking token extraction (#798)
* feat: add thinking token tracking and historical analytics

- extractThinkingTokens(): separate thinking from output tokens
- TokenUsageTracker class for historical analytics
- Track: cache hit rate, most used model, requests per hour/day
- Analytics: average tokens per request, totals
- Add tests (7 passing)

PR 4B: Features 1.10 + 1.11

* refactor: extract thinking and analytics to separate files

- Create thinkingTokenExtractor.ts with ThinkingTokenAnalyzer
- Create tokenAnalytics.ts with TokenUsageTracker
- Add production-grade methods and tests
- Update test imports
2026-04-21 23:25:12 +08:00
nickmesen
761924daa7 fix: Collapse all-text arrays to string for DeepSeek compatibility (#806)
Fixes #774. When tool_result content contained multiple text blocks,
they were serialized as arrays instead of strings, causing DeepSeek
to reject the request with a 400 error.

Changes:
- convertToolResultContent: collapse all-text arrays to joined string
- convertContentBlocks: defensive collapse for user/assistant messages
- Arrays with images are preserved (not collapsed)

Tests: 3 new tests added, 53 pass, 0 fail
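
The collapse rule, sketched over Anthropic-style content blocks:

  type ContentBlock =
    | { type: 'text'; text: string }
    | { type: 'image'; source: unknown }

  function collapseAllTextContent(blocks: ContentBlock[]): string | ContentBlock[] {
    if (blocks.every((b): b is { type: 'text'; text: string } => b.type === 'text')) {
      return blocks.map(b => b.text).join('\n') // single string DeepSeek accepts
    }
    return blocks // arrays containing images are preserved, not collapsed
  }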

Co-authored-by: nick.mesen <nickmesen@users.noreply.github.com>
2026-04-21 23:17:12 +08:00
Kevin Codex
e908864da7 feat(api): smart model routing primitive (cheap-for-simple, strong-for-hard) (#785)
Most everyday turns ("ok", "thanks", "yep go ahead", "what does that do?")
get no measurable quality improvement from Opus-tier models over Haiku-tier,
but cost ~10x more and stream slower. Smart routing opts a user into
automatically routing obviously-simple turns to a cheaper model while
keeping the strong model for anything non-trivial.

New module src/services/api/smartModelRouting.ts:

- routeModel(input, config) → { model, complexity, reason }
- Pure primitive: no env reads, no state, caller supplies everything.
- Config is opt-in (enabled: false by default).

Routes to strong (conservative) when ANY of:
  - First turn of session (task-setup is worth the quality)
  - Code fence or inline code span present
  - Reasoning/planning keyword (plan, design, refactor, debug, architect,
    investigate, root cause, etc. — 20+ anchors)
  - Multi-paragraph input
  - Over char/word cutoff (defaults: 160 chars, 28 words; matches hermes)

Routes to simple only for clearly-trivial chatter.
Decision includes a reason string for a future UI indicator that shows
which tier handled the turn.

Integration into query path is intentionally deferred to a follow-up PR so
the heuristics can be reviewed and tuned in isolation first.
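
A condensed sketch of the decision logic the message describes (the real
module has 20+ keyword anchors and additional checks such as multi-paragraph
detection):

  function routeModel(
    input: { text: string; isFirstTurn: boolean },
    config: { enabled: boolean; simpleModel: string; strongModel: string },
  ): { model: string; complexity: 'simple' | 'complex'; reason: string } {
    const strong = (reason: string) => ({
      model: config.strongModel,
      complexity: 'complex' as const,
      reason,
    })
    if (!config.enabled) return strong('routing disabled (opt-in)')
    if (input.isFirstTurn) return strong('first turn of session')
    if (/```|`[^`]+`/.test(input.text)) return strong('code fence or inline code')
    if (/\b(plan|design|refactor|debug|architect|investigate)\b/i.test(input.text)) {
      return strong('reasoning/planning keyword')
    }
    if (input.text.length > 160 || input.text.trim().split(/\s+/).length > 28) {
      return strong('over char/word cutoff')
    }
    return { model: config.simpleModel, complexity: 'simple', reason: 'trivial chatter' }
  }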

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-21 21:50:24 +08:00
Kevin Codex
b95d2221df Feat/kimi moonshot support (#805)
* feat(provider): first-class Moonshot (Kimi) direct-API support

Moonshot's direct API (api.moonshot.ai/v1) is OpenAI-compatible and works
today via the generic OpenAI shim, including the reasoning_content channel
that Kimi returns alongside the user-visible content. But the UX was rough:
unknown context window triggered the conservative 128k fallback + a warning,
and the provider displayed as "Local OpenAI-compatible".

Makes Moonshot a recognized provider:

- src/utils/model/openaiContextWindows.ts: add the Kimi K2 family and
  moonshot-v1-* variants to both the context-window and max-output tables.
  Values from Moonshot's model card — K2.6 and K2-thinking are 256K,
  K2/K2-instruct are 128K, moonshot-v1 sizes are embedded in the model id.
- src/utils/providerDiscovery.ts: recognize the api.moonshot.ai hostname
  and label it "Moonshot (Kimi)" in the startup banner and provider UI.

Users can now launch with:

  CLAUDE_CODE_USE_OPENAI=1 \
  OPENAI_BASE_URL=https://api.moonshot.ai/v1 \
  OPENAI_API_KEY=sk-... \
  OPENAI_MODEL=kimi-k2.6 \
  openclaude

and get accurate compaction + correct labeling + correct max_tokens out
of the box.

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

* fix(openai-shim): Moonshot API compatibility — max_tokens + strip store

Moonshot's direct API (api.moonshot.ai and api.moonshot.cn) uses the
classic OpenAI `max_tokens` parameter, not the newer `max_completion_tokens`
that the shim defaults to. It also hasn't published support for `store`
and may reject it on strict-parse — same class of error as Gemini's
"Unknown name 'store': Cannot find field" 400.

- Adds isMoonshotBaseUrl() that recognizes both .ai and .cn hosts.
- Converts max_completion_tokens → max_tokens for Moonshot requests
  (alongside GitHub / Mistral / local providers).
- Strips body.store for Moonshot requests (alongside Mistral / Gemini).

Two shim tests cover both the .ai and .cn hostnames.
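
Sketched, the host check and request rewrite:

  function isMoonshotBaseUrl(baseUrl: string): boolean {
    try {
      const host = new URL(baseUrl).hostname
      return host.endsWith('moonshot.ai') || host.endsWith('moonshot.cn')
    } catch {
      return false
    }
  }

  function adaptBodyForMoonshot(body: Record<string, unknown>): void {
    if ('max_completion_tokens' in body) {
      body.max_tokens = body.max_completion_tokens // classic OpenAI parameter
      delete body.max_completion_tokens
    }
    delete body.store // unpublished support; strict parsers may 400 on it
  }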

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

* fix: null-safe access on getCachedMCConfig() in external builds

External builds stub src/services/compact/cachedMicrocompact.ts so
getCachedMCConfig() returns null, but two call sites still dereferenced
config.supportedModels directly. The ?. operator was in the wrong place
(config.supportedModels? instead of config?.supportedModels), so the null
config threw "Cannot read properties of null (reading 'supportedModels')"
on every request.

Reproduces with any external-build provider (notably Kimi/Moonshot just
enabled in the sibling commits, but equally DeepSeek, Mistral, Groq,
Ollama, etc.):

  ❯ hey
  ⏺ Cannot read properties of null (reading 'supportedModels')

- prompts.ts: early-return from getFunctionResultClearingSection() when
  config is null, before touching .supportedModels.
- claude.ts: guard the debug-log jsonStringify with ?. so the log line
  never throws.

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

* fix(startup): show "Moonshot (Kimi)" on the startup banner

The startup-screen provider detector had regex branches for OpenRouter,
DeepSeek, Groq, Together, Azure, etc., but nothing for Moonshot. Remote
Moonshot sessions fell through to the generic "OpenAI" label —
getLocalOpenAICompatibleProviderLabel() only runs for local URLs, and
api.moonshot.ai / api.moonshot.cn are not local.

Adds a Moonshot branch matching /moonshot/ in the base URL OR /kimi/ in
the model id. Sessions launched with:

  OPENAI_BASE_URL=https://api.moonshot.ai/v1 OPENAI_MODEL=kimi-k2.6

now display the Provider row as "Moonshot (Kimi)" instead of "OpenAI".

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

* refactor(provider): sort preset picker alphabetically; Custom at end

The /provider preset picker was in ad-hoc order (Anthropic, Ollama,
OpenAI, then a jumble of third-party / local / codex / Alibaba / custom /
nvidia / minimax). Hard to scan when you know the provider name you want.

Sorts the list alphabetically by label A→Z. Pins "Custom" to the end —
it's the catch-all / escape hatch so it's scanned last, not shuffled into
the alphabetical run where a user looking for a named provider might
grab it by mistake. First-run-only "Skip for now" stays at the very
bottom, after Custom.

Test churn:
- ProviderManager.test.tsx: four tests hardcoded press counts (1 or 3 'j'
  presses) that broke when targets moved. Replaces them with a
  navigateToPreset(stdin, label) helper driven from a declared
  PRESET_ORDER array, so future list edits only update the array.
- ConsoleOAuthFlow.test.tsx: the 13-row test frame only renders the first
  ~13 providers. "Ollama", "OpenAI", "LM Studio" sentinels moved below
  the fold; swap them for alphabetically-early providers still visible
  in-frame ("Azure OpenAI", "DeepSeek", "Google Gemini"). Test intent
  (picker opened with providers listed) is preserved.

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

---------

Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-21 21:20:54 +08:00
ArkhAngelLifeJiggy
2b15e16421 feat: add model caching and benchmarking utilities (#671)
* feat: add model caching and benchmarking utilities

- Add modelCache.ts for disk caching of model lists
- Add benchmark.ts for testing model speed/quality

* fix: address review feedback - async fs, multi-provider support, error handling

* feat: add /benchmark slash command and unit tests

* feat: add /benchmark slash command and unit tests
2026-04-21 18:36:16 +08:00
Nourrisse Florian
6a62e3ff76 feat: enable 15 additional feature flags in open build (#667)
* feat: enable 16 additional feature flags in open build

Activate features whose source is fully available in the mirror and
that have no Anthropic-internal infrastructure dependencies:

UI/UX: MESSAGE_ACTIONS, HISTORY_PICKER, QUICK_SEARCH, HOOK_PROMPTS
Reasoning: ULTRATHINK, TOKEN_BUDGET, SHOT_STATS
Agents: FORK_SUBAGENT, VERIFICATION_AGENT, MCP_SKILLS
Memory: EXTRACT_MEMORIES, AWAY_SUMMARY
Optimization: CACHED_MICROCOMPACT, PROMPT_CACHE_BREAK_DETECTION
Safety: TRANSCRIPT_CLASSIFIER
Debug: DUMP_SYSTEM_PROMPT

Also reorganize featureFlags into documented sections (disabled/upstream/new)
with inline comments explaining each flag's purpose.

* feat: add centralized GrowthBook defaults map for open build

Add _openBuildDefaults in the GrowthBook stub (no-telemetry-plugin.ts)
with all 66 runtime feature keys, organized by category with inline
comments describing each flag's purpose.

Override tengu_sedge_lantern (AWAY_SUMMARY) and tengu_hive_evidence
(VERIFICATION_AGENT) to true so these features work out of the box
without requiring manual ~/.claude/feature-flags.json setup.

Priority: feature-flags.json > _openBuildDefaults > upstream default

* feat: replace refusal language with positive security guidance

Remove refusal instructions from CYBER_RISK_INSTRUCTION since they are
redundant for Anthropic models (applied server-side) and useless for
uncensored models in multi-provider setups. Keep positive guidance for
security testing contexts and add red teaming support.

* Revert "feat: replace refusal language with positive security guidance"

This reverts commit 0463676a8f.

* fix: add EXTRACT_MEMORIES runtime gate overrides to open-build defaults

EXTRACT_MEMORIES was enabled at build-time but its runtime GrowthBook
gates (tengu_passport_quail, tengu_coral_fern) still defaulted to false,
preventing the feature from activating. Add both keys to
_openBuildDefaults so memory extraction works out of the box.

Also adds test coverage for _openBuildDefaults precedence behavior.

* docs: update GrowthBook runtime keys catalog to 88 keys

Expand the reference catalog in no-telemetry-plugin.ts from ~62 to 88
unique keys, covering all tengu_* call sites found in src/. Adds 27
previously undocumented keys including VSCode gates, dynamic configs
(auto-mode, cron, bridge), security gates, and KAIROS cron keys.

Adds "not exhaustive" disclaimer as suggested by Copilot reviewer.
Reorganizes categories with section dividers for readability.
2026-04-21 18:34:51 +08:00
3kin0x
06e7684eb5 fix(api): ensure strict role sequence and filter empty assistant messages after interruption (#745 regression) (#794) 2026-04-21 18:28:57 +08:00
Juan Camilo Auriti
ae3b723f3b fix(security): harden project settings trust boundary + MCP sanitization (#789)
* fix(security): harden project settings trust boundary + MCP sanitization

- Sanitize MCP tool result text with recursivelySanitizeUnicode() to prevent
  Unicode injection via malicious MCP servers (tool definitions and prompts
  were already sanitized, but tool call results were not)
- Read sandbox.enabled only from trusted settings sources (user, local, flag,
  policy) — exclude projectSettings to prevent malicious repos from silently
  disabling the sandbox via .claude/settings.json
- Disable git hooks in plugin marketplace clone/pull/submodule operations
  with core.hooksPath=/dev/null to prevent code execution from cloned repos
- Remove ANTHROPIC_FOUNDRY_API_KEY from SAFE_ENV_VARS to prevent credential
  injection from project-scoped settings without trust verification
- Add ssrfGuardedLookup to WebFetch HTTP requests to block DNS rebinding
  attacks that could reach cloud metadata or internal services

Security: closes trust boundary gap where project settings could override
security-critical configuration. Follows the existing pattern established
by hasAllowBypassPermissionsMode() which already excludes projectSettings.

Co-authored-by: auriti <auriti@users.noreply.github.com>

* fix(security): remove unauthenticated file-based permission polling

Remove the legacy file-based permission polling from useSwarmPermissionPoller
that read from ~/.claude/teams/{name}/permissions/resolved/ — an unauthenticated
directory where any local process could forge approval files to auto-approve
tool uses for swarm teammates.

The file polling was dead code:
- The useSwarmPermissionPoller() hook was never mounted by any component
- resolvePermission() (the file writer) was never imported outside its module
- Permission responses are delivered exclusively via the mailbox system:
  Leader: sendPermissionResponseViaMailbox() → writeToMailbox()
  Worker: useInboxPoller → processMailboxPermissionResponse()

Changes:
- Remove file polling loop, processResponse(), and React hook imports from
  useSwarmPermissionPoller.ts (now a pure callback registry module)
- Mark 7 file-based functions as @deprecated in permissionSync.ts
- Add 4 regression tests verifying the removal

No exported functions removed — only deprecated. All 5 consumer modules
verified: they import only mailbox-based functions that remain unchanged.

---------

Co-authored-by: auriti <auriti@users.noreply.github.com>
2026-04-21 18:28:03 +08:00
viudes
a6a3de5ac1 feat(api): compress old tool_result content for small-context providers (#801)
* feat(api): compress old tool_result content for small-context providers

Adds a shim-layer pass that tiers tool_result content by age on providers
with small effective context windows (Copilot gpt-4o 128k, Mistral, Ollama).
Recent turns remain full; mid-tier results are truncated to 2k chars; older
results are replaced with a stub that preserves tool name and arguments so
the model can re-invoke if needed.

Tier sizes auto-tune via getEffectiveContextWindowSize, the same calculation
used by auto-compact. Reuses COMPACTABLE_TOOLS and TOOL_RESULT_CLEARED_MESSAGE
to complement (not duplicate) microCompact. Configurable via /config
toolHistoryCompressionEnabled.

Addresses active-session context accumulation on Copilot where microCompact's
time-based trigger never fires, which surfaces as "tools appearing in a loop"
and prompt_too_long errors after ~15 turns.
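
In outline, with illustrative thresholds (the real tiers auto-tune from the
effective window):

  // Hypothetical tiering: ages measured in turns; constants are placeholders.
  function tierToolResult(
    text: string,
    ageInTurns: number,
    toolName: string,
    args: string,
  ): string {
    if (ageInTurns <= 3) return text // recent: full fidelity
    if (ageInTurns <= 10) return text.slice(0, 2_000) // mid-tier: 2k-char cap
    // Oldest: stub preserves tool name + arguments so the model can re-invoke.
    return `[tool result cleared: ${toolName}(${args})]`
  }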

* fix: config tool history
2026-04-21 17:36:26 +08:00
Juan Camilo Auriti
64582c119d fix: replace discontinued gemini-2.5-pro-preview-03-25 with stable gemini-2.5-pro (#802)
Updates both the model config mappings (configs.ts) and the runtime
fallback in getDefaultOpusModel() (model.ts) so Gemini mode no longer
falls back to the discontinued preview model when GEMINI_MODEL is unset.

Fixes #398
2026-04-21 17:01:33 +08:00
emsanakhchivan
85eab2751e fix(ui): prevent provider manager lag by deferring sync I/O (#803)
ProviderManager was blocking the main thread with synchronous file I/O
on mount (useState initializer), activation (setActiveProviderProfile),
and refresh (getProviderProfiles). This caused noticeable lag on Windows
where disk I/O can be slow due to antivirus scans, NTFS metadata, or
cache misses.

Changes to ProviderManager:
- Deferred initialization: useState now starts empty, loads via queueMicrotask
- Added isInitializing state with loading UI
- refreshProfiles() now defers reads via queueMicrotask
- activateSelectedProvider() now defers writes via queueMicrotask
- Memoized menuOptions array to prevent re-renders during navigation

Note: ProviderChooser useMemo change was reverted as it's dead code
(ProviderWizard is not used in production - /provider uses ProviderManager).
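
The deferral pattern, sketched as a hook (module path and profile type are
assumptions):

  import { useEffect, useState } from 'react'
  import { getProviderProfiles, type ProviderProfile } from './providerProfile' // assumed path

  function useDeferredProfiles() {
    const [profiles, setProfiles] = useState<ProviderProfile[]>([]) // was a sync read in the initializer
    const [isInitializing, setIsInitializing] = useState(true)
    useEffect(() => {
      queueMicrotask(() => {
        setProfiles(getProviderProfiles()) // sync disk read, now off the mount path
        setIsInitializing(false)
      })
    }, [])
    return { profiles, isInitializing }
  }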

Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
2026-04-21 17:00:58 +08:00
Zartris
4d4fb2880e fix: rename .claude.json to .openclaude.json with legacy fallback (#582)
* fix: rename .claude.json to .openclaude.json with legacy fallback

Rename the global config file from ~/.claude.json to ~/.openclaude.json,
following the same migration pattern as the config directory
(~/.claude → ~/.openclaude).

- getGlobalClaudeFile() now prefers .openclaude.json; falls back to
  .claude.json only if the legacy file exists and the new one does not
- Add .openclaude.json to filesystem permissions allowlist (keep
  .claude.json for legacy file protection)
- Update all comment/string references from ~/.claude.json to
  ~/.openclaude.json across 12 files

New installs get .openclaude.json from the start. Existing users
continue using .claude.json until they rename it (or a future explicit
migration).

* test: add unit tests for getGlobalClaudeFile migration branches

Covers the three cases:
- new install (neither file exists) → .openclaude.json
- existing user (only legacy .claude.json exists) → .claude.json
- migrated user (both files exist) → .openclaude.json

---------

Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
2026-04-20 17:13:09 +08:00
Zartris
fdef4a1b4c feat: native Anthropic API mode for Claude models on GitHub Copilot (#579)
* feat: native Anthropic API mode for Claude models on GitHub Copilot

When using Claude models through GitHub Copilot, automatically switch from
the OpenAI-compatible shim to Anthropic's native messages API format.

The Copilot proxy (api.githubcopilot.com) supports Anthropic's native API
for Claude models. This enables cache_control blocks to be sent and
honoured, allowing explicit prompt caching control (as opposed to relying
solely on server-side auto-caching).

Changes:
- Add isGithubNativeAnthropicMode() in providers.ts that auto-enables when
  the resolved model starts with "claude-" and the GitHub provider is active
- Create a native Anthropic client in client.ts using the GitHub base URL
  and Bearer token authentication when native mode is detected
- Enable prompt caching in claude.ts for native GitHub mode so cache_control
  blocks are sent (previously only allowed for firstParty/bedrock/vertex)
- CLAUDE_CODE_GITHUB_ANTHROPIC_API=1 env var to force native mode for any
  model

Benefits:
- Proper Anthropic message format (no lossy OpenAI translation)
- Explicit cache_control blocks for fine-grained caching control
- Potentially better Claude model behaviour with native format

Related: #515

* fix: scope force flag to Claude models and add isGithubNativeAnthropicMode tests

- CLAUDE_CODE_GITHUB_ANTHROPIC_API=1 now returns false for non-Claude models
  (force flag still useful for aliases like 'github:copilot' with no model
  resolved yet, where it returns true when model is empty)
- Add 7 focused tests covering mode detection: off without GitHub provider,
  auto-detect via OPENAI_MODEL and resolvedModel, non-Claude model rejection,
  and force-flag behaviour for claude/non-claude/no-model cases

* fix: detect github:copilot:claude- compound format, remove force flag

OPENAI_MODEL for GitHub Copilot uses the format 'github:copilot:MODEL'
(e.g. 'github:copilot:claude-sonnet-4'), which does not start with 'claude-'.
Auto-detection now handles both bare model names and the compound format.

The CLAUDE_CODE_GITHUB_ANTHROPIC_API force flag is removed: with proper
compound-format detection there is no remaining gap it could fill, and
keeping a broad override flag without a concrete use case invites misuse.

Tests updated to cover the compound format, generic alias (false), and
non-Claude compound model (github:copilot:gpt-4o → false).

* fix: use includes('claude-') for model detection, remove force flag

Detection was broken for the standard GitHub Copilot compound format
'github:copilot:claude-sonnet-4' which does not start with 'claude-'.
Using includes('claude-') handles bare names, compound names, and any
future variants without needing updates.
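
After this iteration the detection plausibly reduces to:

  function isGithubNativeAnthropicMode(model: string, isGithubProvider: boolean): boolean {
    // Matches bare names ('claude-sonnet-4') and the compound format
    // ('github:copilot:claude-sonnet-4'); non-Claude models stay on the shim.
    return isGithubProvider && model.includes('claude-')
  }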

The CLAUDE_CODE_GITHUB_ANTHROPIC_API force flag is removed as it was
a workaround for the broken detection, not a genuine use case.

---------

Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
2026-04-20 16:34:58 +08:00
nehan
4cb963e660 feat(api): improve local provider reliability with readiness and self-healing (#738)
* feat(api): classify openai-compatible provider failures

* Update src/services/api/providerConfig.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/errors.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* feat(api): harden openai-compatible diagnostics and env fallback

* Update src/services/api/openaiShim.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/openaiShim.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/errors.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/errors.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Apply suggestion from @Copilot

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix openaiShim duplicate requests and diagnostics

* remove unused url from http failure classifier

* dedupe env diagnostic warnings

* Remove hardcoded URLs from OpenAI error tests

Removed hardcoded URLs from network failure classification tests.

* Update providerConfig.envDiagnostics.test.ts

* fix(openai-shim): return successful responses and restore localhost classifier tests

* Update src/services/api/openaiShim.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/openaiShim.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/openaiShim.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* feat(provider): add truthful local generation readiness checks

Implement Phase 2 provider readiness behavior by adding structured Ollama generation probes, wiring setup flows to readiness states, extending system-check with generation readiness output, and updating focused tests.

* feat(api): add local self-healing fallback retries

Implement Phase 3 self-healing behavior for local OpenAI-compatible providers: retry base URL fallbacks for localhost resolution and endpoint mismatches, plus capability-gated toolless retry for tool-incompatible local models; include diagnostics and focused tests.

* fix(api): address review blockers for local provider reliability

* Update src/utils/providerDiscovery.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/openaiShim.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix: harden readiness probes and cross-platform test stability

* fix: refresh toolless retry payload and stabilize osc clipboard test

* fix: harden Ollama readiness parsing and redact provider URLs

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-20 16:24:02 +08:00
github-actions[bot]
b09972f223 chore(main): release 0.5.2 (#781)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-20 15:25:42 +08:00
Kevin Codex
336ddcc50d fix(api): replace phrase-based reasoning sanitizer with tag-based filter (#779)
Reasoning models (MiniMax M2.7, GLM-4.5/5, DeepSeek, Kimi K2) inline
chain-of-thought inside <think>...</think> tags in the content field
rather than using the reasoning_content channel. The prior phrase-matching
sanitizer (looksLikeLeakedReasoningPrefix) only caught English-prose
preambles like "I should"/"the user asked", missed tag-based leaks
entirely, and risked false-stripping legitimate assistant output.

Replace with a structural tag-based approach (same pattern as hermes-agent):

- createThinkTagFilter() — streaming state machine that buffers partial
  tags across SSE delta boundaries (<th| + |ink>), so tags split mid-chunk
  still parse correctly.
- stripThinkTags() — whole-text cleanup for non-streaming responses and
  as a safety net. Handles closed pairs, unterminated opens at block
  boundaries, and orphan tags.
- Recognizes think, thinking, reasoning, thought, REASONING_SCRATCHPAD
  case-insensitively, including tags with attributes.
- False-negative bias: flush() discards buffered partial tags at stream
  end rather than leaking them.

Existing phrase-based shim tests updated to exercise the actual <think>
tag leak. Added regression tests confirming legitimate prose starting
with "I should..." is preserved (the old sanitizer's main false-positive).

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 15:18:58 +08:00
github-actions[bot]
c0b8a59a23 chore(main): release 0.5.1 (#776)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-20 12:47:40 +08:00
Kevin Codex
aab489055c fix: require trusted approval for sandbox override (#778) 2026-04-20 12:01:44 +08:00
Kevin Codex
7002cb302b fix: enforce Bash path constraints after sandbox allow (#777) 2026-04-20 11:46:24 +08:00
Kevin Codex
739b8d1f40 fix: enforce MCP OAuth callback state before errors (#775) 2026-04-20 09:36:05 +08:00
github-actions[bot]
f166ec1a4e chore(main): release 0.5.0 (#758)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-20 08:30:58 +08:00
Kevin Codex
13e9f22a83 feat: mask provider api key input (#772) 2026-04-20 08:25:22 +08:00
Kevin Codex
f828171ef1 fix: allow provider recovery during startup (#765) 2026-04-20 06:46:05 +08:00
Allan Almeida
e6e8d9a248 feat: add OPENCLAUDE_DISABLE_STRICT_TOOLS env var to opt out of strict MCP tool schema normalization (#770)
When set, disables strict schema normalization for non-Gemini providers.
Useful for OpenAI-compatible endpoints that reject MCP tools with complex
optional params (e.g. list[dict]) with "Extra required key ... supplied"
errors.
2026-04-20 06:45:01 +08:00
Sreedhar Busanelli
2c98be7002 fix: remove cached mcpClient in diagnostic tracking to prevent stale references (#727)
* fix: remove cached mcpClient in diagnostic tracking to prevent stale references

Resolves TODO comment about not caching the connected mcpClient since it can change.

Changes:
- Remove cached mcpClient field from DiagnosticTrackingService
- Add currentMcpClients storage to track active clients
- Update beforeFileEdited, getNewDiagnostics, and ensureFileOpened to accept client parameter
- Add backward-compatible methods to maintain existing API
- Update all callers to use new methods
- Add comprehensive test coverage

This prevents using stale MCP client references during reconnections,
making diagnostic tracking more reliable.

Fixes #TODO

* docs: add my contributions section to README

Add fork-specific section highlighting:
- Diagnostic tracking enhancement (PR #727)
- Technical skills demonstrated
- Links to original project and my work
- Professional contribution showcase

* revert: remove README.md contributions section to comply with reviewer request

- Remove 'My Fork & Contributions' section from README.md
- Keep README.md focused on original project documentation
- Maintain clean, project-focused README as requested by reviewer
2026-04-19 09:02:52 +08:00
3kin0x
b786b765f0 fix(api): drop orphan tool results to satisfy strict role sequence (#745)
* fix(api): drop orphan tool results to satisfy Mistral/OpenAI strict role sequence

* test: add test for orphan tool results and restore gemini comments
2026-04-19 08:57:14 +08:00
bpawnzz
55c5f262a9 fix: use raw context window for auto-compact percentage display (#748)
Problem: After auto-compaction with DeepSeek models (e.g., deepseek-chat),
the status line displayed ~16% remaining until next auto-compact, but users
expected ~30% (since compaction reduces usage to roughly half of the full
128k context).

Root cause: calculateTokenWarningState() used the auto-compaction threshold
(effectiveContextWindow - 13k buffer) as the denominator for percentLeft.
For DeepSeek-chat:
- Raw context: 128,000
- Effective: 119,808 (128k - 8,192 output reservation)
- Threshold: 106,808 (effective - 13k buffer)
At 90k usage:
  - Old: (106,808 - 90k) / 106,808 ≈ 16%
  - Expected: (128,000 - 90k) / 128,000 ≈ 30%

Fix: Change percentLeft calculation to use raw context window from
getContextWindowForModel() as denominator, while keeping threshold-based
warnings/triggers unchanged. This makes the displayed percentage show
remaining capacity relative to the model's full context size.
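
The changed display calculation, in short:

  function displayPercentLeft(usedTokens: number, rawContextWindow: number): number {
    // e.g. DeepSeek-chat at 90k used: (128_000 - 90_000) / 128_000 ≈ 30% left.
    return Math.max(0, ((rawContextWindow - usedTokens) / rawContextWindow) * 100)
  }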

Impact:
- UI now shows correct % of total context remaining
- Auto-compaction trigger point unchanged (still ~90% of effective window)
- All other threshold calculations unaffected

Testing:
- Manual verification: DeepSeek-chat at 90k tokens shows 30% remaining (was 16%)
- Manual verification: Threshold still triggers at ~106k tokens
- Build succeeds: npm run build
- No breaking changes: Callers only depend on percentLeft for display; threshold logic unchanged

Fixes the user-reported discrepancy for DeepSeek and other OpenAI-compatible models.
2026-04-19 08:55:41 +08:00
Kagura
002a8f1f6d fix(mcp): sync required array with properties in tool schemas (#754)
* fix(mcp): sync required array with properties in tool schemas

MCP servers can emit schemas where the required array contains keys
not present in properties. This causes API 400 errors:
"Extra required key 'X' supplied."

- Add sanitizeSchemaRequired() to filter required arrays
- Apply it to MCP tool inputJSONSchema before sending to API
- Also fix filterSwarmFieldsFromSchema to update required after
  removing properties

Fixes #525
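
The sanitizer, in sketch form:

  type JsonSchema = { properties?: Record<string, unknown>; required?: string[] }

  function sanitizeSchemaRequired(schema: JsonSchema): JsonSchema {
    if (!schema.required) return schema
    const known = new Set(Object.keys(schema.properties ?? {}))
    // Drop required keys that have no matching property (the API 400 trigger).
    return { ...schema, required: schema.required.filter(key => known.has(key)) }
  }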

* test: add MCP schema required sanitization test
2026-04-19 06:44:25 +08:00
dhenuh
3d1979ff06 fix(help): prevent /help tab crash from undefined descriptions (#732)
- Guard formatDescriptionWithSource() so missing command descriptions become ''
- Harden truncate helpers to accept undefined text/path safely
- Add regression tests covering undefined input cases
2026-04-19 06:38:44 +08:00
lunamonke
b0d9fe7112 Provider loading fix (#623)
* add mistral and gemini provider type for profile provider field

* load latest locally selected

* env variables take precedence over json save

* add gemini context windows and fix gemini defaulting for env

* load on startup fix

* fix failing tests

* clarify test message

* fix variable mismatches

* fix failing test

* delete keys and set profile.apiKey for mistral and gemini

* switch model as well when switching provider

* set model when adding a new model
2026-04-18 01:46:20 +08:00
github-actions[bot]
651123db1f chore(main): release 0.4.0 (#704)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-17 19:36:32 +08:00
emsanakhchivan
34246635fb fix(ui): show correct endpoint URL in intro screen for custom Anthropic endpoints (#735)
Previously, the startup intro screen always displayed
'https://api.anthropic.com' as the endpoint for Anthropic provider,
even when a custom endpoint was configured via ANTHROPIC_BASE_URL.

This fix reads ANTHROPIC_BASE_URL from environment and displays the
actual configured endpoint, providing accurate information to users
about where their API requests will be sent (proxy gateways, staging,
custom Anthropic-compatible APIs).

Also adds isLocal detection for local endpoints to show appropriate
visual indicator in the startup banner.

Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
2026-04-17 19:06:47 +08:00
regisksc
43ac6dba75 feat: add Alibaba Coding Plan (DashScope) provider support (#509)
* feat: add Alibaba Coding Plan provider presets

* fix: add DashScope presets to ProviderManager UI selection list

* feat: read DASHSCOPE_API_KEY env var for DashScope provider presets

* adds regression testing for alibaba models

* docs: add time descriptive comment

* feat(dashscope): add qwen3.6-plus model support

* fix(dashscope): remove MiniMax-M2.5 entries to prevent future key conflicts
2026-04-17 19:06:21 +08:00
nehan
80a00acc2c feat(api): classify openai-compatible provider failures (#708)
* feat(api): classify openai-compatible provider failures

* Update src/services/api/providerConfig.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/errors.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* feat(api): harden openai-compatible diagnostics and env fallback

* Update src/services/api/openaiShim.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/openaiShim.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/errors.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/errors.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Apply suggestion from @Copilot

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix openaiShim duplicate requests and diagnostics

* remove unused url from http failure classifier

* dedupe env diagnostic warnings

* Remove hardcoded URLs from OpenAI error tests

Removed hardcoded URLs from network failure classification tests.

* Update providerConfig.envDiagnostics.test.ts

* fix(openai-shim): return successful responses and restore localhost classifier tests

* Update src/services/api/openaiShim.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/openaiShim.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src/services/api/openaiShim.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-17 18:01:40 +08:00
Andrei Parshin
eed77e6579 fix: prevent crash in commands tab when description is undefined (#730)
This commit fixes a crash in the CLI that occurs when navigating to the /help commands tab. The issue happens because the truncate function receives an undefined value for the str parameter if a command lacks a description, causing the .indexOf() method to throw an exception. To resolve this, an early return check was added at the beginning of the function to gracefully handle empty values and prevent the UI from crashing.
2026-04-17 13:57:40 +08:00
guanjiawei
b280c740a6 fix serialize git worktree mutations and forward teammate PATH (#721) 2026-04-16 21:44:56 +08:00
guanjiawei
2ff5710329 fix retry Codex and OpenAI fetches via proxy-aware helper (#720) 2026-04-16 21:42:14 +08:00
emsanakhchivan
d6f5130c20 fix: focus "Done" option after completing provider manager actions (#718)
When returning to the provider manager menu after completing an action
(add, edit, delete, set active, etc.), the cursor now lands on "Done"
instead of the first option ("Add provider"). This prevents accidental
re-entry into the same action if the user presses Enter quickly.

On initial /provider invocation, the cursor still starts on the first
option ("Add provider") as expected.

Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
2026-04-16 21:39:13 +08:00
Rubens Oliveira
d32a2a1329 docs: add Ollama launch integration documentation (#716)
Document the new `ollama launch openclaude` command as a shortcut
for running OpenClaude through a local Ollama instance. This is
now supported in Ollama's launch system and handles all environment
variable setup automatically — no manual env vars needed.

Changes:
- README.md: Add "Using Ollama's launch command" section after the
  manual Ollama env var setup, and update the provider table to
  list `ollama launch` as a setup path for Ollama
- docs/advanced-setup.md: Add `ollama launch` as the recommended
  method at the top of the Ollama section, with the manual env var
  approach kept below as an alternative
2026-04-16 21:23:44 +08:00
henriquepasquini2
fbcd928f7f feat(vscode): add full chat interface to OpenClaude extension (#608)
Add a Claude Code-like chat experience to the VS Code extension with:
- Streaming chat panel (sidebar + editor tab) with markdown rendering
- Tool use visualization with inline diffs (replace/with display)
- Session history browser with JSONL transcript parsing
- Thinking block indicator with elapsed time and token count
- Clickable file paths that open in the editor
- Permission mode setting (acceptEdits default)
- Multi-turn conversation support via NDJSON stream-json protocol
- Status bar with live activity indicators
- Ctrl+Shift+L keybinding to open chat panel

Made-with: Cursor

Co-authored-by: henriquepasquini2 <henriquepasquini2@users.noreply.github.com>
2026-04-16 05:04:31 +08:00
Yakout
77083d769b Fix/MCP exposure v2 TODO's (#675)
* fix: OAuth tokens secure storage for Windows & Linux

* fix(mcp): MCP Tool Re-exposure & Strict Input Validation

Fixes the MCP re-exposure bug by correctly handling tool deduplication, input validation with Ajv, and structured output (including images). Also disables experimental API betas by default to prevent 500 errors on external accounts.

* fix(mcp): skip official registry prefetch in non-first-party mode

Prevents unnecessary calls to Anthropic's MCP registry when using other API providers.

* fix(cli): disable experimental API betas by default

This prevents 500 errors from Anthropic's API when tool-calling with non-Anthropic accounts or models that don't support certain beta features.

* fix: issues raised in the PR review for #675
2026-04-16 05:03:06 +08:00
emsanakhchivan
b66633ea4d Feat/multi model provider support (#692)
* test: add tests for provider model env updates and multi-model profiles

Add comprehensive tests covering:
- OPENAI_MODEL/ANTHROPIC_MODEL env updates on provider activation
- Cross-provider type switches (openai ↔ anthropic) clearing stale env
- Multi-model profile activation using only the first model for env vars
- Model options cache population from comma-separated model lists
- getProfileModelOptions generating correct ModelOption arrays

* feat: multi-model provider support and model auto-switch

Support comma-separated model names in provider profiles (e.g.
"glm-4.7, glm-4.7-flash"). The first model is used as default on
activation; all models appear in the /model picker for easy switching.

When switching active providers, the session model now automatically
updates to the new provider's first model. The multi-model list is
preserved across switches and /model selections.

Changes:
- Add parseModelList, getPrimaryModel, hasMultipleModels utilities
  with full test coverage (19 tests)
- Use getPrimaryModel when applying profiles to process.env so only
  the primary model is set in OPENAI_MODEL/ANTHROPIC_MODEL
- Update ProviderManager UI to hint at multi-model syntax and show
  model count in provider list summaries
- Populate model options cache from multi-model profiles on activation
  so all models appear in /model picker regardless of base URL type
- Guard persistActiveProviderProfileModel against overwriting
  comma-separated lists: models already in the profile are session
  selections, not profile edits
- Set AppState.mainLoopModel to the actual model string on provider
  switch so Anthropic profiles use the configured model instead of
  falling back to the built-in default

* fix: only show profile models when provider profile env is applied

Guard the profile model picker options behind a
PROFILE_ENV_APPLIED check. getActiveProviderProfile() has a
?? profiles[0] fallback that returns the first profile even when
no profile is explicitly active, causing users with inactive
profiles to lose all standard model options (Opus, Haiku, etc.)
from the /model picker.

* fix: show all model names for profiles with 3 or fewer models

Instead of a summary format for multi-model profiles, display all
model names when there are 3 or fewer. Only use the "+ N more"
format for profiles with 4+ models.

* fix: preserve standard model options in picker alongside profile models

The previous implementation used an early return that replaced all
standard picker options (Opus, Haiku, Sonnet for Anthropic; Codex/GPT
models for OpenAI) with only the profile's custom models.

Changes:
- Collect profile models into a shared array instead of early returning
- Append profile models to firstParty path (Opus + Haiku + Sonnet + custom)
- Append profile models to PAYG 3P path (Codex + Sonnet + Opus + Haiku + custom)
- Guard collection behind PROFILE_ENV_APPLIED to avoid ?? profiles[0] fallback

Fixes review feedback: standard models are no longer hidden when a
provider profile with custom models is active. Users see both the
standard options and their profile's models.

---------

Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
2026-04-16 05:01:55 +08:00
ArkhAngelLifeJiggy
51191d6132 feat: add NVIDIA NIM and MiniMax provider support (#552)
* feat: add NVIDIA NIM and MiniMax provider support

- Add nvidia-nim and minimax to --provider CLI flag
- Add model discovery for NVIDIA NIM (160+ models) and MiniMax
- Update /model picker to show provider-specific models
- Fix provider detection in startup banner
- Update .env.example with new provider options

Supported providers:
- NVIDIA NIM: https://integrate.api.nvidia.com/v1
- MiniMax: https://api.minimax.io/v1

* fix: resolve conflict in StartupScreen (keep NVIDIA/MiniMax + add Codex detection)

* fix: resolve providerProfile conflict (add imports from main, keep NVIDIA/MiniMax)

* fix: revert providerSecrets to match main (NVIDIA/MiniMax handled elsewhere)

* fix: add context window entries for NVIDIA NIM and new MiniMax models

* fix: use GLM-5 as NVIDIA NIM default and MiniMax-M2.5 for consistency

* fix: address remaining review items - add GLM/Kimi context entries, max output tokens, fix .env.example, revert to Nemotron default

* fix: filter NVIDIA NIM picker to chat/instruct models only, set provider-specific API keys from saved profiles

* chore: add more NVIDIA NIM context window entries for popular models

* fix: address remaining non-blocking items - fix base model, clear provider API keys on profile switch
2026-04-15 20:26:13 +08:00
Jeevan Mohan Pawar
6b2121da12 fix(models): prevent /models crash from non-string saved model values (#691)
* fix(models): guard GitHub default model setting against non-string values

* test(models): avoid brittle GitHub default assertion in model guard test
2026-04-15 19:47:02 +08:00
dhenuh
c207cdbdcc ci: skip release-please on fork repositories (#701) 2026-04-15 19:46:39 +08:00
Nourrisse Florian
a00b7928de fix: strip comments before scanning for missing imports (#676)
* fix: strip comments before scanning for missing imports

The scanForMissingImports regex matched require() and import() patterns
inside JSDoc comments, causing false-positive missing module detection.
A documented path like `require('./commands/proactive.js')` in a comment
was resolved from the wrong directory, marked as missing, then the global
onResolve handler intercepted ALL imports of that specifier — including
valid ones — replacing them with truthy noop stubs that broke runtime.

Strip block (/* */) and line (//) comments from source before scanning.
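
A minimal sketch of that stripping step (naive for illustration: it
ignores comment-like text inside string literals, which the real scanner
may need to handle):

```ts
// Remove block and line comments before running the import/require scan,
// so documented paths in JSDoc no longer trigger false positives.
function stripComments(source: string): string {
  return source
    .replace(/\/\*[\s\S]*?\*\//g, '') // block comments: /* ... */
    .replace(/\/\/[^\n]*/g, '');      // line comments: // ...
}
```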

* fix: repair 10 pre-existing test failures

- promptIdentity.test.ts: define MACRO global (ISSUES_EXPLAINER etc.)
  for test mode where Bun.define build-time replacements aren't active
- context.test.ts: clear OPENAI_MODEL env var in each test — the user's
  environment (e.g. OPENAI_MODEL=github_copilot/gpt-5.4) polluted the
  provider-qualified lookup, returning wrong context windows
- openclaudePaths.test.ts: set CLAUDE_CONFIG_DIR to force .openclaude
  path when ~/.openclaude doesn't exist on the test machine
2026-04-15 19:42:26 +08:00
3kin0x
12dd3755c6 feat: add ripgrep to Dockerfile for faster file searching (#688) 2026-04-15 19:42:06 +08:00
dhenuh
114f772a4a tests: avoid global fetch mutation in GitHub device flow tests (#702) 2026-04-15 19:38:46 +08:00
Kevin Codex
7187fc007a docs: add Star History chart to README (#686)
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
2026-04-15 02:38:05 +08:00
Fexiven
0ed50ccfe7 Fix Docker deployment (#685)
* feat: add Docker image build and push to GHCR on release

Add Dockerfile (multi-stage build with node:22-slim) and a new docker
job in the release workflow that builds and pushes to ghcr.io when
release-please creates a tag.

* feat(docker): run as non-root user and add smoke test

Run the container as a non-root appuser to reduce blast radius.
Add a smoke test step that runs --version before pushing to GHCR.

* fix(docker): use existing node user instead of creating appuser

Closes #681
2026-04-15 01:22:08 +08:00
github-actions[bot]
131b31bf0e chore(main): release 0.3.0 (#661)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-14 19:24:31 +08:00
Nourrisse Florian
c1beea9867 feat: open useful USER_TYPE-gated features to all users (#644)
* feat: open useful USER_TYPE-gated features to all users

Remove 13 process.env.USER_TYPE === 'ant' gates that restricted useful
features to Anthropic employees. These features work without Anthropic
infrastructure and are now available to all open-build users.

Features opened:
- Agent nesting (sub-agents can spawn sub-agents)
- Effort 'max' persistence in settings
- Plan mode interview phase (controlled by feature flags)
- Sandbox disabled commands (via ~/.claude/feature-flags.json)
- All tips visible to all users (plan mode, feedback, shift-tab)

Simplified:
- Fullscreen defaults to off (use /config to enable)
- Explore agent always uses haiku model
- Plan mode tool uses conservative prompt for all users

Continues the USER_TYPE cleanup from #637 (dead code) and builds
on #639 (local feature flags).

* fix: address Copilot review comments — remove residual dead code

1. bridgeConfig.ts: ungate bridge override functions — return env vars
   directly instead of hardcoded undefined
2. bridgeMain.ts + initReplBridge.ts: ungate sessionIngressUrl — read
   CLAUDE_BRIDGE_SESSION_INGRESS_URL without USER_TYPE check
3. tools.ts: remove dead ConfigTool/TungstenTool imports, narrow
   eslint-disable scope, stub REPLTool/SuggestBackgroundPRTool to null
4. readOnlyValidation.ts: remove orphaned ANT_ONLY_COMMAND_ALLOWLIST
   and unused GH_READ_ONLY_COMMANDS import
5. insights.ts: remove entire remote collection plumbing (types,
   functions, options, display logic)
6. osc.ts: hardcode supportsTabStatus() to false (internal-only feature)
7. state.ts: simplify addSlowOperation/getSlowOperations to no-ops,
   remove dead constants

* fix: address Copilot review on PR #644

1. settings/types.ts: allow 'max' effort level for all users in Zod
   schema — was still gated behind USER_TYPE=ant, causing 'max' to be
   silently dropped on settings reload
2. shouldUseSandbox.ts: defensively normalize disabledCommands from
   feature flag config with Array.isArray() guards

* fix: address second round of Copilot review on PR #644

1. shouldUseSandbox.ts: validate top-level shape of disabledCommands
   before accessing properties (handles null/primitive from feature flag)
2. fullscreen.ts: update JSDoc to reflect removal of USER_TYPE default
3. osc.ts: update JSDoc — "Ant-only" → "Currently disabled"
2026-04-14 19:08:54 +08:00
Fexiven
658d076909 feat: add Docker image build and push to GHCR on release (#656)
* feat: add Docker image build and push to GHCR on release

Add Dockerfile (multi-stage build with node:22-slim) and a new docker
job in the release workflow that builds and pushes to ghcr.io when
release-please creates a tag.

* feat(docker): run as non-root user and add smoke test

Run the container as a non-root appuser to reduce blast radius.
Add a smoke test step that runs --version before pushing to GHCR.
2026-04-14 19:03:10 +08:00
Vasanth T
a07e5ef990 fix: bump axios 1.14.0 → 1.15.0 (Dependabot #4, #5) (#670)
* fix: bump axios 1.14.0 → 1.15.0 (Dependabot #4, #5)

Resolve two critical Dependabot alerts:
- #5: Unrestricted Cloud Metadata Exfiltration via Header Injection Chain
- #4: NO_PROXY Hostname Normalization Bypass Leads to SSRF

Both require axios >= 1.15.0.

* fix: update bun.lock for axios 1.15.0

CI failed with 'lockfile had changes, but lockfile is frozen'.
Regenerated lockfile after axios bump.

---------

Co-authored-by: root <root@vm7508.lumadock.com>
2026-04-14 19:00:55 +08:00
FluxLuFFy
25ce2ca7bf fix: resolve 12 bugs across API, MCP, agent tools, web search, and context overflow (#674)
* fix: resolve 12 bugs across API, MCP, agent tools, web search, and context overflow

API fixes:
- Fix Gemini 400 error: delete 'store: false' field for Gemini endpoints
  (was globally injected, Gemini rejects unknown fields)
- Fix session timeout 500 errors after ~25min: add 120s idle timeout
  on SSE stream readers in openaiShim and codexShim to detect dead
  connections and trigger withRetry reconnection
- Fix context overflow 500 errors: add handler in errors.ts for 500
  responses caused by oversized conversation context (too many tokens),
  surfacing user-friendly message with recovery actions instead of raw
  'API Error: 500'
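
A sketch of how such an idle-timeout read wrapper could look; a
readWithTimeout helper is named in the review notes further down, but this
body (including the abort handling added there) is an assumption:

```ts
// Reject if a single stream read stays idle past idleMs, so a dead SSE
// connection surfaces as an error that withRetry can act on.
async function readWithTimeout<T>(
  reader: ReadableStreamDefaultReader<T>,
  idleMs = 120_000,
  signal?: AbortSignal,
): Promise<ReadableStreamReadResult<T>> {
  return await new Promise((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`SSE stream idle for ${idleMs}ms`)),
      idleMs,
    );
    const onAbort = () => {
      clearTimeout(timer); // no timer leak; AbortError surfaces cleanly
      reject(new DOMException('Aborted', 'AbortError'));
    };
    signal?.addEventListener('abort', onAbort, { once: true });
    reader.read().then(resolve, reject).finally(() => {
      clearTimeout(timer);
      signal?.removeEventListener('abort', onAbort);
    });
  });
}
```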

Agent loop fix:
- Fix premature task completion: detect continuation signals like
  'so now I have to do it' in assistant text without tool calls and
  inject a meta nudge to force the agent to continue

Web search improvements:
- Increase result counts: Bing/Tavily/Exa/Firecrawl from 10→15,
  Mojeek/You/Jina from default→10 (explicit), max_uses 8→15

MCP fixes:
- Reduce default tool timeout from ~27.8 hours to 5 minutes
  (tools no longer hang indefinitely on unresponsive servers)
- Add retry logic (3 attempts) for tools/list fetch failures
  (prevents all MCP tools from silently disappearing on timeout)
- Add abort signal check in URL elicitation retry loop
- Improve MCP error messages with server and tool name context

Agent tool fixes:
- Fix SendMessage race condition: double-check task status before
  auto-resuming stopped agents to prevent duplicate registration
- Fix auto-compact circuit breaker gap: when auto-compact fails 3+
  consecutive times, proactively block oversized context BEFORE the
  API call instead of letting it 500. Clear message with recovery
  instructions (/new, /compact, rewind).

Tests: 850 total, 0 failures (25 new bugfix tests)

* fix: address all 4 review blockers + 6 additional issues from PR #674

Blockers (from Vasanthdev2004 review):

1. Continuation nudge infinite loop — no loop guard
   Added continuationNudgeCount to State, capped at MAX_CONTINUATION_NUDGES (3).
   Counter increments on each nudge, resets on tool execution (next_turn).

2. Continuation signal regexes too broad — high false-positive rate
   Tightened all patterns to require explicit action verbs. Added completion
   marker check (done/finished/completed/summary). Broad patterns only fire
   on messages <80 chars.

3. BUGFIXES.md in repo root — scope contamination
   Removed. PR description already contains this info.

4. AgentTool dump state cleanup is comment-only, not a bug fix
   Wrapped clearInvokedSkillsForAgent and clearDumpState in individual
   try/catch blocks so one failure doesn't prevent the other.

Additional issues:

5+6. readWithTimeout ignores AbortSignal, timer leak on abort
   Added optional signal param to openaiStreamToAnthropic,
   codexStreamToAnthropic, collectCodexCompletedResponse, readSseEvents.
   Added abort listener that clears idle timer so AbortError surfaces
   cleanly instead of spurious idle timeout.

7. MCP error format change breaks consumers
   Reverted human-readable message to original errorDetails format.
   Moved server/tool context to telemetryMessage param only.

10. AgentTool test broken by comment change
   Updated test assertions to match new defensive cleanup text + try/catch.

12. Mojeek test regex dangerously broad
   Tightened to match searchParams.set('t', '10') specifically.

14. linkup.ts in providerCounts test — no result count field
   Removed from providers list (uses depth param, not result count).

15. Error message overlap between errors.ts and query.ts
   Prefixed errorDetails with 'Context overflow (500):' to distinguish.

Tests: 851 pass, 0 fail

---------

Co-authored-by: openclaude-bot <bot@openclaude.ai>
Co-authored-by: Fix Bot <fix@openclaude.dev>
2026-04-14 18:59:53 +08:00
Kevin Codex
1741f32cb7 docs: add GitLawb mirror to README (#669) 2026-04-13 22:53:56 +08:00
Henrique Fernandes
fc7dc9ca0d Add Codex OAuth provider flow for ChatGPT account sign-in (#503)
* feat: add Codex OAuth provider flow

* fix: harden Codex OAuth storage, session activation, and UI
2026-04-13 22:34:16 +08:00
Nourrisse Florian
252808bbd0 feat: activate message actions in open build (#632)
Enable the MESSAGE_ACTIONS feature flag so open-build users get the
shift+up keybinding for the message actions panel.

Gate sites: src/keybindings/defaultBindings.ts, src/screens/REPL.tsx
(5 total). Pure UI/keybinding feature with zero external dependencies.
2026-04-13 21:48:29 +08:00
Nourrisse Florian
0e48884f56 feat: local feature flag overrides via ~/.claude/feature-flags.json (#639)
* feat: local feature flag overrides via ~/.claude/feature-flags.json

Replace the GrowthBook no-op stub with a local JSON file reader that
gives open-build users control over ~50 tengu_* feature flags without
needing Anthropic's GrowthBook server.

How it works:
- On first flag lookup, lazily reads ~/.claude/feature-flags.json
- Returns the configured value if the key exists, defaultValue otherwise
- When the file is absent, behavior is identical to the current stub
- CLAUDE_FEATURE_FLAGS_FILE env var overrides the file path (CI/testing)

Example ~/.claude/feature-flags.json:
  { "tengu_kairos_cron": true, "tengu_scratch": true }

Continues the infrastructure work from #315 and #352. This is a
prerequisite for replacing remaining USER_TYPE gates with local config.

* fix: use ESM imports and validate JSON shape in growthbook stub

- Replace require('fs'/'path'/'os') with ESM imports (node: prefix)
  to avoid ReferenceError in ESM bundle output
- Validate JSON.parse result is a plain object before using `in` operator
  to prevent TypeError on non-object JSON values

Addresses Copilot review comments on #639

* fix: reset flags cache in resetGrowthBook and refreshGrowthBookFeatures

Set _flags back to undefined so subsequent lookups re-read the JSON
file. Enables runtime reload and proper test isolation.

Addresses Copilot review comment on #639

* docs: explain why checkSecurityRestrictionGate is excluded from local flags

This is a remote killswitch for bypassPermissions mode — exposing it
via the local JSON file would let users accidentally disable
--dangerously-skip-permissions without understanding why.

* test: add unit tests for growthbook stub local feature flags

Covers: valid JSON loading, missing file fallback, malformed JSON,
non-object JSON (primitive, array), cache invalidation via
resetGrowthBook/refreshGrowthBookFeatures, all getter variants,
and checkSecurityRestrictionGate always returning false.

12 tests, 21 assertions.

* fix: use Object.hasOwn instead of in operator for flag lookup

Prevents inherited prototype properties (toString, constructor, etc.)
from being returned as flag values.

Addresses Copilot review comment on #639

* fix: align gate stub signatures and add Boolean coercion

Address remaining Copilot review feedback:
- checkSecurityRestrictionGate: accept gate param to match real signature
- checkStatsigFeatureGate/checkGate: coerce with Boolean() like real impl
2026-04-13 21:40:33 +08:00
Nourrisse Florian
b818dd5958 feat: implement Monitor tool for streaming shell output (#649)
* feat: implement Monitor tool for streaming shell output

Add the Monitor tool that executes shell commands in the background and
streams stdout line-by-line as notifications to the model. This enables
real-time monitoring of logs, builds, and long-running processes.

Implementation:
- MonitorTool (src/tools/MonitorTool/) — spawns LocalShellTask with
  kind='monitor', returns immediately with task ID
- MonitorMcpTask (src/tasks/MonitorMcpTask/) — task lifecycle management
  and agent cleanup via killMonitorMcpTasksForAgent()
- MonitorPermissionRequest — permission dialog component

The codebase already had all integration points wired (tools.ts, tasks.ts,
PermissionRequest.tsx, LocalShellTask kind='monitor', BashTool prompt).
This PR provides the missing implementations.
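
A minimal illustrative sketch of the streaming idea (not the shipped
MonitorTool, which goes through LocalShellTask), with notify() standing in
for the notification channel:

```ts
import { spawn } from 'node:child_process';
import { createInterface } from 'node:readline';

// Spawn the command in the background and forward each stdout line as a
// notification; the returned function is the cleanup hook, comparable in
// spirit to killMonitorMcpTasksForAgent().
function monitor(command: string, notify: (line: string) => void): () => void {
  const child = spawn(command, { shell: true });
  const rl = createInterface({ input: child.stdout! });
  rl.on('line', notify); // stream stdout line-by-line
  return () => child.kill();
}
```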

* fix: command-specific permission rule + architecture docs

- MonitorPermissionRequest: "don't ask again" now creates a
  command-prefix rule (like BashTool) instead of a blanket
  tool-name-only rule that would auto-allow all Monitor commands
- MonitorMcpTask: clarify architecture comments explaining why
  monitor_mcp type exists as a registry stub while actual tasks
  are local_bash with kind='monitor'

* fix: address Copilot review feedback

- Fix permission rule field: expression → ruleContent (Copilot #1)
- Handle empty command prefix: skip rule creation (Copilot #2)
- Remove unused useTheme() import (Copilot #3)
- Save permission rules under 'Bash' toolName so bashToolHasPermission
  can match them — Monitor delegates to Bash permission system (Copilot #4)
- Remove unused logError import from MonitorMcpTask (Copilot #6)
- Copilot #5 (getAppState throws): same pattern as BashTool:915, not a bug
2026-04-13 21:39:07 +08:00
Nourrisse Florian
24d485f42f feat: activate local-only team memory in open build (#648)
* feat: activate local-only team memory in open build

Enable the TEAMMEM feature flag and the isTeamMemoryEnabled() gate so
team memory works in local-only mode for all open-build users.

Team memory is a shared memory system scoped per-project, stored at
~/.claude/projects/<project>/memory/team/. The implementation is
already almost entirely local — extraction, UI, prompts, file
detection, and path validation all work on local files.

The cloud sync overlay (OAuth + API) is cleanly separated: the
watcher does an early return when OAuth is unavailable, so the
feature degrades gracefully to local-only storage with no crashes.

What works locally:
- Memory extraction (auto + team, combined prompts)
- Team MEMORY.md loaded into conversation context
- File selector with team memory folder option
- Collapse tracking (read/search/write counts)
- Secret scanning before persistence
- Path validation + symlink protection

What requires OAuth (not available in open build):
- Cloud sync between team members
- Automatic push/pull via file watcher

* fix: preserve opt-out gate for team memory via feature flag

Change isTeamMemoryEnabled() to read tengu_herring_clock with default
true instead of unconditional return true. This enables team memory by
default while preserving user opt-out via ~/.claude/feature-flags.json.
2026-04-13 21:29:10 +08:00
Nourrisse Florian
99a17144ee feat: activate coordinator mode in open build (#647)
* feat: activate coordinator mode in open build

Enable the COORDINATOR_MODE feature flag and create the missing
src/coordinator/workerAgent.ts module that provides worker agent
definitions for the coordinator.

Coordinator mode is a multi-agent system where a coordinator agent
orchestrates independent workers via AgentTool, SendMessageTool,
and TaskStopTool. The implementation was already 99% complete
(19KB coordinatorMode.ts, 26 gate sites across 15 files) — only
the workerAgent module was missing from the source snapshot.

Workers get the standard built-in agents (general-purpose, explore,
plan). The coordinator system prompt (252 lines) handles all
orchestration logic.

Activate at runtime: CLAUDE_CODE_COORDINATOR_MODE=1
Optional scratchpad: set {"tengu_scratch": true} in
~/.claude/feature-flags.json (#639)

* fix: add worker agent type for coordinator mode

The coordinator system prompt instructs the model to spawn workers with
subagent_type: "worker", but no agent had agentType === 'worker'.
This caused AgentTool to throw "Agent type 'worker' not found" on
every coordinator spawn attempt.

Add a WORKER_AGENT definition that spreads GENERAL_PURPOSE_AGENT with
agentType: 'worker'. Also use the narrower BuiltInAgentDefinition type.

* feat: activate built-in explore and plan agents in open build

Enable BUILTIN_EXPLORE_PLAN_AGENTS so Explore (fast, haiku, read-only)
and Plan (architect, read-only) agents are available to all users in
both normal and coordinator modes.

This resolves the inconsistency flagged in code review: coordinator
workers had access to Explore/Plan agents while normal sessions did not.

The GrowthBook A/B test gate (tengu_amber_stoat) defaults to true via
the no-telemetry stub. Users can disable via feature-flags.json (#639).
2026-04-13 21:19:57 +08:00
muhnehh
df2b9f2b7b fix: improve fetch diagnostics for bootstrap and session requests (#646)
* fix: improve fetch diagnostics for bootstrap and session requests

* chore: derive session timeout from shared constant
2026-04-13 21:17:12 +08:00
Nourrisse Florian
adbe391e63 fix: replace broken bun:bundle shim with source pre-processing (#657)
* fix: replace broken bun:bundle shim with source pre-processing

The `onResolve`/`onLoad` plugin shim for `bun:bundle` was silently
ineffective in Bun v1.3.9+ — the `bun:` namespace is resolved by
Bun's native C++ resolver before the JS plugin phase runs. This meant
ALL `feature()` flags evaluated to `false` regardless of the
`featureFlags` map in build.ts (including `MONITOR_TOOL: true`).

Replace the shim with a source pre-processing step that:
1. Strips `import { feature } from 'bun:bundle'` from .ts/.tsx files
2. Replaces `feature('FLAG')` calls with boolean literals
3. Restores original files in a `finally` block after Bun.build()

Also extend the missing-module scanner to detect `require()` and
dynamic `import()` calls — not just static `import ... from` — since
modules behind feature() gates become resolvable when flags are enabled.
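
A condensed sketch of the replacement step (the real build.ts also
snapshots the files and restores them in a `finally` block):

```ts
// Inline feature() calls as boolean literals and drop the bun:bundle
// import that Bun's native resolver would otherwise claim.
function inlineFeatureFlags(
  source: string,
  featureFlags: Record<string, boolean>,
): string {
  return source
    .replace(/import\s*\{\s*feature\s*\}\s*from\s*['"]bun:bundle['"];?\n?/g, '')
    .replace(/feature\(\s*['"]([A-Z0-9_]+)['"]\s*\)/g, (_, flag) =>
      String(featureFlags[flag] ?? false),
    );
}
```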

* fix: ensure source files are always restored after build

- Add SIGINT/SIGTERM handlers to restore pre-processed source files
  on abrupt termination (Ctrl+C, kill)
- Replace process.exit(1) with process.exitCode = 1 so the finally
  block runs on build failure
2026-04-13 21:07:08 +08:00
emsanakhchivan
03e0b06e07 fix: extend provider guard to protect anthropic profiles from cross-terminal override (#641)
The provider profile activation guard in applyActiveProviderProfileFromConfig()
only checked CLAUDE_CODE_USE_* environment flags, which are never set for the
default anthropic provider. This allowed two terminals sharing ~/.claude.json
to overwrite each other's active provider when one was using anthropic and
the other a third-party provider.

Now also checks the OCODE_PROVIDER_PROFILE_APPLIED flag, which is set by all
profiles including anthropic, preventing cross-terminal interference.
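
Sketched, the extended guard reduces to something like this (function name
assumed):

```ts
// A profile is already applied in this shell if either the provider env
// flags or the universal applied marker are set; in that case the config
// file must not re-apply another terminal's active provider.
function profileAlreadyApplied(env: NodeJS.ProcessEnv): boolean {
  if (env.OCODE_PROVIDER_PROFILE_APPLIED === '1') return true; // all profiles, incl. anthropic
  return Object.keys(env).some(
    (key) => key.startsWith('CLAUDE_CODE_USE_') && env[key] === '1',
  );
}
```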

Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
2026-04-13 20:22:50 +08:00
Nourrisse Florian
31be66d764 feat: add allowBypassPermissionsMode setting (#658)
* feat: add allowBypassPermissionsMode setting

Allow bypass permissions mode to appear in the mode list via
settings.json without requiring the --allow-dangerously-skip-permissions
CLI flag. The disableBypassPermissionsMode setting retains priority.

* fix: address Copilot review feedback on allowBypassPermissionsMode

- Security: read allowBypassPermissionsMode only from trusted settings
  sources (user/local/flag/policy), excluding projectSettings to prevent
  a malicious repo from enabling bypass mode
- UX: update error messages to reference the correct CLI flag
  (--allow-dangerously-skip-permissions) and the new settings option
- Tests: add schema validation tests for the new field
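
A sketch of the resulting precedence (the setting names come from this
commit; the merge logic is an assumption):

```ts
interface PermissionSettings {
  allowBypassPermissionsMode?: boolean;
  disableBypassPermissionsMode?: boolean;
}

// trusted must be merged from user/local/flag/policy sources only, never
// projectSettings, so a malicious repo cannot enable bypass mode.
function bypassModeAvailable(
  trusted: PermissionSettings,
  cliFlag: boolean, // --allow-dangerously-skip-permissions
): boolean {
  if (trusted.disableBypassPermissionsMode) return false; // retains priority
  return cliFlag || trusted.allowBypassPermissionsMode === true;
}
```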
2026-04-13 20:05:21 +08:00
Meetpatel006
7c8bdcc3e2 fix: route OpenAI Codex shortcuts to correct endpoint (#566)
* feat: enhance codex provider resolution with shortcut aliases and improved base URL handling

* fix: enhance codex alias resolution to include shell model

* feat: enhance Codex provider resolution to support new aliases and base URL handling

* fix: update base URL resolution logic for Codex models in GitHub mode

* fix: update provider transport logic to enforce Codex responses and adjust base URL handling

* fix: update provider request resolution to respect custom base URLs and adjust transport logic

* fix: restore OPENAI_MODEL environment variable handling in tests and provider config
2026-04-13 18:31:15 +08:00
Khaled Moayad
64298a663f feat: implement /loop command with fixed and dynamic scheduling (#621)
* feat: implement /loop command with fixed and dynamic scheduling modes

Enable cron tools and /loop skill without the AGENT_TRIGGERS build flag
by removing feature guards from tools.ts, REPL.tsx, and skill registration.
The isKairosCronEnabled() runtime gate now enables cron unconditionally for
open builds while preserving the GrowthBook kill switch for ant builds.

The /loop skill supports four modes: fixed-interval with prompt, fixed-interval
maintenance, dynamic-prompt (self-pacing), and dynamic maintenance (bare /loop).

* chore: remove unused DEFAULT_INTERVAL constant from loop skill

* revert: drop infra changes, scope PR to /loop skill rewrite only

The cron activation layer (AGENT_TRIGGERS guard removal, isKairosCronEnabled
hardcode) is covered by an in-flight stack (#633, #639). Scope this PR to
just the loop.ts rewrite and its tests so it can land cleanly on top.

* fix: restore infra changes needed for /loop in open build

Bun's constant folder evaluates feature('AGENT_TRIGGERS') at bundle time
through the bun:bundle shim — even when the flag is flipped to true in
build.ts, the folded value is cached from the previous build and stays false.
This means the feature-gated require() blocks for cron tools, useScheduledTasks,
and loop skill registration all compile to dead code regardless of the flag.

Fix by removing the AGENT_TRIGGERS guards from the specific paths /loop needs:
- tools.ts: cron tools always registered (isEnabled gates visibility)
- REPL.tsx: useScheduledTasks always mounted
- index.ts: registerLoopSkill via static import, called unconditionally
- prompt.ts: isKairosCronEnabled() bypasses feature flag for non-ant builds

* fix: replace backslash line continuations with explicit delimiters in loop prompts

The backslash-newline sequences inside template literals were acting as
line continuations, collapsing newlines and merging prompt content with
surrounding instruction text. Replace with --- BEGIN/END --- markers
for unambiguous delimiting.

Also add tests for trailing "every" clause parsing, human-readable unit
normalization, and the non-interval "check every PR" case.
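
A hypothetical sketch of trailing-"every" clause parsing consistent with
those tests (the real loop.ts supports more modes):

```ts
const UNIT_MS: Record<string, number> = {
  second: 1_000, minute: 60_000, hour: 3_600_000, day: 86_400_000,
};

// "review the queue every 5 minutes" -> interval loop;
// "check every PR" has no time unit, so it stays a plain prompt.
function parseTrailingEvery(input: string): { prompt: string; intervalMs?: number } {
  const m = input.match(/\bevery\s+(\d+)?\s*(second|minute|hour|day)s?\s*$/i);
  if (!m) return { prompt: input.trim() };
  const count = m[1] ? Number(m[1]) : 1;
  return {
    prompt: input.slice(0, m.index).trim(),
    intervalMs: count * UNIT_MS[m[2].toLowerCase()],
  };
}
```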

* fix: remove remaining AGENT_TRIGGERS guards from print.ts and constants/tools.ts

Completes the cron guard removal started in the previous commit.
The cron scheduler in non-interactive (-p) mode was dead because
print.ts still gated cronSchedulerModule/cronGate requires behind
feature('AGENT_TRIGGERS'), which Bun constant-folds to false in open
builds. Similarly, cron tool names were absent from
IN_PROCESS_TEAMMATE_ALLOWED_TOOLS.

Remove all three guards so the scheduler initialises (gated at runtime
by isKairosCronEnabled) and cron tools are allowed for in-process
teammates in all builds.
2026-04-13 18:28:42 +08:00
Juan Camilo Auriti
30c866d31a fix(openai-shim): preserve tool result images and local token caps (#659)
Keep tool-result images as real image_url parts for OpenAI-compatible requests and use max_tokens for local providers like Ollama and LM Studio.
2026-04-13 18:20:05 +08:00
github-actions[bot]
f6a4455ecf chore(main): release 0.2.3 (#638)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-13 02:06:34 +08:00
Vasanth T
aeaa658f77 fix: prevent infinite auto-compact loop for unknown 3P models (#635) (#636)
- Raise context window fallback from 8k to 128k for unknown OpenAI-compat models.
  The 8k fallback caused effective context (8k minus output reservation) to go
  negative, making auto-compact fire on every single message.
- Add safety floor in getEffectiveContextWindowSize(): effective context is
  always at least reservedTokensForSummary + 13k buffer, ensuring the
  auto-compact threshold stays positive.
- Add missing MiniMax model entries (M2.5, M2.5-highspeed, M2.1, M2.1-highspeed)
  all at 204,800 context / 131,072 max output per MiniMax docs.
- Add tests for MiniMax variants, 128k fallback, and autoCompact floor.
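
Sketched, the floor amounts to this (constant names and the parameter
shape are assumptions):

```ts
const FALLBACK_CONTEXT_WINDOW = 128_000; // raised from 8k for unknown models
const SAFETY_BUFFER = 13_000;

function getEffectiveContextWindowSize(
  contextWindow: number | undefined,
  reservedOutputTokens: number,
  reservedTokensForSummary: number,
): number {
  const window = contextWindow ?? FALLBACK_CONTEXT_WINDOW;
  // Effective context never drops below the summary reservation plus a
  // buffer, so the auto-compact threshold stays positive.
  return Math.max(
    window - reservedOutputTokens,
    reservedTokensForSummary + SAFETY_BUFFER,
  );
}
```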

Fixes #635

Co-authored-by: root <root@vm7508.lumadock.com>
2026-04-13 02:03:02 +08:00
250 changed files with 26745 additions and 2648 deletions

.dockerignore Normal file

@@ -0,0 +1,16 @@
node_modules
dist
.git
.gitignore
.env
.env.*
!.env.example
coverage
reports
vscode-extension
python
docs
*.md
!README.md
.github
.tsbuildinfo

@@ -225,6 +225,30 @@ ANTHROPIC_API_KEY=sk-ant-your-key-here
# GOOGLE_CLOUD_PROJECT=your-gcp-project-id
# -----------------------------------------------------------------------------
# Option 9: NVIDIA NIM
# -----------------------------------------------------------------------------
# NVIDIA NIM provides hosted inference endpoints for NVIDIA models.
# Get your API key from https://build.nvidia.com/
#
# CLAUDE_CODE_USE_OPENAI=1
# NVIDIA_API_KEY=nvapi-your-key-here
# OPENAI_BASE_URL=https://integrate.api.nvidia.com/v1
# OPENAI_MODEL=nvidia/llama-3.1-nemotron-70b-instruct
# -----------------------------------------------------------------------------
# Option 10: MiniMax
# -----------------------------------------------------------------------------
# MiniMax API provides text generation models.
# Get your API key from https://platform.minimax.io/
#
# CLAUDE_CODE_USE_OPENAI=1
# MINIMAX_API_KEY=your-minimax-key-here
# OPENAI_BASE_URL=https://api.minimax.io/v1
# OPENAI_MODEL=MiniMax-M2.5
# =============================================================================
# OPTIONAL TUNING
# =============================================================================
@@ -243,6 +267,11 @@ ANTHROPIC_API_KEY=sk-ant-your-key-here
# Disable "Co-authored-by" line in git commits made by OpenClaude
# OPENCLAUDE_DISABLE_CO_AUTHORED_BY=1
# Disable strict tool schema normalization for non-Gemini providers
# Useful when MCP tools with complex optional params (e.g. list[dict])
# trigger "Extra required key ... supplied" errors from OpenAI-compatible endpoints
# OPENCLAUDE_DISABLE_STRICT_TOOLS=1
# Custom timeout for API requests in milliseconds (default: varies)
# API_TIMEOUT_MS=60000

@@ -11,6 +11,7 @@ concurrency:
jobs:
  release-please:
    if: ${{ github.repository == 'Gitlawb/openclaude' }}
    name: Release Please
    runs-on: ubuntu-latest
    permissions:
@@ -86,3 +87,58 @@ jobs:
echo "- npm: https://www.npmjs.com/package/@gitlawb/openclaude"
echo "- GitHub: https://github.com/Gitlawb/openclaude/releases/tag/${{ needs.release-please.outputs.tag_name }}"
} >> "$GITHUB_STEP_SUMMARY"
docker:
name: Build & Push Docker Image
needs: release-please
if: ${{ needs.release-please.outputs.release_created == 'true' }}
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout release tag
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ needs.release-please.outputs.tag_name }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Log in to GitHub Container Registry
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: ghcr.io/${{ github.repository }}
tags: |
type=semver,pattern={{version}},value=${{ needs.release-please.outputs.version }}
type=semver,pattern={{major}}.{{minor}},value=${{ needs.release-please.outputs.version }}
type=raw,value=latest
- name: Build and load locally
uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
with:
context: .
load: true
tags: openclaude:smoke
cache-from: type=gha
- name: Smoke test
run: docker run --rm openclaude:smoke --version
- name: Build and push
uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max

.gitignore vendored

@@ -7,6 +7,8 @@ dist/
.openclaude-profile.json
reports/
GEMINI.md
CLAUDE.md
package-lock.json
/.claude
coverage/
agent.log

@@ -1,3 +1,3 @@
{
".": "0.2.2"
".": "0.6.0"
}

@@ -1,5 +1,120 @@
# Changelog
## [0.6.0](https://github.com/Gitlawb/openclaude/compare/v0.5.2...v0.6.0) (2026-04-22)
### Features
* add model caching and benchmarking utilities ([#671](https://github.com/Gitlawb/openclaude/issues/671)) ([2b15e16](https://github.com/Gitlawb/openclaude/commit/2b15e16421f793f954a92c53933a07094544b29d))
* add thinking token extraction ([#798](https://github.com/Gitlawb/openclaude/issues/798)) ([268c039](https://github.com/Gitlawb/openclaude/commit/268c0398e4bf1ab898069c61500a2b3c226a0322))
* **api:** compress old tool_result content for small-context providers ([#801](https://github.com/Gitlawb/openclaude/issues/801)) ([a6a3de5](https://github.com/Gitlawb/openclaude/commit/a6a3de5ac155fe9d00befbfcab98d439314effd8))
* **api:** improve local provider reliability with readiness and self-healing ([#738](https://github.com/Gitlawb/openclaude/issues/738)) ([4cb963e](https://github.com/Gitlawb/openclaude/commit/4cb963e660dbd6ee438c04042700db05a9d32c59))
* **api:** smart model routing primitive (cheap-for-simple, strong-for-hard) ([#785](https://github.com/Gitlawb/openclaude/issues/785)) ([e908864](https://github.com/Gitlawb/openclaude/commit/e908864da7e7c987a98053ac5d18d702e192db2b))
* enable 15 additional feature flags in open build ([#667](https://github.com/Gitlawb/openclaude/issues/667)) ([6a62e3f](https://github.com/Gitlawb/openclaude/commit/6a62e3ff76ba9ba446b8e20cf2bb139ee76a9387))
* native Anthropic API mode for Claude models on GitHub Copilot ([#579](https://github.com/Gitlawb/openclaude/issues/579)) ([fdef4a1](https://github.com/Gitlawb/openclaude/commit/fdef4a1b4ce218ded4937ca83b30acce7c726472))
* **provider:** expose Atomic Chat in /provider picker with autodetect ([#810](https://github.com/Gitlawb/openclaude/issues/810)) ([ee19159](https://github.com/Gitlawb/openclaude/commit/ee19159c17b3de3b4a8b4a4541a6569f4261d54e))
* **provider:** zero-config autodetection primitive ([#784](https://github.com/Gitlawb/openclaude/issues/784)) ([a5bfcbb](https://github.com/Gitlawb/openclaude/commit/a5bfcbbadf8e9a1fd42f3e103d295524b8da64b0))
### Bug Fixes
* **api:** ensure strict role sequence and filter empty assistant messages after interruption ([#745](https://github.com/Gitlawb/openclaude/issues/745) regression) ([#794](https://github.com/Gitlawb/openclaude/issues/794)) ([06e7684](https://github.com/Gitlawb/openclaude/commit/06e7684eb56df8e694ac784575e163641931c44c))
* Collapse all-text arrays to string for DeepSeek compatibility ([#806](https://github.com/Gitlawb/openclaude/issues/806)) ([761924d](https://github.com/Gitlawb/openclaude/commit/761924daa7e225fe8acf41651408c7cae639a511))
* **model:** codex/nvidia-nim/minimax now read OPENAI_MODEL env ([#815](https://github.com/Gitlawb/openclaude/issues/815)) ([4581208](https://github.com/Gitlawb/openclaude/commit/458120889f6ce54cc9f0b287461d5e38eae48a20))
* **provider:** saved profile ignored when stale CLAUDE_CODE_USE_* in shell ([#807](https://github.com/Gitlawb/openclaude/issues/807)) ([13de4e8](https://github.com/Gitlawb/openclaude/commit/13de4e85df7f5fadc8cd15a76076374dc112360b))
* rename .claude.json to .openclaude.json with legacy fallback ([#582](https://github.com/Gitlawb/openclaude/issues/582)) ([4d4fb28](https://github.com/Gitlawb/openclaude/commit/4d4fb2880e4d0e3a62d8715e1ec13d932e736279))
* replace discontinued gemini-2.5-pro-preview-03-25 with stable gemini-2.5-pro ([#802](https://github.com/Gitlawb/openclaude/issues/802)) ([64582c1](https://github.com/Gitlawb/openclaude/commit/64582c119d5d0278195271379da4a68d59a89c1f)), closes [#398](https://github.com/Gitlawb/openclaude/issues/398)
* **security:** harden project settings trust boundary + MCP sanitization ([#789](https://github.com/Gitlawb/openclaude/issues/789)) ([ae3b723](https://github.com/Gitlawb/openclaude/commit/ae3b723f3b297b49925cada4728f3174aee8bf12))
* **test:** autoCompact floor assertion is flag-sensitive ([#816](https://github.com/Gitlawb/openclaude/issues/816)) ([c13842e](https://github.com/Gitlawb/openclaude/commit/c13842e91c7227246520955de6ae0636b30def9a))
* **ui:** prevent provider manager lag by deferring sync I/O ([#803](https://github.com/Gitlawb/openclaude/issues/803)) ([85eab27](https://github.com/Gitlawb/openclaude/commit/85eab2751e7d351bb0ed6a3fe0e15461d241c9cb))
## [0.5.2](https://github.com/Gitlawb/openclaude/compare/v0.5.1...v0.5.2) (2026-04-20)
### Bug Fixes
* **api:** replace phrase-based reasoning sanitizer with tag-based filter ([#779](https://github.com/Gitlawb/openclaude/issues/779)) ([336ddcc](https://github.com/Gitlawb/openclaude/commit/336ddcc50d59d79ebff50993f2673652aecb0d7d))
## [0.5.1](https://github.com/Gitlawb/openclaude/compare/v0.5.0...v0.5.1) (2026-04-20)
### Bug Fixes
* enforce Bash path constraints after sandbox allow ([#777](https://github.com/Gitlawb/openclaude/issues/777)) ([7002cb3](https://github.com/Gitlawb/openclaude/commit/7002cb302b78ea2a19da3f26226de24e2903fa1d))
* enforce MCP OAuth callback state before errors ([#775](https://github.com/Gitlawb/openclaude/issues/775)) ([739b8d1](https://github.com/Gitlawb/openclaude/commit/739b8d1f40fde0e401a5cbd2b9a55d88bd5124ad))
* require trusted approval for sandbox override ([#778](https://github.com/Gitlawb/openclaude/issues/778)) ([aab4890](https://github.com/Gitlawb/openclaude/commit/aab489055c53dd64369414116fe93226d2656273))
## [0.5.0](https://github.com/Gitlawb/openclaude/compare/v0.4.0...v0.5.0) (2026-04-20)
### Features
* add OPENCLAUDE_DISABLE_STRICT_TOOLS env var to opt out of strict MCP tool schema normalization ([#770](https://github.com/Gitlawb/openclaude/issues/770)) ([e6e8d9a](https://github.com/Gitlawb/openclaude/commit/e6e8d9a24897e4c9ef08b72df20fabbf8ef27f38))
* mask provider api key input ([#772](https://github.com/Gitlawb/openclaude/issues/772)) ([13e9f22](https://github.com/Gitlawb/openclaude/commit/13e9f22a83a2b0f85f557b1e12c9442ba61241e4))
### Bug Fixes
* allow provider recovery during startup ([#765](https://github.com/Gitlawb/openclaude/issues/765)) ([f828171](https://github.com/Gitlawb/openclaude/commit/f828171ef1ab94e2acf73a28a292799e4e26cc0d))
* **api:** drop orphan tool results to satisfy strict role sequence ([#745](https://github.com/Gitlawb/openclaude/issues/745)) ([b786b76](https://github.com/Gitlawb/openclaude/commit/b786b765f01f392652eaf28ed3579a96b7260a53))
* **help:** prevent /help tab crash from undefined descriptions ([#732](https://github.com/Gitlawb/openclaude/issues/732)) ([3d1979f](https://github.com/Gitlawb/openclaude/commit/3d1979ff066db32415e0c8321af916d81f5f2621))
* **mcp:** sync required array with properties in tool schemas ([#754](https://github.com/Gitlawb/openclaude/issues/754)) ([002a8f1](https://github.com/Gitlawb/openclaude/commit/002a8f1f6de2fcfc917165d828501d3047bad61f))
* remove cached mcpClient in diagnostic tracking to prevent stale references ([#727](https://github.com/Gitlawb/openclaude/issues/727)) ([2c98be7](https://github.com/Gitlawb/openclaude/commit/2c98be700274a4241963b5f43530bf3bd8f8963f))
* use raw context window for auto-compact percentage display ([#748](https://github.com/Gitlawb/openclaude/issues/748)) ([55c5f26](https://github.com/Gitlawb/openclaude/commit/55c5f262a9a5a8be0aa9ae8dc6c7dafc465eb2c6))
## [0.4.0](https://github.com/Gitlawb/openclaude/compare/v0.3.0...v0.4.0) (2026-04-17)
### Features
* add Alibaba Coding Plan (DashScope) provider support ([#509](https://github.com/Gitlawb/openclaude/issues/509)) ([43ac6db](https://github.com/Gitlawb/openclaude/commit/43ac6dba75537282da1e2ad8f855082bc4e25f1e))
* add NVIDIA NIM and MiniMax provider support ([#552](https://github.com/Gitlawb/openclaude/issues/552)) ([51191d6](https://github.com/Gitlawb/openclaude/commit/51191d61326e1f8319d70b3a3c0d9229e185a564))
* add ripgrep to Dockerfile for faster file searching ([#688](https://github.com/Gitlawb/openclaude/issues/688)) ([12dd375](https://github.com/Gitlawb/openclaude/commit/12dd3755c619cc27af3b151ae8fdb9d425a7b9a2))
* **api:** classify openai-compatible provider failures ([#708](https://github.com/Gitlawb/openclaude/issues/708)) ([80a00ac](https://github.com/Gitlawb/openclaude/commit/80a00acc2c6dc4657a78de7366f7a9ebc920bfbb))
* **vscode:** add full chat interface to OpenClaude extension ([#608](https://github.com/Gitlawb/openclaude/issues/608)) ([fbcd928](https://github.com/Gitlawb/openclaude/commit/fbcd928f7f8511da795aea3ad318bddf0ab9a1a7))
### Bug Fixes
* focus "Done" option after completing provider manager actions ([#718](https://github.com/Gitlawb/openclaude/issues/718)) ([d6f5130](https://github.com/Gitlawb/openclaude/commit/d6f5130c204d8ffe582212466768706cd7fd6774))
* **models:** prevent /models crash from non-string saved model values ([#691](https://github.com/Gitlawb/openclaude/issues/691)) ([6b2121d](https://github.com/Gitlawb/openclaude/commit/6b2121da12189fa7ce1f33394d18abd24cf8a01b))
* prevent crash in commands tab when description is undefined ([#730](https://github.com/Gitlawb/openclaude/issues/730)) ([eed77e6](https://github.com/Gitlawb/openclaude/commit/eed77e6579866a98384dcc948a0ad6406614ede3))
* strip comments before scanning for missing imports ([#676](https://github.com/Gitlawb/openclaude/issues/676)) ([a00b792](https://github.com/Gitlawb/openclaude/commit/a00b7928de9662ffb7ef6abd8cd040afe6f4f122))
* **ui:** show correct endpoint URL in intro screen for custom Anthropic endpoints ([#735](https://github.com/Gitlawb/openclaude/issues/735)) ([3424663](https://github.com/Gitlawb/openclaude/commit/34246635fb9a09499047a52e7f96ca9b36c8a85a))
## [0.3.0](https://github.com/Gitlawb/openclaude/compare/v0.2.3...v0.3.0) (2026-04-14)
### Features
* activate coordinator mode in open build ([#647](https://github.com/Gitlawb/openclaude/issues/647)) ([99a1714](https://github.com/Gitlawb/openclaude/commit/99a17144ee285b892a0801acb6abcc9af68879af))
* activate local-only team memory in open build ([#648](https://github.com/Gitlawb/openclaude/issues/648)) ([24d485f](https://github.com/Gitlawb/openclaude/commit/24d485f42f5b1405d2fab13f2f497d5edd3b5300))
* activate message actions in open build ([#632](https://github.com/Gitlawb/openclaude/issues/632)) ([252808b](https://github.com/Gitlawb/openclaude/commit/252808bbd0a12a6ccf97e2cb09752a0212ea3acd))
* add allowBypassPermissionsMode setting ([#658](https://github.com/Gitlawb/openclaude/issues/658)) ([31be66d](https://github.com/Gitlawb/openclaude/commit/31be66d7645ea3473334c9ce89ea1a5095b8df6e))
* add Docker image build and push to GHCR on release ([#656](https://github.com/Gitlawb/openclaude/issues/656)) ([658d076](https://github.com/Gitlawb/openclaude/commit/658d076909e14eb0459bcb98aee9aa0472118265))
* implement /loop command with fixed and dynamic scheduling ([#621](https://github.com/Gitlawb/openclaude/issues/621)) ([64298a6](https://github.com/Gitlawb/openclaude/commit/64298a663f1391b16aa1f5a49e8a877e1d3742f2))
* implement Monitor tool for streaming shell output ([#649](https://github.com/Gitlawb/openclaude/issues/649)) ([b818dd5](https://github.com/Gitlawb/openclaude/commit/b818dd5958f4e8428566ce25a1a6be5fd4fe66f8))
* local feature flag overrides via ~/.claude/feature-flags.json ([#639](https://github.com/Gitlawb/openclaude/issues/639)) ([0e48884](https://github.com/Gitlawb/openclaude/commit/0e48884f56c6c008f047a7926d3b2cb924170625))
* open useful USER_TYPE-gated features to all users ([#644](https://github.com/Gitlawb/openclaude/issues/644)) ([c1beea9](https://github.com/Gitlawb/openclaude/commit/c1beea98676a413c54152a45a6b9fbe7fb9ed028))
### Bug Fixes
* bump axios 1.14.0 → 1.15.0 (Dependabot [#4](https://github.com/Gitlawb/openclaude/issues/4), [#5](https://github.com/Gitlawb/openclaude/issues/5)) ([#670](https://github.com/Gitlawb/openclaude/issues/670)) ([a07e5ef](https://github.com/Gitlawb/openclaude/commit/a07e5ef990a5ed01a72e83fdbd1fcab36f515a08))
* extend provider guard to protect anthropic profiles from cross-terminal override ([#641](https://github.com/Gitlawb/openclaude/issues/641)) ([03e0b06](https://github.com/Gitlawb/openclaude/commit/03e0b06e0784e4ea46945b3950840b10b6e3ca49))
* improve fetch diagnostics for bootstrap and session requests ([#646](https://github.com/Gitlawb/openclaude/issues/646)) ([df2b9f2](https://github.com/Gitlawb/openclaude/commit/df2b9f2b7b4c661ee3d9ed5dc58b3064de0599d1))
* **openai-shim:** preserve tool result images and local token caps ([#659](https://github.com/Gitlawb/openclaude/issues/659)) ([30c866d](https://github.com/Gitlawb/openclaude/commit/30c866d31ad8538496460667d86ed5efbd4a8547))
* replace broken bun:bundle shim with source pre-processing ([#657](https://github.com/Gitlawb/openclaude/issues/657)) ([adbe391](https://github.com/Gitlawb/openclaude/commit/adbe391e63721918b5d147f4f845111c1a3143db))
* resolve 12 bugs across API, MCP, agent tools, web search, and context overflow ([#674](https://github.com/Gitlawb/openclaude/issues/674)) ([25ce2ca](https://github.com/Gitlawb/openclaude/commit/25ce2ca7bff8937b0b79ad7f85c6dc1c68432069))
* route OpenAI Codex shortcuts to correct endpoint ([#566](https://github.com/Gitlawb/openclaude/issues/566)) ([7c8bdcc](https://github.com/Gitlawb/openclaude/commit/7c8bdcc3e2ac1ecb98286c705c85671044be3d6b))
## [0.2.3](https://github.com/Gitlawb/openclaude/compare/v0.2.2...v0.2.3) (2026-04-12)
### Bug Fixes
* prevent infinite auto-compact loop for unknown 3P models ([#635](https://github.com/Gitlawb/openclaude/issues/635)) ([#636](https://github.com/Gitlawb/openclaude/issues/636)) ([aeaa658](https://github.com/Gitlawb/openclaude/commit/aeaa658f776fb8df95721e8b8962385f8b00f66a))
## [0.2.2](https://github.com/Gitlawb/openclaude/compare/v0.2.1...v0.2.2) (2026-04-12)

Dockerfile Normal file

@@ -0,0 +1,46 @@
# ---- build stage ----
FROM node:22-slim AS build
# Install Bun
RUN npm install -g bun@1.3.11
WORKDIR /app
# Copy dependency manifests first for better layer caching
COPY package.json bun.lock ./
# Install all dependencies (including devDependencies for build)
RUN bun install --frozen-lockfile
# Copy source code
COPY src/ src/
COPY scripts/ scripts/
COPY bin/ bin/
COPY tsconfig.json ./
# Build the CLI bundle
RUN bun run build
# Prune devDependencies
RUN rm -rf node_modules && bun install --frozen-lockfile --production
# ---- runtime stage ----
FROM node:22-slim
WORKDIR /app
# Copy only what's needed to run
COPY --from=build /app/dist/cli.mjs dist/cli.mjs
COPY --from=build /app/bin/ bin/
COPY --from=build /app/node_modules/ node_modules/
COPY --from=build /app/package.json package.json
COPY README.md ./
# Install git and ripgrep — many CLI tool operations depend on them
RUN apt-get update && apt-get install -y --no-install-recommends git ripgrep \
&& rm -rf /var/lib/apt/lists/*
# Run as non-root user
USER node
ENTRYPOINT ["node", "/app/dist/cli.mjs"]

@@ -2,7 +2,7 @@
OpenClaude is an open-source coding-agent CLI for cloud and local model providers.
Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping one terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.
Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex OAuth, Codex, Ollama, Atomic Chat, and other supported backends while keeping one terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.
[![PR Checks](https://github.com/Gitlawb/openclaude/actions/workflows/pr-checks.yml/badge.svg?branch=main)](https://github.com/Gitlawb/openclaude/actions/workflows/pr-checks.yml)
[![Release](https://img.shields.io/github/v/tag/Gitlawb/openclaude?label=release&color=0ea5e9)](https://github.com/Gitlawb/openclaude/tags)
@@ -10,13 +10,20 @@ Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, a
[![Security Policy](https://img.shields.io/badge/security-policy-0f766e)](SECURITY.md)
[![License](https://img.shields.io/badge/license-MIT-2563eb)](LICENSE)
OpenClaude is also mirrored to GitLawb:
[gitlawb.com/node/repos/z6MkqDnb/openclaude](https://gitlawb.com/node/repos/z6MkqDnb/openclaude)
[Quick Start](#quick-start) | [Setup Guides](#setup-guides) | [Providers](#supported-providers) | [Source Build](#source-build-and-local-development) | [VS Code Extension](#vs-code-extension) | [Community](#community)
## Star History
[![Star History Chart](https://api.star-history.com/chart?repos=gitlawb/openclaude&type=date&legend=top-left)](https://www.star-history.com/?repos=gitlawb%2Fopenclaude&type=date&legend=top-left)
## Why OpenClaude
- Use one CLI across cloud APIs and local model backends
- Save provider profiles inside the app with `/provider`
- Run with OpenAI-compatible services, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported providers
- Run with OpenAI-compatible services, Gemini, GitHub Models, Codex OAuth, Codex, Ollama, Atomic Chat, and other supported providers
- Keep coding-agent workflows in one place: bash, file tools, grep, glob, agents, tasks, MCP, and web tools
- Use the bundled VS Code extension for launch integration and theme support
@@ -85,6 +92,16 @@ $env:OPENAI_MODEL="qwen2.5-coder:7b"
openclaude
```
### Using Ollama's launch command
If you have [Ollama](https://ollama.com) installed, you can skip the env var setup entirely:
```bash
ollama launch openclaude --model qwen2.5-coder:7b
```
This automatically sets `ANTHROPIC_BASE_URL`, model routing, and auth so all API traffic goes through your local Ollama instance. Works with any model you have pulled — local or cloud.
## Setup Guides
Beginner-friendly guides:
@@ -105,9 +122,10 @@ Advanced and source-build guides:
| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and other compatible `/v1` servers |
| Gemini | `/provider` or env vars | Supports API key, access token, or local ADC workflow on current `main` |
| GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
| Codex | `/provider` | Uses existing Codex credentials when available |
| Ollama | `/provider` or env vars | Local inference with no API key |
| Atomic Chat | advanced setup | Local Apple Silicon backend |
| Codex OAuth | `/provider` | Opens ChatGPT sign-in in your browser and stores Codex credentials securely |
| Codex | `/provider` | Uses existing Codex CLI auth, OpenClaude secure storage, or env credentials |
| Ollama | `/provider`, env vars, or `ollama launch` | Local inference with no API key |
| Atomic Chat | `/provider`, env vars, or `bun run dev:atomic-chat` | Local Model Provider; auto-detects loaded models |
| Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |
## What Works
@@ -313,7 +331,8 @@ For larger changes, open an issue first so the scope is clear before implementat
- `bun run build`
- `bun run test:coverage`
- `bun run smoke`
- focused `bun test ...` runs for touched areas
- focused `bun test ...` runs for files and flows you changed
## Disclaimer

@@ -30,7 +30,7 @@
"@opentelemetry/semantic-conventions": "1.40.0",
"ajv": "8.18.0",
"auto-bind": "5.0.1",
"axios": "1.14.0",
"axios": "1.15.0",
"bidi-js": "1.0.3",
"chalk": "5.6.2",
"chokidar": "4.0.3",
@@ -479,7 +479,7 @@
"auto-bind": ["auto-bind@5.0.1", "", {}, "sha512-ooviqdwwgfIfNmDwo94wlshcdzfO64XV0Cg6oDsDYBJfITDz1EngD2z7DkbvCWn+XIMsIqW27sEVF6qcpJrRcg=="],
"axios": ["axios@1.14.0", "", { "dependencies": { "follow-redirects": "^1.15.11", "form-data": "^4.0.5", "proxy-from-env": "^2.1.0" } }, "sha512-3Y8yrqLSwjuzpXuZ0oIYZ/XGgLwUIBU3uLvbcpb0pidD9ctpShJd43KSlEEkVQg6DS0G9NKyzOvBfUtDKEyHvQ=="],
"axios": ["axios@1.15.0", "", { "dependencies": { "follow-redirects": "^1.15.11", "form-data": "^4.0.5", "proxy-from-env": "^2.1.0" } }, "sha512-wWyJDlAatxk30ZJer+GeCWS209sA42X+N5jU2jy6oHTp7ufw8uzUTVFBX9+wTfAlhiJXGS0Bq7X6efruWjuK9Q=="],
"base64-js": ["base64-js@1.5.1", "", {}, "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA=="],
@@ -1151,6 +1151,8 @@
"@emnapi/runtime/tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="],
"@mendable/firecrawl-js/axios": ["axios@1.14.0", "", { "dependencies": { "follow-redirects": "^1.15.11", "form-data": "^4.0.5", "proxy-from-env": "^2.1.0" } }, "sha512-3Y8yrqLSwjuzpXuZ0oIYZ/XGgLwUIBU3uLvbcpb0pidD9ctpShJd43KSlEEkVQg6DS0G9NKyzOvBfUtDKEyHvQ=="],
"@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/core": ["@opentelemetry/core@1.30.1", "", { "dependencies": { "@opentelemetry/semantic-conventions": "1.28.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-OOCM2C/QIURhJMuKaekP3TRBxBKxG/TWWA0TL2J6nXUtDnuCtccy49LUJF8xPFXMX+0LMcxFpCo8M9cGY1W6rQ=="],
"@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/otlp-exporter-base": ["@opentelemetry/otlp-exporter-base@0.57.2", "", { "dependencies": { "@opentelemetry/core": "1.30.1", "@opentelemetry/otlp-transformer": "0.57.2" }, "peerDependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-XdxEzL23Urhidyebg5E6jZoaiW5ygP/mRjxLHixogbqwDy2Faduzb5N0o/Oi+XTIJu+iyxXdVORjXax+Qgfxag=="],
@@ -1377,6 +1379,8 @@
"cliui/wrap-ansi": ["wrap-ansi@7.0.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "string-width": "^4.1.0", "strip-ansi": "^6.0.0" } }, "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q=="],
"firecrawl/axios": ["axios@1.14.0", "", { "dependencies": { "follow-redirects": "^1.15.11", "form-data": "^4.0.5", "proxy-from-env": "^2.1.0" } }, "sha512-3Y8yrqLSwjuzpXuZ0oIYZ/XGgLwUIBU3uLvbcpb0pidD9ctpShJd43KSlEEkVQg6DS0G9NKyzOvBfUtDKEyHvQ=="],
"form-data/mime-types": ["mime-types@2.1.35", "", { "dependencies": { "mime-db": "1.52.0" } }, "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="],
"gaxios/is-stream": ["is-stream@2.0.1", "", {}, "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg=="],

@@ -48,6 +48,8 @@ export OPENAI_MODEL=gpt-4o
`codexplan` maps to GPT-5.4 on the Codex backend with high reasoning.
`codexspark` maps to GPT-5.3 Codex Spark for faster loops.
If you use the in-app provider wizard, choose `Codex OAuth` to open ChatGPT sign-in in your browser and let OpenClaude store Codex credentials securely.
If you already use the Codex CLI, OpenClaude reads `~/.codex/auth.json` automatically. You can also point it elsewhere with `CODEX_AUTH_JSON_PATH` or override the token directly with `CODEX_API_KEY`.
```bash
@@ -82,6 +84,16 @@ OpenRouter model availability changes over time. If a model stops working, try a
### Ollama
Using `ollama launch` (recommended if you have Ollama installed):
```bash
ollama launch openclaude --model llama3.3:70b
```
This handles all environment setup automatically — no env vars needed. Works with any local or cloud model available in your Ollama instance.
Using environment variables manually:
```bash
ollama pull llama3.3:70b

docs/hook-chains.md Normal file

@@ -0,0 +1,333 @@
# Hook Chains (Self-Healing Agent Mesh MVP)
Hook Chains provide an event-driven recovery layer for important workflow failures.
When a matching hook event occurs, OpenClaude evaluates declarative rules and can dispatch remediation actions such as:
- `spawn_fallback_agent`
- `notify_team`
- `warm_remote_capacity`
## Disabled-By-Default Rollout
> **Rollout recommendation:** keep Hook Chains disabled until you validate rules in your environment.
>
> - Set top-level config to `"enabled": false` initially.
> - Enable per environment when ready.
> - Dispatch is gated by `feature('HOOK_CHAINS')`.
> - Env gate defaults to off unless `CLAUDE_CODE_ENABLE_HOOK_CHAINS=1` is set.
This keeps existing workflows unchanged while you tune guard windows and action behavior.
## Feature Overview
Hook Chains are loaded from a deterministic config file and evaluated on dispatched hook events.
MVP runtime trigger wiring:
- `PostToolUseFailure` hooks dispatch Hook Chains with outcome `failed`.
- `TaskCompleted` hooks dispatch Hook Chains with outcome:
- `success` when completion hooks did not block.
- `failed` when completion hooks returned blocking errors or prevented continuation.
Default config path:
- `.openclaude/hook-chains.json`
Override path:
- `CLAUDE_CODE_HOOK_CHAINS_CONFIG_PATH=/abs/or/relative/path/to/hook-chains.json`
Global gate:
- `feature('HOOK_CHAINS')` must be enabled in the build
- `CLAUDE_CODE_ENABLE_HOOK_CHAINS=0|1` (defaults to disabled when unset)
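A minimal sketch of the combined gate, assuming exactly the semantics listed above (the function name is an assumption):
```ts
import { feature } from 'bun:bundle';

export function hookChainsEnabled(): boolean {
  if (!feature('HOOK_CHAINS')) return false; // build-time gate
  // Env gate: defaults to disabled when unset.
  return process.env.CLAUDE_CODE_ENABLE_HOOK_CHAINS === '1';
}
```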
## Safety Guarantees
The runtime is intentionally conservative:
- **Depth guard:** chain dispatch is blocked when `chainDepth >= maxChainDepth`.
- **Rule cooldown:** each rule can only re-fire after cooldown expires.
- **Dedup window:** identical event/action combinations are suppressed for a window.
- **Abort-safe behavior:** if the current signal is aborted, actions skip safely.
- **Policy-aware remote warm:** `warm_remote_capacity` skips when remote sessions are policy denied.
- **Bridge inactive no-op:** `warm_remote_capacity` safely skips when no active bridge handle exists.
- **Missing team context safety:** `notify_team` skips with structured reason if no team context/team file is available.
- **Fallback launcher safety:** `spawn_fallback_agent` fails with a structured reason when launch permissions/context are unavailable.
## Configuration Schema Reference
Top-level object:
```json
{
"version": 1,
"enabled": true,
"maxChainDepth": 2,
"defaultCooldownMs": 30000,
"defaultDedupWindowMs": 30000,
"rules": []
}
```
### Top-Level Fields
| Field | Type | Required | Notes |
|---|---|---:|---|
| `version` | `1` | No | Defaults to `1`. |
| `enabled` | `boolean` | No | Global feature switch for this config file. |
| `maxChainDepth` | `integer` | No | Global depth guard (default `2`, max `10`). |
| `defaultCooldownMs` | `integer` | No | Default rule cooldown in ms (default `30000`). |
| `defaultDedupWindowMs` | `integer` | No | Default action dedup window in ms (default `30000`). |
| `rules` | `HookChainRule[]` | No | Defaults to `[]`. May be omitted or empty; when no rules are present, dispatch is a no-op and returns `enabled: false`. |
> **Note:** An empty ruleset is valid and can be used to keep Hook Chains configured but effectively disabled until rules are added.
### Rule Object (`HookChainRule`)
```json
{
"id": "task-failure-recovery",
"enabled": true,
"trigger": {
"event": "TaskCompleted",
"outcome": "failed"
},
"condition": {
"toolNames": ["Edit"],
"taskStatuses": ["failed"],
"errorIncludes": ["timeout", "permission denied"],
"eventFieldEquals": {
"meta.source": "scheduler"
}
},
"cooldownMs": 60000,
"dedupWindowMs": 30000,
"maxDepth": 2,
"actions": []
}
```
| Field | Type | Required | Notes |
|---|---|---:|---|
| `id` | `string` | Yes | Stable identifier used in telemetry/guards. |
| `enabled` | `boolean` | No | Per-rule switch. |
| `trigger.event` | `HookEvent` | Yes | Event name to match. |
| `trigger.outcome` | `"success"\|"failed"\|"timeout"\|"unknown"` | No | Single outcome matcher. |
| `trigger.outcomes` | `Outcome[]` | No | Multi-outcome matcher. Use either `outcome` or `outcomes`. |
| `condition` | `object` | No | Optional extra matching constraints. |
| `cooldownMs` | `integer` | No | Overrides global cooldown for this rule. |
| `dedupWindowMs` | `integer` | No | Overrides global dedup for this rule. |
| `maxDepth` | `integer` | No | Per-rule depth cap. |
| `actions` | `HookChainAction[]` | Yes | One or more actions to execute in order. |
### Condition Fields
| Field | Type | Notes |
|---|---|---|
| `toolNames` | `string[]` | Matches `tool_name` / `toolName` in event payload. |
| `taskStatuses` | `string[]` | Matches `task_status` / `taskStatus` / `status`. |
| `errorIncludes` | `string[]` | Case-insensitive substring match against `error` / `reason` / `message`. |
| `eventFieldEquals` | `Record<string, string\|number\|boolean>` | Dot-path equality against payload (example: `"meta.source": "scheduler"`). |
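For example, `eventFieldEquals` dot-path matching can be pictured like this (assumed semantics, not the actual matcher):
```ts
// Walks a dot path ('meta.source') through the payload and compares strictly.
function dotPathEquals(
  payload: Record<string, unknown>,
  path: string,
  expected: string | number | boolean,
): boolean {
  let node: unknown = payload
  for (const segment of path.split('.')) {
    if (node === null || typeof node !== 'object') return false
    node = (node as Record<string, unknown>)[segment]
  }
  return node === expected
}

// dotPathEquals({ meta: { source: 'scheduler' } }, 'meta.source', 'scheduler') → true
```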
### Actions
#### `spawn_fallback_agent`
```json
{
"type": "spawn_fallback_agent",
"id": "fallback-1",
"enabled": true,
"dedupWindowMs": 30000,
"description": "Fallback recovery for failed task",
"promptTemplate": "Recover task ${TASK_SUBJECT}. Event=${EVENT_NAME}, outcome=${OUTCOME}, error=${ERROR}. Payload=${PAYLOAD_JSON}",
"agentType": "general-purpose",
"model": "sonnet"
}
```
#### `notify_team`
```json
{
"type": "notify_team",
"id": "notify-ops",
"enabled": true,
"dedupWindowMs": 30000,
"teamName": "mesh-team",
"recipients": ["*"],
"summary": "Hook chain ${RULE_ID} fired",
"messageTemplate": "Event=${EVENT_NAME} outcome=${OUTCOME}\nTask=${TASK_ID}\nError=${ERROR}\nPayload=${PAYLOAD_JSON}"
}
```
#### `warm_remote_capacity`
```json
{
"type": "warm_remote_capacity",
"id": "warm-bridge",
"enabled": true,
"dedupWindowMs": 60000,
"createDefaultEnvironmentIfMissing": false
}
```
## Complete Example Configs
### 1) Retry via Fallback Agent
```json
{
"version": 1,
"enabled": true,
"maxChainDepth": 2,
"defaultCooldownMs": 30000,
"defaultDedupWindowMs": 30000,
"rules": [
{
"id": "retry-task-via-fallback",
"trigger": {
"event": "TaskCompleted",
"outcome": "failed"
},
"cooldownMs": 60000,
"actions": [
{
"type": "spawn_fallback_agent",
"id": "spawn-retry-agent",
"description": "Retry failed task with fallback agent",
"promptTemplate": "A task failed. Recover it safely.\nTask=${TASK_SUBJECT}\nDescription=${TASK_DESCRIPTION}\nError=${ERROR}\nPayload=${PAYLOAD_JSON}",
"agentType": "general-purpose",
"model": "sonnet"
}
]
}
]
}
```
### 2) Notify Only
```json
{
"version": 1,
"enabled": true,
"maxChainDepth": 2,
"defaultCooldownMs": 30000,
"defaultDedupWindowMs": 30000,
"rules": [
{
"id": "notify-on-tool-failure",
"trigger": {
"event": "PostToolUseFailure",
"outcome": "failed"
},
"condition": {
"toolNames": ["Edit", "Write", "Bash"]
},
"actions": [
{
"type": "notify_team",
"id": "notify-team-failure",
"recipients": ["*"],
"summary": "Tool failure detected",
"messageTemplate": "Tool failure detected.\nEvent=${EVENT_NAME} outcome=${OUTCOME}\nError=${ERROR}\nPayload=${PAYLOAD_JSON}"
}
]
}
]
}
```
### 3) Combined Fallback + Notify + Bridge Warm
```json
{
"version": 1,
"enabled": true,
"maxChainDepth": 2,
"defaultCooldownMs": 45000,
"defaultDedupWindowMs": 30000,
"rules": [
{
"id": "full-recovery-chain",
"trigger": {
"event": "TaskCompleted",
"outcomes": ["failed", "timeout"]
},
"condition": {
"errorIncludes": ["timeout", "capacity", "connection"]
},
"cooldownMs": 90000,
"actions": [
{
"type": "spawn_fallback_agent",
"id": "fallback-agent",
"description": "Recover failed task execution",
"promptTemplate": "Recover failed task and produce a concise fix summary.\nTask=${TASK_SUBJECT}\nError=${ERROR}\nPayload=${PAYLOAD_JSON}"
},
{
"type": "notify_team",
"id": "notify-team",
"recipients": ["*"],
"summary": "Recovery chain triggered",
"messageTemplate": "Recovery chain ${RULE_ID} fired.\nOutcome=${OUTCOME}\nTask=${TASK_SUBJECT}\nError=${ERROR}"
},
{
"type": "warm_remote_capacity",
"id": "warm-capacity",
"createDefaultEnvironmentIfMissing": false
}
]
}
]
}
```
## Template Variables
The following placeholders are supported by `promptTemplate`, `summary`, and `messageTemplate`:
- `${EVENT_NAME}`
- `${OUTCOME}`
- `${RULE_ID}`
- `${TASK_SUBJECT}`
- `${TASK_DESCRIPTION}`
- `${TASK_ID}`
- `${ERROR}`
- `${PAYLOAD_JSON}`
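Expansion behaves like simple string substitution. A minimal sketch follows; leaving unknown placeholders untouched is an assumption, not documented behavior:
```ts
// Replace known ${NAME} placeholders; leave unrecognized ones untouched.
function expandTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\$\{(\w+)\}/g, (match, name) =>
    Object.hasOwn(vars, name) ? vars[name] : match,
  )
}

// expandTemplate('Event=${EVENT_NAME} outcome=${OUTCOME}',
//   { EVENT_NAME: 'TaskCompleted', OUTCOME: 'failed' })
// → 'Event=TaskCompleted outcome=failed'
```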
## Troubleshooting
### Rule never triggers
- Verify `trigger.event` and `trigger.outcome`/`trigger.outcomes` exactly match dispatched event data.
- Check `condition` filters (especially `toolNames` and `eventFieldEquals` dot-path keys).
- Confirm the config file is valid JSON and schema-valid.
### Actions show as skipped
Common skip reasons:
- `action disabled`
- `rule cooldown active ...`
- `dedup window active ...`
- `max chain depth reached ...`
- `No team context is available ...`
- `Team file not found ...`
- `Remote sessions are blocked by policy`
- `Bridge is not active; warm_remote_capacity is a safe no-op`
- `No fallback agent launcher is registered in runtime context`
### Config changes not reflected
- The loader memoizes the parsed config by file mtime and size.
- Ensure your editor writes the file fully and updates its mtime.
- If needed, force a reload from the caller side with `forceReloadConfig: true`.
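A sketch of that memoization, with illustrative names (not the real loader):
```ts
import { readFileSync, statSync } from 'node:fs'

let cached: { mtimeMs: number; size: number; config: unknown } | undefined

function loadHookChainsConfig(path: string, forceReloadConfig = false): unknown {
  const { mtimeMs, size } = statSync(path)
  if (!forceReloadConfig && cached && cached.mtimeMs === mtimeMs && cached.size === size) {
    return cached.config // unchanged mtime and size → reuse parsed config
  }
  const config: unknown = JSON.parse(readFileSync(path, 'utf-8'))
  cached = { mtimeMs, size, config }
  return config
}
```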
### Existing workflows changed unexpectedly
- Set `"enabled": false` at top-level.
- Or globally disable with `CLAUDE_CODE_ENABLE_HOOK_CHAINS=0`.
- Re-enable gradually after validating one rule at a time.

View File

@@ -1,6 +1,6 @@
{
"name": "@gitlawb/openclaude",
"version": "0.2.2",
"version": "0.6.0",
"description": "Claude Code opened to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models",
"type": "module",
"bin": {
@@ -76,7 +76,7 @@
"@opentelemetry/semantic-conventions": "1.40.0",
"ajv": "8.18.0",
"auto-bind": "5.0.1",
"axios": "1.14.0",
"axios": "1.15.0",
"bidi-js": "1.0.3",
"chalk": "5.6.2",
"chokidar": "4.0.3",

View File

@@ -8,7 +8,8 @@
* - src/ path aliases
*/
import { readFileSync } from 'fs'
import { readFileSync, readdirSync, writeFileSync } from 'fs'
import { join } from 'path'
import { noTelemetryPlugin } from './no-telemetry-plugin'
const pkg = JSON.parse(readFileSync('./package.json', 'utf-8'))
@@ -18,31 +19,106 @@ const version = pkg.version
// Most Anthropic-internal features stay off; open-build features can be
// selectively enabled here when their full source exists in the mirror.
const featureFlags: Record<string, boolean> = {
VOICE_MODE: false,
PROACTIVE: false,
KAIROS: false,
BRIDGE_MODE: false,
DAEMON: false,
AGENT_TRIGGERS: false,
MONITOR_TOOL: false,
ABLATION_BASELINE: false,
DUMP_SYSTEM_PROMPT: false,
CACHED_MICROCOMPACT: false,
COORDINATOR_MODE: false,
CONTEXT_COLLAPSE: false,
COMMIT_ATTRIBUTION: false,
TEAMMEM: false,
UDS_INBOX: false,
BG_SESSIONS: false,
AWAY_SUMMARY: false,
TRANSCRIPT_CLASSIFIER: false,
WEB_BROWSER_TOOL: false,
MESSAGE_ACTIONS: false,
BUDDY: true,
CHICAGO_MCP: false,
COWORKER_TYPE_TELEMETRY: false,
// ── Disabled: require Anthropic infrastructure or missing source ─────
VOICE_MODE: false, // Push-to-talk STT via claude.ai OAuth endpoint
PROACTIVE: false, // Autonomous agent mode (missing proactive/ module)
KAIROS: false, // Persistent assistant/session mode (cloud backend)
BRIDGE_MODE: false, // Remote desktop bridge via CCR infrastructure
DAEMON: false, // Background daemon process (stubbed in open build)
AGENT_TRIGGERS: false, // Scheduled remote agent triggers
ABLATION_BASELINE: false, // A/B testing harness for eval experiments
CONTEXT_COLLAPSE: false, // Context collapsing optimization (stubbed)
COMMIT_ATTRIBUTION: false, // Co-Authored-By metadata in git commits
UDS_INBOX: false, // Unix Domain Socket inter-session messaging
BG_SESSIONS: false, // Background sessions via tmux (stubbed)
WEB_BROWSER_TOOL: false, // Built-in browser automation (source not mirrored)
CHICAGO_MCP: false, // Computer-use MCP (native Swift modules stubbed)
COWORKER_TYPE_TELEMETRY: false, // Telemetry for agent/coworker type classification
// ── Enabled: upstream defaults ──────────────────────────────────────
COORDINATOR_MODE: true, // Multi-agent coordinator with worker delegation
BUILTIN_EXPLORE_PLAN_AGENTS: true, // Built-in Explore/Plan specialized subagents
BUDDY: true, // Buddy mode for paired programming
MONITOR_TOOL: true, // MCP server monitoring/streaming tool
TEAMMEM: true, // Team memory management
MESSAGE_ACTIONS: true, // Message action buttons in the UI
// ── Enabled: new activations ────────────────────────────────────────
DUMP_SYSTEM_PROMPT: true, // --dump-system-prompt CLI flag for debugging
CACHED_MICROCOMPACT: true, // Cache-aware tool result truncation optimization
AWAY_SUMMARY: true, // "While you were away" recap after 5min blur
TRANSCRIPT_CLASSIFIER: true, // Auto-approval classifier for safe tool uses
ULTRATHINK: true, // Deep thinking mode — type "ultrathink" to boost reasoning
TOKEN_BUDGET: true, // Token budget tracking with usage warnings
HISTORY_PICKER: true, // Enhanced interactive prompt history picker
QUICK_SEARCH: true, // Ctrl+G quick search across prompts
SHOT_STATS: true, // Shot distribution stats in session summary
EXTRACT_MEMORIES: true, // Auto-extract durable memories from conversations
FORK_SUBAGENT: true, // Implicit context-forking when omitting subagent_type
VERIFICATION_AGENT: true, // Built-in read-only agent for test/verification
MCP_SKILLS: true, // Discover skills dynamically from MCP server resources
PROMPT_CACHE_BREAK_DETECTION: true, // Detect & log unexpected prompt cache invalidations
HOOK_PROMPTS: true, // Allow tools to request interactive user prompts
}
// ── Pre-process: replace feature() calls with boolean literals ──────
// Bun v1.3.9+ resolves `import { feature } from 'bun:bundle'` natively
// before plugins can intercept it via onResolve. The bun: namespace is
// handled by Bun's C++ resolver which runs before the JS plugin phase,
// so the previous onResolve/onLoad shim was silently ineffective — ALL
// feature() calls evaluated to false regardless of the featureFlags map.
//
// Fix: pre-process source files to strip the bun:bundle import and
// replace feature('FLAG') calls with their boolean literal. Files are
// modified in-place before Bun.build() and restored in a finally block.
// Match feature('FLAG') calls, including multi-line: feature(\n 'FLAG',\n)
const featureCallRe = /\bfeature\(\s*['"](\w+)['"][,\s]*\)/gs
const featureImportRe = /import\s*\{[^}]*\bfeature\b[^}]*\}\s*from\s*['"]bun:bundle['"];?\s*\n?/g
const modifiedFiles = new Map<string, string>() // path → original content
function preProcessFeatureFlags(dir: string) {
for (const ent of readdirSync(dir, { withFileTypes: true })) {
const full = join(dir, ent.name)
if (ent.isDirectory()) { preProcessFeatureFlags(full); continue }
if (!/\.(ts|tsx)$/.test(ent.name)) continue
const raw = readFileSync(full, 'utf-8')
if (!raw.includes('feature(')) continue
let contents = raw
contents = contents.replace(featureImportRe, '')
contents = contents.replace(featureCallRe, (_match, name) =>
String((featureFlags as Record<string, boolean>)[name] ?? false),
)
if (contents !== raw) {
modifiedFiles.set(full, raw)
writeFileSync(full, contents)
}
}
}
function restoreModifiedFiles() {
for (const [path, original] of modifiedFiles) {
writeFileSync(path, original)
}
modifiedFiles.clear()
}
preProcessFeatureFlags(join(import.meta.dir, '..', 'src'))
const numModified = modifiedFiles.size
// Restore source files on abrupt termination (Ctrl+C, kill, etc.)
for (const signal of ['SIGINT', 'SIGTERM'] as const) {
process.on(signal, () => {
restoreModifiedFiles()
process.exit(signal === 'SIGINT' ? 130 : 143)
})
}
try {
const result = await Bun.build({
entrypoints: ['./src/entrypoints/cli.tsx'],
outdir: './dist',
@@ -103,18 +179,11 @@ export async function handleBgFlag() { throw new Error("Background sessions are
],
] as const)
// Resolve `import { feature } from 'bun:bundle'` to a shim
build.onResolve({ filter: /^bun:bundle$/ }, () => ({
path: 'bun:bundle',
namespace: 'bun-bundle-shim',
}))
build.onLoad(
{ filter: /.*/, namespace: 'bun-bundle-shim' },
() => ({
contents: `const featureFlags = ${JSON.stringify(featureFlags)};\nexport function feature(name) { return featureFlags[name] ?? false; }`,
loader: 'js',
}),
)
// bun:bundle feature() replacement is handled by the source
// pre-processing step above (see preProcessFeatureFlags).
// The previous onResolve/onLoad shim was ineffective in Bun
// v1.3.9+ because the bun: namespace is resolved natively
// before the JS plugin phase runs.
build.onResolve(
{ filter: /^\.\.\/(daemon\/workerRegistry|daemon\/main|cli\/bg|cli\/handlers\/templateJobs|environment-runner\/main|self-hosted-runner\/main)\.js$/ },
@@ -274,16 +343,7 @@ export const SeverityNumber = {};
// Scan source to find imports that can't resolve
function scanForMissingImports() {
function walk(dir: string) {
for (const ent of fs.readdirSync(dir, { withFileTypes: true })) {
const full = pathMod.join(dir, ent.name)
if (ent.isDirectory()) { walk(full); continue }
if (!/\.(ts|tsx)$/.test(ent.name)) continue
const code: string = fs.readFileSync(full, 'utf-8')
// Collect all imports
for (const m of code.matchAll(/import\s+(?:\{([^}]*)\}|(\w+))?\s*(?:,\s*\{([^}]*)\})?\s*from\s+['"](.*?)['"]/g)) {
const specifier = m[4]
const namedPart = m[1] || m[3] || ''
function checkAndRegister(specifier: string, fileDir: string, namedPart: string) {
const names = namedPart.split(',')
.map((s: string) => s.trim().replace(/^type\s+/, ''))
.filter((s: string) => s && !s.startsWith('type '))
@@ -303,8 +363,7 @@ export const SeverityNumber = {};
}
// Check relative .js imports
else if (specifier.endsWith('.js') && (specifier.startsWith('./') || specifier.startsWith('../'))) {
const dir2 = pathMod.dirname(full)
const resolved = pathMod.resolve(dir2, specifier)
const resolved = pathMod.resolve(fileDir, specifier)
const tsVariant = resolved.replace(/\.js$/, '.ts')
const tsxVariant = resolved.replace(/\.js$/, '.tsx')
if (!fs.existsSync(resolved) && !fs.existsSync(tsVariant) && !fs.existsSync(tsxVariant)) {
@@ -317,6 +376,38 @@ export const SeverityNumber = {};
if (!missingModuleExports.has(specifier)) missingModuleExports.set(specifier, new Set())
for (const n of names) missingModuleExports.get(specifier)!.add(n)
}
}
function walk(dir: string) {
for (const ent of fs.readdirSync(dir, { withFileTypes: true })) {
const full = pathMod.join(dir, ent.name)
if (ent.isDirectory()) { walk(full); continue }
if (!/\.(ts|tsx)$/.test(ent.name)) continue
const rawCode: string = fs.readFileSync(full, 'utf-8')
const fileDir = pathMod.dirname(full)
// Strip comments before scanning for imports/requires.
// The regex scanner matches require()/import() patterns
// inside JSDoc comments, causing false-positive missing
// module detection that breaks the build with noop stubs.
const code = rawCode
.replace(/\/\*[\s\S]*?\*\//g, '') // block comments
.replace(/\/\/.*$/gm, '') // line comments
// Collect static imports: import { X } from '...'
for (const m of code.matchAll(/import\s+(?:\{([^}]*)\}|(\w+))?\s*(?:,\s*\{([^}]*)\})?\s*from\s+['"](.*?)['"]/g)) {
checkAndRegister(m[4], fileDir, m[1] || m[3] || '')
}
// Collect dynamic requires: require('...') — these are used
// behind feature() gates and become live when flags are enabled.
for (const m of code.matchAll(/require\(\s*['"](\.\.?\/[^'"]+)['"]\s*\)/g)) {
checkAndRegister(m[1], fileDir, '')
}
// Collect dynamic imports: import('...')
for (const m of code.matchAll(/import\(\s*['"](\.\.?\/[^'"]+)['"]\s*\)/g)) {
checkAndRegister(m[1], fileDir, '')
}
}
}
@@ -389,7 +480,13 @@ if (!result.success) {
for (const log of result.logs) {
console.error(log)
}
process.exit(1)
process.exitCode = 1
} else {
console.log(`✓ Built openclaude v${version} → dist/cli.mjs`)
}
console.log(`✓ Built openclaude v${version} → dist/cli.mjs`)
} finally {
// Always restore source files, even if Bun.build() throws
restoreModifiedFiles()
console.log(` 🔄 feature-flags: pre-processed ${numModified} files (restored)`)
}

View File

@@ -0,0 +1,163 @@
import { afterAll, beforeEach, describe, expect, test } from 'bun:test'
import { mkdirSync, readFileSync, rmSync, unlinkSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'
import { tmpdir } from 'node:os'
// ---------------------------------------------------------------------------
// Setup: extract the growthbook stub from no-telemetry-plugin.ts, write it to
// a temp .mjs file, and dynamically import it so we can test the real code
// that gets bundled.
// ---------------------------------------------------------------------------
const pluginSource = readFileSync(join(__dirname, 'no-telemetry-plugin.ts'), 'utf-8')
const stubMatch = pluginSource.match(/'services\/analytics\/growthbook': `([\s\S]*?)`/)
if (!stubMatch) throw new Error('Could not extract growthbook stub from no-telemetry-plugin.ts')
const testDir = join(tmpdir(), `growthbook-stub-test-${process.pid}`)
const stubFile = join(testDir, 'growthbook-stub.mjs')
const flagsFile = join(testDir, 'test-flags.json')
mkdirSync(testDir, { recursive: true })
writeFileSync(stubFile, stubMatch[1])
// Point the stub at our test flags file (checked by _loadFlags on first access)
process.env.CLAUDE_FEATURE_FLAGS_FILE = flagsFile
const stub = await import(stubFile)
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
describe('growthbook stub — local feature flag overrides', () => {
beforeEach(() => {
stub.resetGrowthBook()
try { unlinkSync(flagsFile) } catch { /* may not exist */ }
})
afterAll(() => {
rmSync(testDir, { recursive: true, force: true })
delete process.env.CLAUDE_FEATURE_FLAGS_FILE
})
// ── File absent ──────────────────────────────────────────────────
test('returns defaultValue when flags file is absent', () => {
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_foo', 42)).toBe(42)
})
test('getAllGrowthBookFeatures returns {} when file is absent', () => {
expect(stub.getAllGrowthBookFeatures()).toEqual({})
})
// ── Open-build defaults (_openBuildDefaults) ────────────────────
test('returns open-build default when flags file is absent', () => {
// tengu_passport_quail is in _openBuildDefaults as true; without a
// flags file the stub should return the open-build override, not
// the call-site defaultValue.
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_passport_quail', false)).toBe(true)
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_coral_fern', false)).toBe(true)
})
test('flags file overrides open-build defaults', () => {
// User-provided feature-flags.json takes priority over _openBuildDefaults.
writeFileSync(flagsFile, JSON.stringify({ tengu_passport_quail: false }))
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_passport_quail', true)).toBe(false)
})
// ── Valid JSON object ────────────────────────────────────────────
test('loads and returns values from a valid JSON file', () => {
writeFileSync(flagsFile, JSON.stringify({ tengu_foo: true, tengu_bar: 'hello' }))
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_foo', false)).toBe(true)
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_bar', 'default')).toBe('hello')
})
test('returns defaultValue for keys not present in the file', () => {
writeFileSync(flagsFile, JSON.stringify({ tengu_foo: true }))
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_missing', 99)).toBe(99)
})
test('getAllGrowthBookFeatures returns the full flags object', () => {
const flags = { tengu_a: true, tengu_b: false, tengu_c: 42 }
writeFileSync(flagsFile, JSON.stringify(flags))
expect(stub.getAllGrowthBookFeatures()).toEqual(flags)
})
// ── Malformed / non-object JSON ──────────────────────────────────
test('falls back to defaults on malformed JSON', () => {
writeFileSync(flagsFile, '{not valid json!!!')
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_foo', 'fallback')).toBe('fallback')
})
test('falls back to defaults when JSON is a primitive (true)', () => {
writeFileSync(flagsFile, 'true')
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_foo', 'fallback')).toBe('fallback')
})
test('falls back to defaults when JSON is an array', () => {
writeFileSync(flagsFile, '["a", "b"]')
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_foo', 'fallback')).toBe('fallback')
})
// ── Cache invalidation ───────────────────────────────────────────
test('resetGrowthBook clears cache so the file is re-read', () => {
writeFileSync(flagsFile, JSON.stringify({ tengu_foo: 'first' }))
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_foo', 'x')).toBe('first')
// Update the file — cached value is still 'first'
writeFileSync(flagsFile, JSON.stringify({ tengu_foo: 'second' }))
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_foo', 'x')).toBe('first')
// After reset, the new value is picked up
stub.resetGrowthBook()
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_foo', 'x')).toBe('second')
})
test('refreshGrowthBookFeatures clears cache', async () => {
writeFileSync(flagsFile, JSON.stringify({ tengu_foo: 'v1' }))
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_foo', 'x')).toBe('v1')
writeFileSync(flagsFile, JSON.stringify({ tengu_foo: 'v2' }))
await stub.refreshGrowthBookFeatures()
expect(stub.getFeatureValue_CACHED_MAY_BE_STALE('tengu_foo', 'x')).toBe('v2')
})
// ── Multiple getter variants ─────────────────────────────────────
test('all getter functions read from local flags', async () => {
writeFileSync(flagsFile, JSON.stringify({ tengu_gate: true, tengu_config: { a: 1 } }))
expect(await stub.getFeatureValue_DEPRECATED('tengu_gate', false)).toBe(true)
stub.resetGrowthBook()
expect(stub.getFeatureValue_CACHED_WITH_REFRESH('tengu_gate', false)).toBe(true)
stub.resetGrowthBook()
expect(stub.checkStatsigFeatureGate_CACHED_MAY_BE_STALE('tengu_gate')).toBe(true)
stub.resetGrowthBook()
expect(await stub.checkGate_CACHED_OR_BLOCKING('tengu_gate')).toBe(true)
stub.resetGrowthBook()
expect(await stub.getDynamicConfig_BLOCKS_ON_INIT('tengu_config', {})).toEqual({ a: 1 })
stub.resetGrowthBook()
expect(stub.getDynamicConfig_CACHED_MAY_BE_STALE('tengu_config', {})).toEqual({ a: 1 })
})
// ── Security gate ────────────────────────────────────────────────
test('checkSecurityRestrictionGate always returns false regardless of flags', async () => {
writeFileSync(flagsFile, JSON.stringify({
tengu_disable_bypass_permissions_mode: true,
}))
expect(await stub.checkSecurityRestrictionGate()).toBe(false)
})
})

View File

@@ -34,28 +34,201 @@ export function _resetForTesting() {}
`,
'services/analytics/growthbook': `
import _fs from 'node:fs';
import _path from 'node:path';
import _os from 'node:os';
let _flags = undefined;
// ── Open-build GrowthBook overrides ───────────────────────────────────
// Override upstream defaultValue for runtime gates tied to build-time
// features. Only keys that DIFFER from upstream belong here — the
// catalog below is pure documentation and does NOT affect resolution.
//
// Priority: ~/.claude/feature-flags.json > _openBuildDefaults > defaultValue
//
// To override at runtime, create ~/.claude/feature-flags.json:
// { "tengu_some_flag": true }
const _openBuildDefaults = {
'tengu_sedge_lantern': true, // AWAY_SUMMARY — "while you were away" recap (upstream: false)
'tengu_hive_evidence': true, // VERIFICATION_AGENT — read-only test/verification agent (upstream: false)
'tengu_passport_quail': true, // EXTRACT_MEMORIES — enable memory extraction (upstream: false)
'tengu_coral_fern': true, // EXTRACT_MEMORIES — enable memory search in past context (upstream: false)
};
/* ── Known runtime feature keys (reference only) ───────────────────────
* This catalog does NOT participate in flag resolution. It documents
* the known GrowthBook keys and their upstream default values, scraped
* from src/ call sites. It is NOT exhaustive — new keys may be added
* upstream between catalog updates.
*
* Some keys have different defaults at different call sites — this is
* intentional upstream (the server unifies the value at runtime).
*
* To activate any of these, add them to ~/.claude/feature-flags.json
* or to _openBuildDefaults above.
*
* ── Reasoning & thinking ──────────────────────────────────────────────
* tengu_turtle_carbon = true ULTRATHINK deep thinking runtime gate
* tengu_thinkback = gate /thinkback replay command
*
* ── Agents & orchestration ────────────────────────────────────────────
* tengu_amber_flint = true Agent swarms coordination
* tengu_amber_stoat = true Built-in agent availability (Explore, Plan, etc.)
* tengu_agent_list_attach = true Attach file context to agent list
* tengu_auto_background_agents = false Auto-spawn background agents
* tengu_slim_subagent_claudemd = true Lighter ClaudeMD for subagents
* tengu_hive_evidence = false Verification agent / evidence tracking (4 call sites)
* tengu_ultraplan_model = model cfg ULTRAPLAN model selection (dynamic config)
*
* ── Memory & context ──────────────────────────────────────────────────
* tengu_passport_quail = false EXTRACT_MEMORIES main gate (isExtractModeActive)
* tengu_coral_fern = false EXTRACT_MEMORIES search in past context
* tengu_slate_thimble = false Memory dir paths (non-interactive sessions)
* tengu_herring_clock = true/false Team memory paths (varies by call site)
* tengu_bramble_lintel = null Extract memories throttle (null → every turn)
* tengu_sedge_lantern = false AWAY_SUMMARY "while you were away" recap
* tengu_session_memory = false Session memory service
* tengu_sm_config = {} Session memory config (dynamic)
* tengu_sm_compact_config = {} Session memory compaction config (dynamic)
* tengu_cobalt_raccoon = false Reactive compaction (suppress auto-compact)
* tengu_pebble_leaf_prune = false Session storage pruning
*
* ── Kairos & cron ─────────────────────────────────────────────────────
* tengu_kairos_brief = false Brief layout mode (KAIROS)
* tengu_kairos_brief_config = {} Brief config (dynamic)
* tengu_kairos_cron = true Cron scheduler enable
* tengu_kairos_cron_durable = true Durable (disk-persistent) cron tasks
* tengu_kairos_cron_config = {} Cron jitter config (dynamic)
*
* ── Bridge & remote (require Anthropic infra) ─────────────────────────
* tengu_ccr_bridge = false CCR bridge connection
* tengu_ccr_bridge_multi_session = gate Multi-session spawn mode
* tengu_ccr_mirror = false CCR session mirroring
* tengu_ccr_bundle_seed_enabled = gate Git bundle seeding for CCR
* tengu_ccr_bundle_max_bytes = null Bundle size limit (null → default)
* tengu_bridge_repl_v2 = false Environment-less REPL bridge v2
* tengu_bridge_repl_v2_cse_shim_enabled = true CSE→Session tag retag shim
* tengu_bridge_min_version = {min:'0'} Min CLI version for bridge (dynamic)
* tengu_bridge_initial_history_cap = 200 Initial history cap for bridge
* tengu_bridge_system_init = false Bridge system initialization
* tengu_cobalt_harbor = false Auto-connect CCR at startup
* tengu_cobalt_lantern = false Remote setup preconditions
* tengu_remote_backend = false Remote TUI backend
* tengu_surreal_dali = false Remote agent tasks / triggers
*
* ── Prompt & API ──────────────────────────────────────────────────────
* tengu_attribution_header = true Attribution header in API requests
* tengu_basalt_3kr = true MCP instructions delta
* tengu_slate_prism = true/false Message formatting (varies by call site)
* tengu_amber_prism = false Message content formatting
* tengu_amber_json_tools = false JSON format for tool schemas
* tengu_fgts = false API feature gates
* tengu_otk_slot_v1 = false One-time key slots for API auth
* tengu_cicada_nap_ms = 0 Background GrowthBook refresh throttle (ms)
* tengu_miraculo_the_bard = false Service initialization gate
* tengu_immediate_model_command = false Immediate /model command execution
* tengu_chomp_inflection = false Prompt suggestions after responses
* tengu_tool_pear = gate API betas for tool use
* tengu-off-switch = {act:false} Service kill switch (dynamic; uses dash)
*
* ── Permissions & security ────────────────────────────────────────────
* tengu_birch_trellis = true Bash auto-mode permissions config
* tengu_auto_mode_config = {} Auto-mode configuration (dynamic, many call sites)
* tengu_iron_gate_closed = true Permission iron gate (with refresh)
* tengu_destructive_command_warning = false Warning for destructive bash commands
* tengu_disable_bypass_permissions_mode = security Security killswitch (always false in open build)
*
* ── UI & UX ───────────────────────────────────────────────────────────
* tengu_willow_mode = 'off' REPL rendering mode
* tengu_terminal_panel = false Terminal panel keybinding
* tengu_terminal_sidebar = false Terminal sidebar in REPL/config
* tengu_marble_sandcastle = false Fast mode gate
* tengu_jade_anvil_4 = false Rate limit options UI ordering
* tengu_collage_kaleidoscope = true Native clipboard image paste (macOS)
* tengu_lapis_finch = false Plugin/hint recommendation
* tengu_lodestone_enabled = false Deep links claude-cli:// protocol
* tengu_copper_panda = false Skill improvement suggestions
* tengu_desktop_upsell = {} Desktop app upsell config (dynamic)
* tengu-top-of-feed-tip = {} Emergency tip of feed (dynamic; uses dash)
*
* ── File operations ───────────────────────────────────────────────────
* tengu_quartz_lantern = false File read/write dedup optimization
* tengu_moth_copse = false Attachments handling (variant A)
* tengu_marble_fox = false Attachments handling (variant B)
* tengu_scratch = gate Scratchpad filesystem access / coordinator
*
* ── MCP & plugins ─────────────────────────────────────────────────────
* tengu_harbor = false MCP channel allowlist verification
* tengu_harbor_permissions = false MCP channel permissions enforcement
* tengu_copper_bridge = false Chrome MCP bridge
* tengu_chrome_auto_enable = false Auto-enable Chrome MCP on startup
* tengu_glacier_2xr = false Enhanced tool search / ToolSearchTool
* tengu_malort_pedway = {} Computer-use (Chicago) config (dynamic)
*
* ── VSCode / IDE ──────────────────────────────────────────────────────
* tengu_quiet_fern = false VSCode browser support
* tengu_vscode_cc_auth = false VSCode in-band OAuth via claude_authenticate
* tengu_vscode_review_upsell = gate VSCode review upsell
* tengu_vscode_onboarding = gate VSCode onboarding experience
*
* ── Voice ─────────────────────────────────────────────────────────────
* tengu_amber_quartz_disabled = false VOICE_MODE kill-switch (false = voice allowed)
*
* ── Auto-updater (stubbed in open build) ──────────────────────────────
* tengu_version_config = {min:'0'} Min version enforcement (dynamic)
* tengu_max_version_config = {} Max version / deprecation config (dynamic)
*
* ── Telemetry & tracing ───────────────────────────────────────────────
* tengu_trace_lantern = false Beta session tracing
* tengu_chair_sermon = gate Analytics / message formatting gate
* tengu_strap_foyer = false Settings sync to cloud
*/
function _loadFlags() {
if (_flags !== undefined) return;
try {
const flagsPath = process.env.CLAUDE_FEATURE_FLAGS_FILE
|| _path.join(_os.homedir(), '.claude', 'feature-flags.json');
const parsed = JSON.parse(_fs.readFileSync(flagsPath, 'utf-8'));
_flags = (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) ? parsed : null;
} catch {
_flags = null;
}
}
function _getFlagValue(key, defaultValue) {
_loadFlags();
if (_flags != null && Object.hasOwn(_flags, key)) return _flags[key];
if (Object.hasOwn(_openBuildDefaults, key)) return _openBuildDefaults[key];
return defaultValue;
}
const noop = () => {};
export function onGrowthBookRefresh() { return noop; }
export function hasGrowthBookEnvOverride() { return false; }
export function getAllGrowthBookFeatures() { return {}; }
export function getAllGrowthBookFeatures() { _loadFlags(); return _flags || {}; }
export function getGrowthBookConfigOverrides() { return {}; }
export function setGrowthBookConfigOverride() {}
export function clearGrowthBookConfigOverrides() {}
export function getApiBaseUrlHost() { return undefined; }
export const initializeGrowthBook = async () => null;
export async function getFeatureValue_DEPRECATED(feature, defaultValue) { return defaultValue; }
export function getFeatureValue_CACHED_MAY_BE_STALE(feature, defaultValue) { return defaultValue; }
export function getFeatureValue_CACHED_WITH_REFRESH(feature, defaultValue) { return defaultValue; }
export function checkStatsigFeatureGate_CACHED_MAY_BE_STALE() { return false; }
export async function checkSecurityRestrictionGate() { return false; }
export async function checkGate_CACHED_OR_BLOCKING() { return false; }
export async function getFeatureValue_DEPRECATED(feature, defaultValue) { return _getFlagValue(feature, defaultValue); }
export function getFeatureValue_CACHED_MAY_BE_STALE(feature, defaultValue) { return _getFlagValue(feature, defaultValue); }
export function getFeatureValue_CACHED_WITH_REFRESH(feature, defaultValue) { return _getFlagValue(feature, defaultValue); }
export function checkStatsigFeatureGate_CACHED_MAY_BE_STALE(gate) { return Boolean(_getFlagValue(gate, false)); }
// Security killswitch — always false in the open build. Anthropic uses this
// gate to remotely disable bypassPermissions mode; exposing it via local flags
// would let users accidentally lock themselves out of --dangerously-skip-permissions.
export async function checkSecurityRestrictionGate(gate) { return false; }
export async function checkGate_CACHED_OR_BLOCKING(gate) { return Boolean(_getFlagValue(gate, false)); }
export function refreshGrowthBookAfterAuthChange() {}
export function resetGrowthBook() {}
export async function refreshGrowthBookFeatures() {}
export function resetGrowthBook() { _flags = undefined; }
export async function refreshGrowthBookFeatures() { _flags = undefined; }
export function setupPeriodicGrowthBookRefresh() {}
export function stopPeriodicGrowthBookRefresh() {}
export async function getDynamicConfig_BLOCKS_ON_INIT(configName, defaultValue) { return defaultValue; }
export function getDynamicConfig_CACHED_MAY_BE_STALE(configName, defaultValue) { return defaultValue; }
export async function getDynamicConfig_BLOCKS_ON_INIT(configName, defaultValue) { return _getFlagValue(configName, defaultValue); }
export function getDynamicConfig_CACHED_MAY_BE_STALE(configName, defaultValue) { return _getFlagValue(configName, defaultValue); }
`,
'services/analytics/sink': `

View File

@@ -20,6 +20,23 @@ describe('formatReachabilityFailureDetail', () => {
)
})
test('redacts credentials and sensitive query parameters in endpoint details', () => {
const detail = formatReachabilityFailureDetail(
'http://user:pass@localhost:11434/v1/models?token=abc123&mode=test',
502,
'bad gateway',
{
transport: 'chat_completions',
requestedModel: 'llama3.1:8b',
resolvedModel: 'llama3.1:8b',
},
)
expect(detail).toBe(
'Unexpected status 502 from http://redacted:redacted@localhost:11434/v1/models?token=redacted&mode=test. Body: bad gateway',
)
})
test('adds alias/entitlement hint for codex model support 400s', () => {
const detail = formatReachabilityFailureDetail(
'https://chatgpt.com/backend-api/codex/responses',

View File

@@ -7,6 +7,11 @@ import {
resolveProviderRequest,
isLocalProviderUrl as isProviderLocalUrl,
} from '../src/services/api/providerConfig.js'
import {
getLocalOpenAICompatibleProviderLabel,
probeOllamaGenerationReadiness,
} from '../src/utils/providerDiscovery.js'
import { redactUrlForDisplay } from '../src/utils/urlRedaction.js'
type CheckResult = {
ok: boolean
@@ -69,7 +74,7 @@ export function formatReachabilityFailureDetail(
},
): string {
const compactBody = responseBody.trim().replace(/\s+/g, ' ').slice(0, 240)
const base = `Unexpected status ${status} from ${endpoint}.`
const base = `Unexpected status ${status} from ${redactUrlForDisplay(endpoint)}.`
const bodySuffix = compactBody ? ` Body: ${compactBody}` : ''
if (request.transport !== 'codex_responses' || status !== 400) {
@@ -255,7 +260,7 @@ function checkOpenAIEnv(): CheckResult[] {
results.push(pass('OPENAI_MODEL', process.env.OPENAI_MODEL))
}
results.push(pass('OPENAI_BASE_URL', request.baseUrl))
results.push(pass('OPENAI_BASE_URL', redactUrlForDisplay(request.baseUrl)))
if (request.transport === 'codex_responses') {
const credentials = resolveCodexApiCredentials(process.env)
@@ -308,7 +313,7 @@ async function checkBaseUrlReachability(): Promise<CheckResult> {
return pass('Provider reachability', 'Skipped (OpenAI-compatible mode disabled).')
}
if (useGithub) {
if (useGithub && !useOpenAI) {
return pass(
'Provider reachability',
'Skipped for GitHub Models (inference endpoint differs from OpenAI /models probe).',
@@ -326,6 +331,7 @@ async function checkBaseUrlReachability(): Promise<CheckResult> {
const endpoint = request.transport === 'codex_responses'
? `${request.baseUrl}/responses`
: `${request.baseUrl}/models`
const redactedEndpoint = redactUrlForDisplay(endpoint)
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 4000)
@@ -375,7 +381,10 @@ async function checkBaseUrlReachability(): Promise<CheckResult> {
})
if (response.status === 200 || response.status === 401 || response.status === 403) {
return pass('Provider reachability', `Reached ${endpoint} (status ${response.status}).`)
return pass(
'Provider reachability',
`Reached ${redactedEndpoint} (status ${response.status}).`,
)
}
const responseBody = await response.text().catch(() => '')
@@ -391,12 +400,100 @@ async function checkBaseUrlReachability(): Promise<CheckResult> {
)
} catch (error) {
const message = error instanceof Error ? error.message : String(error)
return fail('Provider reachability', `Failed to reach ${endpoint}: ${message}`)
return fail(
'Provider reachability',
`Failed to reach ${redactedEndpoint}: ${message}`,
)
} finally {
clearTimeout(timeout)
}
}
async function checkProviderGenerationReadiness(): Promise<CheckResult> {
const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const useMistral = isTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
if (!useGemini && !useOpenAI && !useGithub && !useMistral) {
return pass('Provider generation readiness', 'Skipped (OpenAI-compatible mode disabled).')
}
if (useGithub && !useOpenAI) {
return pass(
'Provider generation readiness',
'Skipped for GitHub Models (runtime generation uses a different endpoint flow).',
)
}
if (useGemini || useMistral) {
return pass(
'Provider generation readiness',
'Skipped for managed provider mode.',
)
}
if (!useOpenAI) {
return pass('Provider generation readiness', 'Skipped (OpenAI-compatible mode disabled).')
}
const request = resolveProviderRequest({
model: process.env.OPENAI_MODEL,
baseUrl: process.env.OPENAI_BASE_URL,
})
if (request.transport === 'codex_responses') {
return pass(
'Provider generation readiness',
'Skipped for Codex responses (reachability probe already performs a lightweight generation request).',
)
}
if (!isLocalBaseUrl(request.baseUrl)) {
return pass('Provider generation readiness', 'Skipped for non-local provider URL.')
}
const localProviderLabel = getLocalOpenAICompatibleProviderLabel(request.baseUrl)
if (localProviderLabel !== 'Ollama') {
return pass(
'Provider generation readiness',
`Skipped for ${localProviderLabel} (no provider-specific generation probe).`,
)
}
const readiness = await probeOllamaGenerationReadiness({
baseUrl: request.baseUrl,
model: request.requestedModel,
})
if (readiness.state === 'ready') {
return pass(
'Provider generation readiness',
`Generated a test response with ${readiness.probeModel ?? request.requestedModel}.`,
)
}
if (readiness.state === 'unreachable') {
return fail(
'Provider generation readiness',
`Could not reach Ollama at ${redactUrlForDisplay(request.baseUrl)}.`,
)
}
if (readiness.state === 'no_models') {
return fail(
'Provider generation readiness',
'Ollama is reachable, but no installed models were found. Pull a model first (for example: ollama pull qwen2.5-coder:7b).',
)
}
const detailSuffix = readiness.detail ? ` Detail: ${readiness.detail}.` : ''
return fail(
'Provider generation readiness',
`Ollama is reachable, but generation failed for ${readiness.probeModel ?? request.requestedModel}.${detailSuffix}`,
)
}
function isAtomicChatUrl(baseUrl: string): boolean {
try {
const parsed = new URL(baseUrl)
@@ -567,6 +664,7 @@ async function main(): Promise<void> {
results.push(checkBuildArtifacts())
results.push(...checkOpenAIEnv())
results.push(await checkBaseUrlReachability())
results.push(await checkProviderGenerationReadiness())
results.push(checkOllamaProcessorMode())
if (!options.json) {

View File

@@ -249,6 +249,11 @@ export type ToolUseContext = {
/** When true, canUseTool must always be called even when hooks auto-approve.
* Used by speculation for overlay file path rewriting. */
requireCanUseTool?: boolean
/**
* Optional callback used by hook-chain fallback actions that launch
* AgentTool from hook runtime paths.
*/
hookChainsCanUseTool?: CanUseToolFn
messages: Message[]
fileReadingLimits?: {
maxTokens?: number

View File

@@ -0,0 +1,282 @@
/**
* Tests for Bug Fixes applied to openclaude.
*
* Covers:
* 1. Gemini `store: false` rejection fix
* 2. Session timeout / 500 error fix (stream idle timeout)
* 3. Agent loop continuation nudge
* 4. Web search result count improvements
*/
import { describe, test, expect } from 'bun:test'
import { resolve } from 'path'
const SRC = resolve(import.meta.dir, '..')
const file = (relative: string) => Bun.file(resolve(SRC, relative))
// ---------------------------------------------------------------------------
// Fix 1: Gemini `store: false` rejection
// ---------------------------------------------------------------------------
describe('Gemini store field fix', () => {
test('isGeminiMode is imported and used in openaiShim', async () => {
const content = await file('services/api/openaiShim.ts').text()
// Verify the fix: store deletion should check for Gemini mode
expect(content).toContain('isGeminiMode()')
expect(content).toContain("mistral and gemini don't recognize body.store")
// Ensure the delete body.store is guarded for both Mistral and Gemini
expect(content).toMatch(/isMistral\s*\|\|\s*isGeminiMode\(\)/)
})
test('store: false is still set by default (OpenAI needs it)', async () => {
const content = await file('services/api/openaiShim.ts').text()
// The body should still have store: false by default
expect(content).toMatch(/store:\s*false/)
// But it should be deleted for non-OpenAI providers
expect(content).toMatch(/delete body\.store/)
})
})
// ---------------------------------------------------------------------------
// Fix 2: Session timeout — stream idle timeout
// ---------------------------------------------------------------------------
describe('Session timeout fix', () => {
test('openaiShim has idle timeout for SSE streams', async () => {
const content = await file('services/api/openaiShim.ts').text()
expect(content).toContain('STREAM_IDLE_TIMEOUT_MS')
expect(content).toContain('readWithTimeout')
expect(content).toMatch(/readWithTimeout\(\)/)
})
test('codexShim has idle timeout for SSE streams', async () => {
const content = await file('services/api/codexShim.ts').text()
expect(content).toContain('STREAM_IDLE_TIMEOUT_MS')
expect(content).toContain('readWithTimeout')
expect(content).toMatch(/readWithTimeout\(\)/)
})
test('idle timeout is set to a reasonable value (>= 60s)', async () => {
const content = await file('services/api/openaiShim.ts').text()
// Extract the timeout value (supports numeric separators like 120_000)
const match = content.match(/STREAM_IDLE_TIMEOUT_MS\s*=\s*([\d_]+)/)
expect(match).not.toBeNull()
const timeoutMs = parseInt(match![1].replace(/_/g, ''), 10)
expect(timeoutMs).toBeGreaterThanOrEqual(60_000)
})
})
// ---------------------------------------------------------------------------
// Fix 3: Agent loop continuation nudge
// ---------------------------------------------------------------------------
describe('Agent loop continuation nudge', () => {
test('query.ts has continuation signal detection', async () => {
const content = await file('query.ts').text()
expect(content).toContain('continuationSignals')
expect(content).toContain('Continuation nudge triggered')
expect(content).toContain('continuation_nudge')
})
test('continuation signals include tightened patterns', async () => {
const content = await file('query.ts').text()
// Should detect tightened patterns requiring explicit action verbs
expect(content).toMatch(/so now \(i\|let me\|we\)/)
expect(content).toContain('completionMarkers')
expect(content).toContain('MAX_CONTINUATION_NUDGES')
// Verify the nudge counter guard exists
expect(content).toMatch(/continuationNudgeCount\s*<\s*MAX_CONTINUATION_NUDGES/)
})
test('nudge creates a meta user message to continue', async () => {
const content = await file('query.ts').text()
expect(content).toContain(
'Continue with the task. Use the appropriate tools to proceed.',
)
})
})
// ---------------------------------------------------------------------------
// Fix 4: Web search result count improvements
// ---------------------------------------------------------------------------
describe('Web search result count improvements', () => {
test('Bing provider requests at least 15 results', async () => {
const content = await file(
'tools/WebSearchTool/providers/bing.ts',
).text()
expect(content).toMatch(/count.*['"]15['"]/)
})
test('Tavily provider requests at least 15 results', async () => {
const content = await file(
'tools/WebSearchTool/providers/tavily.ts',
).text()
expect(content).toMatch(/max_results:\s*15/)
})
test('Exa provider requests at least 15 results', async () => {
const content = await file(
'tools/WebSearchTool/providers/exa.ts',
).text()
expect(content).toMatch(/numResults:\s*15/)
})
test('Firecrawl provider requests at least 15 results', async () => {
const content = await file(
'tools/WebSearchTool/providers/firecrawl.ts',
).text()
expect(content).toMatch(/limit:\s*15/)
})
test('Mojeek provider requests at least 10 results', async () => {
const content = await file(
'tools/WebSearchTool/providers/mojeek.ts',
).text()
// Mojeek uses 't' param for result count — verify it's set to 10
expect(content).toMatch(/searchParams\.set\('t',\s*'10'\)/)
})
test('You.com provider requests at least 10 results', async () => {
const content = await file(
'tools/WebSearchTool/providers/you.ts',
).text()
expect(content).toMatch(/num_web_results.*['"]10['"]/)
})
test('Jina provider requests at least 10 results', async () => {
const content = await file(
'tools/WebSearchTool/providers/jina.ts',
).text()
expect(content).toMatch(/count.*['"]10['"]/)
})
test('Native Anthropic web search max_uses increased to 15', async () => {
const content = await file(
'tools/WebSearchTool/WebSearchTool.ts',
).text()
expect(content).toMatch(/max_uses:\s*15/)
})
})
// ---------------------------------------------------------------------------
// Fix 5: MCP tool timeout fix
// ---------------------------------------------------------------------------
describe('MCP tool timeout fix', () => {
test('default MCP tool timeout is reasonable (not 27 hours)', async () => {
const content = await file('services/mcp/client.ts').text()
// Should NOT have the old ~27.8 hour default
expect(content).not.toContain('100_000_000')
// Should have a reasonable timeout (5 minutes = 300_000ms)
expect(content).toMatch(/DEFAULT_MCP_TOOL_TIMEOUT_MS\s*=\s*300_000/)
})
test('MCP tools/list has retry logic', async () => {
const content = await file('services/mcp/client.ts').text()
expect(content).toContain('tools/list failed (attempt')
expect(content).toContain('Retrying...')
})
test('MCP URL elicitation checks abort signal', async () => {
const content = await file('services/mcp/client.ts').text()
expect(content).toContain('signal.aborted')
expect(content).toContain('Tool call aborted during URL elicitation')
})
test('MCP tool error messages include server and tool name in telemetry', async () => {
const content = await file('services/mcp/client.ts').text()
// Telemetry message should include context like "MCP tool [serverName] toolName: error"
// The human-readable message stays unchanged to avoid breaking error consumers
expect(content).toContain('MCP tool [${name}] ${tool}:')
})
})
// ---------------------------------------------------------------------------
// Cross-cutting: verify no regressions
// ---------------------------------------------------------------------------
describe('Regression checks', () => {
test('store field is still set for OpenAI (not deleted unconditionally)', async () => {
const content = await file('services/api/openaiShim.ts').text()
// store: false should exist in body construction
expect(content).toMatch(/store:\s*false/)
// But delete body.store should be conditional (guarded by if)
const deleteLines = content.split('\n').filter(l => l.includes('delete body.store'))
expect(deleteLines.length).toBeGreaterThan(0)
// Verify the delete is inside a conditional block by checking surrounding context
for (const line of deleteLines) {
const trimmed = line.trim()
// Should be either inside an if block (indented delete) or a comment
expect(
trimmed.startsWith('delete') && !trimmed.includes('// unconditional'),
).toBe(true)
}
})
})
// ---------------------------------------------------------------------------
// Fix 6: SendMessageTool race condition guard
// ---------------------------------------------------------------------------
describe('SendMessageTool race condition fix', () => {
test('SendMessageTool has double-check for concurrent resume', async () => {
const content = await file('tools/SendMessageTool/SendMessageTool.ts').text()
// Should have a second status check before resuming to prevent race
expect(content).toContain('was concurrently resumed')
// The freshTask check should re-read from getAppState
expect(content).toMatch(/const freshTask = context\.getAppState\(\)\.tasks\[agentId\]/)
})
})
// ---------------------------------------------------------------------------
// Fix 7: AgentTool dump state cleanup
// ---------------------------------------------------------------------------
describe('AgentTool cleanup fix', () => {
test('backgrounded agent always cleans up dump state', async () => {
const content = await file('tools/AgentTool/AgentTool.tsx').text()
// The backgrounded agent's finally block should clean up regardless
// of whether the agent crashed or completed normally
expect(content).toContain('Defensive cleanup: wrap each call so one failure')
// Verify cleanup is wrapped in try/catch for defensive execution
expect(content).toMatch(/try\s*\{\s*clearInvokedSkillsForAgent/)
expect(content).toMatch(/try\s*\{\s*clearDumpState/)
})
})
// ---------------------------------------------------------------------------
// Fix 8: Context overflow 500 error handling
// ---------------------------------------------------------------------------
describe('Context overflow 500 fix', () => {
test('errors.ts has handler for context overflow 500 errors', async () => {
const content = await file('services/api/errors.ts').text()
expect(content).toContain('500 errors caused by context overflow')
expect(content).toContain('too many tokens')
expect(content).toContain('The conversation has grown too large')
})
test('query.ts has circuit breaker safety net for oversized context', async () => {
const content = await file('query.ts').text()
expect(content).toContain('Safety net: when auto-compact')
expect(content).toContain('circuit breaker has tripped')
expect(content).toContain('automatic compaction has failed')
})
})

View File

@@ -0,0 +1,55 @@
/**
* Tests for Web Search Provider result count configurations.
*/
import { describe, test, expect } from 'bun:test'
import { resolve } from 'path'
const SRC = resolve(import.meta.dir, '..', 'tools', 'WebSearchTool', 'providers')
const file = (name: string) => Bun.file(resolve(SRC, name))
describe('Provider result counts', () => {
const providers = [
'bing.ts',
'tavily.ts',
'exa.ts',
'firecrawl.ts',
'mojeek.ts',
'you.ts',
'jina.ts',
'duckduckgo.ts',
// linkup.ts excluded — uses depth param, not a result count field
]
for (const name of providers) {
test(`${name} exists and is readable`, async () => {
const f = file(name)
expect(await f.exists()).toBe(true)
const content = await f.text()
expect(content.length).toBeGreaterThan(100)
})
}
test('No provider hardcodes a limit below 10', async () => {
const suspiciousPatterns = [
/count['":\s]*['"]([1-9])['"]/i,
/limit['":\s]*([1-9])\b/,
/max_results['":\s]*([1-9])\b/,
/numResults['":\s]*([1-9])\b/,
]
for (const name of providers) {
const content = await file(name).text()
for (const pattern of suspiciousPatterns) {
const match = content.match(pattern)
if (match) {
const num = parseInt(match[1], 10)
expect(num).toBeGreaterThanOrEqual(
10,
`${name} has suspiciously low result count: ${match[0]}`,
)
}
}
}
})
})

View File

@@ -0,0 +1,191 @@
/**
* Security hardening regression tests.
*
* Covers:
* 1. MCP tool result Unicode sanitization
* 2. Sandbox settings source filtering (exclude projectSettings)
* 3. Plugin git clone/pull hooks disabled
* 4. ANTHROPIC_FOUNDRY_API_KEY removed from SAFE_ENV_VARS
* 5. WebFetch SSRF protection via ssrfGuardedLookup
*/
import { describe, test, expect } from 'bun:test'
import { resolve } from 'path'
const SRC = resolve(import.meta.dir, '..')
const file = (relative: string) => Bun.file(resolve(SRC, relative))
// ---------------------------------------------------------------------------
// Fix 1: MCP tool result Unicode sanitization
// ---------------------------------------------------------------------------
describe('MCP tool result sanitization', () => {
test('transformResultContent sanitizes text content', async () => {
const content = await file('services/mcp/client.ts').text()
// Tool definitions are already sanitized (line ~1798)
expect(content).toContain('recursivelySanitizeUnicode(result.tools)')
// Tool results must also be sanitized
expect(content).toMatch(
/case 'text':[\s\S]*?recursivelySanitizeUnicode\(resultContent\.text\)/,
)
})
test('resource text content is also sanitized', async () => {
const content = await file('services/mcp/client.ts').text()
expect(content).toMatch(
/recursivelySanitizeUnicode\(\s*`\$\{prefix\}\$\{resource\.text\}`/,
)
})
})
// ---------------------------------------------------------------------------
// Fix 2: Sandbox settings source filtering
// ---------------------------------------------------------------------------
describe('Sandbox settings trust boundary', () => {
test('getSandboxEnabledSetting does not use getSettings_DEPRECATED', async () => {
const content = await file('utils/sandbox/sandbox-adapter.ts').text()
// Extract the getSandboxEnabledSetting function body
const fnMatch = content.match(
/function getSandboxEnabledSetting\(\)[^{]*\{([\s\S]*?)\n\}/,
)
expect(fnMatch).not.toBeNull()
const fnBody = fnMatch![1]
// Must NOT use getSettings_DEPRECATED (reads all sources including project)
expect(fnBody).not.toContain('getSettings_DEPRECATED')
// Must use getSettingsForSource for individual trusted sources
expect(fnBody).toContain("getSettingsForSource('userSettings')")
expect(fnBody).toContain("getSettingsForSource('policySettings')")
// Must NOT read from projectSettings
expect(fnBody).not.toContain("'projectSettings'")
})
})
// ---------------------------------------------------------------------------
// Fix 3: Plugin git hooks disabled
// ---------------------------------------------------------------------------
describe('Plugin git operations disable hooks', () => {
test('gitClone includes core.hooksPath=/dev/null', async () => {
const content = await file('utils/plugins/marketplaceManager.ts').text()
// The clone args must disable hooks
const cloneSection = content.slice(
content.indexOf('export async function gitClone('),
content.indexOf('export async function gitClone(') + 2000,
)
expect(cloneSection).toContain("'core.hooksPath=/dev/null'")
})
test('gitPull includes core.hooksPath=/dev/null', async () => {
const content = await file('utils/plugins/marketplaceManager.ts').text()
const pullSection = content.slice(
content.indexOf('export async function gitPull('),
content.indexOf('export async function gitPull(') + 2000,
)
expect(pullSection).toContain("'core.hooksPath=/dev/null'")
})
test('gitSubmoduleUpdate includes core.hooksPath=/dev/null', async () => {
const content = await file('utils/plugins/marketplaceManager.ts').text()
const subSection = content.slice(
content.indexOf('async function gitSubmoduleUpdate('),
content.indexOf('async function gitSubmoduleUpdate(') + 1000,
)
expect(subSection).toContain("'core.hooksPath=/dev/null'")
})
})
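// What the three assertions enforce, as a runnable sketch (the exact
// argument layout in marketplaceManager.ts may differ): pointing
// core.hooksPath at /dev/null for the single invocation means a cloned
// plugin repo cannot execute post-checkout or post-merge hooks locally.
import { execFileSync } from 'node:child_process'
function cloneWithoutHooksSketch(repoUrl: string, destDir: string): void {
  // -c applies the config override for this one git invocation only
  execFileSync('git', ['-c', 'core.hooksPath=/dev/null', 'clone', repoUrl, destDir])
}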
// ---------------------------------------------------------------------------
// Fix 4: ANTHROPIC_FOUNDRY_API_KEY not in SAFE_ENV_VARS
// ---------------------------------------------------------------------------
describe('SAFE_ENV_VARS excludes credentials', () => {
test('ANTHROPIC_FOUNDRY_API_KEY is not in SAFE_ENV_VARS', async () => {
const content = await file('utils/managedEnvConstants.ts').text()
// Extract the SAFE_ENV_VARS set definition
const safeStart = content.indexOf('export const SAFE_ENV_VARS')
const safeEnd = content.indexOf('])', safeStart)
const safeSection = content.slice(safeStart, safeEnd)
expect(safeSection).not.toContain('ANTHROPIC_FOUNDRY_API_KEY')
})
})
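// The allowlist pattern being protected, with placeholder entries (the real
// SAFE_ENV_VARS membership differs; the point is that credentials never
// belong in it):
const SAFE_ENV_VARS_SKETCH = new Set(['TERM', 'LANG', 'TZ', 'PATH'])
function filterSafeEnv(env: NodeJS.ProcessEnv): NodeJS.ProcessEnv {
  // Anything not explicitly allowlisted, API keys included, is dropped.
  return Object.fromEntries(
    Object.entries(env).filter(([key]) => SAFE_ENV_VARS_SKETCH.has(key)),
  )
}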
// ---------------------------------------------------------------------------
// Fix 5: WebFetch SSRF protection
// ---------------------------------------------------------------------------
describe('WebFetch SSRF guard', () => {
test('getWithPermittedRedirects uses ssrfGuardedLookup', async () => {
const content = await file('tools/WebFetchTool/utils.ts').text()
expect(content).toContain(
"import { ssrfGuardedLookup } from '../../utils/hooks/ssrfGuard.js'",
)
// The axios.get call in getWithPermittedRedirects must include lookup
const fnSection = content.slice(
content.indexOf('export async function getWithPermittedRedirects('),
content.indexOf('export async function getWithPermittedRedirects(') +
1000,
)
expect(fnSection).toContain('lookup: ssrfGuardedLookup')
})
})
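// A sketch of what an SSRF-guarded lookup can look like (the real
// utils/hooks/ssrfGuard.ts may differ; axios accepts a custom `lookup`
// in its request config since v1.2). Rejecting private and loopback
// ranges at DNS-resolution time means the address that was checked is
// the one actually dialed, which also closes the DNS-rebinding window.
import { lookup as dnsLookup } from 'node:dns'
function isPrivateAddressSketch(addr: string): boolean {
  return (
    /^(10\.|127\.|192\.168\.|169\.254\.)/.test(addr) ||
    /^172\.(1[6-9]|2\d|3[01])\./.test(addr) ||
    addr === '::1' ||
    addr.startsWith('fd') ||
    addr.startsWith('fe80:')
  )
}
function ssrfGuardedLookupSketch(
  hostname: string,
  _options: unknown,
  callback: (err: Error | null, address: string, family: number) => void,
): void {
  dnsLookup(hostname, (err, address, family) => {
    if (err) return callback(err, '', 0)
    if (isPrivateAddressSketch(address)) {
      return callback(
        new Error(`Blocked private address ${address} for ${hostname}`),
        '',
        0,
      )
    }
    callback(null, address, family)
  })
}
// Wired in as: axios.get(url, { lookup: ssrfGuardedLookupSketch, ... })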
// ---------------------------------------------------------------------------
// Fix 6: Swarm permission file polling removed (security hardening)
// ---------------------------------------------------------------------------
describe('Swarm permission file polling removed', () => {
test('useSwarmPermissionPoller hook no longer exists', async () => {
const content = await file(
'hooks/useSwarmPermissionPoller.ts',
).text()
// The file-based polling hook must not exist — it read from an
// unauthenticated resolved/ directory where any local process could
// forge approval files.
expect(content).not.toContain('function useSwarmPermissionPoller(')
// The file-based processResponse must not exist
expect(content).not.toContain('function processResponse(')
})
test('poller does not import from permissionSync', async () => {
const content = await file(
'hooks/useSwarmPermissionPoller.ts',
).text()
// Must not import anything from permissionSync — all file-based
// functions have been removed from this module's dependencies
expect(content).not.toContain('permissionSync')
})
test('file-based permission functions are marked deprecated', async () => {
const content = await file(
'utils/swarm/permissionSync.ts',
).text()
// All file-based functions must have @deprecated JSDoc
const deprecatedFns = [
'writePermissionRequest',
'readPendingPermissions',
'readResolvedPermission',
'resolvePermission',
'pollForResponse',
'removeWorkerResponse',
]
for (const fn of deprecatedFns) {
// Find the function and check that @deprecated appears before it
const fnIndex = content.indexOf(`export async function ${fn}(`)
if (fnIndex === -1) continue // skip helpers exported as consts rather than async functions
const preceding = content.slice(Math.max(0, fnIndex - 500), fnIndex)
expect(preceding).toContain('@deprecated')
}
})
test('mailbox-based functions are NOT deprecated', async () => {
const content = await file(
'utils/swarm/permissionSync.ts',
).text()
// These are the active path — must not be deprecated
const activeFns = [
'sendPermissionRequestViaMailbox',
'sendPermissionResponseViaMailbox',
]
for (const fn of activeFns) {
const fnIndex = content.indexOf(`export async function ${fn}(`)
expect(fnIndex).not.toBe(-1)
const preceding = content.slice(Math.max(0, fnIndex - 300), fnIndex)
expect(preceding).not.toContain('@deprecated')
}
})
})

View File

@@ -1562,29 +1562,8 @@ export function clearInvokedSkillsForAgent(agentId: string): void {
}
}
// Slow operations tracking for dev bar
const MAX_SLOW_OPERATIONS = 10
const SLOW_OPERATION_TTL_MS = 10000
export function addSlowOperation(operation: string, durationMs: number): void {
if (process.env.USER_TYPE !== 'ant') return
// Skip tracking for editor sessions (user editing a prompt file in $EDITOR)
// These are intentionally slow since the user is drafting text
if (operation.includes('exec') && operation.includes('claude-prompt-')) {
return
}
const now = Date.now()
// Remove stale operations
STATE.slowOperations = STATE.slowOperations.filter(
op => now - op.timestamp < SLOW_OPERATION_TTL_MS,
)
// Add new operation
STATE.slowOperations.push({ operation, durationMs, timestamp: now })
// Keep only the most recent operations
if (STATE.slowOperations.length > MAX_SLOW_OPERATIONS) {
STATE.slowOperations = STATE.slowOperations.slice(-MAX_SLOW_OPERATIONS)
}
}
// Slow operations tracking removed (was internal-only).
// Functions kept as no-ops to avoid breaking callers.
const EMPTY_SLOW_OPERATIONS: ReadonlyArray<{
operation: string
@@ -1592,32 +1571,17 @@ const EMPTY_SLOW_OPERATIONS: ReadonlyArray<{
timestamp: number
}> = []
export function addSlowOperation(
_operation: string,
_durationMs: number,
): void {}
export function getSlowOperations(): ReadonlyArray<{
operation: string
durationMs: number
timestamp: number
}> {
// Most common case: nothing tracked. Return a stable reference so the
// caller's setState() can bail via Object.is instead of re-rendering at 2fps.
if (STATE.slowOperations.length === 0) {
return EMPTY_SLOW_OPERATIONS
}
const now = Date.now()
// Only allocate a new array when something actually expired; otherwise keep
// the reference stable across polls while ops are still fresh.
if (
STATE.slowOperations.some(op => now - op.timestamp >= SLOW_OPERATION_TTL_MS)
) {
STATE.slowOperations = STATE.slowOperations.filter(
op => now - op.timestamp < SLOW_OPERATION_TTL_MS,
)
if (STATE.slowOperations.length === 0) {
return EMPTY_SLOW_OPERATIONS
}
}
// Safe to return directly: addSlowOperation() reassigns STATE.slowOperations
// before pushing, so the array held in React state is never mutated.
return STATE.slowOperations
return EMPTY_SLOW_OPERATIONS
}
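// Why the stable empty-array reference matters: React's useState setter
// bails out of a re-render when Object.is(next, prev) holds. Minimal
// sketch, not repo code (assumes `import React from 'react'`):
function usePolledSlowOpsSketch(): ReturnType<typeof getSlowOperations> {
  const [ops, setOps] = React.useState(getSlowOperations())
  React.useEffect(() => {
    // getSlowOperations() now always returns the same EMPTY_SLOW_OPERATIONS
    // reference, so every setOps after the first is an Object.is hit and
    // React skips the re-render entirely.
    const id = setInterval(() => setOps(getSlowOperations()), 500) // 2fps poll
    return () => clearInterval(id)
  }, [])
  return ops
}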
export function getMainThreadAgentType(): string | undefined {

View File

@@ -14,21 +14,14 @@
import { getOauthConfig } from '../constants/oauth.js'
import { getClaudeAIOAuthTokens } from '../utils/auth.js'
/** Ant-only dev override: CLAUDE_BRIDGE_OAUTH_TOKEN, else undefined. */
/** Dev override: CLAUDE_BRIDGE_OAUTH_TOKEN, else undefined. */
export function getBridgeTokenOverride(): string | undefined {
return (
(process.env.USER_TYPE === 'ant' &&
process.env.CLAUDE_BRIDGE_OAUTH_TOKEN) ||
undefined
)
return process.env.CLAUDE_BRIDGE_OAUTH_TOKEN || undefined
}
/** Ant-only dev override: CLAUDE_BRIDGE_BASE_URL, else undefined. */
/** Dev override: CLAUDE_BRIDGE_BASE_URL, else undefined. */
export function getBridgeBaseUrlOverride(): string | undefined {
return (
(process.env.USER_TYPE === 'ant' && process.env.CLAUDE_BRIDGE_BASE_URL) ||
undefined
)
return process.env.CLAUDE_BRIDGE_BASE_URL || undefined
}
/**

View File

@@ -2194,14 +2194,10 @@ export async function bridgeMain(args: string[]): Promise<void> {
// Session ingress URL for WebSocket connections. In production this is the
// same as baseUrl (Envoy routes /v1/session_ingress/* to session-ingress).
// Locally, session-ingress runs on a different port (9413) than the
// contain-provide-api (8211), so CLAUDE_BRIDGE_SESSION_INGRESS_URL must be
// set explicitly. Ant-only, matching CLAUDE_BRIDGE_BASE_URL.
// Locally, session-ingress may run on a different port, so
// CLAUDE_BRIDGE_SESSION_INGRESS_URL can override the default.
const sessionIngressUrl =
process.env.USER_TYPE === 'ant' &&
process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL
? process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL
: baseUrl
process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL || baseUrl
const { getBranch, getRemoteUrl, findGitRoot } = await import(
'../utils/git.js'
@@ -2851,10 +2847,7 @@ export async function runBridgeHeadless(
)
}
const sessionIngressUrl =
process.env.USER_TYPE === 'ant' &&
process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL
? process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL
: baseUrl
process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL || baseUrl
const { getBranch, getRemoteUrl, findGitRoot } = await import(
'../utils/git.js'

View File

@@ -217,25 +217,39 @@ export async function getBridgeSession(
}
const url = `${opts?.baseUrl ?? getOauthConfig().BASE_API_URL}/v1/sessions/${sessionId}`
const timeoutMs = 10_000
logForDebugging(`[bridge] Fetching session ${sessionId}`)
let response
try {
response = await axios.get<{ environment_id?: string; title?: string }>(
url,
{ headers, timeout: 10_000, validateStatus: s => s < 500 },
{ headers, timeout: timeoutMs, validateStatus: s => s < 500 },
)
} catch (err: unknown) {
logForDebugging(
`[bridge] Session fetch request failed: ${errorMessage(err)}`,
)
if (axios.isAxiosError(err)) {
const status = err.response?.status ?? 'no-response'
const code = err.code ?? 'unknown-code'
const requestUrl = err.config?.url ?? url
const method = err.config?.method?.toUpperCase() ?? 'GET'
const message = err.message ?? errorMessage(err)
const timeout = err.config?.timeout ?? timeoutMs
logForDebugging(
`[bridge] Session fetch request failed: status=${status} code=${code} method=${method} url=${requestUrl} timeout=${timeout} message=${message}`,
)
} else {
logForDebugging(
`[bridge] Session fetch request failed: url=${url} timeout=${timeoutMs} message=${errorMessage(err)}`,
)
}
return null
}
if (response.status !== 200) {
const detail = extractErrorDetail(response.data)
logForDebugging(
`[bridge] Session fetch failed with status ${response.status}${detail ? `: ${detail}` : ''}`,
`[bridge] Session fetch failed with status ${response.status} url=${url}${detail ? `: ${detail}` : ''}`,
)
return null
}
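// With the richer fields, a timed-out fetch now logs something like
// (values illustrative):
//   [bridge] Session fetch request failed: status=no-response
//   code=ECONNABORTED method=GET url=https://api.example.com/v1/sessions/abc123
//   timeout=10000 message=timeout of 10000ms exceeded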

View File

@@ -465,10 +465,7 @@ export async function initReplBridge(
const branch = await getBranch()
const gitRepoUrl = await getRemoteUrl()
const sessionIngressUrl =
process.env.USER_TYPE === 'ant' &&
process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL
? process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL
: baseUrl
process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL || baseUrl
// Assistant-mode sessions advertise a distinct worker_type so the web UI
// can filter them into a dedicated picker. KAIROS guard keeps the

View File

@@ -11,7 +11,12 @@ import { MCPServerDesktopImportDialog } from '../../components/MCPServerDesktopI
import { render } from '../../ink.js';
import { KeybindingSetup } from '../../keybindings/KeybindingProviderSetup.js';
import { type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS, logEvent } from '../../services/analytics/index.js';
import { clearMcpClientConfig, clearServerTokensFromLocalStorage, readClientSecret, saveMcpClientSecret } from '../../services/mcp/auth.js';
import {
clearMcpClientConfig,
clearServerTokensFromSecureStorage,
readClientSecret,
saveMcpClientSecret,
} from '../../services/mcp/auth.js'
import { doctorAllServers, doctorServer, type McpDoctorReport, type McpDoctorScopeFilter } from '../../services/mcp/doctor.js';
import { connectToServer, getMcpServerConnectionBatchSize } from '../../services/mcp/client.js';
import { addMcpConfig, getAllMcpConfigs, getMcpConfigByName, getMcpConfigsByScope, removeMcpConfig } from '../../services/mcp/config.js';

View File

@@ -362,15 +362,9 @@ const proactiveModule =
feature('PROACTIVE') || feature('KAIROS')
? (require('../proactive/index.js') as typeof import('../proactive/index.js'))
: null
const cronSchedulerModule = feature('AGENT_TRIGGERS')
? (require('../utils/cronScheduler.js') as typeof import('../utils/cronScheduler.js'))
: null
const cronJitterConfigModule = feature('AGENT_TRIGGERS')
? (require('../utils/cronJitterConfig.js') as typeof import('../utils/cronJitterConfig.js'))
: null
const cronGate = feature('AGENT_TRIGGERS')
? (require('../tools/ScheduleCronTool/prompt.js') as typeof import('../tools/ScheduleCronTool/prompt.js'))
: null
const cronSchedulerModule = require('../utils/cronScheduler.js') as typeof import('../utils/cronScheduler.js')
const cronJitterConfigModule = require('../utils/cronJitterConfig.js') as typeof import('../utils/cronJitterConfig.js')
const cronGate = require('../tools/ScheduleCronTool/prompt.js') as typeof import('../tools/ScheduleCronTool/prompt.js')
const extractMemoriesModule = feature('EXTRACT_MEMORIES')
? (require('../services/extractMemories/extractMemories.js') as typeof import('../services/extractMemories/extractMemories.js'))
: null
@@ -2701,11 +2695,7 @@ function runHeadlessStreaming(
// the end of run() picks up the queued command.
let cronScheduler: import('../utils/cronScheduler.js').CronScheduler | null =
null
if (
feature('AGENT_TRIGGERS') &&
cronSchedulerModule &&
cronGate?.isKairosCronEnabled()
) {
if (cronGate.isKairosCronEnabled()) {
cronScheduler = cronSchedulerModule.createCronScheduler({
onFire: prompt => {
if (inputClosed) return
@@ -2727,8 +2717,8 @@ function runHeadlessStreaming(
void run()
},
isLoading: () => running || inputClosed,
getJitterConfig: cronJitterConfigModule?.getCronJitterConfig,
isKilled: () => !cronGate?.isKairosCronEnabled(),
getJitterConfig: cronJitterConfigModule.getCronJitterConfig,
isKilled: () => !cronGate.isKairosCronEnabled(),
})
cronScheduler.start()
}
@@ -4592,7 +4582,7 @@ function handleSetPermissionMode(
subtype: 'error',
request_id: requestId,
error:
'Cannot set permission mode to bypassPermissions because the session was not launched with --dangerously-skip-permissions',
'Cannot set permission mode to bypassPermissions. Enable it with --allow-dangerously-skip-permissions or set permissions.allowBypassPermissionsMode in settings.json',
},
})
return toolPermissionContext

src/commands.test.ts Normal file
View File

@@ -0,0 +1,30 @@
import { describe, expect, test } from 'bun:test'
import { formatDescriptionWithSource } from './commands.js'
describe('formatDescriptionWithSource', () => {
test('returns empty text for prompt commands missing a description', () => {
const command = {
name: 'example',
type: 'prompt',
source: 'builtin',
description: undefined,
} as any
expect(formatDescriptionWithSource(command)).toBe('')
})
test('formats plugin commands with missing description safely', () => {
const command = {
name: 'example',
type: 'prompt',
source: 'plugin',
description: undefined,
pluginInfo: {
pluginManifest: {
name: 'MyPlugin',
},
},
} as any
expect(formatDescriptionWithSource(command)).toBe('(MyPlugin) ')
})
})

View File

@@ -740,23 +740,23 @@ export function getCommand(commandName: string, commands: Command[]): Command {
*/
export function formatDescriptionWithSource(cmd: Command): string {
if (cmd.type !== 'prompt') {
return cmd.description
return cmd.description ?? ''
}
if (cmd.kind === 'workflow') {
return `${cmd.description} (workflow)`
return `${cmd.description ?? ''} (workflow)`
}
if (cmd.source === 'plugin') {
const pluginName = cmd.pluginInfo?.pluginManifest.name
if (pluginName) {
return `(${pluginName}) ${cmd.description}`
return `(${pluginName}) ${cmd.description ?? ''}`
}
return `${cmd.description} (plugin)`
return `${cmd.description ?? ''} (plugin)`
}
if (cmd.source === 'builtin' || cmd.source === 'mcp') {
return cmd.description
return cmd.description ?? ''
}
if (cmd.source === 'bundled') {

src/commands/benchmark.ts Normal file
View File

@@ -0,0 +1,56 @@
import type { ToolUseContext } from '../Tool.js'
import type { Command } from '../types/command.js'
import {
benchmarkModel,
benchmarkMultipleModels,
formatBenchmarkResults,
isBenchmarkSupported,
} from '../utils/model/benchmark.js'
import { getOllamaModelOptions } from '../utils/model/ollamaModels.js'
async function runBenchmark(
model?: string,
context?: ToolUseContext,
): Promise<void> {
if (!isBenchmarkSupported()) {
context?.stdout?.write(
'Benchmark not supported for this provider.\n' +
'Supported: OpenAI-compatible endpoints (Ollama, NVIDIA NIM, MiniMax)\n',
)
return
}
let modelsToBenchmark: string[]
if (model) {
modelsToBenchmark = [model]
} else {
const ollamaModels = getOllamaModelOptions()
modelsToBenchmark = ollamaModels.slice(0, 3).map((m) => m.value)
}
context?.stdout?.write(`Benchmarking ${modelsToBenchmark.length} model(s)...\n`)
const results = await benchmarkMultipleModels(
modelsToBenchmark,
(completed, total, result) => {
context?.stdout?.write(
`[${completed}/${total}] ${result.model}: ` +
`${result.success ? result.tokensPerSecond.toFixed(1) + ' tps' : 'FAILED'}\n`,
)
},
)
context?.stdout?.write('\n' + formatBenchmarkResults(results) + '\n')
}
export const benchmark: Command = {
name: 'benchmark',
async onExecute(context: ToolUseContext): Promise<void> {
const args = context.args ?? {}
const model = args.model as string | undefined
await runBenchmark(model, context)
},
}
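// Usage sketch against the helpers this file already imports (model name
// illustrative):
async function benchmarkOneModelSketch(): Promise<void> {
  if (!isBenchmarkSupported()) return
  const results = await benchmarkMultipleModels(
    ['llama3.1:8b'],
    (completed, total, result) => {
      console.log(`[${completed}/${total}] ${result.model}`)
    },
  )
  console.log(formatBenchmarkResults(results))
}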

View File

@@ -1,17 +1,12 @@
import { execFileSync } from 'child_process'
import { diffLines } from 'diff'
import { constants as fsConstants } from 'fs'
import {
copyFile,
mkdir,
mkdtemp,
readdir,
readFile,
rm,
unlink,
writeFile,
} from 'fs/promises'
import { tmpdir } from 'os'
import { extname, join } from 'path'
import type { Command } from '../commands.js'
import { queryWithModel } from '../services/api/claude.js'
@@ -22,7 +17,6 @@ import {
import type { LogOption } from '../types/logs.js'
import { getClaudeConfigHomeDir } from '../utils/envUtils.js'
import { toError } from '../utils/errors.js'
import { execFileNoThrow } from '../utils/execFileNoThrow.js'
import { logError } from '../utils/log.js'
import { extractTextContent } from '../utils/messages.js'
import { getDefaultOpusModel } from '../utils/model/model.js'
@@ -47,180 +41,6 @@ function getInsightsModel(): string {
return getDefaultOpusModel()
}
// ============================================================================
// Homespace Data Collection
// ============================================================================
type RemoteHostInfo = {
name: string
sessionCount: number
}
/* eslint-disable custom-rules/no-process-env-top-level */
const getRunningRemoteHosts: () => Promise<string[]> =
process.env.USER_TYPE === 'ant'
? async () => {
const { stdout, code } = await execFileNoThrow(
'coder',
['list', '-o', 'json'],
{ timeout: 30000 },
)
if (code !== 0) return []
try {
const workspaces = jsonParse(stdout) as Array<{
name: string
latest_build?: { status?: string }
}>
return workspaces
.filter(w => w.latest_build?.status === 'running')
.map(w => w.name)
} catch {
return []
}
}
: async () => []
const getRemoteHostSessionCount: (hs: string) => Promise<number> =
process.env.USER_TYPE === 'ant'
? async (homespace: string) => {
const { stdout, code } = await execFileNoThrow(
'ssh',
[
`${homespace}.coder`,
'find /root/.claude/projects -name "*.jsonl" 2>/dev/null | wc -l',
],
{ timeout: 30000 },
)
if (code !== 0) return 0
return parseInt(stdout.trim(), 10) || 0
}
: async () => 0
const collectFromRemoteHost: (
hs: string,
destDir: string,
) => Promise<{ copied: number; skipped: number }> =
process.env.USER_TYPE === 'ant'
? async (homespace: string, destDir: string) => {
const result = { copied: 0, skipped: 0 }
// Create temp directory
const tempDir = await mkdtemp(join(tmpdir(), 'claude-hs-'))
try {
// SCP the projects folder
const scpResult = await execFileNoThrow(
'scp',
['-rq', `${homespace}.coder:/root/.claude/projects/`, tempDir],
{ timeout: 300000 },
)
if (scpResult.code !== 0) {
// SCP failed
return result
}
const projectsDir = join(tempDir, 'projects')
let projectDirents: Awaited<ReturnType<typeof readdir>>
try {
projectDirents = await readdir(projectsDir, { withFileTypes: true })
} catch {
return result
}
// Merge into destination (parallel per project directory)
await Promise.all(
projectDirents.map(async dirent => {
const projectName = dirent.name
const projectPath = join(projectsDir, projectName)
// Skip if not a directory
if (!dirent.isDirectory()) return
const destProjectName = `${projectName}__${homespace}`
const destProjectPath = join(destDir, destProjectName)
try {
await mkdir(destProjectPath, { recursive: true })
} catch {
// Directory may already exist
}
// Copy session files (skip existing)
let files: Awaited<ReturnType<typeof readdir>>
try {
files = await readdir(projectPath, { withFileTypes: true })
} catch {
return
}
await Promise.all(
files.map(async fileDirent => {
const fileName = fileDirent.name
if (!fileName.endsWith('.jsonl')) return
const srcFile = join(projectPath, fileName)
const destFile = join(destProjectPath, fileName)
try {
await copyFile(srcFile, destFile, fsConstants.COPYFILE_EXCL)
result.copied++
} catch {
// EEXIST from COPYFILE_EXCL means dest already exists
result.skipped++
}
}),
)
}),
)
} finally {
try {
await rm(tempDir, { recursive: true, force: true })
} catch {
// Ignore cleanup errors
}
}
return result
}
: async () => ({ copied: 0, skipped: 0 })
const collectAllRemoteHostData: (destDir: string) => Promise<{
hosts: RemoteHostInfo[]
totalCopied: number
totalSkipped: number
}> =
process.env.USER_TYPE === 'ant'
? async (destDir: string) => {
const rHosts = await getRunningRemoteHosts()
const result: RemoteHostInfo[] = []
let totalCopied = 0
let totalSkipped = 0
// Collect from all hosts in parallel (SCP per host can take seconds)
const hostResults = await Promise.all(
rHosts.map(async hs => {
const sessionCount = await getRemoteHostSessionCount(hs)
if (sessionCount > 0) {
const { copied, skipped } = await collectFromRemoteHost(
hs,
destDir,
)
return { name: hs, sessionCount, copied, skipped }
}
return { name: hs, sessionCount, copied: 0, skipped: 0 }
}),
)
for (const hr of hostResults) {
result.push({ name: hr.name, sessionCount: hr.sessionCount })
totalCopied += hr.copied
totalSkipped += hr.skipped
}
return { hosts: result, totalCopied, totalSkipped }
}
: async () => ({ hosts: [], totalCopied: 0, totalSkipped: 0 })
/* eslint-enable custom-rules/no-process-env-top-level */
// ============================================================================
// Types
// ============================================================================
@@ -2659,7 +2479,6 @@ export type InsightsExport = {
claude_code_version: string
date_range: { start: string; end: string }
session_count: number
remote_hosts_collected?: string[]
}
aggregated_data: AggregatedData
insights: InsightResults
@@ -2680,14 +2499,9 @@ export function buildExportData(
data: AggregatedData,
insights: InsightResults,
facets: Map<string, SessionFacets>,
remoteStats?: { hosts: RemoteHostInfo[]; totalCopied: number },
): InsightsExport {
const version = typeof MACRO !== 'undefined' ? MACRO.VERSION : 'unknown'
const remote_hosts_collected = remoteStats?.hosts
.filter(h => h.sessionCount > 0)
.map(h => h.name)
const facets_summary = {
total: facets.size,
goal_categories: {} as Record<string, number>,
@@ -2725,10 +2539,6 @@ export function buildExportData(
claude_code_version: version,
date_range: data.date_range,
session_count: data.total_sessions,
...(remote_hosts_collected &&
remote_hosts_collected.length > 0 && {
remote_hosts_collected,
}),
},
aggregated_data: data,
insights,
@@ -2793,24 +2603,12 @@ async function scanAllSessions(): Promise<LiteSessionInfo[]> {
// Main Function
// ============================================================================
export async function generateUsageReport(options?: {
collectRemote?: boolean
}): Promise<{
export async function generateUsageReport(): Promise<{
insights: InsightResults
htmlPath: string
data: AggregatedData
remoteStats?: { hosts: RemoteHostInfo[]; totalCopied: number }
facets: Map<string, SessionFacets>
}> {
let remoteStats: { hosts: RemoteHostInfo[]; totalCopied: number } | undefined
// Optionally collect data from remote hosts first (internal-only)
if (process.env.USER_TYPE === 'ant' && options?.collectRemote) {
const destDir = join(getClaudeConfigHomeDir(), 'projects')
const { hosts, totalCopied } = await collectAllRemoteHostData(destDir)
remoteStats = { hosts, totalCopied }
}
// Phase 1: Lite scan — filesystem metadata only (no JSONL parsing)
const allScannedSessions = await scanAllSessions()
const totalSessionsScanned = allScannedSessions.length
@@ -3017,7 +2815,6 @@ export async function generateUsageReport(options?: {
insights,
htmlPath,
data: aggregated,
remoteStats,
facets: substantiveFacets,
}
}
@@ -3043,31 +2840,8 @@ const usageReport: Command = {
contentLength: 0, // Dynamic content
progressMessage: 'analyzing your sessions',
source: 'builtin',
async getPromptForCommand(args) {
let collectRemote = false
let remoteHosts: string[] = []
let hasRemoteHosts = false
if (process.env.USER_TYPE === 'ant') {
// Parse --homespaces flag
collectRemote = args?.includes('--homespaces') ?? false
// Check for available remote hosts
remoteHosts = await getRunningRemoteHosts()
hasRemoteHosts = remoteHosts.length > 0
// Show collection message if collecting
if (collectRemote && hasRemoteHosts) {
// biome-ignore lint/suspicious/noConsole: intentional
console.error(
`Collecting sessions from ${remoteHosts.length} homespace(s): ${remoteHosts.join(', ')}...`,
)
}
}
const { insights, htmlPath, data, remoteStats } = await generateUsageReport(
{ collectRemote },
)
async getPromptForCommand(_args) {
const { insights, htmlPath, data } = await generateUsageReport()
let reportUrl = `file://${htmlPath}`
let uploadHint = ''
@@ -3085,20 +2859,6 @@ const usageReport: Command = {
`${data.git_commits} commits`,
].join(' · ')
// Build remote host info (internal-only)
let remoteInfo = ''
if (process.env.USER_TYPE === 'ant') {
if (remoteStats && remoteStats.totalCopied > 0) {
const hsNames = remoteStats.hosts
.filter(h => h.sessionCount > 0)
.map(h => h.name)
.join(', ')
remoteInfo = `\n_Collected ${remoteStats.totalCopied} new sessions from: ${hsNames}_\n`
} else if (!collectRemote && hasRemoteHosts) {
// Suggest using --homespaces if they have remote hosts but didn't use the flag
remoteInfo = `\n_Tip: Run \`/insights --homespaces\` to include sessions from your ${remoteHosts.length} running homespace(s)_\n`
}
}
// Build markdown summary from insights
const atAGlance = insights.at_a_glance
@@ -3118,7 +2878,6 @@ ${atAGlance.ambitious_workflows ? `**Ambitious workflows:** ${atAGlance.ambitiou
${stats}
${data.date_range.start} to ${data.date_range.end}
${remoteInfo}
`
const userSummary = `${header}${summaryText}

View File

@@ -1,20 +1,28 @@
import { PassThrough } from 'node:stream'
import { expect, test } from 'bun:test'
import { afterEach, expect, mock, test } from 'bun:test'
import React from 'react'
import stripAnsi from 'strip-ansi'
import { createRoot, render, useApp } from '../../ink.js'
import { AppStateProvider } from '../../state/AppState.js'
import {
applySavedProfileToCurrentSession,
buildCodexOAuthProfileEnv,
buildCurrentProviderSummary,
buildProfileSaveMessage,
getProviderWizardDefaults,
ProviderWizard,
TextEntryDialog,
} from './provider.js'
import { createProfileFile } from '../../utils/providerProfile.js'
const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l'
const ORIGINAL_SIMPLE_ENV = process.env.CLAUDE_CODE_SIMPLE
const ORIGINAL_CODEX_API_KEY = process.env.CODEX_API_KEY
const ORIGINAL_CHATGPT_ACCOUNT_ID = process.env.CHATGPT_ACCOUNT_ID
const ORIGINAL_CODEX_ACCOUNT_ID = process.env.CODEX_ACCOUNT_ID
function extractLastFrame(output: string): string {
let lastFrame: string | null = null
@@ -60,6 +68,51 @@ async function renderFinalFrame(node: React.ReactNode): Promise<string> {
return stripAnsi(extractLastFrame(getOutput()))
}
async function waitForOutput(
getOutput: () => string,
predicate: (output: string) => boolean,
timeoutMs = 2500,
): Promise<string> {
const startedAt = Date.now()
while (Date.now() - startedAt < timeoutMs) {
const output = stripAnsi(extractLastFrame(getOutput()))
if (predicate(output)) {
return output
}
await Bun.sleep(10)
}
throw new Error('Timed out waiting for ProviderWizard test output')
}
async function renderProviderWizardFrame(): Promise<string> {
const { stdout, stdin, getOutput } = createTestStreams()
const root = await createRoot({
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
root.render(
<AppStateProvider>
<ProviderWizard onDone={() => {}} />
</AppStateProvider>,
)
try {
return await waitForOutput(
getOutput,
output => output.includes('Set up a provider profile'),
)
} finally {
root.unmount()
stdin.end()
stdout.end()
await Bun.sleep(0)
}
}
function createTestStreams(): {
stdout: PassThrough
stdin: PassThrough & {
@@ -94,6 +147,34 @@ function createTestStreams(): {
}
}
afterEach(() => {
mock.restore()
if (ORIGINAL_SIMPLE_ENV === undefined) {
delete process.env.CLAUDE_CODE_SIMPLE
} else {
process.env.CLAUDE_CODE_SIMPLE = ORIGINAL_SIMPLE_ENV
}
if (ORIGINAL_CODEX_API_KEY === undefined) {
delete process.env.CODEX_API_KEY
} else {
process.env.CODEX_API_KEY = ORIGINAL_CODEX_API_KEY
}
if (ORIGINAL_CHATGPT_ACCOUNT_ID === undefined) {
delete process.env.CHATGPT_ACCOUNT_ID
} else {
process.env.CHATGPT_ACCOUNT_ID = ORIGINAL_CHATGPT_ACCOUNT_ID
}
if (ORIGINAL_CODEX_ACCOUNT_ID === undefined) {
delete process.env.CODEX_ACCOUNT_ID
} else {
process.env.CODEX_ACCOUNT_ID = ORIGINAL_CODEX_ACCOUNT_ID
}
})
function StepChangeHarness(): React.ReactNode {
const { exit } = useApp()
const [step, setStep] = React.useState<'api' | 'model'>('api')
@@ -233,6 +314,167 @@ test('buildProfileSaveMessage describes Gemini access token / ADC mode clearly',
expect(message).not.toContain('AIza')
})
test('buildProfileSaveMessage reflects immediate Codex activation for existing credentials', () => {
const message = buildProfileSaveMessage(
'codex',
{
OPENAI_MODEL: 'codexplan',
OPENAI_BASE_URL: 'https://chatgpt.com/backend-api/codex',
CHATGPT_ACCOUNT_ID: 'acct_codex',
},
'D:/codings/Opensource/openclaude/.openclaude-profile.json',
{
activatedInSession: true,
},
)
expect(message).toContain('Saved Codex profile.')
expect(message).toContain('OpenClaude switched to it for this session.')
expect(message).not.toContain('Restart OpenClaude to use it.')
})
test('buildProfileSaveMessage reflects immediate Codex OAuth activation when the session switched successfully', () => {
const message = buildProfileSaveMessage(
'codex',
{
OPENAI_MODEL: 'codexplan',
OPENAI_BASE_URL: 'https://chatgpt.com/backend-api/codex',
CHATGPT_ACCOUNT_ID: 'acct_codex',
CODEX_CREDENTIAL_SOURCE: 'oauth',
},
'D:/codings/Opensource/openclaude/.openclaude-profile.json',
{
activatedInSession: true,
},
)
expect(message).toContain('Saved Codex profile.')
expect(message).toContain('OpenClaude switched to it for this session.')
expect(message).not.toContain('Restart OpenClaude to use it.')
})
test('buildCodexOAuthProfileEnv uses the fresh OAuth account id without persisting an API key', () => {
process.env.CODEX_API_KEY = 'stale-codex-key'
process.env.CHATGPT_ACCOUNT_ID = 'acct_stale'
const env = buildCodexOAuthProfileEnv({
accessToken: 'oauth-access-token',
accountId: 'acct_oauth',
})
expect(env).toEqual({
OPENAI_BASE_URL: 'https://chatgpt.com/backend-api/codex',
OPENAI_MODEL: 'codexplan',
CHATGPT_ACCOUNT_ID: 'acct_oauth',
CODEX_CREDENTIAL_SOURCE: 'oauth',
})
expect(env).not.toHaveProperty('CODEX_API_KEY')
})
test('buildCodexProfileEnv derives oauth source from secure storage when no explicit source is provided', async () => {
const actualProviderConfig = await import('../../services/api/providerConfig.js')
mock.module('../../services/api/providerConfig.js', () => ({
...actualProviderConfig,
resolveCodexApiCredentials: () => ({
apiKey: 'stored-access-token',
accountId: 'acct_secure_storage',
source: 'secure-storage' as const,
}),
}))
// @ts-expect-error cache-busting query string for Bun module mocks
const { buildCodexProfileEnv } = await import(
'../../utils/providerProfile.js?secure-storage-codex-source'
)
const env = buildCodexProfileEnv({
model: 'codexplan',
processEnv: {},
})
expect(env).toEqual({
OPENAI_BASE_URL: 'https://chatgpt.com/backend-api/codex',
OPENAI_MODEL: 'codexplan',
CHATGPT_ACCOUNT_ID: 'acct_secure_storage',
CODEX_CREDENTIAL_SOURCE: 'oauth',
})
})
test('explicitly declared env takes precedence over applySavedProfileToCurrentSession', async () => {
// @ts-expect-error cache-busting query string for Bun module mocks
const { applySavedProfileToCurrentSession } = await import(
'../../utils/providerProfile.js?apply-saved-profile-codex'
)
const processEnv: NodeJS.ProcessEnv = {
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_MODEL: 'gpt-4o',
OPENAI_BASE_URL: 'https://api.openai.com/v1',
OPENAI_API_KEY: 'sk-openai',
CODEX_API_KEY: 'codex-live',
CHATGPT_ACCOUNT_ID: 'acct_codex',
CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED: '1',
CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID: 'provider_old',
}
const profileFile = createProfileFile('codex', {
OPENAI_MODEL: 'codexplan',
OPENAI_BASE_URL: 'https://chatgpt.com/backend-api/codex',
CODEX_API_KEY: 'codex-live',
CHATGPT_ACCOUNT_ID: 'acct_codex',
})
const warning = await applySavedProfileToCurrentSession({
profileFile,
processEnv,
})
expect(warning).toBeNull()
expect(processEnv.CLAUDE_CODE_USE_OPENAI).toBe('1')
expect(processEnv.OPENAI_MODEL).toBe('gpt-4o')
expect(processEnv.OPENAI_BASE_URL).toBe(
"https://api.openai.com/v1",
)
expect(processEnv.CODEX_API_KEY).toBeUndefined()
expect(processEnv.CHATGPT_ACCOUNT_ID).toBeUndefined()
expect(processEnv.OPENAI_API_KEY).toBe("sk-openai")
expect(processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED).toBeUndefined()
expect(processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID).toBeUndefined()
})
test('explicitly declared env takes precedence over applySavedProfileToCurrentSession for Codex OAuth profiles', async () => {
// @ts-expect-error cache-busting query string for Bun module mocks
const { applySavedProfileToCurrentSession } = await import(
'../../utils/providerProfile.js?apply-saved-profile-codex-oauth'
)
const processEnv: NodeJS.ProcessEnv = {
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_MODEL: 'gpt-4o',
OPENAI_BASE_URL: 'https://api.openai.com/v1',
CODEX_API_KEY: 'stale-codex-key',
CHATGPT_ACCOUNT_ID: 'acct_stale',
}
const profileFile = createProfileFile('codex', {
OPENAI_MODEL: 'codexplan',
OPENAI_BASE_URL: 'https://chatgpt.com/backend-api/codex',
CHATGPT_ACCOUNT_ID: 'acct_oauth',
CODEX_CREDENTIAL_SOURCE: 'oauth',
})
const warning = await applySavedProfileToCurrentSession({
profileFile,
processEnv,
})
expect(warning).not.toBeNull()
expect(processEnv.OPENAI_MODEL).toBe('gpt-4o')
expect(processEnv.OPENAI_BASE_URL).toBe(
"https://api.openai.com/v1",
)
expect(processEnv.CODEX_API_KEY).toBe("stale-codex-key")
expect(processEnv.CHATGPT_ACCOUNT_ID).toBe('acct_stale')
})
test('buildCurrentProviderSummary redacts poisoned model and endpoint values', () => {
const summary = buildCurrentProviderSummary({
processEnv: {
@@ -245,8 +487,8 @@ test('buildCurrentProviderSummary redacts poisoned model and endpoint values', (
})
expect(summary.providerLabel).toBe('OpenAI-compatible')
expect(summary.modelLabel).toBe('sk-...5678')
expect(summary.endpointLabel).toBe('sk-...5678')
expect(summary.modelLabel).toBe('sk-...678')
expect(summary.endpointLabel).toBe('sk-...678')
})
test('buildCurrentProviderSummary labels generic local openai-compatible providers', () => {
@@ -264,7 +506,7 @@ test('buildCurrentProviderSummary labels generic local openai-compatible provide
expect(summary.endpointLabel).toBe('http://127.0.0.1:8080/v1')
})
test('buildCurrentProviderSummary does not relabel local gpt-5.4 providers as Codex', () => {
test('buildCurrentProviderSummary does not relabel local gpt-5.4 providers as Codex when custom base URL is set', () => {
const summary = buildCurrentProviderSummary({
processEnv: {
CLAUDE_CODE_USE_OPENAI: '1',
@@ -307,3 +549,12 @@ test('getProviderWizardDefaults ignores poisoned current provider values', () =>
expect(defaults.openAIBaseUrl).toBe('https://api.openai.com/v1')
expect(defaults.geminiModel).toBe('gemini-2.0-flash')
})
test('ProviderWizard hides Codex OAuth while running in bare mode', async () => {
process.env.CLAUDE_CODE_SIMPLE = '1'
const output = await renderProviderWizardFrame()
expect(output).toContain('Set up a provider profile')
expect(output).not.toContain('Codex OAuth')
})

View File

@@ -10,8 +10,12 @@ import {
} from '../../components/CustomSelect/index.js'
import { Dialog } from '../../components/design-system/Dialog.js'
import { LoadingState } from '../../components/design-system/LoadingState.js'
import { useCodexOAuthFlow } from '../../components/useCodexOAuthFlow.js'
import { useTerminalSize } from '../../hooks/useTerminalSize.js'
import { Box, Text } from '../../ink.js'
import { type CodexOAuthTokens } from '../../services/api/codexOAuth.js'
import {
DEFAULT_CODEX_BASE_URL,
DEFAULT_OPENAI_BASE_URL,
@@ -20,6 +24,8 @@ import {
resolveProviderRequest,
} from '../../services/api/providerConfig.js'
import {
applySavedProfileToCurrentSession as applySharedProfileToCurrentSession,
buildCodexOAuthProfileEnv as buildSharedCodexOAuthProfileEnv,
buildCodexProfileEnv,
buildGeminiProfileEnv,
buildMistralProfileEnv,
@@ -49,6 +55,7 @@ import {
readGeminiAccessToken,
saveGeminiAccessToken,
} from '../../utils/geminiCredentials.js'
import { isBareMode } from '../../utils/envUtils.js'
import {
getGoalDefaultOpenAIModel,
normalizeRecommendationGoal,
@@ -57,12 +64,47 @@ import {
type RecommendationGoal,
} from '../../utils/providerRecommendation.js'
import {
getOllamaChatBaseUrl,
getLocalOpenAICompatibleProviderLabel,
hasLocalOllama,
listOllamaModels,
probeOllamaGenerationReadiness,
type OllamaGenerationReadiness,
} from '../../utils/providerDiscovery.js'
type ProviderChoice = 'auto' | ProviderProfile | 'clear'
function describeOllamaReadinessIssue(
readiness: OllamaGenerationReadiness,
options?: {
baseUrl?: string
allowManualFallback?: boolean
},
): string {
const endpoint = options?.baseUrl ?? 'http://localhost:11434'
if (readiness.state === 'unreachable') {
return `Could not reach Ollama at ${endpoint}. Start Ollama first, then run /provider again.`
}
if (readiness.state === 'no_models') {
const manualSuffix = options?.allowManualFallback
? ', or enter details manually'
: ''
return `Ollama is running, but no installed models were found. Pull a chat model such as qwen2.5-coder:7b or llama3.1:8b first${manualSuffix}.`
}
if (readiness.state === 'generation_failed') {
const modelHint = readiness.probeModel ?? 'the selected model'
const detailSuffix = readiness.detail
? ` Details: ${readiness.detail}.`
: ''
const manualSuffix = options?.allowManualFallback
? ' You can also enter details manually.'
: ''
return `Ollama is reachable and models are installed, but a generation probe failed for ${modelHint}.${detailSuffix} Run "ollama run ${modelHint}" once and retry.${manualSuffix}`
}
return ''
}
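// Usage sketch (the readiness object shape is inferred from the branches
// above):
//   describeOllamaReadinessIssue(
//     { state: 'no_models' } as OllamaGenerationReadiness,
//     { allowManualFallback: true },
//   )
//   → 'Ollama is running, but no installed models were found. Pull a chat
//      model such as qwen2.5-coder:7b or llama3.1:8b first, or enter
//      details manually.'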
type ProviderChoice = 'auto' | ProviderProfile | 'codex-oauth' | 'clear'
type Step =
| { name: 'choose' }
@@ -93,6 +135,7 @@ type Step =
apiKey?: string
authMode: 'api-key' | 'access-token' | 'adc'
}
| { name: 'codex-oauth' }
| { name: 'codex-check' }
type CurrentProviderSummary = {
@@ -131,6 +174,8 @@ type ProviderWizardDefaults = {
mistralBaseUrl: string
}
type SecretSourceEnv = NodeJS.ProcessEnv & Partial<ProfileEnv>
function isEnvTruthy(value: string | undefined): boolean {
if (!value) return false
const normalized = value.trim().toLowerCase()
@@ -139,7 +184,7 @@ function isEnvTruthy(value: string | undefined): boolean {
function getSafeDisplayValue(
value: string | undefined,
processEnv: NodeJS.ProcessEnv,
processEnv: SecretSourceEnv,
profileEnv?: ProfileEnv,
fallback = '(not set)',
): string {
@@ -151,14 +196,15 @@ function getSafeDisplayValue(
export function getProviderWizardDefaults(
processEnv: NodeJS.ProcessEnv = process.env,
): ProviderWizardDefaults {
const secretSource = processEnv as SecretSourceEnv
const safeOpenAIModel =
sanitizeProviderConfigValue(processEnv.OPENAI_MODEL, processEnv) ||
sanitizeProviderConfigValue(processEnv.OPENAI_MODEL, secretSource) ||
'gpt-4o'
const safeOpenAIBaseUrl =
sanitizeProviderConfigValue(processEnv.OPENAI_BASE_URL, processEnv) ||
sanitizeProviderConfigValue(processEnv.OPENAI_BASE_URL, secretSource) ||
DEFAULT_OPENAI_BASE_URL
const safeGeminiModel =
sanitizeProviderConfigValue(processEnv.GEMINI_MODEL, processEnv) ||
sanitizeProviderConfigValue(processEnv.GEMINI_MODEL, secretSource) ||
DEFAULT_GEMINI_MODEL
const safeMistralModel =
sanitizeProviderConfigValue(processEnv.MISTRAL_MODEL, processEnv) ||
@@ -181,6 +227,7 @@ export function buildCurrentProviderSummary(options?: {
persisted?: ProfileFile | null
}): CurrentProviderSummary {
const processEnv = options?.processEnv ?? process.env
const secretSource = processEnv as SecretSourceEnv
const persisted = options?.persisted ?? loadProfileFile()
const savedProfileLabel = persisted?.profile ?? 'none'
@@ -189,11 +236,11 @@ export function buildCurrentProviderSummary(options?: {
providerLabel: 'Google Gemini',
modelLabel: getSafeDisplayValue(
processEnv.GEMINI_MODEL ?? DEFAULT_GEMINI_MODEL,
processEnv,
secretSource,
),
endpointLabel: getSafeDisplayValue(
processEnv.GEMINI_BASE_URL ?? DEFAULT_GEMINI_BASE_URL,
processEnv,
secretSource,
),
savedProfileLabel,
}
@@ -219,13 +266,13 @@ export function buildCurrentProviderSummary(options?: {
providerLabel: 'GitHub Models',
modelLabel: getSafeDisplayValue(
processEnv.OPENAI_MODEL ?? 'github:copilot',
processEnv,
secretSource,
),
endpointLabel: getSafeDisplayValue(
processEnv.OPENAI_BASE_URL ??
processEnv.OPENAI_API_BASE ??
'https://models.github.ai/inference',
processEnv,
secretSource,
),
savedProfileLabel,
}
@@ -246,8 +293,8 @@ export function buildCurrentProviderSummary(options?: {
return {
providerLabel,
modelLabel: getSafeDisplayValue(request.requestedModel, processEnv),
endpointLabel: getSafeDisplayValue(request.baseUrl, processEnv),
modelLabel: getSafeDisplayValue(request.requestedModel, secretSource),
endpointLabel: getSafeDisplayValue(request.baseUrl, secretSource),
savedProfileLabel,
}
}
@@ -258,11 +305,11 @@ export function buildCurrentProviderSummary(options?: {
processEnv.ANTHROPIC_MODEL ??
processEnv.CLAUDE_MODEL ??
'claude-sonnet-4-6',
processEnv,
secretSource,
),
endpointLabel: getSafeDisplayValue(
processEnv.ANTHROPIC_BASE_URL ?? 'https://api.anthropic.com',
processEnv,
secretSource,
),
savedProfileLabel,
}
@@ -376,6 +423,10 @@ export function buildProfileSaveMessage(
profile: ProviderProfile,
env: ProfileEnv,
filePath: string,
options?: {
activatedInSession?: boolean
activationWarning?: string | null
},
): string {
const summary = buildSavedProfileSummary(profile, env)
const lines = [
@@ -389,13 +440,24 @@ export function buildProfileSaveMessage(
}
lines.push(`Profile: ${filePath}`)
lines.push('Restart OpenClaude to use it.')
if (options?.activatedInSession) {
lines.push('OpenClaude switched to it for this session.')
} else if (options?.activationWarning) {
lines.push(
`Saved for next startup. Warning: could not activate it in this session (${options.activationWarning}).`,
)
} else {
lines.push('Restart OpenClaude to use it.')
}
return lines.join('\n')
}
function buildUsageText(): string {
const summary = buildCurrentProviderSummary()
const availableProviders = isBareMode()
? 'Choose Auto, Ollama, OpenAI-compatible, Gemini, or Codex, then save a provider profile.'
: 'Choose Auto, Ollama, OpenAI-compatible, Gemini, Codex, or Codex OAuth, then save a provider profile.'
return [
'Usage: /provider',
'',
@@ -406,7 +468,7 @@ function buildUsageText(): string {
`Current endpoint: ${summary.endpointLabel}`,
`Saved profile: ${summary.savedProfileLabel}`,
'',
'Choose Auto, Ollama, OpenAI-compatible, Gemini, or Codex, then save a profile for the next OpenClaude restart.',
availableProviders,
].join('\n')
}
@@ -415,12 +477,45 @@ function finishProfileSave(
profile: ProviderProfile,
env: ProfileEnv,
): void {
void saveProfileAndNotify(onDone, profile, env)
}
export function buildCodexOAuthProfileEnv(
tokens: Pick<CodexOAuthTokens, 'accessToken' | 'idToken' | 'accountId'>,
): ProfileEnv | null {
return buildSharedCodexOAuthProfileEnv(tokens)
}
export async function applySavedProfileToCurrentSession(options: {
profileFile: ProfileFile
processEnv?: NodeJS.ProcessEnv
}): Promise<string | null> {
return applySharedProfileToCurrentSession(options)
}
async function saveProfileAndNotify(
onDone: LocalJSXCommandOnDone,
profile: ProviderProfile,
env: ProfileEnv,
): Promise<void> {
try {
const profileFile = createProfileFile(profile, env)
const filePath = saveProfileFile(profileFile)
onDone(buildProfileSaveMessage(profile, env, filePath), {
display: 'system',
})
const shouldActivateInSession = profile === 'codex'
const activationWarning = shouldActivateInSession
? await applySharedProfileToCurrentSession({ profileFile })
: null
onDone(
buildProfileSaveMessage(profile, env, filePath, {
activatedInSession:
shouldActivateInSession && activationWarning === null,
activationWarning,
}),
{
display: 'system',
},
)
} catch (error) {
const message = error instanceof Error ? error.message : String(error)
onDone(`Failed to save provider profile: ${message}`, {
@@ -504,6 +599,10 @@ function ProviderChooser({
onCancel: () => void
}): React.ReactNode {
const summary = buildCurrentProviderSummary()
const canUseCodexOAuth = !isBareMode()
const helperText = canUseCodexOAuth
? 'Save a provider profile without editing environment variables first. Codex profiles backed by env, auth.json, or OpenClaude secure storage can switch this session immediately when validation succeeds.'
: 'Save a provider profile without editing environment variables first. Codex profiles backed by env or auth.json can switch this session immediately.'
const options: OptionWithDescription<ProviderChoice>[] = [
{
label: 'Auto',
@@ -537,6 +636,16 @@ function ProviderChooser({
value: 'codex',
description: 'Use existing ChatGPT Codex CLI auth or env credentials',
},
...(canUseCodexOAuth
? [
{
label: 'Codex OAuth',
value: 'codex-oauth' as const,
description:
'Sign in with ChatGPT in your browser and store Codex tokens securely',
},
]
: []),
]
if (summary.savedProfileLabel !== 'none') {
@@ -554,10 +663,7 @@ function ProviderChooser({
onCancel={onCancel}
>
<Box flexDirection="column" gap={1}>
<Text>
Save a provider profile for the next OpenClaude restart without
editing environment variables first.
</Text>
<Text>{helperText}</Text>
<Box flexDirection="column">
<Text dimColor>Current model: {summary.modelLabel}</Text>
<Text dimColor>Current endpoint: {summary.endpointLabel}</Text>
@@ -643,6 +749,7 @@ function AutoRecommendationStep({
| {
state: 'openai'
defaultModel: string
reason: string
}
| {
state: 'error'
@@ -656,19 +763,27 @@ function AutoRecommendationStep({
void (async () => {
const defaultModel = getGoalDefaultOpenAIModel(goal)
try {
const ollamaAvailable = await hasLocalOllama()
if (!ollamaAvailable) {
const readiness = await probeOllamaGenerationReadiness()
if (readiness.state !== 'ready') {
if (!cancelled) {
setStatus({ state: 'openai', defaultModel })
setStatus({
state: 'openai',
defaultModel,
reason: describeOllamaReadinessIssue(readiness),
})
}
return
}
const models = await listOllamaModels()
const recommended = recommendOllamaModel(models, goal)
const recommended = recommendOllamaModel(readiness.models, goal)
if (!recommended) {
if (!cancelled) {
setStatus({ state: 'openai', defaultModel })
setStatus({
state: 'openai',
defaultModel,
reason:
'Ollama responded to a generation probe, but no recommended chat model matched this goal.',
})
}
return
}
@@ -709,7 +824,9 @@ function AutoRecommendationStep({
{ label: 'Back', value: 'back' },
{ label: 'Cancel', value: 'cancel' },
]}
onChange={value => (value === 'back' ? onBack() : onCancel())}
onChange={(value: string) =>
value === 'back' ? onBack() : onCancel()
}
onCancel={onCancel}
/>
</Box>
@@ -722,17 +839,17 @@ function AutoRecommendationStep({
<Dialog title="Auto setup fallback" onCancel={onCancel}>
<Box flexDirection="column" gap={1}>
<Text>
No viable local Ollama chat model was detected. Auto setup can
continue into OpenAI-compatible setup with a default model of{' '}
Auto setup can continue into OpenAI-compatible setup with a default model of{' '}
{status.defaultModel}.
</Text>
<Text dimColor>{status.reason}</Text>
<Select
options={[
{ label: 'Continue to OpenAI-compatible setup', value: 'continue' },
{ label: 'Back', value: 'back' },
{ label: 'Cancel', value: 'cancel' },
]}
onChange={value => {
onChange={(value: string) => {
if (value === 'continue') {
onNeedOpenAI(status.defaultModel)
} else if (value === 'back') {
@@ -765,7 +882,7 @@ function AutoRecommendationStep({
{ label: 'Back', value: 'back' },
{ label: 'Cancel', value: 'cancel' },
]}
onChange={value => {
onChange={(value: string) => {
if (value === 'save') {
onSave(
'ollama',
@@ -809,32 +926,19 @@ function OllamaModelStep({
let cancelled = false
void (async () => {
const available = await hasLocalOllama()
if (!available) {
const readiness = await probeOllamaGenerationReadiness()
if (readiness.state !== 'ready') {
if (!cancelled) {
setStatus({
state: 'unavailable',
message:
'Could not reach Ollama at http://localhost:11434. Start Ollama first, then run /provider again.',
message: describeOllamaReadinessIssue(readiness),
})
}
return
}
const models = await listOllamaModels()
if (models.length === 0) {
if (!cancelled) {
setStatus({
state: 'unavailable',
message:
'Ollama is running, but no installed models were found. Pull a chat model such as qwen2.5-coder:7b or llama3.1:8b first.',
})
}
return
}
const ranked = rankOllamaModels(models, 'balanced')
const recommended = recommendOllamaModel(models, 'balanced')
const ranked = rankOllamaModels(readiness.models, 'balanced')
const recommended = recommendOllamaModel(readiness.models, 'balanced')
if (!cancelled) {
setStatus({
state: 'ready',
@@ -867,7 +971,9 @@ function OllamaModelStep({
{ label: 'Back', value: 'back' },
{ label: 'Cancel', value: 'cancel' },
]}
onChange={value => (value === 'back' ? onBack() : onCancel())}
onChange={(value: string) =>
value === 'back' ? onBack() : onCancel()
}
onCancel={onCancel}
/>
</Box>
@@ -888,7 +994,7 @@ function OllamaModelStep({
defaultFocusValue={status.defaultValue}
inlineDescriptions
visibleOptionCount={Math.min(8, status.options.length)}
onChange={value => {
onChange={(value: string) => {
onSave(
'ollama',
buildOllamaProfileEnv(value, {
@@ -903,6 +1009,84 @@ function OllamaModelStep({
)
}
function CodexOAuthStep({
onSave,
onBack,
onCancel,
}: {
onSave: (profile: ProviderProfile, env: ProfileEnv) => void
onBack: () => void
onCancel: () => void
}): React.ReactNode {
const handleAuthenticated = React.useCallback(async (
tokens: CodexOAuthTokens,
persistCredentials: (options?: { profileId?: string }) => void,
) => {
const env = buildCodexOAuthProfileEnv(tokens)
if (!env) {
throw new Error(
'Codex OAuth succeeded, but OpenClaude could not build a Codex profile from the stored credentials.',
)
}
persistCredentials()
onSave('codex', env)
}, [onSave])
const status = useCodexOAuthFlow({
onAuthenticated: handleAuthenticated,
})
if (status.state === 'error') {
return (
<Dialog title="Codex OAuth failed" onCancel={onCancel} color="warning">
<Box flexDirection="column" gap={1}>
<Text>{status.message}</Text>
<Select
options={[
{ label: 'Back', value: 'back' },
{ label: 'Cancel', value: 'cancel' },
]}
onChange={(value: string) =>
value === 'back' ? onBack() : onCancel()
}
onCancel={onCancel}
/>
</Box>
</Dialog>
)
}
if (status.state === 'starting') {
return <LoadingState message="Starting Codex OAuth..." />
}
return (
<Dialog title="Codex OAuth" onCancel={onBack}>
<Box flexDirection="column" gap={1}>
<Text>
Finish signing in with ChatGPT in your browser. OpenClaude will store
the resulting Codex credentials securely for future sessions.
</Text>
{status.browserOpened === false ? (
<Text color="warning">
Browser did not open automatically. Visit this URL to continue:
</Text>
) : status.browserOpened === true ? (
<Text dimColor>
Browser opened. Complete the sign-in there, then OpenClaude will
finish setup automatically.
</Text>
) : (
<Text dimColor>Opening your browser...</Text>
)}
<Text>{status.authUrl}</Text>
<Text dimColor>Press Esc to cancel and go back.</Text>
</Box>
</Dialog>
)
}
function CodexCredentialStep({
onSave,
onBack,
@@ -924,7 +1108,9 @@ function CodexCredentialStep({
{ label: 'Back', value: 'back' },
{ label: 'Cancel', value: 'cancel' },
]}
onChange={value => (value === 'back' ? onBack() : onCancel())}
onChange={(value: string) =>
value === 'back' ? onBack() : onCancel()
}
onCancel={onCancel}
/>
</Box>
@@ -958,9 +1144,10 @@ function CodexCredentialStep({
defaultFocusValue="codexplan"
inlineDescriptions
visibleOptionCount={options.length}
onChange={value => {
onChange={(value: string) => {
const env = buildCodexProfileEnv({
model: value,
credentialSource: credentials.credentialSource,
processEnv: process.env,
})
if (env) {
@@ -975,9 +1162,16 @@ function CodexCredentialStep({
}
function resolveCodexCredentials(processEnv: NodeJS.ProcessEnv):
| { ok: true; sourceDescription: string }
| {
ok: true
sourceDescription: string
credentialSource: 'oauth' | 'existing'
}
| { ok: false; message: string } {
const credentials = resolveCodexApiCredentials(processEnv)
const oauthHint = isBareMode()
? 'Re-login with the Codex CLI'
: 'Choose Codex OAuth in /provider, or re-login with the Codex CLI'
if (!credentials.apiKey) {
const authHint = credentials.authPath
@@ -985,7 +1179,7 @@ function resolveCodexCredentials(processEnv: NodeJS.ProcessEnv):
: 'Set CODEX_API_KEY or re-login with the Codex CLI.'
return {
ok: false,
message: `Codex setup needs existing credentials. Re-login with the Codex CLI or set CODEX_API_KEY. ${authHint}`,
message: `Codex setup needs existing credentials. ${oauthHint}, or set CODEX_API_KEY. ${authHint}`,
}
}
@@ -993,15 +1187,19 @@ function resolveCodexCredentials(processEnv: NodeJS.ProcessEnv):
return {
ok: false,
message:
'Codex auth is missing chatgpt_account_id. Re-login with the Codex CLI or set CHATGPT_ACCOUNT_ID/CODEX_ACCOUNT_ID first.',
`Codex auth is missing chatgpt_account_id. ${oauthHint}, or set CHATGPT_ACCOUNT_ID/CODEX_ACCOUNT_ID first.`,
}
}
return {
ok: true,
credentialSource:
credentials.source === 'secure-storage' ? 'oauth' : 'existing',
sourceDescription:
credentials.source === 'env'
? 'the current shell environment'
: credentials.source === 'secure-storage'
? 'OpenClaude secure storage'
: credentials.authPath ?? DEFAULT_CODEX_BASE_URL,
}
}
@@ -1035,6 +1233,8 @@ export function ProviderWizard({
name: 'mistral-key',
defaultModel: defaults.mistralModel,
})
} else if (value === 'codex-oauth') {
setStep({ name: 'codex-oauth' })
} else if (value === 'clear') {
const filePath = deleteProfileFile()
onDone(`Removed saved provider profile at ${filePath}. Restart OpenClaude to go back to normal startup.`, {
@@ -1314,7 +1514,7 @@ export function ProviderWizard({
options={options}
inlineDescriptions
visibleOptionCount={options.length}
onChange={value => {
onChange={(value: string) => {
if (value === 'api-key') {
setStep({ name: 'gemini-key' })
} else if (value === 'access-token') {
@@ -1470,6 +1670,15 @@ export function ProviderWizard({
onCancel={() => onDone()}
/>
)
case 'codex-oauth':
return (
<CodexOAuthStep
onSave={(profile, env) => finishProfileSave(onDone, profile, env)}
onBack={() => setStep({ name: 'choose' })}
onCancel={() => onDone()}
/>
)
}
}

View File

@@ -112,8 +112,10 @@ test('third-party provider branch opens the first-run provider manager', async (
)
expect(output).toContain('Set up provider')
// Use alphabetically-early sentinels so they remain visible in the
// 13-row test frame after the provider list was sorted A→Z.
expect(output).toContain('Anthropic')
expect(output).toContain('OpenAI')
expect(output).toContain('Ollama')
expect(output).toContain('LM Studio')
expect(output).toContain('Azure OpenAI')
expect(output).toContain('DeepSeek')
expect(output).toContain('Google Gemini')
})

View File

@@ -101,9 +101,9 @@ export function EffortPicker({ onSelect, onCancel }: Props) {
<Box marginBottom={1} flexDirection="column">
<Text color="remember" bold={true}>Set effort level</Text>
<Text dimColor={true}>
{usesOpenAIEffort
? `OpenAI/Codex provider (${provider})`
: supportsEffort
{supportsEffort && usesOpenAIEffort
? `OpenAI/Codex provider (${provider})`
: supportsEffort
? `Claude model · ${provider} provider`
: `Effort not supported for this model`
}

View File

@@ -5,13 +5,14 @@ import React from 'react'
import stripAnsi from 'strip-ansi'
import { createRoot } from '../ink.js'
import { AppStateProvider } from '../state/AppState.js'
import { KeybindingSetup } from '../keybindings/KeybindingProviderSetup.js'
import { AppStateProvider } from '../state/AppState.js'
const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l'
const ORIGINAL_ENV = {
CLAUDE_CODE_SIMPLE: process.env.CLAUDE_CODE_SIMPLE,
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
GITHUB_TOKEN: process.env.GITHUB_TOKEN,
GH_TOKEN: process.env.GH_TOKEN,
@@ -96,6 +97,47 @@ async function waitForCondition(
throw new Error('Timed out waiting for ProviderManager test condition')
}
// Provider list is sorted alphabetically by label in the preset picker, so
// reaching a given provider takes more keypresses than it used to. Keep the
// target-by-label indirection here so these tests survive future list edits
// without further churn.
//
// Order matches ProviderManager.renderPresetSelection() when
// canUseCodexOAuth === true (default in mocked tests).
const PRESET_ORDER = [
'Alibaba Coding Plan',
'Alibaba Coding Plan (China)',
'Anthropic',
'Atomic Chat',
'Azure OpenAI',
'Codex OAuth',
'DeepSeek',
'Google Gemini',
'Groq',
'LM Studio',
'MiniMax',
'Mistral',
'Moonshot AI',
'NVIDIA NIM',
'Ollama',
'OpenAI',
'OpenRouter',
'Together AI',
'Custom',
] as const
async function navigateToPreset(
stdin: { write: (data: string) => void },
label: (typeof PRESET_ORDER)[number],
): Promise<void> {
const index = PRESET_ORDER.indexOf(label)
if (index < 0) throw new Error(`Unknown preset label: ${label}`)
for (let i = 0; i < index; i++) {
stdin.write('j')
await Bun.sleep(25)
}
}
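// Illustrative usage (not part of the diff): reaching DeepSeek presses 'j'
// six times, since PRESET_ORDER.indexOf('DeepSeek') === 6.
//   await navigateToPreset(mounted.stdin, 'DeepSeek')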
function createDeferred<T>(): {
promise: Promise<T>
resolve: (value: T) => void
@@ -109,6 +151,9 @@ function createDeferred<T>(): {
function mockProviderProfilesModule(options?: {
addProviderProfile?: (...args: unknown[]) => unknown
getProviderProfiles?: () => unknown[]
updateProviderProfile?: (...args: unknown[]) => unknown
setActiveProviderProfile?: (...args: unknown[]) => unknown
}): void {
mock.module('../utils/providerProfiles.js', () => ({
addProviderProfile: options?.addProviderProfile ?? (() => null),
@@ -131,48 +176,135 @@ function mockProviderProfilesModule(options?: {
model: 'mock-model',
apiKey: '',
},
getProviderProfiles: () => [],
setActiveProviderProfile: () => null,
updateProviderProfile: () => null,
getProviderProfiles: options?.getProviderProfiles ?? (() => []),
setActiveProviderProfile: options?.setActiveProviderProfile ?? (() => null),
updateProviderProfile: options?.updateProviderProfile ?? (() => null),
}))
}
function mockProviderManagerDependencies(
syncRead: () => string | undefined,
asyncRead: () => Promise<string | undefined>,
githubSyncRead: () => string | undefined,
githubAsyncRead: () => Promise<string | undefined>,
options?: {
addProviderProfile?: (...args: unknown[]) => unknown
hasLocalOllama?: () => Promise<boolean>
listOllamaModels?: () => Promise<
Array<{
name: string
sizeBytes?: number | null
family?: string | null
families?: string[]
parameterSize?: string | null
quantizationLevel?: string | null
}>
>
applySavedProfileToCurrentSession?: (...args: unknown[]) => Promise<string | null>
clearCodexCredentials?: () => { success: boolean; warning?: string }
getProviderProfiles?: () => unknown[]
probeOllamaGenerationReadiness?: () => Promise<{
state: 'ready' | 'unreachable' | 'no_models' | 'generation_failed'
models: Array<
{
name: string
sizeBytes?: number | null
family?: string | null
families?: string[]
parameterSize?: string | null
quantizationLevel?: string | null
}
>
probeModel?: string
detail?: string
}>
codexSyncRead?: () => unknown
codexAsyncRead?: () => Promise<unknown>
updateProviderProfile?: (...args: unknown[]) => unknown
setActiveProviderProfile?: (...args: unknown[]) => unknown
useCodexOAuthFlow?: (options: {
onAuthenticated: (tokens: {
accessToken: string
refreshToken: string
accountId?: string
idToken?: string
apiKey?: string
}, persistCredentials: (options?: { profileId?: string }) => void) =>
void | Promise<void>
}) => {
state: 'starting' | 'waiting' | 'error'
authUrl?: string
browserOpened?: boolean | null
message?: string
}
},
): void {
mockProviderProfilesModule({ addProviderProfile: options?.addProviderProfile })
mockProviderProfilesModule({
addProviderProfile: options?.addProviderProfile,
getProviderProfiles: options?.getProviderProfiles,
updateProviderProfile: options?.updateProviderProfile,
setActiveProviderProfile: options?.setActiveProviderProfile,
})
mock.module('../utils/providerDiscovery.js', () => ({
hasLocalOllama: options?.hasLocalOllama ?? (async () => false),
listOllamaModels: options?.listOllamaModels ?? (async () => []),
probeOllamaGenerationReadiness:
options?.probeOllamaGenerationReadiness ??
(async () => ({
state: 'unreachable' as const,
models: [],
})),
}))
mock.module('../utils/githubModelsCredentials.js', () => ({
clearGithubModelsToken: () => ({ success: true }),
GITHUB_MODELS_HYDRATED_ENV_MARKER: 'CLAUDE_CODE_GITHUB_TOKEN_HYDRATED',
hydrateGithubModelsTokenFromSecureStorage: () => {},
readGithubModelsToken: syncRead,
readGithubModelsTokenAsync: asyncRead,
readGithubModelsToken: githubSyncRead,
readGithubModelsTokenAsync: githubAsyncRead,
}))
mock.module('../utils/codexCredentials.js', () => ({
attachCodexProfileIdToStoredCredentials: () => ({ success: true }),
clearCodexCredentials:
options?.clearCodexCredentials ?? (() => ({ success: true })),
readCodexCredentials:
options?.codexSyncRead ?? (() => undefined),
readCodexCredentialsAsync:
options?.codexAsyncRead ?? (async () => undefined),
}))
mock.module('../utils/providerProfile.js', () => ({
applySavedProfileToCurrentSession:
options?.applySavedProfileToCurrentSession ?? (async () => null),
buildCodexOAuthProfileEnv: (tokens: {
accessToken: string
accountId?: string
idToken?: string
}) => {
const accountId =
tokens.accountId ??
(tokens.idToken ? 'acct_from_id_token' : undefined) ??
(tokens.accessToken ? 'acct_from_access_token' : undefined)
if (!accountId) {
return null
}
return {
OPENAI_BASE_URL: 'https://chatgpt.com/backend-api/codex',
OPENAI_MODEL: 'codexplan',
CHATGPT_ACCOUNT_ID: accountId,
CODEX_CREDENTIAL_SOURCE: 'oauth' as const,
}
},
clearPersistedCodexOAuthProfile: () => null,
createProfileFile: (profile: string, env: Record<string, unknown>) => ({
profile,
env,
createdAt: '2026-04-10T00:00:00.000Z',
}),
}))
mock.module('../utils/settings/settings.js', () => ({
updateSettingsForSource: () => ({ error: null }),
}))
mock.module('./useCodexOAuthFlow.js', () => ({
useCodexOAuthFlow:
options?.useCodexOAuthFlow ??
(() => ({
state: 'waiting' as const,
authUrl: 'https://chatgpt.com/codex',
browserOpened: true,
})),
}))
}
async function waitForFrameOutput(
@@ -240,9 +372,9 @@ async function renderProviderManagerFrame(
onDone: (result?: unknown) => void
}>,
options?: {
mode?: 'first-run' | 'manage'
waitForOutput?: (output: string) => boolean
timeoutMs?: number
mode?: 'first-run' | 'manage'
},
): Promise<string> {
const mounted = await mountProviderManager(ProviderManager, {
@@ -305,96 +437,6 @@ test('ProviderManager resolves GitHub virtual provider from async storage withou
expect(asyncRead).toHaveBeenCalled()
})
test('ProviderManager first-run Ollama preset auto-detects installed models', async () => {
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const onDone = mock(() => {})
const addProviderProfile = mock((payload: {
provider: string
name: string
baseUrl: string
model: string
apiKey?: string
}) => ({
id: 'provider_ollama',
provider: payload.provider,
name: payload.name,
baseUrl: payload.baseUrl,
model: payload.model,
apiKey: payload.apiKey,
}))
mockProviderManagerDependencies(
() => undefined,
async () => undefined,
{
addProviderProfile,
hasLocalOllama: async () => true,
listOllamaModels: async () => [
{
name: 'gemma4:31b-cloud',
family: 'gemma',
parameterSize: '31b',
},
{
name: 'kimi-k2.5:cloud',
family: 'kimi',
parameterSize: '2.5b',
},
],
},
)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const mounted = await mountProviderManager(ProviderManager, {
mode: 'first-run',
onDone,
})
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Set up provider') && frame.includes('Ollama'),
)
mounted.stdin.write('j')
await Bun.sleep(50)
mounted.stdin.write('\r')
const modelFrame = await waitForFrameOutput(
mounted.getOutput,
frame =>
frame.includes('Choose an Ollama model') &&
frame.includes('gemma4:31b-cloud') &&
frame.includes('kimi-k2.5:cloud'),
)
expect(modelFrame).toContain('Choose an Ollama model')
expect(modelFrame).toContain('gemma4:31b-cloud')
await Bun.sleep(25)
mounted.stdin.write('\r')
await waitForCondition(() => onDone.mock.calls.length > 0)
expect(addProviderProfile).toHaveBeenCalled()
expect(addProviderProfile.mock.calls[0]?.[0]).toMatchObject({
name: 'Ollama',
baseUrl: 'http://localhost:11434/v1',
model: 'gemma4:31b-cloud',
})
expect(onDone).toHaveBeenCalledWith(
expect.objectContaining({
action: 'saved',
message: 'Provider configured: Ollama',
}),
)
await mounted.dispose()
})
test('ProviderManager avoids first-frame false negative while stored-token lookup is pending', async () => {
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
@@ -435,3 +477,489 @@ test('ProviderManager avoids first-frame false negative while stored-token looku
expect(syncRead).not.toHaveBeenCalled()
expect(asyncRead).toHaveBeenCalled()
})
test('ProviderManager first-run Ollama preset auto-detects installed models', async () => {
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const onDone = mock(() => {})
const addProviderProfile = mock((payload: {
provider: string
name: string
baseUrl: string
model: string
apiKey?: string
}) => ({
id: 'provider_ollama',
provider: payload.provider,
name: payload.name,
baseUrl: payload.baseUrl,
model: payload.model,
apiKey: payload.apiKey,
}))
mockProviderManagerDependencies(
() => undefined,
async () => undefined,
{
addProviderProfile,
probeOllamaGenerationReadiness: async () => ({
state: 'ready',
models: [
{
name: 'gemma4:31b-cloud',
family: 'gemma',
parameterSize: '31b',
},
{
name: 'kimi-k2.5:cloud',
family: 'kimi',
parameterSize: '2.5b',
},
],
probeModel: 'gemma4:31b-cloud',
}),
},
)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const mounted = await mountProviderManager(ProviderManager, {
mode: 'first-run',
onDone,
})
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Set up provider'),
)
await navigateToPreset(mounted.stdin, 'Ollama')
mounted.stdin.write('\r')
const modelFrame = await waitForFrameOutput(
mounted.getOutput,
frame =>
frame.includes('Choose an Ollama model') &&
frame.includes('gemma4:31b-cloud') &&
frame.includes('kimi-k2.5:cloud'),
)
expect(modelFrame).toContain('Choose an Ollama model')
expect(modelFrame).toContain('gemma4:31b-cloud')
await Bun.sleep(25)
mounted.stdin.write('\r')
await waitForCondition(() => onDone.mock.calls.length > 0)
expect(addProviderProfile).toHaveBeenCalled()
expect(addProviderProfile.mock.calls[0]?.[0]).toMatchObject({
name: 'Ollama',
baseUrl: 'http://localhost:11434/v1',
model: 'gemma4:31b-cloud',
})
expect(onDone).toHaveBeenCalledWith(
expect.objectContaining({
action: 'saved',
message: 'Provider configured: Ollama',
}),
)
await mounted.dispose()
})
test('ProviderManager first-run Codex OAuth switches the current session after login completes', async () => {
delete process.env.CLAUDE_CODE_SIMPLE
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const onDone = mock(() => {})
const applySavedProfileToCurrentSession = mock(async () => null)
const persistCredentials = mock(() => {})
const addProviderProfile = mock((payload: {
provider: string
name: string
baseUrl: string
model: string
apiKey?: string
}) => ({
id: 'provider_codex_oauth',
provider: payload.provider,
name: payload.name,
baseUrl: payload.baseUrl,
model: payload.model,
apiKey: payload.apiKey,
}))
mockProviderManagerDependencies(
() => undefined,
async () => undefined,
{
addProviderProfile,
applySavedProfileToCurrentSession,
useCodexOAuthFlow: ({ onAuthenticated }) => {
React.useEffect(() => {
void onAuthenticated({
accessToken: 'oauth-access-token',
refreshToken: 'oauth-refresh-token',
accountId: 'acct_oauth',
}, persistCredentials)
}, [onAuthenticated])
return {
state: 'waiting',
authUrl: 'https://chatgpt.com/codex',
browserOpened: true,
}
},
},
)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const mounted = await mountProviderManager(ProviderManager, {
mode: 'first-run',
onDone,
})
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Set up provider') && frame.includes('Codex OAuth'),
)
await navigateToPreset(mounted.stdin, 'Codex OAuth')
mounted.stdin.write('\r')
await waitForCondition(() => onDone.mock.calls.length > 0)
expect(addProviderProfile).toHaveBeenCalledWith(
expect.objectContaining({
provider: 'openai',
name: 'Codex OAuth',
baseUrl: 'https://chatgpt.com/backend-api/codex',
model: 'codexplan',
apiKey: '',
}),
expect.objectContaining({ makeActive: true }),
)
expect(applySavedProfileToCurrentSession).toHaveBeenCalled()
expect(persistCredentials).toHaveBeenCalledWith({
profileId: 'provider_codex_oauth',
})
expect(onDone).toHaveBeenCalledWith(
expect.objectContaining({
action: 'saved',
message:
'Codex OAuth configured. OpenClaude switched to it for this session.',
}),
)
await mounted.dispose()
})
test('ProviderManager first-run Codex OAuth reports next-startup fallback when session activation fails', async () => {
delete process.env.CLAUDE_CODE_SIMPLE
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const onDone = mock(() => {})
const applySavedProfileToCurrentSession = mock(
async () => 'validation failed',
)
const persistCredentials = mock(() => {})
const addProviderProfile = mock((payload: {
provider: string
name: string
baseUrl: string
model: string
apiKey?: string
}) => ({
id: 'provider_codex_oauth',
provider: payload.provider,
name: payload.name,
baseUrl: payload.baseUrl,
model: payload.model,
apiKey: payload.apiKey,
}))
mockProviderManagerDependencies(
() => undefined,
async () => undefined,
{
addProviderProfile,
applySavedProfileToCurrentSession,
useCodexOAuthFlow: ({ onAuthenticated }) => {
React.useEffect(() => {
void onAuthenticated({
accessToken: 'oauth-access-token',
refreshToken: 'oauth-refresh-token',
accountId: 'acct_oauth',
}, persistCredentials)
}, [onAuthenticated])
return {
state: 'waiting',
authUrl: 'https://chatgpt.com/codex',
browserOpened: true,
}
},
},
)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const mounted = await mountProviderManager(ProviderManager, {
mode: 'first-run',
onDone,
})
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Set up provider') && frame.includes('Codex OAuth'),
)
await navigateToPreset(mounted.stdin, 'Codex OAuth')
mounted.stdin.write('\r')
await waitForCondition(() => onDone.mock.calls.length > 0)
expect(persistCredentials).toHaveBeenCalledWith({
profileId: 'provider_codex_oauth',
})
expect(onDone).toHaveBeenCalledWith(
expect.objectContaining({
action: 'saved',
message:
'Codex OAuth configured. Saved for next startup. Warning: validation failed.',
}),
)
await mounted.dispose()
})
test('ProviderManager does not hijack a manual Codex profile when OAuth credentials are not yet linked', async () => {
delete process.env.CLAUDE_CODE_SIMPLE
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const onDone = mock(() => {})
const manualProfile = {
id: 'provider_manual_codex',
provider: 'openai',
name: 'Codex OAuth',
baseUrl: 'https://chatgpt.com/backend-api/codex',
model: 'gpt-5.4',
apiKey: 'manual-key',
}
const addProviderProfile = mock((payload: {
provider: string
name: string
baseUrl: string
model: string
apiKey?: string
}) => ({
id: 'provider_codex_oauth',
provider: payload.provider,
name: payload.name,
baseUrl: payload.baseUrl,
model: payload.model,
apiKey: payload.apiKey,
}))
const updateProviderProfile = mock(() => manualProfile)
const persistCredentials = mock(() => {})
mockProviderManagerDependencies(
() => undefined,
async () => undefined,
{
addProviderProfile,
getProviderProfiles: () => [manualProfile],
updateProviderProfile,
useCodexOAuthFlow: ({ onAuthenticated }) => {
const hasAuthenticated = React.useRef(false)
React.useEffect(() => {
if (hasAuthenticated.current) {
return
}
hasAuthenticated.current = true
void onAuthenticated({
accessToken: 'oauth-access-token',
refreshToken: 'oauth-refresh-token',
accountId: 'acct_oauth',
}, persistCredentials)
}, [onAuthenticated])
return {
state: 'waiting',
authUrl: 'https://chatgpt.com/codex',
browserOpened: true,
}
},
},
)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const mounted = await mountProviderManager(ProviderManager, {
mode: 'first-run',
onDone,
})
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Set up provider') && frame.includes('Codex OAuth'),
)
await navigateToPreset(mounted.stdin, 'Codex OAuth')
mounted.stdin.write('\r')
await waitForCondition(() => onDone.mock.calls.length > 0)
expect(addProviderProfile).toHaveBeenCalledTimes(1)
expect(updateProviderProfile).not.toHaveBeenCalled()
expect(persistCredentials).toHaveBeenCalledWith({
profileId: 'provider_codex_oauth',
})
await mounted.dispose()
})
test('ProviderManager keeps Codex OAuth as next-startup only when activating the session fails from the menu', async () => {
delete process.env.CLAUDE_CODE_SIMPLE
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const codexProfile = {
id: 'provider_codex_oauth',
provider: 'openai',
name: 'Codex OAuth',
baseUrl: 'https://chatgpt.com/backend-api/codex',
model: 'codexplan',
apiKey: '',
}
const applySavedProfileToCurrentSession = mock(
async () => 'validation failed',
)
const setActiveProviderProfile = mock(() => codexProfile)
mockProviderManagerDependencies(
() => undefined,
async () => undefined,
{
applySavedProfileToCurrentSession,
getProviderProfiles: () => [codexProfile],
setActiveProviderProfile,
codexAsyncRead: async () => ({
accessToken: 'oauth-access-token',
refreshToken: 'oauth-refresh-token',
accountId: 'acct_oauth',
profileId: 'provider_codex_oauth',
}),
},
)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const mounted = await mountProviderManager(ProviderManager)
await waitForFrameOutput(
mounted.getOutput,
frame =>
frame.includes('Provider manager') &&
frame.includes('Set active provider') &&
frame.includes('Log out Codex OAuth'),
)
mounted.stdin.write('j')
await Bun.sleep(25)
mounted.stdin.write('\r')
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Set active provider') && frame.includes('Codex OAuth'),
)
await Bun.sleep(25)
mounted.stdin.write('\r')
await waitForCondition(() => setActiveProviderProfile.mock.calls.length > 0)
await waitForCondition(
() => applySavedProfileToCurrentSession.mock.calls.length > 0,
)
await Bun.sleep(50)
const output = stripAnsi(extractLastFrame(mounted.getOutput()))
expect(output).toContain(
'Active provider: Codex OAuth. Saved for next startup. Warning: validation failed.',
)
expect(applySavedProfileToCurrentSession).toHaveBeenCalled()
expect(setActiveProviderProfile).toHaveBeenCalledWith('provider_codex_oauth')
await mounted.dispose()
})
test('ProviderManager resolves Codex OAuth state from async storage without sync reads in render flow', async () => {
delete process.env.CLAUDE_CODE_SIMPLE
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const githubSyncRead = mock(() => undefined)
const githubAsyncRead = mock(async () => undefined)
const codexSyncRead = mock(() => {
throw new Error('sync codex credential read should not run in ProviderManager render flow')
})
const codexAsyncRead = mock(async () => ({
accessToken: 'codex-access-token',
refreshToken: 'codex-refresh-token',
}))
mockProviderManagerDependencies(githubSyncRead, githubAsyncRead, {
codexSyncRead,
codexAsyncRead,
})
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const output = await renderProviderManagerFrame(ProviderManager, {
waitForOutput: frame =>
frame.includes('Provider manager') &&
frame.includes('Log out Codex OAuth'),
})
expect(output).toContain('Provider manager')
expect(output).toContain('Log out Codex OAuth')
expect(codexSyncRead).not.toHaveBeenCalled()
expect(codexAsyncRead).toHaveBeenCalled()
})
test('ProviderManager hides Codex OAuth setup in bare mode', async () => {
process.env.CLAUDE_CODE_SIMPLE = '1'
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const githubSyncRead = mock(() => undefined)
const githubAsyncRead = mock(async () => undefined)
mockProviderManagerDependencies(githubSyncRead, githubAsyncRead)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const output = await renderProviderManagerFrame(ProviderManager, {
mode: 'first-run',
waitForOutput: frame =>
frame.includes('Set up provider') && frame.includes('OpenAI'),
})
expect(output).toContain('Set up provider')
expect(output).not.toContain('Codex OAuth')
})

File diff suppressed because it is too large.

View File

@@ -281,6 +281,24 @@ export function Config({
enabled: autoCompactEnabled
});
}
}, {
id: 'toolHistoryCompressionEnabled',
label: 'Tool history compression',
value: globalConfig.toolHistoryCompressionEnabled,
type: 'boolean' as const,
onChange(toolHistoryCompressionEnabled: boolean) {
saveGlobalConfig(current => ({
...current,
toolHistoryCompressionEnabled
}));
setGlobalConfig({
...getGlobalConfig(),
toolHistoryCompressionEnabled
});
logEvent('tengu_tool_history_compression_setting_changed', {
enabled: toolHistoryCompressionEnabled
});
}
}, {
id: 'spinnerTipsEnabled',
label: 'Show tips',
@@ -1158,6 +1176,9 @@ export function Config({
if (globalConfig.autoCompactEnabled !== initialConfig.current.autoCompactEnabled) {
formattedChanges.push(`${globalConfig.autoCompactEnabled ? 'Enabled' : 'Disabled'} auto-compact`);
}
if (globalConfig.toolHistoryCompressionEnabled !== initialConfig.current.toolHistoryCompressionEnabled) {
formattedChanges.push(`${globalConfig.toolHistoryCompressionEnabled ? 'Enabled' : 'Disabled'} tool history compression`);
}
if (globalConfig.respectGitignore !== initialConfig.current.respectGitignore) {
formattedChanges.push(`${globalConfig.respectGitignore ? 'Enabled' : 'Disabled'} respect .gitignore in file picker`);
}

View File

@@ -5,7 +5,7 @@
* Addresses: https://github.com/Gitlawb/openclaude/issues/55
*/
import { isLocalProviderUrl } from '../services/api/providerConfig.js'
import { isLocalProviderUrl, resolveProviderRequest } from '../services/api/providerConfig.js'
import { getLocalOpenAICompatibleProviderLabel } from '../utils/providerDiscovery.js'
import { getSettings_DEPRECATED } from '../utils/settings/settings.js'
import { parseUserSpecifiedModel } from '../utils/model/model.js'
@@ -110,39 +110,42 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
if (useOpenAI) {
const rawModel = process.env.OPENAI_MODEL || 'gpt-4o'
const baseUrl = process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1'
const resolvedRequest = resolveProviderRequest({
model: rawModel,
baseUrl: process.env.OPENAI_BASE_URL,
})
const baseUrl = resolvedRequest.baseUrl
const isLocal = isLocalProviderUrl(baseUrl)
let name = 'OpenAI'
if (/deepseek/i.test(baseUrl) || /deepseek/i.test(rawModel)) name = 'DeepSeek'
else if (/openrouter/i.test(baseUrl)) name = 'OpenRouter'
else if (/together/i.test(baseUrl)) name = 'Together AI'
else if (/groq/i.test(baseUrl)) name = 'Groq'
else if (/mistral/i.test(baseUrl) || /mistral/i.test(rawModel)) name = 'Mistral'
else if (/azure/i.test(baseUrl)) name = 'Azure OpenAI'
else if (/llama/i.test(rawModel)) name = 'Meta Llama'
else if (isLocal) name = getLocalOpenAICompatibleProviderLabel(baseUrl)
if (/nvidia/i.test(baseUrl) || /nvidia/i.test(rawModel) || process.env.NVIDIA_NIM)
name = 'NVIDIA NIM'
else if (/minimax/i.test(baseUrl) || /minimax/i.test(rawModel) || process.env.MINIMAX_API_KEY)
name = 'MiniMax'
else if (resolvedRequest.transport === 'codex_responses' || baseUrl.includes('chatgpt.com/backend-api/codex'))
name = 'Codex'
else if (/moonshot/i.test(baseUrl) || /kimi/i.test(rawModel))
name = 'Moonshot (Kimi)'
else if (/deepseek/i.test(baseUrl) || /deepseek/i.test(rawModel))
name = 'DeepSeek'
else if (/openrouter/i.test(baseUrl))
name = 'OpenRouter'
else if (/together/i.test(baseUrl))
name = 'Together AI'
else if (/groq/i.test(baseUrl))
name = 'Groq'
else if (/mistral/i.test(baseUrl) || /mistral/i.test(rawModel))
name = 'Mistral'
else if (/azure/i.test(baseUrl))
name = 'Azure OpenAI'
else if (/llama/i.test(rawModel))
name = 'Meta Llama'
else if (isLocal)
name = getLocalOpenAICompatibleProviderLabel(baseUrl)
// Resolve model alias to actual model name + reasoning effort
let displayModel = rawModel
const codexAliases: Record<string, { model: string; reasoningEffort?: string }> = {
codexplan: { model: 'gpt-5.4', reasoningEffort: 'high' },
'gpt-5.4': { model: 'gpt-5.4', reasoningEffort: 'high' },
'gpt-5.3-codex': { model: 'gpt-5.3-codex', reasoningEffort: 'high' },
'gpt-5.3-codex-spark': { model: 'gpt-5.3-codex-spark' },
codexspark: { model: 'gpt-5.3-codex-spark' },
'gpt-5.2-codex': { model: 'gpt-5.2-codex', reasoningEffort: 'high' },
'gpt-5.1-codex-max': { model: 'gpt-5.1-codex-max', reasoningEffort: 'high' },
'gpt-5.1-codex-mini': { model: 'gpt-5.1-codex-mini' },
'gpt-5.4-mini': { model: 'gpt-5.4-mini', reasoningEffort: 'medium' },
'gpt-5.2': { model: 'gpt-5.2', reasoningEffort: 'medium' },
}
const alias = rawModel.toLowerCase()
if (alias in codexAliases) {
const resolved = codexAliases[alias]
displayModel = resolved.model
if (resolved.reasoningEffort) {
displayModel = `${displayModel} (${resolved.reasoningEffort})`
}
let displayModel = resolvedRequest.resolvedModel
if (resolvedRequest.reasoning?.effort) {
displayModel = `${displayModel} (${resolvedRequest.reasoning.effort})`
}
return { name, model: displayModel, baseUrl, isLocal }
@@ -152,7 +155,9 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
const settings = getSettings_DEPRECATED() || {}
const modelSetting = settings.model || process.env.ANTHROPIC_MODEL || process.env.CLAUDE_MODEL || 'claude-sonnet-4-6'
const resolvedModel = parseUserSpecifiedModel(modelSetting)
return { name: 'Anthropic', model: resolvedModel, baseUrl: 'https://api.anthropic.com', isLocal: false }
const baseUrl = process.env.ANTHROPIC_BASE_URL ?? 'https://api.anthropic.com'
const isLocal = isLocalProviderUrl(baseUrl)
return { name: 'Anthropic', model: resolvedModel, baseUrl, isLocal }
}
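// Illustrative only (not part of the diff): with the alias table above,
// OPENAI_MODEL=codexplan is expected to resolve to gpt-5.4 at reasoning
// effort 'high', so displayModel becomes 'gpt-5.4 (high)'.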
// ─── Box drawing ──────────────────────────────────────────────────────────────

View File

@@ -6,6 +6,7 @@ import stripAnsi from 'strip-ansi'
import { createRoot } from '../ink.js'
import { AppStateProvider } from '../state/AppState.js'
import { maskTextWithVisibleEdges } from '../utils/Cursor.js'
import TextInput from './TextInput.js'
import VimTextInput from './VimTextInput.js'
@@ -199,6 +200,13 @@ test('TextInput renders typed characters before delayed parent value commits', a
expect(output).not.toContain('Type here...')
})
test('maskTextWithVisibleEdges preserves only the first and last three chars', () => {
expect(maskTextWithVisibleEdges('sk-secret-12345678', '*')).toBe(
'sk-************678',
)
expect(maskTextWithVisibleEdges('abcdef', '*')).toBe('******')
})
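// A minimal sketch consistent with the two expectations above; the real
// implementation lives in '../utils/Cursor.js', and the exact short-string
// threshold is an assumption here.
//   function maskTextWithVisibleEdges(text: string, maskChar: string): string {
//     if (text.length <= 6) return maskChar.repeat(text.length) // short secrets: mask fully
//     return text.slice(0, 3) + maskChar.repeat(text.length - 6) + text.slice(-3)
//   }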
test('VimTextInput preserves rapid typed characters before delayed parent value commits', async () => {
const { stdout, stdin, getOutput } = createTestStreams()
const root = await createRoot({

View File

@@ -53,17 +53,20 @@ describe('getProjectMemoryPathForSelector', () => {
})
test('defaults to a new AGENTS.md in the current cwd when no project file is loaded', () => {
expect(getProjectMemoryPathForSelector([], '/repo/packages/app')).toBe(
'/repo/packages/app/AGENTS.md',
const cwd = join('/repo', 'packages', 'app')
expect(getProjectMemoryPathForSelector([], cwd)).toBe(
join(cwd, 'AGENTS.md'),
)
})
test('ignores loaded project instruction files outside the current cwd ancestry', () => {
const outsideRepoPath = join('/other-worktree', 'AGENTS.md')
const cwd = join('/repo', 'packages', 'app')
expect(
getProjectMemoryPathForSelector(
[projectFile('/other-worktree/AGENTS.md')],
'/repo/packages/app',
[projectFile(outsideRepoPath)],
cwd,
),
).toBe('/repo/packages/app/AGENTS.md')
).toBe(join(cwd, 'AGENTS.md'))
})
})

View File

@@ -0,0 +1,173 @@
import React from 'react'
import { getOriginalCwd } from '../../../bootstrap/state.js'
import { Box, Text } from '../../../ink.js'
import { sanitizeToolNameForAnalytics } from '../../../services/analytics/metadata.js'
import { env } from '../../../utils/env.js'
import { shouldShowAlwaysAllowOptions } from '../../../utils/permissions/permissionsLoader.js'
import { usePermissionRequestLogging } from '../hooks.js'
import { PermissionDialog } from '../PermissionDialog.js'
import {
PermissionPrompt,
type PermissionPromptOption,
} from '../PermissionPrompt.js'
import type { PermissionRequestProps } from '../PermissionRequest.js'
import { PermissionRuleExplanation } from '../PermissionRuleExplanation.js'
import { logUnaryPermissionEvent } from '../utils.js'
type OptionValue = 'yes' | 'yes-dont-ask-again' | 'no'
export function MonitorPermissionRequest({
toolUseConfirm,
onDone,
onReject,
workerBadge,
}: PermissionRequestProps) {
const { command, description } = toolUseConfirm.input as {
command?: string
description?: string
}
usePermissionRequestLogging(toolUseConfirm, {
completion_type: 'tool_use_single',
language_name: 'none',
})
const handleSelect = (
value: OptionValue,
feedback?: string,
) => {
switch (value) {
case 'yes': {
logUnaryPermissionEvent({
completion_type: 'tool_use_single',
event: 'accept',
metadata: {
language_name: 'none',
message_id: toolUseConfirm.assistantMessage.message.id,
platform: env.platform,
},
})
toolUseConfirm.onAllow(toolUseConfirm.input, [], feedback)
onDone()
break
}
case 'yes-dont-ask-again': {
logUnaryPermissionEvent({
completion_type: 'tool_use_single',
event: 'accept',
metadata: {
language_name: 'none',
message_id: toolUseConfirm.assistantMessage.message.id,
platform: env.platform,
},
})
// Save the rule under the 'Bash' toolName because checkPermissions
// delegates to bashToolHasPermission, which matches rules against
// BashTool; a rule saved under 'Monitor' would never be checked. The rule
// content uses a command-specific prefix (like BashTool's shellRuleMatching).
const cmdForRule = command?.trim() || ''
const prefix = cmdForRule.split(/\s+/).slice(0, 2).join(' ')
toolUseConfirm.onAllow(toolUseConfirm.input, prefix ? [
{
type: 'addRules',
rules: [{ toolName: 'Bash', ruleContent: `${prefix}:*` }],
behavior: 'allow',
destination: 'localSettings',
},
] : [])
onDone()
break
}
case 'no': {
logUnaryPermissionEvent({
completion_type: 'tool_use_single',
event: 'reject',
metadata: {
language_name: 'none',
message_id: toolUseConfirm.assistantMessage.message.id,
platform: env.platform,
},
})
toolUseConfirm.onReject(feedback)
onReject()
onDone()
break
}
}
}
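// Illustrative only (not part of the diff): for command 'npm run build',
// the two-word prefix is 'npm run', so the persisted rule content is
// 'npm run:*' under the Bash toolName.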
const handleCancel = () => {
logUnaryPermissionEvent({
completion_type: 'tool_use_single',
event: 'reject',
metadata: {
language_name: 'none',
message_id: toolUseConfirm.assistantMessage.message.id,
platform: env.platform,
},
})
toolUseConfirm.onReject()
onReject()
onDone()
}
const showAlwaysAllow = shouldShowAlwaysAllowOptions()
const originalCwd = getOriginalCwd()
const options: PermissionPromptOption<OptionValue>[] = [
{
label: 'Yes',
value: 'yes',
feedbackConfig: { type: 'accept' },
},
]
if (showAlwaysAllow) {
options.push({
label: (
<Text>
Yes, and don&apos;t ask again for{' '}
<Text bold>Monitor</Text> commands in{' '}
<Text bold>{originalCwd}</Text>
</Text>
),
value: 'yes-dont-ask-again',
})
}
options.push({
label: 'No',
value: 'no',
feedbackConfig: { type: 'reject' },
})
const toolAnalyticsContext = {
toolName: sanitizeToolNameForAnalytics(toolUseConfirm.tool.name),
isMcp: toolUseConfirm.tool.isMcp ?? false,
}
return (
<PermissionDialog title="Monitor" workerBadge={workerBadge}>
<Box flexDirection="column" paddingX={2} paddingY={1}>
<Text>
Monitor({command ?? ''})
</Text>
{description ? (
<Text dimColor>{description}</Text>
) : null}
</Box>
<Box flexDirection="column">
<PermissionRuleExplanation
permissionResult={toolUseConfirm.permissionResult}
toolType="tool"
/>
<PermissionPrompt
options={options}
onSelect={handleSelect}
onCancel={handleCancel}
toolAnalyticsContext={toolAnalyticsContext}
/>
</Box>
</PermissionDialog>
)
}

View File

@@ -0,0 +1,220 @@
import { PassThrough } from 'node:stream'
import { afterEach, expect, mock, test } from 'bun:test'
import React from 'react'
import { createRoot, Text } from '../ink.js'
const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l'
function createTestStreams(): {
stdout: PassThrough
stdin: PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
getOutput: () => string
} {
let output = ''
const stdout = new PassThrough()
const stdin = new PassThrough() as PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
stdin.isTTY = true
stdin.setRawMode = () => {}
stdin.ref = () => {}
stdin.unref = () => {}
;(stdout as unknown as { columns: number }).columns = 120
stdout.on('data', chunk => {
output += chunk.toString()
})
return {
stdout,
stdin,
getOutput: () => output,
}
}
async function waitForCondition(
predicate: () => boolean,
options?: { timeoutMs?: number; intervalMs?: number },
): Promise<void> {
const timeoutMs = options?.timeoutMs ?? 5000
const intervalMs = options?.intervalMs ?? 10
const startedAt = Date.now()
while (Date.now() - startedAt < timeoutMs) {
if (predicate()) {
return
}
await Bun.sleep(intervalMs)
}
throw new Error('Timed out waiting for useCodexOAuthFlow test condition')
}
function extractLastFrame(output: string): string {
let lastFrame: string | null = null
let cursor = 0
while (cursor < output.length) {
const start = output.indexOf(SYNC_START, cursor)
if (start === -1) break
const contentStart = start + SYNC_START.length
const end = output.indexOf(SYNC_END, contentStart)
if (end === -1) break
const frame = output.slice(contentStart, end)
if (frame.trim().length > 0) {
lastFrame = frame
}
cursor = end + SYNC_END.length
}
return lastFrame ?? output
}
const TOKENS = {
accessToken: 'oauth-access-token',
refreshToken: 'oauth-refresh-token',
accountId: 'acct_oauth',
idToken: 'oauth-id-token',
apiKey: 'oauth-api-key',
}
afterEach(() => {
mock.restore()
})
test('does not persist credentials when downstream setup rejects', async () => {
const saveCodexCredentials = mock(() => ({ success: true }))
const cleanup = mock(() => {})
const onAuthenticated = mock(async () => {
throw new Error('profile save failed')
})
const deps = {
createOAuthService: () => ({
async startOAuthFlow(
onAuthorizationUrl: (authUrl: string) => void | Promise<void>,
) {
await onAuthorizationUrl('https://chatgpt.com/codex')
return TOKENS
},
cleanup,
}),
openBrowser: async () => true,
saveCodexCredentials,
isBareMode: () => false,
}
const { useCodexOAuthFlow } = await import(
`./useCodexOAuthFlow.js?real-reject-${Date.now()}-${Math.random()}`
)
function Harness(): React.ReactNode {
const handleAuthenticated = React.useCallback(onAuthenticated, [onAuthenticated])
const status = useCodexOAuthFlow({
onAuthenticated: handleAuthenticated,
deps,
})
return <Text>{status.state === 'error' ? status.message : status.state}</Text>
}
const streams = createTestStreams()
const root = await createRoot({
stdout: streams.stdout as unknown as NodeJS.WriteStream,
stdin: streams.stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
root.render(<Harness />)
try {
await waitForCondition(() => onAuthenticated.mock.calls.length === 1)
await Bun.sleep(0)
await Bun.sleep(0)
expect(onAuthenticated).toHaveBeenCalled()
expect(saveCodexCredentials).not.toHaveBeenCalled()
} finally {
root.unmount()
streams.stdin.end()
streams.stdout.end()
await Bun.sleep(0)
}
})
test('persists credentials with profile linkage after downstream setup succeeds', async () => {
const saveCodexCredentials = mock(() => ({ success: true }))
const onAuthenticated = mock(
async (
_tokens: typeof TOKENS,
persistCredentials: (options?: { profileId?: string }) => void,
) => {
persistCredentials({ profileId: 'profile_codex_oauth' })
},
)
const cleanup = mock(() => {})
const deps = {
createOAuthService: () => ({
async startOAuthFlow(
onAuthorizationUrl: (authUrl: string) => void | Promise<void>,
) {
await onAuthorizationUrl('https://chatgpt.com/codex')
return TOKENS
},
cleanup,
}),
openBrowser: async () => true,
saveCodexCredentials,
isBareMode: () => false,
}
const { useCodexOAuthFlow } = await import(
`./useCodexOAuthFlow.js?real-persist-${Date.now()}-${Math.random()}`
)
function Harness(): React.ReactNode {
const handleAuthenticated = React.useCallback(onAuthenticated, [onAuthenticated])
useCodexOAuthFlow({
onAuthenticated: handleAuthenticated,
deps,
})
return <Text>waiting</Text>
}
const streams = createTestStreams()
const root = await createRoot({
stdout: streams.stdout as unknown as NodeJS.WriteStream,
stdin: streams.stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
root.render(<Harness />)
try {
await waitForCondition(() => onAuthenticated.mock.calls.length === 1)
await waitForCondition(() => saveCodexCredentials.mock.calls.length === 1)
expect(onAuthenticated).toHaveBeenCalled()
expect(saveCodexCredentials).toHaveBeenCalledWith({
apiKey: TOKENS.apiKey,
accessToken: TOKENS.accessToken,
refreshToken: TOKENS.refreshToken,
idToken: TOKENS.idToken,
accountId: TOKENS.accountId,
profileId: 'profile_codex_oauth',
})
} finally {
root.unmount()
streams.stdin.end()
streams.stdout.end()
await Bun.sleep(0)
}
})

View File

@@ -0,0 +1,134 @@
import * as React from 'react'
import {
CodexOAuthService,
type CodexOAuthTokens,
} from '../services/api/codexOAuth.js'
import { openBrowser } from '../utils/browser.js'
import { saveCodexCredentials } from '../utils/codexCredentials.js'
import { isBareMode } from '../utils/envUtils.js'
export type CodexOAuthFlowStatus =
| { state: 'starting' }
| {
state: 'waiting'
authUrl: string
browserOpened: boolean | null
}
| {
state: 'error'
message: string
}
type PersistCodexOAuthCredentials = (options?: {
profileId?: string
}) => void
type CodexOAuthFlowDependencies = {
createOAuthService?: () => Pick<
CodexOAuthService,
'startOAuthFlow' | 'cleanup'
>
openBrowser?: typeof openBrowser
saveCodexCredentials?: typeof saveCodexCredentials
isBareMode?: typeof isBareMode
}
function createDefaultOAuthService(): Pick<
CodexOAuthService,
'startOAuthFlow' | 'cleanup'
> {
return new CodexOAuthService()
}
export function useCodexOAuthFlow(options: {
onAuthenticated: (
tokens: CodexOAuthTokens,
persistCredentials: PersistCodexOAuthCredentials,
) => void | Promise<void>
deps?: CodexOAuthFlowDependencies
}): CodexOAuthFlowStatus {
const { onAuthenticated } = options
const createOAuthService =
options.deps?.createOAuthService ?? createDefaultOAuthService
const openBrowserFn = options.deps?.openBrowser ?? openBrowser
const saveCredentials =
options.deps?.saveCodexCredentials ?? saveCodexCredentials
const isBareModeFn = options.deps?.isBareMode ?? isBareMode
const [status, setStatus] = React.useState<CodexOAuthFlowStatus>({
state: 'starting',
})
React.useEffect(() => {
if (isBareModeFn()) {
setStatus({
state: 'error',
message:
'Codex OAuth is unavailable in --bare because secure storage is disabled.',
})
return
}
let cancelled = false
const oauthService = createOAuthService()
void oauthService
.startOAuthFlow(async authUrl => {
if (cancelled) return
setStatus({
state: 'waiting',
authUrl,
browserOpened: null,
})
const browserOpened = await openBrowserFn(authUrl)
if (cancelled) return
setStatus({
state: 'waiting',
authUrl,
browserOpened,
})
})
.then(async tokens => {
if (cancelled) return
const persistCredentials: PersistCodexOAuthCredentials = options => {
const saved = saveCredentials({
apiKey: tokens.apiKey,
accessToken: tokens.accessToken,
refreshToken: tokens.refreshToken,
idToken: tokens.idToken,
accountId: tokens.accountId,
profileId: options?.profileId,
})
if (!saved.success) {
throw new Error(
saved.warning ??
'Codex OAuth succeeded, but credentials could not be saved securely.',
)
}
}
await onAuthenticated(tokens, persistCredentials)
})
.catch(error => {
if (cancelled) return
setStatus({
state: 'error',
message: error instanceof Error ? error.message : String(error),
})
})
return () => {
cancelled = true
oauthService.cleanup()
}
}, [
createOAuthService,
isBareModeFn,
onAuthenticated,
openBrowserFn,
saveCredentials,
])
return status
}
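// Illustrative only (not part of the diff): a minimal Ink consumer of the
// hook; the rendering choices are assumptions, not the real wizard step.
// Wrapping onAuthenticated in useCallback matters because it sits in the
// effect's dependency array above.
//   function CodexOAuthStatus() {
//     const handleAuthenticated = React.useCallback(
//       async (_tokens: CodexOAuthTokens, persist: PersistCodexOAuthCredentials) =>
//         persist(), // throws if secure storage rejects the save
//       [],
//     )
//     const status = useCodexOAuthFlow({ onAuthenticated: handleAuthenticated })
//     if (status.state === 'error') return <Text>{status.message}</Text>
//     if (status.state === 'waiting') return <Text>Open {status.authUrl}</Text>
//     return <Text>Starting Codex OAuth</Text>
//   }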

View File

@@ -1,5 +1,16 @@
import { afterEach, expect, test } from 'bun:test'
// MACRO is replaced at build time by Bun.define but not in test mode.
// Define it globally so tests that import modules using MACRO don't crash.
;(globalThis as Record<string, unknown>).MACRO = {
VERSION: '99.0.0',
DISPLAY_VERSION: '0.0.0-test',
BUILD_TIME: new Date().toISOString(),
ISSUES_EXPLAINER: 'report the issue at https://github.com/anthropics/claude-code/issues',
PACKAGE_URL: '@gitlawb/openclaude',
NATIVE_PACKAGE_URL: undefined,
}
import { getSystemPrompt, DEFAULT_AGENT_PROMPT } from './prompts.js'
import { CLI_SYSPROMPT_PREFIXES, getCLISyspromptPrefix } from './system.js'
import { CLAUDE_CODE_GUIDE_AGENT } from '../tools/AgentTool/built-in/claudeCodeGuideAgent.js'

View File

@@ -823,6 +823,11 @@ function getFunctionResultClearingSection(model: string): string | null {
return null
}
const config = getCachedMCConfigForFRC()
if (!config) {
// External/stub builds return null from getCachedMCConfig — abort the
// section rather than trying to read .supportedModels off null.
return null
}
const isModelSupported = config.supportedModels?.some(pattern =>
model.includes(pattern),
)

View File

@@ -37,8 +37,6 @@ export const ALL_AGENT_DISALLOWED_TOOLS = new Set([
TASK_OUTPUT_TOOL_NAME,
EXIT_PLAN_MODE_V2_TOOL_NAME,
ENTER_PLAN_MODE_TOOL_NAME,
// Allow Agent tool for agents when user is ant (enables nested agents)
...(process.env.USER_TYPE === 'ant' ? [] : [AGENT_TOOL_NAME]),
ASK_USER_QUESTION_TOOL_NAME,
TASK_STOP_TOOL_NAME,
// Prevent recursive workflow execution inside subagents.
@@ -82,9 +80,9 @@ export const IN_PROCESS_TEAMMATE_ALLOWED_TOOLS = new Set([
SEND_MESSAGE_TOOL_NAME,
// Teammate-created crons are tagged with the creating agentId and routed to
// that teammate's pendingUserMessages queue (see useScheduledTasks.ts).
...(feature('AGENT_TRIGGERS')
? [CRON_CREATE_TOOL_NAME, CRON_DELETE_TOOL_NAME, CRON_LIST_TOOL_NAME]
: []),
CRON_CREATE_TOOL_NAME,
CRON_DELETE_TOOL_NAME,
CRON_LIST_TOOL_NAME,
])
/*

View File

@@ -0,0 +1,18 @@
import type { BuiltInAgentDefinition } from '../tools/AgentTool/loadAgentsDir.js'
import { EXPLORE_AGENT } from '../tools/AgentTool/built-in/exploreAgent.js'
import { GENERAL_PURPOSE_AGENT } from '../tools/AgentTool/built-in/generalPurposeAgent.js'
import { PLAN_AGENT } from '../tools/AgentTool/built-in/planAgent.js'
// The coordinator system prompt instructs the model to spawn workers with
// subagent_type: "worker". This agent definition matches that type so
// AgentTool.tsx can resolve it. It reuses GENERAL_PURPOSE_AGENT's capabilities.
const WORKER_AGENT: BuiltInAgentDefinition = {
...GENERAL_PURPOSE_AGENT,
agentType: 'worker',
whenToUse:
'Worker agent for coordinator mode. Executes tasks autonomously — research, implementation, or verification.',
}
export function getCoordinatorAgents(): BuiltInAgentDefinition[] {
return [WORKER_AGENT, GENERAL_PURPOSE_AGENT, EXPLORE_AGENT, PLAN_AGENT]
}
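// Illustrative only (not part of the diff): a lookup along the lines of
//   getCoordinatorAgents().find(a => a.agentType === 'worker')
// is how a coordinator's subagent_type: "worker" spawn is expected to
// resolve to this definition; the exact resolution code in AgentTool.tsx
// is assumed here.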

View File

@@ -5,7 +5,7 @@ import {
} from '../utils/providerProfile.js'
import {
getProviderValidationError,
validateProviderEnvOrExit,
validateProviderEnvForStartupOrExit,
} from '../utils/providerValidation.js'
// OpenClaude: polyfill globalThis.File for Node < 20.
@@ -132,7 +132,7 @@ async function main(): Promise<void> {
hydrateGithubModelsTokenFromSecureStorage()
}
await validateProviderEnvOrExit()
await validateProviderEnvForStartupOrExit()
// Print the gradient startup screen before the Ink UI loads
const { printStartupScreen } = await import('../components/StartupScreen.js')

View File

@@ -0,0 +1,75 @@
import { describe, it, expect, mock } from 'bun:test'
import { getCombinedTools, loadReexposedMcpTools } from './mcp.js'
import type { Tool as InternalTool } from '../Tool.js'
import type { MCPServerConnection } from '../services/mcp/types.js'
import type { Tool } from '@modelcontextprotocol/sdk/types.js'
// Mock the MCP client service to control the tools and connections returned
const mockGetMcpToolsCommandsAndResources = mock(async (onConnectionAttempt: any) => {})
mock.module('../services/mcp/client.js', () => ({
getMcpToolsCommandsAndResources: mockGetMcpToolsCommandsAndResources
}))
describe('getCombinedTools', () => {
it('deduplicates builtins when mcpTools have the same name, prioritizing mcpTools', () => {
const builtinBash = { name: 'Bash', isMcp: false } as unknown as InternalTool
const builtinRead = { name: 'Read', isMcp: false } as unknown as InternalTool
const mcpBash = { name: 'Bash', isMcp: true } as unknown as InternalTool
const builtins = [builtinBash, builtinRead]
const mcpTools = [mcpBash]
const result = getCombinedTools(builtins, mcpTools)
expect(result).toHaveLength(2)
expect(result[0]).toBe(mcpBash)
expect(result[1]).toBe(builtinRead)
})
})
describe('loadReexposedMcpTools', () => {
it('loads tools and clients regardless of connection state (including needs-auth)', async () => {
// Setup the mock to simulate yielding a needs-auth server and a connected server
mockGetMcpToolsCommandsAndResources.mockImplementation(async (onConnectionAttempt) => {
const needsAuthClient = {
name: 'auth-server',
type: 'needs-auth',
config: {}
} as MCPServerConnection
const authTool = {
name: 'mcp__auth-server__authenticate',
isMcp: true
} as unknown as InternalTool
const connectedClient = {
name: 'connected-server',
type: 'connected',
config: {},
client: {}
} as MCPServerConnection
const connectedTool = {
name: 'mcp__connected-server__do_thing',
isMcp: true
} as unknown as InternalTool
// Simulate the callback behavior
onConnectionAttempt({ client: needsAuthClient, tools: [authTool], commands: [] })
onConnectionAttempt({ client: connectedClient, tools: [connectedTool], commands: [] })
})
const { mcpClients, mcpTools } = await loadReexposedMcpTools()
expect(mcpClients).toHaveLength(2)
expect(mcpClients[0].type).toBe('needs-auth')
expect(mcpClients[1].type).toBe('connected')
expect(mcpTools).toHaveLength(2)
expect(mcpTools[0].name).toBe('mcp__auth-server__authenticate')
expect(mcpTools[1].name).toBe('mcp__connected-server__do_thing')
// Reset mock for other tests
mockGetMcpToolsCommandsAndResources.mockReset()
})
})

View File

@@ -7,6 +7,7 @@ process.env.CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS ??= 'true'
import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { ZodError } from 'zod'
import {
CallToolRequestSchema,
type CallToolResult,
@@ -17,9 +18,12 @@ import {
import { getDefaultAppState } from 'src/state/AppStateStore.js'
import review from '../commands/review.js'
import type { Command } from '../commands.js'
import { getMcpToolsCommandsAndResources } from '../services/mcp/client.js'
import type { MCPServerConnection } from '../services/mcp/types.js'
import {
findToolByName,
getEmptyToolPermissionContext,
type Tool as InternalTool,
type ToolUseContext,
} from '../Tool.js'
import { getTools } from '../tools.js'
@@ -39,6 +43,32 @@ type ToolOutput = Tool['outputSchema']
const MCP_COMMANDS: Command[] = [review]
export function getCombinedTools(
builtins: InternalTool[],
mcpTools: InternalTool[],
): InternalTool[] {
const mcpToolNames = new Set(mcpTools.map(t => t.name))
const deduplicatedBuiltins = builtins.filter(t => !mcpToolNames.has(t.name))
return [...mcpTools, ...deduplicatedBuiltins]
}
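// Illustrative only (not part of the diff): with builtins [Bash, Read] and an
// MCP tool also named Bash, the result is [mcpBash, Read]; on a name
// collision the MCP tool wins (see mcp.test.ts above).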
export async function loadReexposedMcpTools(): Promise<{
mcpClients: MCPServerConnection[]
mcpTools: InternalTool[]
}> {
const mcpClients: MCPServerConnection[] = []
const mcpTools: InternalTool[] = []
// Load configured MCP clients and their tools
await getMcpToolsCommandsAndResources(({ client, tools: clientTools }) => {
mcpClients.push(client)
mcpTools.push(...clientTools)
})
return { mcpClients, mcpTools }
}
export async function startMCPServer(
cwd: string,
debug: boolean,
@@ -63,12 +93,13 @@ export async function startMCPServer(
},
)
const { mcpClients, mcpTools } = await loadReexposedMcpTools()
server.setRequestHandler(
ListToolsRequestSchema,
async (): Promise<ListToolsResult> => {
// TODO: Also re-expose any MCP tools
const toolPermissionContext = getEmptyToolPermissionContext()
const tools = getTools(toolPermissionContext)
const tools = getCombinedTools(getTools(toolPermissionContext), mcpTools)
return {
tools: await Promise.all(
tools.map(async tool => {
@@ -94,7 +125,7 @@ export async function startMCPServer(
tools,
agents: [],
}),
inputSchema: zodToJsonSchema(tool.inputSchema) as ToolInput,
inputSchema: (tool.inputJSONSchema ?? zodToJsonSchema(tool.inputSchema)) as ToolInput,
outputSchema,
}
}),
@@ -107,8 +138,7 @@ export async function startMCPServer(
CallToolRequestSchema,
async ({ params: { name, arguments: args } }): Promise<CallToolResult> => {
const toolPermissionContext = getEmptyToolPermissionContext()
// TODO: Also re-expose any MCP tools
const tools = getTools(toolPermissionContext)
const tools = getCombinedTools(getTools(toolPermissionContext), mcpTools)
const tool = findToolByName(tools, name)
if (!tool) {
throw new Error(`Tool ${name} not found`)
@@ -123,7 +153,7 @@ export async function startMCPServer(
tools,
mainLoopModel: getMainLoopModel(),
thinkingConfig: { type: 'disabled' },
mcpClients: [],
mcpClients,
mcpResources: {},
isNonInteractiveSession: true,
debug,
@@ -140,13 +170,16 @@ export async function startMCPServer(
updateAttributionState: () => {},
}
// TODO: validate input types with zod
try {
if (!tool.isEnabled()) {
throw new Error(`Tool ${name} is not enabled`)
}
// Validate input types with zod
const parsedArgs = tool.inputSchema.parse(args ?? {})
const validationResult = await tool.validateInput?.(
(args as never) ?? {},
(parsedArgs as never) ?? {},
toolUseContext,
)
if (validationResult && !validationResult.result) {
@@ -155,7 +188,7 @@ export async function startMCPServer(
)
}
const finalResult = await tool.call(
(args ?? {}) as never,
(parsedArgs ?? {}) as never,
toolUseContext,
hasPermissionsToUseTool,
createAssistantMessage({
@@ -163,20 +196,50 @@ export async function startMCPServer(
}),
)
let content: CallToolResult['content']
const data = finalResult.data as string | { type: string; text?: string; source?: { type: string; media_type: string; data: string } }[] | unknown
if (typeof data === 'string') {
content = [{ type: 'text', text: data }]
} else if (Array.isArray(data)) {
content = data.map((block: any) => {
if (block.type === 'text') {
return { type: 'text', text: block.text || '' }
} else if (block.type === 'image' && block.source) {
return {
type: 'image',
data: block.source.data,
mimeType: block.source.media_type,
}
} else {
// eslint-disable-next-line custom-rules/no-top-level-side-effects, no-console
console.warn(`Unmapped content block type from tool ${name}: ${block.type || 'unknown'}`)
return { type: 'text', text: jsonStringify(block) }
}
}) as CallToolResult['content']
} else {
content = [{ type: 'text', text: jsonStringify(data) }]
}
return {
content: [
{
type: 'text' as const,
text:
typeof finalResult === 'string'
? finalResult
: jsonStringify(finalResult.data),
},
],
content,
isError: !!(finalResult as any).isError,
}
} catch (error) {
logError(error)
if (error instanceof ZodError) {
return {
isError: true,
content: [
{
type: 'text',
text: `Tool ${name} input is invalid:\n${error.errors.map(e => `- ${e.path.join('.')}: ${e.message}`).join('\n')}`,
},
],
}
}
const parts =
error instanceof Error ? getErrorParts(error) : [String(error)]
const errorText = parts.filter(Boolean).join('\n').trim() || 'Error'
@@ -201,3 +264,4 @@ export async function startMCPServer(
return await runServer()
}

View File

@@ -114,8 +114,8 @@ export const SandboxSettingsSchema = lazySchema(() =>
.boolean()
.optional()
.describe(
'Allow commands to run outside the sandbox via the dangerouslyDisableSandbox parameter. ' +
'When false, the dangerouslyDisableSandbox parameter is completely ignored and all commands must run sandboxed. ' +
'Allow trusted, user-initiated commands to run outside the sandbox. ' +
'When false, sandbox override requests are ignored and all commands must run sandboxed. ' +
'Default: true.',
),
network: SandboxNetworkConfigSchema(),

View File

@@ -0,0 +1,123 @@
import { PassThrough } from 'node:stream'
import { afterEach, expect, mock, test } from 'bun:test'
import React from 'react'
import { createRoot, Text } from '../ink.js'
type AuthState = {
anthropicAuthEnabled: boolean
claudeSubscriber: boolean
key?: string
source?: string
}
function createTestStreams(): {
stdout: PassThrough
stdin: PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
} {
const stdout = new PassThrough()
const stdin = new PassThrough() as PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
stdin.isTTY = true
stdin.setRawMode = () => {}
stdin.ref = () => {}
stdin.unref = () => {}
;(stdout as unknown as { columns: number }).columns = 120
return { stdout, stdin }
}
async function waitForCondition(
predicate: () => boolean,
timeoutMs = 2000,
): Promise<void> {
const startedAt = Date.now()
while (Date.now() - startedAt < timeoutMs) {
if (predicate()) {
return
}
await Bun.sleep(10)
}
throw new Error('Timed out waiting for useApiKeyVerification test state')
}
afterEach(() => {
mock.restore()
})
test('useApiKeyVerification resets stale missing status when the session switches to a third-party provider', async () => {
const authState: AuthState = {
anthropicAuthEnabled: true,
claudeSubscriber: false,
}
const seenStatuses: string[] = []
mock.module('../utils/auth.js', () => ({
getAnthropicApiKeyWithSource: () => ({
key: authState.key,
source: authState.source,
}),
getApiKeyFromApiKeyHelper: async () => undefined,
isAnthropicAuthEnabled: () => authState.anthropicAuthEnabled,
isClaudeAISubscriber: () => authState.claudeSubscriber,
}))
mock.module('../bootstrap/state.js', () => ({
getIsNonInteractiveSession: () => false,
}))
mock.module('../services/api/claude.js', () => ({
verifyApiKey: async () => true,
}))
// @ts-expect-error cache-busting query string for Bun module mocks
const { useApiKeyVerification } = await import(
'./useApiKeyVerification.ts?switch-to-third-party'
)
function Harness(): React.ReactNode {
const { status } = useApiKeyVerification()
React.useEffect(() => {
seenStatuses.push(status)
}, [status])
return <Text>{status}</Text>
}
const { stdout, stdin } = createTestStreams()
const root = await createRoot({
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
root.render(<Harness />)
await waitForCondition(() => seenStatuses.includes('missing'))
authState.anthropicAuthEnabled = false
root.render(<Harness />)
await waitForCondition(() => seenStatuses.includes('valid'))
root.unmount()
stdin.end()
stdout.end()
await Bun.sleep(0)
expect(seenStatuses[0]).toBe('missing')
expect(seenStatuses).toContain('valid')
})

View File

@@ -1,4 +1,4 @@
import { useCallback, useState } from 'react'
import { useCallback, useEffect, useState } from 'react'
import { getIsNonInteractiveSession } from '../bootstrap/state.js'
import { verifyApiKey } from '../services/api/claude.js'
import {
@@ -21,24 +21,43 @@ export type ApiKeyVerificationResult = {
error: Error | null
}
export function useApiKeyVerification(): ApiKeyVerificationResult {
const [status, setStatus] = useState<VerificationStatus>(() => {
if (!isAnthropicAuthEnabled() || isClaudeAISubscriber()) {
return 'valid'
}
// Use skipRetrievingKeyFromApiKeyHelper to avoid executing apiKeyHelper
// before trust dialog is shown (security: prevents RCE via settings.json)
const { key, source } = getAnthropicApiKeyWithSource({
skipRetrievingKeyFromApiKeyHelper: true,
})
// If apiKeyHelper is configured, we have a key source even though we
// haven't executed it yet - return 'loading' to indicate we'll verify later
if (key || source === 'apiKeyHelper') {
return 'loading'
}
return 'missing'
function getInitialVerificationStatus(): VerificationStatus {
if (!isAnthropicAuthEnabled() || isClaudeAISubscriber()) {
return 'valid'
}
// Use skipRetrievingKeyFromApiKeyHelper to avoid executing apiKeyHelper
// before trust dialog is shown (security: prevents RCE via settings.json)
const { key, source } = getAnthropicApiKeyWithSource({
skipRetrievingKeyFromApiKeyHelper: true,
})
// If apiKeyHelper is configured, we have a key source even though we
// haven't executed it yet - return 'loading' to indicate we'll verify later
if (key || source === 'apiKeyHelper') {
return 'loading'
}
return 'missing'
}
export function useApiKeyVerification(): ApiKeyVerificationResult {
const [status, setStatus] = useState<VerificationStatus>(
getInitialVerificationStatus,
)
const [error, setError] = useState<Error | null>(null)
const anthropicVerificationEnabled =
isAnthropicAuthEnabled() && !isClaudeAISubscriber()
useEffect(() => {
const nextStatus = anthropicVerificationEnabled
? getInitialVerificationStatus()
: 'valid'
setStatus(currentStatus =>
currentStatus === nextStatus ? currentStatus : nextStatus,
)
if (nextStatus !== 'error') {
setError(null)
}
}, [anthropicVerificationEnabled])
const verify = useCallback(async (): Promise<void> => {
if (!isAnthropicAuthEnabled() || isClaudeAISubscriber()) {

View File

@@ -19,7 +19,7 @@ async function _temp() {
logForDebugging("Showing marketplace config save failure notification");
notifs.push({
key: "marketplace-config-save-failed",
jsx: <Text color="error">Failed to save marketplace retry info · Check ~/.claude.json permissions</Text>,
jsx: <Text color="error">Failed to save marketplace retry info · Check ~/.openclaude.json permissions</Text>,
priority: "immediate",
timeoutMs: 10000
});

View File

@@ -1,5 +1,8 @@
import { expect, test } from 'bun:test'
import { supportsClipboardImageFallback } from './usePasteHandler.ts'
import {
shouldHandleInputAsPaste,
supportsClipboardImageFallback,
} from './usePasteHandler.ts'
test('supports clipboard image fallback on Windows', () => {
expect(supportsClipboardImageFallback('windows')).toBe(true)
@@ -20,3 +23,42 @@ test('does not support clipboard image fallback on WSL', () => {
test('does not support clipboard image fallback on unknown platforms', () => {
expect(supportsClipboardImageFallback('unknown')).toBe(false)
})
test('does not treat a bracketed paste as pending when no paste handlers are provided', () => {
expect(
shouldHandleInputAsPaste({
hasTextPasteHandler: false,
hasImagePasteHandler: false,
inputLength: 'kimi-k2.5'.length,
pastePending: false,
hasImageFilePath: false,
isFromPaste: true,
}),
).toBe(false)
})
test('treats bracketed text paste as pending when a text paste handler exists', () => {
expect(
shouldHandleInputAsPaste({
hasTextPasteHandler: true,
hasImagePasteHandler: false,
inputLength: 'kimi-k2.5'.length,
pastePending: false,
hasImageFilePath: false,
isFromPaste: true,
}),
).toBe(true)
})
test('treats image path paste as pending when only an image handler exists', () => {
expect(
shouldHandleInputAsPaste({
hasTextPasteHandler: false,
hasImagePasteHandler: true,
inputLength: 'C:\\Users\\jat\\image.png'.length,
pastePending: false,
hasImageFilePath: true,
isFromPaste: false,
}),
).toBe(true)
})

View File

@@ -35,6 +35,24 @@ type PasteHandlerProps = {
) => void
}
export function shouldHandleInputAsPaste(options: {
hasTextPasteHandler: boolean
hasImagePasteHandler: boolean
inputLength: number
pastePending: boolean
hasImageFilePath: boolean
isFromPaste: boolean
}): boolean {
return (
(options.hasTextPasteHandler &&
(options.inputLength > PASTE_THRESHOLD ||
options.pastePending ||
options.hasImageFilePath ||
options.isFromPaste)) ||
(options.hasImagePasteHandler && options.hasImageFilePath)
)
}
export function usePasteHandler({
onPaste,
onInput,
@@ -236,11 +254,6 @@ export function usePasteHandler({
// The keypress parser sets isPasted=true for content within bracketed paste.
const isFromPaste = event.keypress.isPasted
// If this is pasted content, set isPasting state for UI feedback
if (isFromPaste) {
setIsPasting(true)
}
// Handle large pastes (>PASTE_THRESHOLD chars)
// Usually we get one or two input characters at a time. If we
// get more than the threshold, the user has probably pasted.
@@ -268,6 +281,7 @@ export function usePasteHandler({
canFallbackToClipboardImage &&
onImagePaste
) {
setIsPasting(true)
checkClipboardForImage()
// Reset isPasting since there's no text content to process
setIsPasting(false)
@@ -275,14 +289,17 @@ export function usePasteHandler({
}
// Check if we should handle as paste (from bracketed paste, large input, or continuation)
const shouldHandleAsPaste =
onPaste &&
(input.length > PASTE_THRESHOLD ||
pastePendingRef.current ||
hasImageFilePath ||
isFromPaste)
const shouldHandleAsPaste = shouldHandleInputAsPaste({
hasTextPasteHandler: Boolean(onPaste),
hasImagePasteHandler: Boolean(onImagePaste),
inputLength: input.length,
pastePending: pastePendingRef.current,
hasImageFilePath,
isFromPaste,
})
if (shouldHandleAsPaste) {
setIsPasting(true)
pastePendingRef.current = true
setPasteState(({ chunks, timeoutId }) => {
return {

View File

@@ -434,7 +434,7 @@ export function useReplBridge(messages: Message[], setMessages: (action: React.S
if (!store.getState().toolPermissionContext.isBypassPermissionsModeAvailable) {
return {
ok: false,
error: 'Cannot set permission mode to bypassPermissions because the session was not launched with --dangerously-skip-permissions'
error: 'Cannot set permission mode to bypassPermissions. Enable it with --allow-dangerously-skip-permissions or set permissions.allowBypassPermissionsMode in settings.json'
};
}
}

View File

@@ -1,34 +1,23 @@
/**
* Swarm Permission Poller Hook
* Swarm Permission Callback Registry
*
* This hook polls for permission responses from the team leader when running
* as a worker agent in a swarm. When a response is received, it calls the
* appropriate callback (onAllow/onReject) to continue execution.
* Manages callback registrations for permission requests and responses
* in agent swarms. Responses are delivered exclusively via the mailbox
* system (useInboxPoller → processMailboxPermissionResponse).
*
* This hook should be used in conjunction with the worker-side integration
* in useCanUseTool.ts, which creates pending requests that this hook monitors.
* The legacy file-based polling (resolved/ directory) has been removed
* because it created an unauthenticated attack surface — any local process
* could forge approval files. The mailbox path is the sole active channel.
*/
import { useCallback, useEffect, useRef } from 'react'
import { useInterval } from 'usehooks-ts'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import {
type PermissionUpdate,
permissionUpdateSchema,
} from '../utils/permissions/PermissionUpdateSchema.js'
import {
isSwarmWorker,
type PermissionResponse,
pollForResponse,
removeWorkerResponse,
} from '../utils/swarm/permissionSync.js'
import { getAgentName, getTeamName } from '../utils/teammate.js'
const POLL_INTERVAL_MS = 500
/**
* Validate permissionUpdates from external sources (mailbox IPC, disk polling).
* Validate permissionUpdates from external sources (mailbox IPC).
* Malformed entries from buggy/old teammate processes are filtered out rather
* than propagated unchecked into callback.onAllow().
*/
@@ -225,106 +214,9 @@ export function processSandboxPermissionResponse(params: {
return true
}
/**
* Process a permission response by invoking the registered callback
*/
function processResponse(response: PermissionResponse): boolean {
const callback = pendingCallbacks.get(response.requestId)
if (!callback) {
logForDebugging(
`[SwarmPermissionPoller] No callback registered for request ${response.requestId}`,
)
return false
}
logForDebugging(
`[SwarmPermissionPoller] Processing response for request ${response.requestId}: ${response.decision}`,
)
// Remove from registry before invoking callback
pendingCallbacks.delete(response.requestId)
if (response.decision === 'approved') {
const permissionUpdates = parsePermissionUpdates(response.permissionUpdates)
const updatedInput = response.updatedInput
callback.onAllow(updatedInput, permissionUpdates)
} else {
callback.onReject(response.feedback)
}
return true
}
/**
* Hook that polls for permission responses when running as a swarm worker.
*
* This hook:
* 1. Only activates when isSwarmWorker() returns true
* 2. Polls every 500ms for responses
* 3. When a response is found, invokes the registered callback
* 4. Cleans up the response file after processing
*/
export function useSwarmPermissionPoller(): void {
const isProcessingRef = useRef(false)
const poll = useCallback(async () => {
// Don't poll if not a swarm worker
if (!isSwarmWorker()) {
return
}
// Prevent concurrent polling
if (isProcessingRef.current) {
return
}
// Don't poll if no callbacks are registered
if (pendingCallbacks.size === 0) {
return
}
isProcessingRef.current = true
try {
const agentName = getAgentName()
const teamName = getTeamName()
if (!agentName || !teamName) {
return
}
// Check each pending request for a response
for (const [requestId, _callback] of pendingCallbacks) {
const response = await pollForResponse(requestId, agentName, teamName)
if (response) {
// Process the response
const processed = processResponse(response)
if (processed) {
// Clean up the response from the worker's inbox
await removeWorkerResponse(requestId, agentName, teamName)
}
}
}
} catch (error) {
logForDebugging(
`[SwarmPermissionPoller] Error during poll: ${errorMessage(error)}`,
)
} finally {
isProcessingRef.current = false
}
}, [])
// Only poll if we're a swarm worker
const shouldPoll = isSwarmWorker()
useInterval(() => void poll(), shouldPoll ? POLL_INTERVAL_MS : null)
// Initial poll on mount
useEffect(() => {
if (isSwarmWorker()) {
void poll()
}
}, [poll])
}
// Legacy file-based polling (useSwarmPermissionPoller, processResponse)
// has been removed. Permission responses are now delivered exclusively
// via the mailbox system:
// Leader: sendPermissionResponseViaMailbox() → writeToMailbox()
// Worker: useInboxPoller → processMailboxPermissionResponse()
// See: fix(security) — remove unauthenticated file-based permission channel
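
For orientation, a minimal sketch of the registry pattern that survives this change. The type shapes and the register function name are simplified stand-ins, not the module's real exports, and the real path validates permissionUpdates (via parsePermissionUpdates) before invoking onAllow:

  // Simplified stand-in types; the real PermissionResponse/PermissionUpdate
  // shapes live in permissionSync.ts and PermissionUpdateSchema.ts.
  type PermissionCallback = {
    onAllow: (updatedInput?: unknown, updates?: unknown[]) => void
    onReject: (feedback?: string) => void
  }
  const pendingCallbacks = new Map<string, PermissionCallback>()

  export function registerPermissionCallback(
    requestId: string,
    callback: PermissionCallback,
  ): void {
    pendingCallbacks.set(requestId, callback)
  }

  // Invoked by the mailbox path (useInboxPoller) when the leader responds.
  export function processPermissionResponseSketch(response: {
    requestId: string
    decision: 'approved' | 'rejected'
    updatedInput?: unknown
    feedback?: string
  }): boolean {
    const callback = pendingCallbacks.get(response.requestId)
    if (!callback) return false
    // Delete before invoking so a re-entrant response cannot double-fire.
    pendingCallbacks.delete(response.requestId)
    if (response.decision === 'approved') {
      callback.onAllow(response.updatedInput, [])
    } else {
      callback.onReject(response.feedback)
    }
    return true
  }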

View File

@@ -11,14 +11,16 @@ const execFileNoThrowMock = mock(
async () => ({ code: 0, stdout: '', stderr: '' }),
)
mock.module('../../utils/execFileNoThrow.js', () => ({
execFileNoThrow: execFileNoThrowMock,
execFileNoThrowWithCwd: execFileNoThrowMock,
}))
function installOscMocks(): void {
mock.module('../../utils/execFileNoThrow.js', () => ({
execFileNoThrow: execFileNoThrowMock,
execFileNoThrowWithCwd: execFileNoThrowMock,
}))
mock.module('../../utils/tempfile.js', () => ({
generateTempFilePath: generateTempFilePathMock,
}))
mock.module('../../utils/tempfile.js', () => ({
generateTempFilePath: generateTempFilePathMock,
}))
}
async function importFreshOscModule() {
return import(`./osc.ts?ts=${Date.now()}-${Math.random()}`)
@@ -45,6 +47,7 @@ async function waitForExecCall(
describe('Windows clipboard fallback', () => {
beforeEach(() => {
installOscMocks()
execFileNoThrowMock.mockClear()
generateTempFilePathMock.mockClear()
process.env = { ...originalEnv }
@@ -62,14 +65,12 @@ describe('Windows clipboard fallback', () => {
const { setClipboard } = await importFreshOscModule()
await setClipboard('Привет мир')
await flushClipboardCopy()
const windowsCall = await waitForExecCall('powershell')
expect(execFileNoThrowMock.mock.calls.some(([cmd]) => cmd === 'clip')).toBe(
false,
)
expect(
execFileNoThrowMock.mock.calls.some(([cmd]) => cmd === 'powershell'),
).toBe(true)
expect(windowsCall).toBeDefined()
})
test('passes Windows clipboard text through a UTF-8 temp file instead of stdin', async () => {
@@ -97,6 +98,7 @@ describe('Windows clipboard fallback', () => {
describe('clipboard path behavior remains stable', () => {
beforeEach(() => {
installOscMocks()
execFileNoThrowMock.mockClear()
process.env = { ...originalEnv }
delete process.env['SSH_CONNECTION']

View File

@@ -481,16 +481,16 @@ export const CLEAR_TAB_STATUS = osc(
)
/**
* Gate for emitting OSC 21337 (tab-status indicator). Ant-only while the
* spec is unstable. Terminals that don't recognize it discard silently, so
* emission is safe unconditionally — we don't gate on terminal detection
* Gate for emitting OSC 21337 (tab-status indicator). Currently disabled
* (spec is unstable). Terminals that don't recognize it discard silently,
* so re-enabling later is safe without gating on terminal detection
* since support is expected across several terminals.
*
* Callers must wrap output with wrapForMultiplexer() so tmux/screen
* DCS-passthrough carries the sequence to the outer terminal.
*/
export function supportsTabStatus(): boolean {
return process.env.USER_TYPE === 'ant'
return false
}
/**

View File

@@ -74,7 +74,7 @@ export function isTeamMemoryEnabled(): boolean {
if (!isAutoMemoryEnabled()) {
return false
}
return getFeatureValue_CACHED_MAY_BE_STALE('tengu_herring_clock', false)
return getFeatureValue_CACHED_MAY_BE_STALE('tengu_herring_clock', true)
}
/**

View File

@@ -12,7 +12,7 @@ import {
* One-shot migration: clear skipAutoPermissionPrompt for users who accepted
* the old 2-option AutoModeOptInDialog but don't have auto as their default.
* Re-surfaces the dialog so they see the new "make it my default mode" option.
* Guard lives in GlobalConfig (~/.claude.json), not settings.json, so it
* Guard lives in GlobalConfig (~/.openclaude.json), not settings.json, so it
* survives settings resets and doesn't re-arm itself.
*
* Only runs when tengu_auto_mode_config.enabled === 'enabled'. For 'opt-in'
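
A compact sketch of the one-shot guard pattern described above; the field names here are hypothetical, the real accessors live behind GlobalConfig helpers, and the real migration also checks that the user accepted the old dialog without making auto their default:

  // Hypothetical field names; the guard is persisted in the global config
  // file so a settings.json reset cannot re-arm the migration.
  type GlobalConfigSketch = {
    autoModeDialogMigrationDone?: boolean
    skipAutoPermissionPrompt?: boolean
  }

  function migrateAutoModeDialog(config: GlobalConfigSketch): GlobalConfigSketch {
    if (config.autoModeDialogMigrationDone) {
      return config // guard already set: never run twice
    }
    return {
      ...config,
      skipAutoPermissionPrompt: false, // re-surface the dialog
      autoModeDialogMigrationDone: true, // set the one-shot guard
    }
  }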

View File

@@ -160,6 +160,7 @@ function* yieldMissingToolResultBlocks(
* rules, ye will be punished with an entire day of debugging and hair pulling.
*/
const MAX_OUTPUT_TOKENS_RECOVERY_LIMIT = 3
const MAX_CONTINUATION_NUDGES = 3
/**
* Is this a max_output_tokens error message? If so, the streaming loop should
@@ -209,6 +210,10 @@ type State = {
pendingToolUseSummary: Promise<ToolUseSummaryMessage | null> | undefined
stopHookActive: boolean | undefined
turnCount: number
// Count of consecutive continuation nudges within the current turn.
// Capped at MAX_CONTINUATION_NUDGES to prevent infinite nudge loops
// when the model keeps matching continuation signals without tool calls.
continuationNudgeCount: number
// Why the previous iteration continued. Undefined on first iteration.
// Lets tests assert recovery paths fired without inspecting message contents.
transition: Continue | undefined
@@ -272,6 +277,7 @@ async function* queryLoop(
maxOutputTokensRecoveryCount: 0,
hasAttemptedReactiveCompact: false,
turnCount: 1,
continuationNudgeCount: 0,
pendingToolUseSummary: undefined,
transition: undefined,
}
@@ -645,6 +651,35 @@ async function* queryLoop(
}
}
// Safety net: when auto-compact's circuit breaker has tripped (3+
// consecutive failures), the normal blocking check above is gated on
// reactiveCompact. If reactiveCompact is enabled but then fails as well
// (or is disabled), the oversized context goes straight to the API and
// gets a 500. This check catches that gap: if compaction is exhausted
// and context is still over the autocompact threshold, block immediately
// with a clear message instead of burning an API call that will 500.
if (
tracking?.consecutiveFailures !== undefined &&
tracking.consecutiveFailures >= 3 &&
isAutoCompactEnabled()
) {
const model = toolUseContext.options.mainLoopModel
const tokenUsage = tokenCountWithEstimation(messagesForQuery) - snipTokensFreed
const { isAboveAutoCompactThreshold } = calculateTokenWarningState(
tokenUsage,
model,
)
if (isAboveAutoCompactThreshold) {
yield createAssistantAPIErrorMessage({
content:
'The conversation has exceeded the context limit and automatic compaction has failed. ' +
'Press esc twice to go up a few messages and try again, or start a new session with /new.',
error: 'invalid_request',
})
return { reason: 'blocking_limit' }
}
}
let attemptWithFallback = true
queryCheckpoint('query_api_loop_start')
@@ -1102,6 +1137,7 @@ async function* queryLoop(
pendingToolUseSummary: undefined,
stopHookActive: undefined,
turnCount,
continuationNudgeCount: state.continuationNudgeCount,
transition: {
reason: 'collapse_drain_retry',
committed: drained.committed,
@@ -1155,6 +1191,7 @@ async function* queryLoop(
pendingToolUseSummary: undefined,
stopHookActive: undefined,
turnCount,
continuationNudgeCount: state.continuationNudgeCount,
transition: { reason: 'reactive_compact_retry' },
}
state = next
@@ -1210,6 +1247,7 @@ async function* queryLoop(
pendingToolUseSummary: undefined,
stopHookActive: undefined,
turnCount,
continuationNudgeCount: state.continuationNudgeCount,
transition: { reason: 'max_output_tokens_escalate' },
}
state = next
@@ -1238,6 +1276,7 @@ async function* queryLoop(
pendingToolUseSummary: undefined,
stopHookActive: undefined,
turnCount,
continuationNudgeCount: state.continuationNudgeCount,
transition: {
reason: 'max_output_tokens_recovery',
attempt: maxOutputTokensRecoveryCount + 1,
@@ -1295,6 +1334,7 @@ async function* queryLoop(
pendingToolUseSummary: undefined,
stopHookActive: true,
turnCount,
continuationNudgeCount: state.continuationNudgeCount,
transition: { reason: 'stop_hook_blocking' },
}
state = next
@@ -1331,6 +1371,7 @@ async function* queryLoop(
pendingToolUseSummary: undefined,
stopHookActive: undefined,
turnCount,
continuationNudgeCount: state.continuationNudgeCount,
transition: { reason: 'token_budget_continuation' },
}
continue
@@ -1350,6 +1391,77 @@ async function* queryLoop(
}
}
// Continuation nudge: detect when the model signals intent to continue
// (e.g., "so now I have to do it", "let me now...", "I'll need to...")
// but returned no tool calls. This prevents premature task completion.
//
// Guard: capped at MAX_CONTINUATION_NUDGES to prevent infinite loops
// when the model keeps matching signals without ever calling tools.
if (
assistantMessages.length > 0 &&
turnCount < (maxTurns ?? Infinity) &&
state.continuationNudgeCount < MAX_CONTINUATION_NUDGES
) {
const lastAssistant = assistantMessages.at(-1)
if (lastAssistant?.type === 'assistant') {
const lastText = lastAssistant.message.content
.filter((b): b is { type: 'text'; text: string } => b.type === 'text')
.map(b => b.text)
.join(' ')
.toLowerCase()
// Tightened patterns: require explicit action verbs and exclude
// common explanatory phrasing to reduce false positives.
const continuationSignals = [
// Only match "so now I/let me/we" followed by an action verb
/\bso now (i|let me|we) (need to|have to|should|must|will) (do|create|write|edit|update|fix|implement|add|run|check|make|build|set up)\b/,
// "now I'll" + action (not "now I'll explain" etc.)
/\bnow i('ll| will) (do|create|write|edit|update|fix|implement|add|run|check|make|build|set up|go|proceed)\b/,
// "let me" + action (not "let me think/explain/show")
/\blet me (go ahead and |now )?(do|create|write|edit|update|fix|implement|add|run|check|make|build|set up|proceed)\b/,
// "I'll/I need to/I have to" + action, only if message is short (<80 chars)
...(lastText.length < 80
? [/\b(i('ll| will| need to| have to| must) (now )?(do|create|write|edit|update|fix|implement|add|run|check|make|build|set up))\b/]
: []),
// "time to" + action
/\btime to (do|create|write|edit|update|fix|implement|add|run|check|make|build|get started|begin)\b/,
// "next, I'll/let me" + action, only if message is short
...(lastText.length < 80
? [/\bnext,?\s+(i('ll| will)|let me|i need to) (do|create|write|edit|update|fix|implement|add|run|check|make|build)\b/]
: []),
]
// Don't nudge if the text contains completion markers
const completionMarkers = /\b(done|finished|completed|complete|summary|that's all|that is all|all set|hope this helps|let me know if)\b/
if (completionMarkers.test(lastText)) {
// Model signaled completion — don't nudge
} else if (continuationSignals.some(re => re.test(lastText))) {
logForDebugging(
`Continuation nudge triggered (${state.continuationNudgeCount + 1}/${MAX_CONTINUATION_NUDGES}): model said "${lastText.slice(-120)}" without tool calls`,
)
const nudge = createUserMessage({
content: 'Continue with the task. Use the appropriate tools to proceed.',
isMeta: true,
})
const next: State = {
messages: [...messagesForQuery, ...assistantMessages, nudge],
toolUseContext,
autoCompactTracking: tracking,
maxOutputTokensRecoveryCount: 0,
hasAttemptedReactiveCompact: false,
maxOutputTokensOverride: undefined,
pendingToolUseSummary: undefined,
stopHookActive: undefined,
turnCount,
continuationNudgeCount: state.continuationNudgeCount + 1,
transition: { reason: 'continuation_nudge' },
}
state = next
continue
}
}
}
return { reason: 'completed' }
}
@@ -1715,6 +1827,7 @@ async function* queryLoop(
turnCount: nextTurnCount,
maxOutputTokensRecoveryCount: 0,
hasAttemptedReactiveCompact: false,
continuationNudgeCount: 0,
pendingToolUseSummary: nextPendingToolUseSummary,
maxOutputTokensOverride: undefined,
stopHookActive,

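To make the tightened matching concrete, here is a standalone check using two of the patterns copied verbatim from the loop above, plus the completion-marker veto (the loop additionally requires that no tool calls were returned and enforces the MAX_CONTINUATION_NUDGES cap):

  // Patterns copied from queryLoop above; subset only, for illustration.
  const signals = [
    /\bnow i('ll| will) (do|create|write|edit|update|fix|implement|add|run|check|make|build|set up|go|proceed)\b/,
    /\blet me (go ahead and |now )?(do|create|write|edit|update|fix|implement|add|run|check|make|build|set up|proceed)\b/,
  ]
  const completion = /\b(done|finished|completed|complete|summary|that's all|that is all|all set|hope this helps|let me know if)\b/

  function wouldNudge(text: string): boolean {
    const lower = text.toLowerCase()
    if (completion.test(lower)) return false // completion veto wins
    return signals.some(re => re.test(lower))
  }

  wouldNudge('Let me now update the config file.') // true: nudge
  wouldNudge('Let me know if you need anything else.') // false: completion marker
  wouldNudge('Let me explain the tradeoffs.') // false: no action verb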
View File

@@ -196,7 +196,7 @@ const PROACTIVE_NO_OP_SUBSCRIBE = (_cb: () => void) => () => { };
const PROACTIVE_FALSE = () => false;
const SUGGEST_BG_PR_NOOP = (_p: string, _n: string): boolean => false;
const useProactive = feature('PROACTIVE') || feature('KAIROS') ? require('../proactive/useProactive.js').useProactive : null;
const useScheduledTasks = feature('AGENT_TRIGGERS') ? require('../hooks/useScheduledTasks.js').useScheduledTasks : null;
const useScheduledTasks = require('../hooks/useScheduledTasks.js').useScheduledTasks;
/* eslint-enable @typescript-eslint/no-require-imports */
import { isAgentSwarmsEnabled } from '../utils/agentSwarmsEnabled.js';
import { useTaskListWatcher } from '../hooks/useTaskListWatcher.js';
@@ -3873,7 +3873,7 @@ export function REPL({
// empty to non-empty, not on every length change -- otherwise a render loop
// (concurrent onQuery thrashing, etc.) spams saveGlobalConfig, which hits
// ELOCKED under concurrent sessions and falls back to unlocked writes.
// That write storm is the primary trigger for ~/.claude.json corruption
// That write storm is the primary trigger for ~/.openclaude.json corruption
// (GH #3117).
const hasCountedQueueUseRef = useRef(false);
useEffect(() => {
@@ -4076,21 +4076,13 @@ export function REPL({
});
// Scheduled tasks from .claude/scheduled_tasks.json (CronCreate/Delete/List)
if (feature('AGENT_TRIGGERS')) {
// Assistant mode bypasses the isLoading gate (the proactive tick →
// Sleep → tick loop would otherwise starve the scheduler).
// kairosEnabled is set once in initialState (main.tsx) and never mutated — no
// subscription needed. The tengu_kairos_cron runtime gate is checked inside
// useScheduledTasks's effect (not here) since wrapping a hook call in a dynamic
// condition would break rules-of-hooks.
const assistantMode = store.getState().kairosEnabled;
// biome-ignore lint/correctness/useHookAtTopLevel: feature() is a compile-time constant
useScheduledTasks!({
isLoading,
assistantMode,
setMessages
});
}
// and session-only /loop runs.
const assistantMode = store.getState().kairosEnabled;
useScheduledTasks({
isLoading,
assistantMode,
setMessages
});
// Note: Permission polling is now handled by useInboxPoller
// - Workers receive permission responses via mailbox messages

View File

@@ -334,7 +334,7 @@ async function processRemoteEvalPayload(
// Empty object is truthy — without the length check, `{features: {}}`
// (transient server bug, truncated response) would pass, clear the maps
// below, return true, and syncRemoteEvalToDisk would wholesale-write `{}`
// to disk: total flag blackout for every process sharing ~/.claude.json.
// to disk: total flag blackout for every process sharing ~/.openclaude.json.
if (!payload?.features || Object.keys(payload.features).length === 0) {
return false
}

View File

@@ -116,9 +116,21 @@ async function fetchBootstrapAPI(): Promise<BootstrapResponse | null> {
return parsed.data
})
} catch (error) {
logForDebugging(
`[Bootstrap] Fetch failed: ${axios.isAxiosError(error) ? (error.response?.status ?? error.code) : 'unknown'}`,
)
if (axios.isAxiosError(error)) {
const status = error.response?.status ?? 'no-response'
const code = error.code ?? 'unknown-code'
const method = error.config?.method?.toUpperCase() ?? 'UNKNOWN'
const requestUrl = error.config?.url ?? 'unknown-url'
const message = error.message ?? 'unknown axios error'
logForDebugging(
`[Bootstrap] Fetch failed: status=${status} code=${code} method=${method} url=${requestUrl} message=${message}`,
)
} else {
const message = error instanceof Error ? error.message : String(error)
logForDebugging(`[Bootstrap] Fetch failed: ${message}`)
}
throw error
}
}

View File

@@ -23,6 +23,7 @@ import { randomUUID } from 'crypto'
import {
getAPIProvider,
isFirstPartyAnthropicBaseUrl,
isGithubNativeAnthropicMode,
} from 'src/utils/model/providers.js'
import {
getAttributionHeader,
@@ -334,8 +335,13 @@ export function getPromptCachingEnabled(model: string): boolean {
// Prompt caching is an Anthropic-specific feature. Third-party providers
// do not understand cache_control blocks and strict backends (e.g. Azure
// Foundry) reject or flag requests that contain them.
//
// Exception: when the GitHub provider is configured in native Anthropic API
// mode (CLAUDE_CODE_GITHUB_ANTHROPIC_API=1), requests are sent in Anthropic
// format, so cache_control blocks are supported.
const provider = getAPIProvider()
if (provider !== 'firstParty' && provider !== 'bedrock' && provider !== 'vertex') {
const isNativeGithub = isGithubNativeAnthropicMode(model)
if (provider !== 'firstParty' && provider !== 'bedrock' && provider !== 'vertex' && !isNativeGithub) {
return false
}
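For reference, cache_control is per-content-block metadata in Anthropic's Messages API; a system block carrying a cache breakpoint looks like the sketch below, and it is exactly this field that strict chat-completions backends reject, which is why the gate above disables caching for them:

  // A system block with a cache breakpoint, in Anthropic request shape.
  const system = [
    {
      type: 'text',
      text: 'You are a coding agent. ...long, stable preamble...',
      cache_control: { type: 'ephemeral' },
    },
  ]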
@@ -1211,7 +1217,7 @@ async function* queryModel(
cachedMCEnabled = featureEnabled && modelSupported
const config = getCachedMCConfig()
logForDebugging(
`Cached MC gate: enabled=${featureEnabled} modelSupported=${modelSupported} model=${options.model} supportedModels=${jsonStringify(config.supportedModels)}`,
`Cached MC gate: enabled=${featureEnabled} modelSupported=${modelSupported} model=${options.model} supportedModels=${jsonStringify(config?.supportedModels)}`,
)
}

View File

@@ -14,6 +14,7 @@ import { getSmallFastModel } from 'src/utils/model/model.js'
import {
getAPIProvider,
isFirstPartyAnthropicBaseUrl,
isGithubNativeAnthropicMode,
} from 'src/utils/model/providers.js'
import { getProxyFetchOptions } from 'src/utils/proxy.js'
import {
@@ -174,6 +175,25 @@ export async function getAnthropicClient({
providerOverride,
}) as unknown as Anthropic
}
// GitHub provider in native Anthropic API mode: send requests in Anthropic
// format so cache_control blocks are honoured and prompt caching works.
// Requires the GitHub endpoint (OPENAI_BASE_URL) to support Anthropic's
// messages API — set CLAUDE_CODE_GITHUB_ANTHROPIC_API=1 to opt in.
if (isGithubNativeAnthropicMode(model)) {
const githubBaseUrl =
process.env.OPENAI_BASE_URL?.replace(/\/$/, '') ??
'https://api.githubcopilot.com'
const githubToken =
process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN ?? ''
const nativeArgs: ConstructorParameters<typeof Anthropic>[0] = {
...ARGS,
baseURL: githubBaseUrl,
authToken: githubToken,
// No apiKey — we authenticate via Bearer token (authToken)
apiKey: null,
}
return new Anthropic(nativeArgs)
}
if (
isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) ||

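isGithubNativeAnthropicMode() itself is not shown in this diff; judging from the comments here and in getPromptCachingEnabled, a plausible shape (an assumption, not the actual implementation) is:

  // Assumed shape only: the env opt-in combined with the GitHub provider
  // being active. The real function may also consult the resolved model.
  function isGithubNativeAnthropicModeSketch(_model: string): boolean {
    const optIn = process.env.CLAUDE_CODE_GITHUB_ANTHROPIC_API
    const useGithub = process.env.CLAUDE_CODE_USE_GITHUB
    return optIn === '1' && useGithub === '1'
  }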
View File

@@ -0,0 +1,166 @@
import { createServer } from 'node:http'
import { afterEach, expect, mock, test } from 'bun:test'
import { CodexOAuthService } from './codexOAuth.js'
const originalFetch = globalThis.fetch
const originalCallbackPort = process.env.CODEX_OAUTH_CALLBACK_PORT
const originalClientId = process.env.CODEX_OAUTH_CLIENT_ID
afterEach(() => {
mock.restore()
globalThis.fetch = originalFetch
if (originalCallbackPort === undefined) {
delete process.env.CODEX_OAUTH_CALLBACK_PORT
} else {
process.env.CODEX_OAUTH_CALLBACK_PORT = originalCallbackPort
}
if (originalClientId === undefined) {
delete process.env.CODEX_OAUTH_CLIENT_ID
} else {
process.env.CODEX_OAUTH_CLIENT_ID = originalClientId
}
})
async function getFreePort(): Promise<number> {
return await new Promise((resolve, reject) => {
const server = createServer()
server.once('error', reject)
server.listen(0, '127.0.0.1', () => {
const address = server.address()
if (!address || typeof address === 'string') {
server.close(() => reject(new Error('Failed to allocate test port.')))
return
}
const { port } = address
server.close(error => {
if (error) {
reject(error)
return
}
resolve(port)
})
})
})
}
function buildCallbackRequest(authUrl: string): string {
const authorizeUrl = new URL(authUrl)
const redirectUri = authorizeUrl.searchParams.get('redirect_uri')
const state = authorizeUrl.searchParams.get('state')
if (!redirectUri || !state) {
throw new Error('Codex OAuth test did not receive a valid authorization URL.')
}
const callbackUrl = new URL(redirectUri)
callbackUrl.searchParams.set('code', 'auth-code')
callbackUrl.searchParams.set('state', state)
return callbackUrl.toString()
}
test('serves updated success copy after a successful Codex OAuth flow', async () => {
const callbackPort = await getFreePort()
process.env.CODEX_OAUTH_CALLBACK_PORT = String(callbackPort)
process.env.CODEX_OAUTH_CLIENT_ID = 'test-client-id'
globalThis.fetch = mock(async (input, init) => {
const url = String(input)
if (url.startsWith('http://localhost:')) {
return originalFetch(input, init)
}
return new Response(
JSON.stringify({
access_token: 'access-token',
refresh_token: 'refresh-token',
}),
{
status: 200,
headers: { 'Content-Type': 'application/json' },
},
)
}) as typeof fetch
const service = new CodexOAuthService()
let callbackResponsePromise!: Promise<Response>
const flowPromise = service.startOAuthFlow(async authUrl => {
callbackResponsePromise = originalFetch(buildCallbackRequest(authUrl))
})
const tokens = await flowPromise
const callbackResponse = await callbackResponsePromise
const html = await callbackResponse.text()
expect(tokens.accessToken).toBe('access-token')
expect(tokens.refreshToken).toBe('refresh-token')
expect(html).toContain('You can return to OpenClaude now.')
expect(html).toContain(
'OpenClaude will finish activating your new Codex OAuth login.',
)
expect(html).not.toContain('continue automatically')
})
test('cancellation during token exchange returns a cancelled page and rejects the flow', async () => {
const callbackPort = await getFreePort()
process.env.CODEX_OAUTH_CALLBACK_PORT = String(callbackPort)
process.env.CODEX_OAUTH_CLIENT_ID = 'test-client-id'
let resolveFetchStart!: () => void
const fetchStarted = new Promise<void>(resolve => {
resolveFetchStart = resolve
})
globalThis.fetch = mock((input, init) => {
const url = String(input)
if (url.startsWith('http://localhost:')) {
return originalFetch(input, init)
}
return new Promise<Response>((_resolve, reject) => {
resolveFetchStart()
const signal = init?.signal
if (!signal) {
return
}
if (signal.aborted) {
reject(signal.reason)
return
}
signal.addEventListener(
'abort',
() => {
reject(signal.reason)
},
{ once: true },
)
})
}) as typeof fetch
const service = new CodexOAuthService()
let callbackResponsePromise!: Promise<Response>
const flowPromise = service.startOAuthFlow(async authUrl => {
callbackResponsePromise = originalFetch(buildCallbackRequest(authUrl))
})
await fetchStarted
service.cleanup()
await expect(flowPromise).rejects.toThrow('Codex OAuth flow was cancelled.')
const callbackResponse = await callbackResponsePromise
const html = await callbackResponse.text()
expect(html).toContain('Codex login cancelled')
expect(html).toContain('retry in OpenClaude')
})

View File

@@ -0,0 +1,307 @@
import { AuthCodeListener } from '../oauth/auth-code-listener.js'
import {
generateCodeChallenge,
generateCodeVerifier,
generateState,
} from '../oauth/crypto.js'
import {
asTrimmedString,
CODEX_OAUTH_ISSUER,
CODEX_OAUTH_ORIGINATOR,
CODEX_OAUTH_SCOPE,
escapeHtml,
exchangeCodexIdTokenForApiKey,
getCodexOAuthCallbackPort,
getCodexOAuthClientId,
parseChatgptAccountId,
} from './codexOAuthShared.js'
type CodexOAuthTokenResponse = {
id_token?: string
access_token?: string
refresh_token?: string
}
export type CodexOAuthTokens = {
apiKey?: string
accessToken: string
refreshToken: string
idToken?: string
accountId?: string
}
function buildCodexAuthorizeUrl(options: {
port: number
codeChallenge: string
state: string
}): string {
const redirectUri = `http://localhost:${options.port}/auth/callback`
const authUrl = new URL(`${CODEX_OAUTH_ISSUER}/oauth/authorize`)
authUrl.searchParams.append('response_type', 'code')
authUrl.searchParams.append('client_id', getCodexOAuthClientId())
authUrl.searchParams.append('redirect_uri', redirectUri)
authUrl.searchParams.append('scope', CODEX_OAUTH_SCOPE)
authUrl.searchParams.append('code_challenge', options.codeChallenge)
authUrl.searchParams.append('code_challenge_method', 'S256')
authUrl.searchParams.append('id_token_add_organizations', 'true')
authUrl.searchParams.append('codex_cli_simplified_flow', 'true')
authUrl.searchParams.append('state', options.state)
authUrl.searchParams.append('originator', CODEX_OAUTH_ORIGINATOR)
return authUrl.toString()
}
function renderSuccessPage(): string {
return `<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Codex Login Complete</title>
<style>
body { font-family: sans-serif; padding: 32px; line-height: 1.5; color: #111827; }
h1 { margin: 0 0 12px; font-size: 22px; }
p { margin: 0 0 10px; }
</style>
</head>
<body>
<h1>Codex login complete</h1>
<p>You can return to OpenClaude now.</p>
<p>OpenClaude will finish activating your new Codex OAuth login.</p>
</body>
</html>`
}
function renderErrorPage(message: string): string {
const safeMessage = escapeHtml(message)
return `<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Codex Login Failed</title>
<style>
body { font-family: sans-serif; padding: 32px; line-height: 1.5; color: #111827; }
h1 { margin: 0 0 12px; font-size: 22px; color: #991b1b; }
p { margin: 0 0 10px; }
</style>
</head>
<body>
<h1>Codex login failed</h1>
<p>${safeMessage}</p>
<p>You can close this window and try again in OpenClaude.</p>
</body>
</html>`
}
function renderCancelledPage(): string {
return `<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Codex Login Cancelled</title>
<style>
body { font-family: sans-serif; padding: 32px; line-height: 1.5; color: #111827; }
h1 { margin: 0 0 12px; font-size: 22px; }
p { margin: 0 0 10px; }
</style>
</head>
<body>
<h1>Codex login cancelled</h1>
<p>You can close this window and retry in OpenClaude.</p>
</body>
</html>`
}
async function exchangeAuthorizationCode(options: {
authorizationCode: string
codeVerifier: string
port: number
signal?: AbortSignal
}): Promise<CodexOAuthTokens> {
const redirectUri = `http://localhost:${options.port}/auth/callback`
const body = new URLSearchParams({
grant_type: 'authorization_code',
code: options.authorizationCode,
redirect_uri: redirectUri,
client_id: getCodexOAuthClientId(),
code_verifier: options.codeVerifier,
})
const response = await fetch(`${CODEX_OAUTH_ISSUER}/oauth/token`, {
method: 'POST',
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
},
body,
signal: options.signal
? AbortSignal.any([options.signal, AbortSignal.timeout(15_000)])
: AbortSignal.timeout(15_000),
})
if (!response.ok) {
const errorText = await response.text().catch(() => '')
throw new Error(
errorText.trim()
? `Codex OAuth token exchange failed (${response.status}): ${errorText.trim()}`
: `Codex OAuth token exchange failed with status ${response.status}.`,
)
}
const payload = (await response.json()) as CodexOAuthTokenResponse
const accessToken = asTrimmedString(payload.access_token)
const refreshToken = asTrimmedString(payload.refresh_token)
if (!accessToken || !refreshToken) {
throw new Error(
'Codex OAuth completed, but the token response was missing credentials.',
)
}
const idToken = asTrimmedString(payload.id_token)
const apiKey = idToken
? await exchangeCodexIdTokenForApiKey(idToken).catch(() => undefined)
: undefined
return {
apiKey,
accessToken,
refreshToken,
idToken,
accountId:
parseChatgptAccountId(idToken) ?? parseChatgptAccountId(accessToken),
}
}
export class CodexOAuthService {
private authCodeListener: AuthCodeListener | null = null
private port: number | null = null
private tokenExchangeAbortController: AbortController | null = null
private buildCancellationError(): Error {
return new Error('Codex OAuth flow was cancelled.')
}
async startOAuthFlow(
authURLHandler: (authUrl: string) => Promise<void>,
): Promise<CodexOAuthTokens> {
const codeVerifier = generateCodeVerifier()
const callbackPort = getCodexOAuthCallbackPort()
const authCodeListener = new AuthCodeListener('/auth/callback')
this.authCodeListener = authCodeListener
this.port = null
try {
const port = await authCodeListener.start(callbackPort)
this.port = port
const state = generateState()
const codeChallenge = await generateCodeChallenge(codeVerifier)
const authUrl = buildCodexAuthorizeUrl({
port,
codeChallenge,
state,
})
try {
const authorizationCode = await authCodeListener.waitForAuthorization(
state,
async () => {
await authURLHandler(authUrl)
},
)
const tokenExchangeAbortController = new AbortController()
this.tokenExchangeAbortController = tokenExchangeAbortController
let tokens: CodexOAuthTokens
try {
tokens = await exchangeAuthorizationCode({
authorizationCode,
codeVerifier,
port,
signal: tokenExchangeAbortController.signal,
})
} finally {
if (
this.tokenExchangeAbortController === tokenExchangeAbortController
) {
this.tokenExchangeAbortController = null
}
}
if (this.authCodeListener !== authCodeListener) {
throw this.buildCancellationError()
}
authCodeListener.handleSuccessRedirect([], res => {
res.writeHead(200, {
'Content-Type': 'text/html; charset=utf-8',
})
res.end(renderSuccessPage())
})
return tokens
} catch (error) {
const resolvedError =
this.authCodeListener === authCodeListener
? error
: this.buildCancellationError()
if (authCodeListener.hasPendingResponse()) {
const isCancellation =
resolvedError instanceof Error &&
resolvedError.message === 'Codex OAuth flow was cancelled.'
authCodeListener.handleErrorRedirect(res => {
res.writeHead(isCancellation ? 200 : 400, {
'Content-Type': 'text/html; charset=utf-8',
})
res.end(
isCancellation
? renderCancelledPage()
: renderErrorPage(
resolvedError instanceof Error
? resolvedError.message
: String(resolvedError),
),
)
})
}
throw resolvedError
} finally {
this.cleanup()
}
} catch (error) {
const message = error instanceof Error ? error.message : String(error)
if (
message.includes('EADDRINUSE') ||
message.includes(String(callbackPort))
) {
throw new Error(
`Codex OAuth needs localhost:${callbackPort} for its callback. Close any app already using that port and try again.`,
)
}
throw error
}
}
cleanup(): void {
const cancellationError = this.buildCancellationError()
this.tokenExchangeAbortController?.abort(cancellationError)
this.tokenExchangeAbortController = null
if (this.authCodeListener?.hasPendingResponse()) {
this.authCodeListener.handleErrorRedirect(res => {
res.writeHead(200, {
'Content-Type': 'text/html; charset=utf-8',
})
res.end(renderCancelledPage())
})
}
this.authCodeListener?.cancelPendingAuthorization(cancellationError)
this.authCodeListener = null
this.port = null
}
}
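
Typical call shape for the service; the authURLHandler below just prints the URL where a real caller would open the system browser, and the console wiring is illustrative only:

  // Usage sketch. cleanup() may be called from a cancel action; it aborts an
  // in-flight token exchange and serves the cancelled page to the browser tab.
  async function loginWithCodex(): Promise<void> {
    const service = new CodexOAuthService()
    try {
      const tokens = await service.startOAuthFlow(async authUrl => {
        // A real caller would open the system browser here.
        console.log(`Open this URL to sign in: ${authUrl}`)
      })
      console.log(`Codex account: ${tokens.accountId ?? 'unknown'}`)
    } catch (error) {
      console.error('Codex login failed:', error)
    }
  }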

View File

@@ -0,0 +1,139 @@
export const CODEX_OAUTH_ISSUER = 'https://auth.openai.com'
export const CODEX_REFRESH_URL = `${CODEX_OAUTH_ISSUER}/oauth/token`
export const DEFAULT_CODEX_OAUTH_CLIENT_ID = 'app_EMoamEEZ73f0CkXaXp7hrann'
export const DEFAULT_CODEX_OAUTH_CALLBACK_PORT = 1455
export const CODEX_OAUTH_SCOPE =
'openid profile email offline_access api.connectors.read api.connectors.invoke'
export const CODEX_OAUTH_ORIGINATOR = 'codex_cli_rs'
export const CODEX_API_KEY_TOKEN_NAME = 'openai-api-key'
export const CODEX_ID_TOKEN_SUBJECT_TYPE =
'urn:ietf:params:oauth:token-type:id_token'
export const CODEX_TOKEN_EXCHANGE_GRANT =
'urn:ietf:params:oauth:grant-type:token-exchange'
export function asTrimmedString(value: unknown): string | undefined {
if (typeof value !== 'string') return undefined
const trimmed = value.trim()
return trimmed ? trimmed : undefined
}
export function getCodexOAuthClientId(
env: NodeJS.ProcessEnv = process.env,
): string {
return asTrimmedString(env.CODEX_OAUTH_CLIENT_ID) ?? DEFAULT_CODEX_OAUTH_CLIENT_ID
}
export function getCodexOAuthCallbackPort(
env: NodeJS.ProcessEnv = process.env,
): number {
const rawPort = asTrimmedString(env.CODEX_OAUTH_CALLBACK_PORT)
if (!rawPort) {
return DEFAULT_CODEX_OAUTH_CALLBACK_PORT
}
const parsed = Number.parseInt(rawPort, 10)
if (Number.isInteger(parsed) && parsed > 0 && parsed <= 65535) {
return parsed
}
return DEFAULT_CODEX_OAUTH_CALLBACK_PORT
}
export function decodeJwtPayload(
token: string,
): Record<string, unknown> | undefined {
const parts = token.split('.')
if (parts.length < 2) return undefined
try {
const normalized = parts[1].replace(/-/g, '+').replace(/_/g, '/')
const padded = normalized + '='.repeat((4 - (normalized.length % 4)) % 4)
const json = Buffer.from(padded, 'base64').toString('utf8')
const parsed = JSON.parse(json)
return parsed && typeof parsed === 'object'
? (parsed as Record<string, unknown>)
: undefined
} catch {
return undefined
}
}
export function parseChatgptAccountId(
token: string | undefined,
): string | undefined {
if (!token) return undefined
const payload = decodeJwtPayload(token)
const nestedAuth =
payload?.['https://api.openai.com/auth'] &&
typeof payload['https://api.openai.com/auth'] === 'object'
? (payload['https://api.openai.com/auth'] as Record<string, unknown>)
: undefined
return (
asTrimmedString(
nestedAuth?.chatgpt_account_id ??
payload?.['https://api.openai.com/auth.chatgpt_account_id'] ??
payload?.chatgpt_account_id,
) ?? undefined
)
}
export function escapeHtml(value: string): string {
return value.replace(/[&<>"']/g, char => {
switch (char) {
case '&':
return '&amp;'
case '<':
return '&lt;'
case '>':
return '&gt;'
case '"':
return '&quot;'
case '\'':
return '&#39;'
default:
return char
}
})
}
export async function exchangeCodexIdTokenForApiKey(
idToken: string,
): Promise<string> {
const body = new URLSearchParams({
grant_type: CODEX_TOKEN_EXCHANGE_GRANT,
client_id: getCodexOAuthClientId(),
requested_token: CODEX_API_KEY_TOKEN_NAME,
subject_token: idToken,
subject_token_type: CODEX_ID_TOKEN_SUBJECT_TYPE,
})
const response = await fetch(CODEX_REFRESH_URL, {
method: 'POST',
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
},
body,
signal: AbortSignal.timeout(15_000),
})
if (!response.ok) {
const bodyText = await response.text().catch(() => '')
throw new Error(
bodyText.trim()
? `Codex API key exchange failed (${response.status}): ${bodyText.trim()}`
: `Codex API key exchange failed with status ${response.status}.`,
)
}
const payload = (await response.json()) as { access_token?: string }
const apiKey = asTrimmedString(payload.access_token)
if (!apiKey) {
throw new Error(
'Codex API key exchange completed, but no API key token was returned.',
)
}
return apiKey
}
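
A quick usage check for the JWT helpers above, using a hand-built unsigned token like the one in the auth.json fallback test:

  // Build an unsigned JWT whose payload nests the ChatGPT account id the
  // same way OpenAI's id_token does; only the payload segment is decoded.
  const payload = Buffer.from(
    JSON.stringify({
      'https://api.openai.com/auth': { chatgpt_account_id: 'acct_demo' },
    }),
    'utf8',
  ).toString('base64url')
  const token = `header.${payload}.signature`
  parseChatgptAccountId(token) // => 'acct_demo'
  decodeJwtPayload('not-a-jwt') // => undefined (fewer than two segments)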

View File

@@ -8,16 +8,13 @@ import {
convertCodexResponseToAnthropicMessage,
convertToolsToResponsesTools,
} from './codexShim.js'
import {
resolveCodexApiCredentials,
resolveProviderRequest,
} from './providerConfig.js'
const tempDirs: string[] = []
const originalEnv = {
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_API_BASE: process.env.OPENAI_API_BASE,
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
OPENAI_MODEL: process.env.OPENAI_MODEL,
}
afterEach(() => {
@@ -30,6 +27,9 @@ afterEach(() => {
if (originalEnv.CLAUDE_CODE_USE_GITHUB === undefined) delete process.env.CLAUDE_CODE_USE_GITHUB
else process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
if (originalEnv.OPENAI_MODEL === undefined) delete process.env.OPENAI_MODEL
else process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
while (tempDirs.length > 0) {
const dir = tempDirs.pop()
if (dir) rmSync(dir, { recursive: true, force: true })
@@ -59,6 +59,10 @@ async function collectStreamEventTypes(responseText: string): Promise<string[]>
return events
}
async function importFreshProviderConfigModule() {
return import(`./providerConfig.js?ts=${Date.now()}-${Math.random()}`)
}
describe('Codex provider config', () => {
const originalOpenaiBaseUrl = process.env.OPENAI_BASE_URL
const originalOpenaiApiBase = process.env.OPENAI_API_BASE
@@ -75,7 +79,8 @@ describe('Codex provider config', () => {
else process.env.OPENAI_API_BASE = originalOpenaiApiBase
})
test('resolves codexplan alias to Codex transport with reasoning', () => {
test('resolves codexplan alias to Codex transport with reasoning', async () => {
const { resolveProviderRequest } = await importFreshProviderConfigModule()
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_API_BASE
delete process.env.CLAUDE_CODE_USE_GITHUB
@@ -84,9 +89,23 @@ describe('Codex provider config', () => {
expect(resolved.transport).toBe('codex_responses')
expect(resolved.resolvedModel).toBe('gpt-5.4')
expect(resolved.reasoning).toEqual({ effort: 'high' })
expect(resolved.baseUrl).toBe('https://chatgpt.com/backend-api/codex')
})
test('does not force Codex transport when a local non-Codex base URL is explicit', () => {
test('resolves codexspark alias to Codex transport with Codex base URL', async () => {
const { resolveProviderRequest } = await importFreshProviderConfigModule()
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_API_BASE
delete process.env.CLAUDE_CODE_USE_GITHUB
const resolved = resolveProviderRequest({ model: 'codexspark' })
expect(resolved.transport).toBe('codex_responses')
expect(resolved.resolvedModel).toBe('gpt-5.3-codex-spark')
expect(resolved.baseUrl).toBe('https://chatgpt.com/backend-api/codex')
})
test('does not force Codex transport when a local non-Codex base URL is explicit', async () => {
const { resolveProviderRequest } = await importFreshProviderConfigModule()
const resolved = resolveProviderRequest({
model: 'codexplan',
baseUrl: 'http://127.0.0.1:8080/v1',
@@ -97,7 +116,8 @@ describe('Codex provider config', () => {
expect(resolved.resolvedModel).toBe('gpt-5.4')
})
test('resolves codexplan to Codex transport even when OPENAI_BASE_URL is the string "undefined"', () => {
test('resolves codexplan to Codex transport even when OPENAI_BASE_URL is the string "undefined"', async () => {
const { resolveProviderRequest } = await importFreshProviderConfigModule()
// On Windows, env vars can leak as the literal string "undefined" instead of
// the JS value undefined when not properly unset (issue #336).
process.env.OPENAI_BASE_URL = 'undefined'
@@ -105,20 +125,57 @@ describe('Codex provider config', () => {
expect(resolved.transport).toBe('codex_responses')
})
test('resolves codexplan to Codex transport even when OPENAI_BASE_URL is an empty string', () => {
test('resolves codexplan to Codex transport even when OPENAI_BASE_URL is an empty string', async () => {
const { resolveProviderRequest } = await importFreshProviderConfigModule()
process.env.OPENAI_BASE_URL = ''
const resolved = resolveProviderRequest({ model: 'codexplan' })
expect(resolved.transport).toBe('codex_responses')
})
test('prefers explicit baseUrl option over env var', () => {
test('prefers explicit baseUrl option over env var', async () => {
const { resolveProviderRequest } = await importFreshProviderConfigModule()
process.env.OPENAI_BASE_URL = 'https://example.com/v1'
const resolved = resolveProviderRequest({ model: 'codexplan', baseUrl: 'https://chatgpt.com/backend-api/codex' })
expect(resolved.transport).toBe('codex_responses')
expect(resolved.baseUrl).toBe('https://chatgpt.com/backend-api/codex')
})
test('loads Codex credentials from auth.json fallback', () => {
test('default gpt-4o uses OpenAI base URL (no regression)', async () => {
const { resolveProviderRequest } = await importFreshProviderConfigModule()
delete process.env.OPENAI_BASE_URL
delete process.env.CLAUDE_CODE_USE_GITHUB
const resolved = resolveProviderRequest({ model: 'gpt-4o' })
expect(resolved.transport).toBe('chat_completions')
expect(resolved.baseUrl).toBe('https://api.openai.com/v1')
expect(resolved.resolvedModel).toBe('gpt-4o')
})
test('resolves codexplan from env var OPENAI_MODEL to Codex endpoint', async () => {
const { resolveProviderRequest } = await importFreshProviderConfigModule()
process.env.OPENAI_MODEL = 'codexplan'
delete process.env.OPENAI_BASE_URL
delete process.env.CLAUDE_CODE_USE_GITHUB
const resolved = resolveProviderRequest()
expect(resolved.transport).toBe('codex_responses')
expect(resolved.baseUrl).toBe('https://chatgpt.com/backend-api/codex')
expect(resolved.resolvedModel).toBe('gpt-5.4')
})
test('does not override custom base URL for codexplan (e.g., local provider)', async () => {
const { resolveProviderRequest } = await importFreshProviderConfigModule()
process.env.OPENAI_MODEL = 'codexplan'
process.env.OPENAI_BASE_URL = 'http://localhost:11434/v1'
delete process.env.CLAUDE_CODE_USE_GITHUB
const resolved = resolveProviderRequest()
expect(resolved.transport).toBe('chat_completions')
expect(resolved.baseUrl).toBe('http://localhost:11434/v1')
})
test('loads Codex credentials from auth.json fallback', async () => {
const { resolveCodexApiCredentials } = await importFreshProviderConfigModule()
const authPath = createTempAuthJson({
tokens: {
access_token: 'header.payload.signature',
@@ -134,6 +191,31 @@ describe('Codex provider config', () => {
expect(credentials.accountId).toBe('acct_test')
expect(credentials.source).toBe('auth.json')
})
test('does not treat auth.json id_token as a Codex bearer credential', async () => {
const { resolveCodexApiCredentials } = await importFreshProviderConfigModule()
const idTokenPayload = Buffer.from(
JSON.stringify({
'https://api.openai.com/auth': {
chatgpt_account_id: 'acct_from_id_token',
},
}),
'utf8',
).toString('base64url')
const authPath = createTempAuthJson({
tokens: {
id_token: `header.${idTokenPayload}.signature`,
},
})
const credentials = resolveCodexApiCredentials({
CODEX_AUTH_JSON_PATH: authPath,
} as NodeJS.ProcessEnv)
expect(credentials.apiKey).toBe('')
expect(credentials.accountId).toBe('acct_from_id_token')
expect(credentials.source).toBe('none')
})
})
describe('Codex request translation', () => {
@@ -465,7 +547,7 @@ describe('Codex request translation', () => {
])
})
test('strips leaked reasoning preamble from completed Codex text responses', () => {
test('strips <think> tag block from completed Codex text responses', () => {
const message = convertCodexResponseToAnthropicMessage(
{
id: 'resp_1',
@@ -478,7 +560,7 @@ describe('Codex request translation', () => {
{
type: 'output_text',
text:
'The user just said "hey" - a simple greeting. I should respond briefly and friendly.\n\nHey! How can I help you today?',
'<think>user wants a greeting, respond briefly</think>Hey! How can I help you today?',
},
],
},
@@ -496,6 +578,37 @@ describe('Codex request translation', () => {
])
})
test('strips unterminated <think> tag at block boundary in Codex completed response', () => {
const message = convertCodexResponseToAnthropicMessage(
{
id: 'resp_1',
model: 'gpt-5.4',
output: [
{
type: 'message',
role: 'assistant',
content: [
{
type: 'output_text',
text:
'Here is the answer.\n<think>wait, let me reconsider the user request',
},
],
},
],
usage: { input_tokens: 12, output_tokens: 4 },
},
'gpt-5.4',
)
expect(message.content).toEqual([
{
type: 'text',
text: 'Here is the answer.',
},
])
})
test('translates Codex SSE text stream into Anthropic events', async () => {
const responseText = [
'event: response.output_item.added',
@@ -527,7 +640,7 @@ describe('Codex request translation', () => {
])
})
test('strips leaked reasoning preamble from Codex SSE text stream', async () => {
test('strips <think> tag block from Codex SSE text stream', async () => {
const responseText = [
'event: response.output_item.added',
'data: {"type":"response.output_item.added","item":{"id":"msg_1","type":"message","status":"in_progress","content":[],"role":"assistant"},"output_index":0,"sequence_number":0}',
@@ -536,13 +649,13 @@ describe('Codex request translation', () => {
'data: {"type":"response.content_part.added","content_index":0,"item_id":"msg_1","output_index":0,"part":{"type":"output_text","text":""},"sequence_number":1}',
'',
'event: response.output_text.delta',
'data: {"type":"response.output_text.delta","content_index":0,"delta":"The user just said \\"hey\\" - a simple greeting. I should respond briefly and friendly.\\n\\nHey! How can I help you today?","item_id":"msg_1","output_index":0,"sequence_number":2}',
'data: {"type":"response.output_text.delta","content_index":0,"delta":"<think>user wants a greeting, respond briefly</think>Hey! How can I help you today?","item_id":"msg_1","output_index":0,"sequence_number":2}',
'',
'event: response.output_item.done',
'data: {"type":"response.output_item.done","item":{"id":"msg_1","type":"message","status":"completed","content":[{"type":"output_text","text":"The user just said \\"hey\\" - a simple greeting. I should respond briefly and friendly.\\n\\nHey! How can I help you today?"}],"role":"assistant"},"output_index":0,"sequence_number":3}',
'data: {"type":"response.output_item.done","item":{"id":"msg_1","type":"message","status":"completed","content":[{"type":"output_text","text":"<think>user wants a greeting, respond briefly</think>Hey! How can I help you today?"}],"role":"assistant"},"output_index":0,"sequence_number":3}',
'',
'event: response.completed',
'data: {"type":"response.completed","response":{"id":"resp_1","status":"completed","model":"gpt-5.4","output":[{"type":"message","role":"assistant","content":[{"type":"output_text","text":"The user just said \\"hey\\" - a simple greeting. I should respond briefly and friendly.\\n\\nHey! How can I help you today?"}]}],"usage":{"input_tokens":2,"output_tokens":1}},"sequence_number":4}',
'data: {"type":"response.completed","response":{"id":"resp_1","status":"completed","model":"gpt-5.4","output":[{"type":"message","role":"assistant","content":[{"type":"output_text","text":"<think>user wants a greeting, respond briefly</think>Hey! How can I help you today?"}]}],"usage":{"input_tokens":2,"output_tokens":1}},"sequence_number":4}',
'',
].join('\n')
@@ -564,6 +677,50 @@ describe('Codex request translation', () => {
}
}
expect(textDeltas).toEqual(['Hey! How can I help you today?'])
expect(textDeltas.join('')).toBe('Hey! How can I help you today?')
})
test('preserves prose without tags (no phrase-based false positive)', async () => {
// Regression test: older phrase-based sanitizer would incorrectly strip text
// starting with "I should" or "The user". The tag-based approach leaves it alone.
const responseText = [
'event: response.output_item.added',
'data: {"type":"response.output_item.added","item":{"id":"msg_1","type":"message","status":"in_progress","content":[],"role":"assistant"},"output_index":0,"sequence_number":0}',
'',
'event: response.content_part.added',
'data: {"type":"response.content_part.added","content_index":0,"item_id":"msg_1","output_index":0,"part":{"type":"output_text","text":""},"sequence_number":1}',
'',
'event: response.output_text.delta',
'data: {"type":"response.output_text.delta","content_index":0,"delta":"I should note that the user role requires a briefly concise friendly response format.","item_id":"msg_1","output_index":0,"sequence_number":2}',
'',
'event: response.output_item.done',
'data: {"type":"response.output_item.done","item":{"id":"msg_1","type":"message","status":"completed","content":[{"type":"output_text","text":"I should note that the user role requires a briefly concise friendly response format."}],"role":"assistant"},"output_index":0,"sequence_number":3}',
'',
'event: response.completed',
'data: {"type":"response.completed","response":{"id":"resp_1","status":"completed","model":"gpt-5.4","output":[{"type":"message","role":"assistant","content":[{"type":"output_text","text":"I should note that the user role requires a briefly concise friendly response format."}]}],"usage":{"input_tokens":2,"output_tokens":1}},"sequence_number":4}',
'',
].join('\n')
const stream = new ReadableStream({
start(controller) {
controller.enqueue(new TextEncoder().encode(responseText))
controller.close()
},
})
const textDeltas: string[] = []
for await (const event of codexStreamToAnthropic(
new Response(stream),
'gpt-5.4',
)) {
const delta = (event as { delta?: { type?: string; text?: string } }).delta
if (delta?.type === 'text_delta' && typeof delta.text === 'string') {
textDeltas.push(delta.text)
}
}
expect(textDeltas.join('')).toBe(
'I should note that the user role requires a briefly concise friendly response format.',
)
})
})

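The two tests above pin down the externally visible contract of createThinkTagFilter: feed() returns only text outside <think>…</think>, buffering any partial tag split across chunk boundaries, and flush() releases whatever is still held at stream end. Since thinkTagSanitizer.ts itself is not part of these hunks, here is a minimal sketch of a filter satisfying that contract; the names and internals are illustrative, not the shipped implementation.

// Minimal sketch of the assumed feed/flush contract; not the shipped code.
type ThinkTagFilter = {
  feed(chunk: string): string // returns text safe to emit now
  flush(): string // releases any buffered tail at stream end
}

function createThinkTagFilterSketch(): ThinkTagFilter {
  let buffer = ''
  let inThink = false
  return {
    feed(chunk) {
      buffer += chunk
      let out = ''
      for (;;) {
        if (inThink) {
          const close = buffer.indexOf('</think>')
          if (close === -1) return out // still inside the tag: emit nothing yet
          buffer = buffer.slice(close + '</think>'.length)
          inThink = false
        }
        const open = buffer.indexOf('<think>')
        if (open === -1) {
          // Hold back a partial '<think' prefix at the tail of the buffer so a
          // tag split across two deltas is still caught on the next feed().
          const tail = buffer.match(/<(?:t(?:h(?:i(?:n(?:k)?)?)?)?)?$/)?.[0] ?? ''
          out += buffer.slice(0, buffer.length - tail.length)
          buffer = tail
          return out
        }
        out += buffer.slice(0, open)
        buffer = buffer.slice(open + '<think>'.length)
        inThink = true
      }
    },
    flush() {
      const rest = inThink ? '' : buffer // drop unterminated think content
      buffer = ''
      inThink = false
      return rest
    },
  }
}

Note how this reproduces both tests: a leading <think>…</think> block is swallowed while the trailing reply streams through, and prose that merely starts with "I should" or "The user" passes untouched because no tag is ever seen.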
View File

@@ -1,14 +1,15 @@
import { APIError } from '@anthropic-ai/sdk'
import { compressToolHistory } from './compressToolHistory.js'
import { fetchWithProxyRetry } from './fetchWithProxyRetry.js'
import type {
ResolvedCodexCredentials,
ResolvedProviderRequest,
} from './providerConfig.js'
import { sanitizeSchemaForOpenAICompat } from './openaiSchemaSanitizer.js'
import {
looksLikeLeakedReasoningPrefix,
shouldBufferPotentialReasoningPrefix,
stripLeakedReasoningPreamble,
} from './reasoningLeakSanitizer.js'
createThinkTagFilter,
stripThinkTags,
} from './thinkTagSanitizer.js'
export interface AnthropicUsage {
input_tokens: number
@@ -484,13 +485,15 @@ export async function performCodexRequest(options: {
defaultHeaders: Record<string, string>
signal?: AbortSignal
}): Promise<Response> {
const input = convertAnthropicMessagesToResponsesInput(
const compressedMessages = compressToolHistory(
options.params.messages as Array<{
role?: string
message?: { role?: string; content?: unknown }
content?: unknown
}>,
options.request.resolvedModel,
)
const input = convertAnthropicMessagesToResponsesInput(compressedMessages)
const body: Record<string, unknown> = {
model: options.request.resolvedModel,
input: input.length > 0
@@ -559,12 +562,15 @@ export async function performCodexRequest(options: {
}
headers.originator ??= 'openclaude'
const response = await fetch(`${options.request.baseUrl}/responses`, {
method: 'POST',
headers,
body: JSON.stringify(body),
signal: options.signal,
})
const response = await fetchWithProxyRetry(
`${options.request.baseUrl}/responses`,
{
method: 'POST',
headers,
body: JSON.stringify(body),
signal: options.signal,
},
)
if (!response.ok) {
const errorBody = await response.text().catch(() => 'unknown error')
@@ -580,15 +586,55 @@ export async function performCodexRequest(options: {
return response
}
async function* readSseEvents(response: Response): AsyncGenerator<CodexSseEvent> {
async function* readSseEvents(response: Response, signal?: AbortSignal): AsyncGenerator<CodexSseEvent> {
const reader = response.body?.getReader()
if (!reader) return
const decoder = new TextDecoder()
let buffer = ''
const STREAM_IDLE_TIMEOUT_MS = 120_000 // 2 minutes without data
let lastDataTime = Date.now()
/**
* Read from the stream with an idle timeout. Respects the caller's
* AbortSignal — clears the idle timer on abort so the AbortError
* surfaces cleanly instead of a spurious idle timeout.
*/
async function readWithTimeout(): Promise<ReadableStreamReadResult<Uint8Array>> {
return new Promise((resolve, reject) => {
const timeoutId = setTimeout(() => {
const elapsed = Math.round((Date.now() - lastDataTime) / 1000)
reject(new Error(
`Codex SSE stream idle for ${elapsed}s (limit: ${STREAM_IDLE_TIMEOUT_MS / 1000}s). Connection likely dropped.`,
))
}, STREAM_IDLE_TIMEOUT_MS)
let abortCleanup: (() => void) | undefined
if (signal) {
abortCleanup = () => {
clearTimeout(timeoutId)
}
signal.addEventListener('abort', abortCleanup, { once: true })
}
reader.read().then(
result => {
clearTimeout(timeoutId)
if (signal && abortCleanup) signal.removeEventListener('abort', abortCleanup)
if (result.value) lastDataTime = Date.now()
resolve(result)
},
err => {
clearTimeout(timeoutId)
if (signal && abortCleanup) signal.removeEventListener('abort', abortCleanup)
reject(err)
},
)
})
}
while (true) {
const { done, value } = await reader.read()
const { done, value } = await readWithTimeout()
if (done) break
buffer += decoder.decode(value, { stream: true })
@@ -649,10 +695,11 @@ function determineStopReason(
export async function collectCodexCompletedResponse(
response: Response,
signal?: AbortSignal,
): Promise<Record<string, any>> {
let completedResponse: Record<string, any> | undefined
for await (const event of readSseEvents(response)) {
for await (const event of readSseEvents(response, signal)) {
if (event.event === 'response.failed') {
const msg = event.data?.response?.error?.message ??
event.data?.error?.message ?? 'Codex response failed'
@@ -681,6 +728,7 @@ export async function collectCodexCompletedResponse(
export async function* codexStreamToAnthropic(
response: Response,
model: string,
signal?: AbortSignal,
): AsyncGenerator<AnthropicStreamEvent> {
const messageId = makeMessageId()
const toolBlocksByItemId = new Map<
@@ -688,25 +736,22 @@ export async function* codexStreamToAnthropic(
{ index: number; toolUseId: string }
>()
let activeTextBlockIndex: number | null = null
let activeTextBuffer = ''
let textBufferMode: 'none' | 'pending' | 'strip' = 'none'
const thinkFilter = createThinkTagFilter()
let nextContentBlockIndex = 0
let sawToolUse = false
let finalResponse: Record<string, any> | undefined
const closeActiveTextBlock = async function* () {
if (activeTextBlockIndex === null) return
if (textBufferMode !== 'none') {
const sanitized = stripLeakedReasoningPreamble(activeTextBuffer)
if (sanitized) {
yield {
type: 'content_block_delta',
index: activeTextBlockIndex,
delta: {
type: 'text_delta',
text: sanitized,
},
}
const tail = thinkFilter.flush()
if (tail) {
yield {
type: 'content_block_delta',
index: activeTextBlockIndex,
delta: {
type: 'text_delta',
text: tail,
},
}
}
yield {
@@ -714,8 +759,6 @@ export async function* codexStreamToAnthropic(
index: activeTextBlockIndex,
}
activeTextBlockIndex = null
activeTextBuffer = ''
textBufferMode = 'none'
}
const startTextBlockIfNeeded = async function* () {
@@ -742,7 +785,7 @@ export async function* codexStreamToAnthropic(
},
}
for await (const event of readSseEvents(response)) {
for await (const event of readSseEvents(response, signal)) {
const payload = event.data
if (event.event === 'response.output_item.added') {
@@ -791,43 +834,17 @@ export async function* codexStreamToAnthropic(
if (event.event === 'response.output_text.delta') {
yield* startTextBlockIfNeeded()
activeTextBuffer += payload.delta ?? ''
if (activeTextBlockIndex !== null) {
if (
textBufferMode === 'strip' ||
looksLikeLeakedReasoningPrefix(activeTextBuffer)
) {
textBufferMode = 'strip'
continue
}
if (textBufferMode === 'pending') {
if (shouldBufferPotentialReasoningPrefix(activeTextBuffer)) {
continue
}
const visible = thinkFilter.feed(payload.delta ?? '')
if (visible) {
yield {
type: 'content_block_delta',
index: activeTextBlockIndex,
delta: {
type: 'text_delta',
text: activeTextBuffer,
text: visible,
},
}
textBufferMode = 'none'
continue
}
if (shouldBufferPotentialReasoningPrefix(activeTextBuffer)) {
textBufferMode = 'pending'
continue
}
yield {
type: 'content_block_delta',
index: activeTextBlockIndex,
delta: {
type: 'text_delta',
text: payload.delta ?? '',
},
}
}
continue
@@ -923,7 +940,7 @@ export function convertCodexResponseToAnthropicMessage(
if (part?.type === 'output_text') {
content.push({
type: 'text',
text: stripLeakedReasoningPreamble(part.text ?? ''),
text: stripThinkTags(part.text ?? ''),
})
}
}

View File

@@ -1,7 +1,13 @@
import {
readCodexCredentialsAsync,
refreshCodexAccessTokenIfNeeded,
} from '../../utils/codexCredentials.js'
import { logForDebugging } from '../../utils/debug.js'
import { isBareMode } from '../../utils/envUtils.js'
import {
DEFAULT_CODEX_BASE_URL,
isCodexBaseUrl,
resolveCodexApiCredentials,
resolveRuntimeCodexCredentials,
resolveProviderRequest,
} from './providerConfig.js'
@@ -391,6 +397,18 @@ export function getCodexUsageUrl(baseUrl = DEFAULT_CODEX_BASE_URL): string {
}
export async function fetchCodexUsage(): Promise<CodexUsageData> {
const refreshResult = await refreshCodexAccessTokenIfNeeded().catch(
async error => {
logForDebugging(
`[codex] access token refresh failed before usage fetch: ${error instanceof Error ? error.message : String(error)}`,
{ level: 'warn' },
)
return {
refreshed: false,
credentials: await readCodexCredentialsAsync(),
}
},
)
const request = resolveProviderRequest({
model: process.env.OPENAI_MODEL,
baseUrl: process.env.OPENAI_BASE_URL,
@@ -401,16 +419,19 @@ export async function fetchCodexUsage(): Promise<CodexUsageData> {
)
}
const credentials = resolveCodexApiCredentials()
const credentials = resolveRuntimeCodexCredentials({
storedCredentials: refreshResult.credentials,
})
if (!credentials.apiKey) {
const oauthHint = isBareMode() ? '' : ', choose Codex OAuth in /provider'
const authHint = credentials.authPath
? ` or place a Codex auth.json at ${credentials.authPath}`
: ''
? `${oauthHint} or place a Codex auth.json at ${credentials.authPath}`
: oauthHint
throw new Error(`Codex auth is required. Set CODEX_API_KEY${authHint}.`)
}
if (!credentials.accountId) {
throw new Error(
'Codex auth is missing chatgpt_account_id. Re-login with the Codex CLI or set CHATGPT_ACCOUNT_ID/CODEX_ACCOUNT_ID.',
'Codex auth is missing chatgpt_account_id. Re-login with Codex OAuth, the Codex CLI, or set CHATGPT_ACCOUNT_ID/CODEX_ACCOUNT_ID.',
)
}

View File

@@ -0,0 +1,572 @@
import { afterEach, beforeEach, expect, mock, test } from 'bun:test'
import { compressToolHistory, getTiers } from './compressToolHistory.js'
// Mock the two dependencies so tests are deterministic and don't read disk config.
const mockState = {
enabled: true,
effectiveWindow: 100_000,
}
mock.module('../../utils/config.js', () => ({
getGlobalConfig: () => ({
toolHistoryCompressionEnabled: mockState.enabled,
}),
}))
mock.module('../compact/autoCompact.js', () => ({
getEffectiveContextWindowSize: () => mockState.effectiveWindow,
}))
beforeEach(() => {
mockState.enabled = true
mockState.effectiveWindow = 100_000
})
afterEach(() => {
mockState.enabled = true
mockState.effectiveWindow = 100_000
})
type Block = Record<string, unknown>
type Msg = { role: string; content: Block[] | string }
function bigText(n: number): string {
return 'x'.repeat(n)
}
function buildToolExchange(id: number, resultLength: number): Msg[] {
return [
{
role: 'assistant',
content: [
{
type: 'tool_use',
id: `toolu_${id}`,
name: 'Read',
input: { file_path: `/path/to/file${id}.ts` },
},
],
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: `toolu_${id}`,
content: bigText(resultLength),
},
],
},
]
}
function buildConversation(numToolExchanges: number, resultLength = 5_000): Msg[] {
const out: Msg[] = [{ role: 'user', content: 'Initial request' }]
for (let i = 0; i < numToolExchanges; i++) {
out.push(...buildToolExchange(i, resultLength))
}
return out
}
function getResultMessages(messages: Msg[]): Msg[] {
return messages.filter(
m => Array.isArray(m.content) && m.content.some((b: any) => b.type === 'tool_result'),
)
}
function getResultBlock(msg: Msg): Block {
return (msg.content as Block[]).find((b: any) => b.type === 'tool_result') as Block
}
function getResultText(msg: Msg): string {
const block = getResultBlock(msg)
const c = block.content
if (typeof c === 'string') return c
if (Array.isArray(c)) {
return c
.filter((b: any) => b.type === 'text')
.map((b: any) => b.text)
.join('\n')
}
return ''
}
// ---------- getTiers ----------
test('getTiers: < 16k window → recent=2, mid=3', () => {
expect(getTiers(8_000)).toEqual({ recent: 2, mid: 3 })
})
test('getTiers: 16k–32k → recent=3, mid=5', () => {
expect(getTiers(20_000)).toEqual({ recent: 3, mid: 5 })
})
test('getTiers: 32k–64k → recent=4, mid=8', () => {
expect(getTiers(48_000)).toEqual({ recent: 4, mid: 8 })
})
test('getTiers: 64k–128k (Copilot gpt-4o) → recent=5, mid=10', () => {
expect(getTiers(100_000)).toEqual({ recent: 5, mid: 10 })
})
test('getTiers: 128k–256k (Copilot Claude) → recent=8, mid=15', () => {
expect(getTiers(200_000)).toEqual({ recent: 8, mid: 15 })
})
test('getTiers: 256k–500k → recent=12, mid=25', () => {
expect(getTiers(400_000)).toEqual({ recent: 12, mid: 25 })
})
test('getTiers: ≥ 500k (gpt-4.1 1M) → recent=25, mid=50', () => {
expect(getTiers(1_000_000)).toEqual({ recent: 25, mid: 50 })
})
// ---------- master switch ----------
test('pass-through when toolHistoryCompressionEnabled is false', () => {
mockState.enabled = false
const messages = buildConversation(20)
const result = compressToolHistory(messages, 'gpt-4o')
expect(result).toBe(messages) // same reference (no transformation)
})
test('pass-through when total tool_results <= recent tier', () => {
// 100k effective → recent=5; only 4 exchanges → no compression
const messages = buildConversation(4)
const result = compressToolHistory(messages, 'gpt-4o')
expect(result).toBe(messages)
})
// ---------- per-tier behavior ----------
test('recent tier: tool_result content untouched', () => {
// 100k effective → recent=5, mid=10. With 6 exchanges, only the oldest is touched.
const messages = buildConversation(6, 5_000)
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
// Last 5 should be untouched (full 5000 chars)
for (let i = resultMsgs.length - 5; i < resultMsgs.length; i++) {
expect(getResultText(resultMsgs[i]).length).toBe(5_000)
}
})
test('mid tier: long content truncated to MID_MAX_CHARS with marker', () => {
// 100k → recent=5, mid=10. 10 exchanges: 5 recent + 5 mid (none old).
const messages = buildConversation(10, 5_000)
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
// First 5 are mid tier — should be truncated to ~2000 chars + marker
for (let i = 0; i < 5; i++) {
const text = getResultText(resultMsgs[i])
expect(text).toContain('[…truncated')
expect(text).toContain('chars from tool history]')
// Should be roughly 2000 chars + marker (under 2200)
expect(text.length).toBeLessThan(2_200)
expect(text.length).toBeGreaterThan(2_000)
}
})
test('mid tier: short content (< MID_MAX_CHARS) untouched', () => {
const messages = buildConversation(10, 500) // 500 < MID_MAX_CHARS
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
for (let i = 0; i < 5; i++) {
expect(getResultText(resultMsgs[i])).toBe(bigText(500))
}
})
test('old tier: content replaced with stub [name args={...} → N chars omitted]', () => {
// 100k → recent=5, mid=10, old=rest. 20 exchanges → 5 old + 10 mid + 5 recent.
const messages = buildConversation(20, 5_000)
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
// First 5 are old tier — should be stubs
for (let i = 0; i < 5; i++) {
const text = getResultText(resultMsgs[i])
expect(text).toMatch(/^\[Read args=\{.*\} → 5000 chars omitted\]$/)
}
})
test('old tier: stub args truncated to 200 chars', () => {
const longArg = bigText(500)
const messages: Msg[] = [
{ role: 'user', content: 'start' },
{
role: 'assistant',
content: [
{
type: 'tool_use',
id: 'toolu_x',
name: 'Bash',
input: { command: longArg },
},
],
},
{
role: 'user',
content: [
{ type: 'tool_result', tool_use_id: 'toolu_x', content: 'output' },
],
},
// Pad with enough recent exchanges to push the above into old tier
...buildConversation(20, 100).slice(1),
]
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
const text = getResultText(resultMsgs[0])
// Stub format: [Bash args=<json≤200chars> → N chars omitted]
// The args portion (between args= and →) must be ≤ 200 chars.
const argsMatch = text.match(/args=(.*?) →/)
expect(argsMatch).not.toBeNull()
expect(argsMatch![1].length).toBeLessThanOrEqual(200)
})
test('old tier: orphan tool_result (no matching tool_use) falls back to "tool"', () => {
const messages: Msg[] = [
{ role: 'user', content: 'start' },
// Orphan: tool_result without matching tool_use in history
{
role: 'user',
content: [
{ type: 'tool_result', tool_use_id: 'orphan_id', content: 'data' },
],
},
...buildConversation(20, 100).slice(1),
]
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
const text = getResultText(resultMsgs[0])
expect(text).toMatch(/^\[tool args=\{\} → 4 chars omitted\]$/)
})
// ---------- structural preservation ----------
test('tool_use blocks always preserved', () => {
const messages = buildConversation(20, 5_000)
const result = compressToolHistory(messages, 'gpt-4o')
const useCount = (msgs: Msg[]) =>
msgs.reduce((sum, m) => {
if (!Array.isArray(m.content)) return sum
return sum + m.content.filter((b: any) => b.type === 'tool_use').length
}, 0)
expect(useCount(result as Msg[])).toBe(useCount(messages))
})
test('text blocks always preserved', () => {
const messages: Msg[] = [
{ role: 'user', content: 'first' },
{
role: 'assistant',
content: [
{ type: 'text', text: 'reasoning before tool' },
{ type: 'tool_use', id: 'toolu_1', name: 'Read', input: {} },
],
},
{
role: 'user',
content: [{ type: 'tool_result', tool_use_id: 'toolu_1', content: bigText(5000) }],
},
...buildConversation(20, 5_000).slice(1),
]
const result = compressToolHistory(messages, 'gpt-4o')
const assistantMsg = (result as Msg[])[1]
const textBlock = (assistantMsg.content as Block[]).find((b: any) => b.type === 'text')
expect(textBlock).toEqual({ type: 'text', text: 'reasoning before tool' })
})
test('thinking blocks always preserved', () => {
const messages: Msg[] = [
{ role: 'user', content: 'first' },
{
role: 'assistant',
content: [
{ type: 'thinking', thinking: 'internal reasoning', signature: 'sig' },
{ type: 'tool_use', id: 'toolu_1', name: 'Read', input: {} },
],
},
{
role: 'user',
content: [{ type: 'tool_result', tool_use_id: 'toolu_1', content: bigText(5000) }],
},
...buildConversation(20, 5_000).slice(1),
]
const result = compressToolHistory(messages, 'gpt-4o')
const assistantMsg = (result as Msg[])[1]
const thinking = (assistantMsg.content as Block[]).find((b: any) => b.type === 'thinking')
expect(thinking).toEqual({
type: 'thinking',
thinking: 'internal reasoning',
signature: 'sig',
})
})
test('non-array content (string) handled gracefully', () => {
const messages: Msg[] = [
{ role: 'user', content: 'plain string content' },
...buildConversation(20, 100).slice(1),
]
const result = compressToolHistory(messages, 'gpt-4o')
expect((result as Msg[])[0].content).toBe('plain string content')
})
test('empty content array handled gracefully', () => {
const messages: Msg[] = [
{ role: 'user', content: [] },
...buildConversation(20, 100).slice(1),
]
expect(() => compressToolHistory(messages, 'gpt-4o')).not.toThrow()
})
// ---------- message shape compatibility ----------
test('wrapped shape ({ message: { role, content } }) handled', () => {
type WrappedMsg = { message: { role: string; content: Block[] | string } }
const wrap = (m: Msg): WrappedMsg => ({ message: { role: m.role, content: m.content } })
const messages = buildConversation(20, 5_000).map(wrap)
const result = compressToolHistory(messages as any, 'gpt-4o')
// First wrapped tool-result message should have stub content (old tier)
const firstResultMsg = (result as WrappedMsg[]).find(
m =>
Array.isArray(m.message.content) &&
m.message.content.some((b: any) => b.type === 'tool_result'),
)
const block = (firstResultMsg!.message.content as Block[]).find(
(b: any) => b.type === 'tool_result',
) as Block
const text = ((block.content as Block[])[0] as any).text
expect(text).toMatch(/^\[Read args=.*→ 5000 chars omitted\]$/)
})
test('flat shape ({ role, content }) handled', () => {
const messages = buildConversation(20, 5_000)
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
expect(getResultText(resultMsgs[0])).toMatch(/^\[Read args=.*→ 5000 chars omitted\]$/)
})
// ---------- tier boundary correctness ----------
test('tier boundaries: 6 exchanges → 1 mid + 5 recent (recent=5)', () => {
const messages = buildConversation(6, 5_000)
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
// Oldest: mid (truncated)
expect(getResultText(resultMsgs[0])).toContain('[…truncated')
// Last 5: untouched
for (let i = 1; i < 6; i++) {
expect(getResultText(resultMsgs[i]).length).toBe(5_000)
}
})
test('tier boundaries: 16 exchanges → 1 old + 10 mid + 5 recent', () => {
const messages = buildConversation(16, 5_000)
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
// Oldest 1: stub (old tier)
expect(getResultText(resultMsgs[0])).toMatch(/^\[Read .*chars omitted\]$/)
// Next 10: mid (truncated)
for (let i = 1; i < 11; i++) {
expect(getResultText(resultMsgs[i])).toContain('[…truncated')
}
// Last 5: untouched
for (let i = 11; i < 16; i++) {
expect(getResultText(resultMsgs[i]).length).toBe(5_000)
}
})
test('large window (1M) with 30 exchanges: recent 25 untouched, oldest 5 only reach mid', () => {
// ≥500k → recent=25, mid=50. 30 exchanges → 5 mid + 25 recent. None old.
mockState.effectiveWindow = 1_000_000
const messages = buildConversation(30, 5_000)
const result = compressToolHistory(messages, 'gpt-4.1')
const resultMsgs = getResultMessages(result)
// Last 25: untouched
for (let i = 5; i < 30; i++) {
expect(getResultText(resultMsgs[i]).length).toBe(5_000)
}
})
// ---------- attribute preservation ----------
test('is_error flag preserved in mid tier', () => {
const messages: Msg[] = [
{ role: 'user', content: 'start' },
{
role: 'assistant',
content: [{ type: 'tool_use', id: 'toolu_err', name: 'Bash', input: {} }],
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'toolu_err',
is_error: true,
content: bigText(5_000),
},
],
},
// Pad with enough recent exchanges to push the above into MID tier
...buildConversation(10, 100).slice(1),
]
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
const block = getResultBlock(resultMsgs[0]) as { is_error?: boolean; content: unknown }
expect(block.is_error).toBe(true)
expect(getResultText(resultMsgs[0])).toContain('[…truncated')
})
test('is_error flag preserved in old tier (stub)', () => {
const messages: Msg[] = [
{ role: 'user', content: 'start' },
{
role: 'assistant',
content: [{ type: 'tool_use', id: 'toolu_err', name: 'Bash', input: {} }],
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'toolu_err',
is_error: true,
content: bigText(5_000),
},
],
},
...buildConversation(20, 100).slice(1),
]
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
const block = getResultBlock(resultMsgs[0]) as { is_error?: boolean; content: unknown }
expect(block.is_error).toBe(true)
expect(getResultText(resultMsgs[0])).toMatch(/^\[Bash .*chars omitted\]$/)
})
// ---------- COMPACTABLE_TOOLS filter ----------
test('non-compactable tool (e.g. Task/Agent) is NEVER compressed', () => {
// Build conversation where the OLDEST exchange uses a non-compactable tool name
const messages: Msg[] = [
{ role: 'user', content: 'start' },
{
role: 'assistant',
content: [
{ type: 'tool_use', id: 'task_1', name: 'Task', input: { goal: 'plan' } },
],
},
{
role: 'user',
content: [
{ type: 'tool_result', tool_use_id: 'task_1', content: bigText(5_000) },
],
},
// Pad with 20 compactable exchanges to push Task into old tier
...buildConversation(20, 100).slice(1),
]
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
// First tool_result is for Task (non-compactable) → must remain full
expect(getResultText(resultMsgs[0]).length).toBe(5_000)
expect(getResultText(resultMsgs[0])).not.toContain('chars omitted')
expect(getResultText(resultMsgs[0])).not.toContain('[…truncated')
})
test('mcp__ prefixed tools ARE compactable (matches microCompact behavior)', () => {
const messages: Msg[] = [
{ role: 'user', content: 'start' },
{
role: 'assistant',
content: [
{ type: 'tool_use', id: 'mcp_1', name: 'mcp__github__get_issue', input: {} },
],
},
{
role: 'user',
content: [
{ type: 'tool_result', tool_use_id: 'mcp_1', content: bigText(5_000) },
],
},
...buildConversation(20, 100).slice(1),
]
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
// MCP tool result is compressed (gets stub since it's in old tier)
expect(getResultText(resultMsgs[0])).toMatch(/^\[mcp__github__get_issue .*chars omitted\]$/)
})
// ---------- skip already-cleared blocks ----------
test('blocks already cleared by microCompact are NOT re-compressed', () => {
const messages: Msg[] = [
{ role: 'user', content: 'start' },
{
role: 'assistant',
content: [{ type: 'tool_use', id: 'cleared_1', name: 'Read', input: {} }],
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'cleared_1',
content: '[Old tool result content cleared]', // microCompact's marker
},
],
},
...buildConversation(20, 100).slice(1),
]
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
// Already-cleared marker survives untouched (no double processing)
expect(getResultText(resultMsgs[0])).toBe('[Old tool result content cleared]')
})
test('extra block attributes (e.g. cache_control) preserved across rewrites', () => {
const cacheControl = { type: 'ephemeral' }
const messages: Msg[] = [
{ role: 'user', content: 'start' },
{
role: 'assistant',
content: [{ type: 'tool_use', id: 'toolu_cc', name: 'Read', input: {} }],
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'toolu_cc',
cache_control: cacheControl,
content: bigText(5_000),
},
],
},
...buildConversation(20, 100).slice(1),
]
const result = compressToolHistory(messages, 'gpt-4o')
const resultMsgs = getResultMessages(result)
const block = getResultBlock(resultMsgs[0]) as { cache_control?: unknown }
// The custom attribute survived the stub rewrite via ...block spread
expect(block.cache_control).toEqual(cacheControl)
})

View File

@@ -0,0 +1,255 @@
/**
* Compresses old tool_result content for stateless OpenAI-compatible providers
* (Copilot, Mistral, Ollama). Preserves all conversation structure — tool_use,
* tool_result pairing, text, thinking, and is_error all survive intact. Only
* the BULK text of older tool_results is shrunk to delay context saturation.
*
* Tier sizes scale with the model's effective context window via
* getEffectiveContextWindowSize() — same calculation used by auto-compact, so
* the two systems stay aligned.
*
* Complements (does not replace) microCompact.ts:
* - microCompact: time/cache-based, runs from query.ts, binary clear/keep,
* limited to Claude (cache editing) or idle gaps (time-based).
* - compressToolHistory: size-based, runs at the shim layer, tiered
* compression, covers the gap for active sessions on non-Claude providers.
*
* Reuses isCompactableTool from microCompact to avoid touching tools the
* project already classifies as unsafe to compress (e.g. Task, Agent).
* Skips blocks already cleared by microCompact (TOOL_RESULT_CLEARED_MESSAGE).
*
* Anthropic native bypasses both shims, so it is unaffected by this module.
*/
import { getEffectiveContextWindowSize } from '../compact/autoCompact.js'
import { isCompactableTool } from '../compact/microCompact.js'
import { TOOL_RESULT_CLEARED_MESSAGE } from '../../utils/toolResultStorage.js'
import { getGlobalConfig } from '../../utils/config.js'
// Mid-tier truncation budget. 2k chars ≈ 500 tokens, enough to preserve the
// shape of most tool outputs (file headers, command stderr, top grep hits)
// without ballooning context. Bump too high and the tier loses its purpose.
const MID_MAX_CHARS = 2_000
// Stub args budget. JSON.stringify of a typical tool input fits in 200 chars
// (file paths, short commands, small queries). Long inputs are rare and clamping
// here keeps the stub size bounded even when callers pass oversized arguments.
const STUB_ARGS_MAX_CHARS = 200
type AnyMessage = {
role?: string
message?: { role?: string; content?: unknown }
content?: unknown
}
type ToolResultBlock = {
type: 'tool_result'
tool_use_id?: string
is_error?: boolean
content?: unknown
}
type ToolUseBlock = {
type: 'tool_use'
id?: string
name?: string
input?: unknown
}
type Tiers = { recent: number; mid: number }
// Tier sizes scale with effective window. Targets roughly:
// - recent tier stays under ~25% of available window (full fidelity kept)
// - recent + mid tier stays under ~50% of available window (bounded bulk)
// - everything older collapses to ~15-token stubs
// Values assume ~5KB avg tool_result, which matches the Copilot default case
// (parallel_tool_calls=true means multiple Read/Bash outputs per turn). For
// ≥ 500k models the tiers are so generous that compression is effectively
// inert for any realistic session — see compressToolHistory.test.ts.
export function getTiers(effectiveWindow: number): Tiers {
if (effectiveWindow < 16_000) return { recent: 2, mid: 3 }
if (effectiveWindow < 32_000) return { recent: 3, mid: 5 }
if (effectiveWindow < 64_000) return { recent: 4, mid: 8 }
if (effectiveWindow < 128_000) return { recent: 5, mid: 10 }
if (effectiveWindow < 256_000) return { recent: 8, mid: 15 }
if (effectiveWindow < 500_000) return { recent: 12, mid: 25 }
return { recent: 25, mid: 50 }
}
function extractText(content: unknown): string {
if (typeof content === 'string') return content
if (Array.isArray(content)) {
return content
.filter(
(b: { type?: string; text?: string }) =>
b?.type === 'text' && typeof b.text === 'string',
)
.map((b: { text?: string }) => b.text ?? '')
.join('\n')
}
return ''
}
// Old-tier compression strategy. Replaces content entirely with a one-line
// metadata marker ~10× more token-efficient than a 500-char truncation AND
// unambiguous — partial truncations can look authoritative to the model. The
// stub format encodes tool name + args so the model can re-invoke the same
// tool if it needs the omitted output back.
function buildStub(
block: ToolResultBlock,
toolUsesById: Map<string, ToolUseBlock>,
): ToolResultBlock {
const original = extractText(block.content)
const toolUse = toolUsesById.get(block.tool_use_id ?? '')
const name = toolUse?.name ?? 'tool'
const args = toolUse?.input
? JSON.stringify(toolUse.input).slice(0, STUB_ARGS_MAX_CHARS)
: '{}'
return {
...block,
content: [
{
type: 'text',
text: `[${name} args=${args} → ${original.length} chars omitted]`,
},
],
}
}
// Mid-tier compression. The trailing marker is load-bearing: without it, the
// model can't distinguish "tool returned 2000 chars" from "tool returned 20k
// chars that we cut to 2000". Distinguishing those matters for the model's
// decision to re-invoke the tool.
function truncateBlock(
block: ToolResultBlock,
maxChars: number,
): ToolResultBlock {
const text = extractText(block.content)
if (text.length <= maxChars) return block
const omitted = text.length - maxChars
return {
...block,
content: [
{
type: 'text',
text: `${text.slice(0, maxChars)}\n[…truncated ${omitted} chars from tool history]`,
},
],
}
}
function getInner(msg: AnyMessage): { role?: string; content?: unknown } {
return (msg.message ?? msg) as { role?: string; content?: unknown }
}
function indexToolUses(messages: AnyMessage[]): Map<string, ToolUseBlock> {
const map = new Map<string, ToolUseBlock>()
for (const msg of messages) {
const content = getInner(msg).content
if (!Array.isArray(content)) continue
for (const b of content as Array<{ type?: string; id?: string }>) {
if (b?.type === 'tool_use' && b.id) {
map.set(b.id, b as ToolUseBlock)
}
}
}
return map
}
function indexToolResultMessages(messages: AnyMessage[]): number[] {
const indices: number[] = []
for (let i = 0; i < messages.length; i++) {
const inner = getInner(messages[i])
const role = inner.role ?? messages[i].role
const content = inner.content
if (
role === 'user' &&
Array.isArray(content) &&
content.some((b: { type?: string }) => b?.type === 'tool_result')
) {
indices.push(i)
}
}
return indices
}
function rewriteMessage<T extends AnyMessage>(
msg: T,
newContent: unknown[],
): T {
if (msg.message) {
return { ...msg, message: { ...msg.message, content: newContent } }
}
return { ...msg, content: newContent }
}
// microCompact.maybeTimeBasedMicrocompact may have already replaced old
// tool_result content with TOOL_RESULT_CLEARED_MESSAGE before we see it.
// Re-compressing would produce a stub over that marker (e.g. `[Read args={} → 40
// chars omitted]`), which is wasteful and less informative than the canonical marker.
function isAlreadyCleared(block: ToolResultBlock): boolean {
const text = extractText(block.content)
return text === TOOL_RESULT_CLEARED_MESSAGE
}
function shouldCompressBlock(
block: ToolResultBlock,
toolUsesById: Map<string, ToolUseBlock>,
): boolean {
if (isAlreadyCleared(block)) return false
const toolUse = toolUsesById.get(block.tool_use_id ?? '')
// Unknown tool name (orphan tool_result with no matching tool_use) falls
// through to compression with a generic "tool" stub. Safer default: the
// original tool_use vanished so there's no downstream use for the output.
if (!toolUse?.name) return true
// Respect microCompact's curated safe-to-compress set (Read/Bash/Grep/…/
// mcp__*) so user-facing flow tools (Task, Agent, custom) stay intact.
return isCompactableTool(toolUse.name)
}
export function compressToolHistory<T extends AnyMessage>(
messages: T[],
model: string,
): T[] {
// Master kill-switch. Returns the original reference so callers skip a
// defensive copy when the feature is disabled.
if (!getGlobalConfig().toolHistoryCompressionEnabled) return messages
const tiers = getTiers(getEffectiveContextWindowSize(model))
const toolResultIndices = indexToolResultMessages(messages)
const total = toolResultIndices.length
// If every tool-result fits in the recent tier, no boundary crosses; return
// the same reference for the same copy-elision reason.
if (total <= tiers.recent) return messages
// O(1) lookup: messageIndex → tool-result position (0 = oldest). Replaces
// the naive Array.indexOf(i) that was O(n²) across the .map below.
const positionByIndex = new Map<number, number>()
for (let pos = 0; pos < toolResultIndices.length; pos++) {
positionByIndex.set(toolResultIndices[pos], pos)
}
const toolUsesById = indexToolUses(messages)
return messages.map((msg, i) => {
const pos = positionByIndex.get(i)
if (pos === undefined) return msg
const fromEnd = total - 1 - pos
if (fromEnd < tiers.recent) return msg
const inMidWindow = fromEnd < tiers.recent + tiers.mid
const content = getInner(msg).content as unknown[]
const newContent = content.map(block => {
const b = block as { type?: string }
if (b?.type !== 'tool_result') return block
const tr = block as ToolResultBlock
if (!shouldCompressBlock(tr, toolUsesById)) return block
return inMidWindow
? truncateBlock(tr, MID_MAX_CHARS)
: buildStub(tr, toolUsesById)
})
return rewriteMessage(msg, newContent)
})
}

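For orientation, a minimal usage sketch of the module above. The tier numbers come straight from getTiers and the message shapes mirror the test helpers; the file path and exchange count are illustrative only.

import { compressToolHistory, getTiers } from './compressToolHistory.js'

// 100k effective window → { recent: 5, mid: 10 } (see getTiers above).
console.log(getTiers(100_000))

// With 20 Read exchanges of ~5k chars each, the newest 5 results pass through
// untouched, the middle 10 are truncated to MID_MAX_CHARS with a marker, and
// the oldest 5 collapse to one-line stubs like:
//   [Read args={"file_path":"/path/to/file0.ts"} → 5000 chars omitted]
const messages = [
  { role: 'user', content: 'start the work' },
  {
    role: 'assistant',
    content: [
      { type: 'tool_use', id: 'toolu_0', name: 'Read', input: { file_path: '/path/to/file0.ts' } },
    ],
  },
  {
    role: 'user',
    content: [{ type: 'tool_result', tool_use_id: 'toolu_0', content: 'x'.repeat(5_000) }],
  },
  // …plus 19 more exchanges of the same shape…
]
const compressed = compressToolHistory(messages, 'gpt-4o')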
View File

@@ -0,0 +1,44 @@
import { APIError } from '@anthropic-ai/sdk'
import { expect, test } from 'bun:test'
import { getAssistantMessageFromError } from './errors.js'
function getFirstText(message: ReturnType<typeof getAssistantMessageFromError>): string {
const first = message.message.content[0]
if (!first || typeof first !== 'object' || !('text' in first)) {
return ''
}
return typeof first.text === 'string' ? first.text : ''
}
test('maps endpoint_not_found category markers to actionable setup guidance', () => {
const error = APIError.generate(
404,
undefined,
'OpenAI API error 404: Not Found [openai_category=endpoint_not_found] Hint: Confirm OPENAI_BASE_URL includes /v1.',
new Headers(),
)
const message = getAssistantMessageFromError(error, 'qwen2.5-coder:7b')
const text = getFirstText(message)
expect(message.isApiErrorMessage).toBe(true)
expect(text).toContain('Provider endpoint was not found')
expect(text).toContain('OPENAI_BASE_URL')
expect(text).toContain('/v1')
})
test('maps tool_call_incompatible category markers to model/tool guidance', () => {
const error = APIError.generate(
400,
undefined,
'OpenAI API error 400: tool_calls are not supported [openai_category=tool_call_incompatible]',
new Headers(),
)
const message = getAssistantMessageFromError(error, 'qwen2.5-coder:7b')
const text = getFirstText(message)
expect(text).toContain('rejected tool-calling payloads')
expect(text).toContain('/model')
})

View File

@@ -50,9 +50,110 @@ import {
} from '../claudeAiLimits.js'
import { shouldProcessRateLimits } from '../rateLimitMocking.js' // Used for /mock-limits command
import { extractConnectionErrorDetails, formatAPIError } from './errorUtils.js'
import {
extractOpenAICategoryMarker,
type OpenAICompatibilityFailureCategory,
} from './openaiErrorClassification.js'
export const API_ERROR_MESSAGE_PREFIX = 'API Error'
function stripOpenAICompatibilityMetadata(message: string): string {
return message
.replace(/\s*\[openai_category=[a-z_]+\]\s*/g, ' ')
.replace(/\s{2,}/g, ' ')
.trim()
}
function mapOpenAICompatibilityFailureToAssistantMessage(options: {
category: OpenAICompatibilityFailureCategory
model: string
rawMessage: string
}): AssistantMessage {
const switchCmd = getIsNonInteractiveSession() ? '--model' : '/model'
const compactHint = getIsNonInteractiveSession()
? 'Reduce prompt size or start a new session.'
: 'Run /compact or start a new session with /new.'
switch (options.category) {
case 'localhost_resolution_failed':
case 'connection_refused':
return createAssistantAPIErrorMessage({
content:
'Could not connect to the local OpenAI-compatible provider. Ensure the local server is running, then use OPENAI_BASE_URL=http://127.0.0.1:11434/v1 for Ollama.',
error: 'unknown',
})
case 'endpoint_not_found':
return createAssistantAPIErrorMessage({
content:
'Provider endpoint was not found. Confirm OPENAI_BASE_URL targets an OpenAI-compatible /v1 endpoint (for Ollama: http://127.0.0.1:11434/v1).',
error: 'invalid_request',
})
case 'model_not_found':
return createAssistantAPIErrorMessage({
content: `The selected model (${options.model}) is not available on this provider. Run ${switchCmd} to choose another model, or verify installed local models (for Ollama: ollama list).`,
error: 'invalid_request',
})
case 'auth_invalid':
return createAssistantAPIErrorMessage({
content: `${API_ERROR_MESSAGE_PREFIX}: Authentication failed for your OpenAI-compatible provider. Verify OPENAI_API_KEY and endpoint-specific auth requirements.`,
error: 'authentication_failed',
})
case 'rate_limited':
return createAssistantAPIErrorMessage({
content: `${API_ERROR_MESSAGE_PREFIX}: Provider rate limit reached. Retry in a few seconds.`,
error: 'rate_limit',
})
case 'request_timeout':
return createAssistantAPIErrorMessage({
content: `${API_ERROR_MESSAGE_PREFIX}: Provider request timed out. Local models may be loading or overloaded; retry shortly or increase API_TIMEOUT_MS.`,
error: 'unknown',
})
case 'context_overflow':
return createAssistantAPIErrorMessage({
content: `The conversation exceeded the provider context limit. ${compactHint}`,
error: 'invalid_request',
})
case 'tool_call_incompatible':
return createAssistantAPIErrorMessage({
content: `The selected provider/model rejected tool-calling payloads. Try ${switchCmd} to pick a tool-capable model or continue without tools.`,
error: 'invalid_request',
})
case 'malformed_provider_response':
return createAssistantAPIErrorMessage({
content: `${API_ERROR_MESSAGE_PREFIX}: Provider returned a malformed response. Confirm endpoint compatibility and check local proxy/network middleware.`,
error: 'unknown',
errorDetails: stripOpenAICompatibilityMetadata(options.rawMessage),
})
case 'provider_unavailable':
return createAssistantAPIErrorMessage({
content: `${API_ERROR_MESSAGE_PREFIX}: Provider is temporarily unavailable. Retry in a moment.`,
error: 'unknown',
})
case 'network_error':
case 'unknown':
return createAssistantAPIErrorMessage({
content: `${API_ERROR_MESSAGE_PREFIX}: ${stripOpenAICompatibilityMetadata(options.rawMessage)}`,
error: 'unknown',
})
default:
return createAssistantAPIErrorMessage({
content: `${API_ERROR_MESSAGE_PREFIX}: ${stripOpenAICompatibilityMetadata(options.rawMessage)}`,
error: 'unknown',
})
}
}
export function startsWithApiErrorPrefix(text: string): boolean {
return (
text.startsWith(API_ERROR_MESSAGE_PREFIX) ||
@@ -457,6 +558,19 @@ export function getAssistantMessageFromError(
})
}
// OpenAI-compatible transport and HTTP failures include structured category
// markers from openaiShim.ts for actionable end-user remediation.
if (error instanceof APIError) {
const openaiCategory = extractOpenAICategoryMarker(error.message)
if (openaiCategory) {
return mapOpenAICompatibilityFailureToAssistantMessage({
category: openaiCategory,
model,
rawMessage: error.message,
})
}
}
// Check for emergency capacity off switch for Opus PAYG users
if (
error instanceof Error &&
@@ -924,6 +1038,30 @@ export function getAssistantMessageFromError(
})
}
// 500 errors caused by context overflow — the API returns 500 instead of 400
// when the request body (including conversation context) exceeds limits.
// This happens when auto-compact fails or the token estimation undercounts.
// Detect by checking for context-related keywords in 500 responses.
if (
error instanceof APIError &&
error.status >= 500 &&
(error.message.toLowerCase().includes('too many tokens') ||
error.message.toLowerCase().includes('request too large') ||
error.message.toLowerCase().includes('context length') ||
error.message.toLowerCase().includes('maximum context') ||
error.message.toLowerCase().includes('input length') ||
error.message.toLowerCase().includes('payload too large'))
) {
const rewindInstruction = getIsNonInteractiveSession()
? ''
: ' Press esc twice to go up a few messages, or run /compact to reduce context.'
return createAssistantAPIErrorMessage({
content: `The conversation has grown too large for the API to process.${rewindInstruction} Alternatively, start a new session with /new.`,
error: 'invalid_request',
errorDetails: `Context overflow (500): ${error.message}`,
})
}
// Connection errors (non-timeout) — use formatAPIError for detailed messages
if (error instanceof APIConnectionError) {
return createAssistantAPIErrorMessage({

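Taken together with the classifier module below, the flow is a two-step handshake: the shim embeds a category marker in the APIError message string, and getAssistantMessageFromError above recovers it and maps it to remediation text. A sketch of the round trip follows; the shim-side call site is an assumption (it is not part of these hunks), while the errors.ts side matches the code above.

import { APIError } from '@anthropic-ai/sdk'
import { getAssistantMessageFromError } from './errors.js'
import {
  buildOpenAICompatibilityErrorMessage,
  classifyOpenAIHttpFailure,
} from './openaiErrorClassification.js'

// 1. Shim side (assumed): classify the HTTP failure and embed the marker.
const failure = classifyOpenAIHttpFailure({ status: 404, body: 'Not Found' })
const wireMessage = buildOpenAICompatibilityErrorMessage(
  'OpenAI API error 404: Not Found',
  failure,
)
// → 'OpenAI API error 404: Not Found [openai_category=endpoint_not_found] Hint: …'

// 2. errors.ts side (shown above): extract the marker and map it to guidance.
const assistant = getAssistantMessageFromError(
  APIError.generate(404, undefined, wireMessage, new Headers()),
  'qwen2.5-coder:7b',
)
// assistant now carries the 'Provider endpoint was not found…' remediation text.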
View File

@@ -0,0 +1,86 @@
import { afterEach, beforeEach, expect, test } from 'bun:test'
import { _resetKeepAliveForTesting } from '../../utils/proxy.js'
import {
fetchWithProxyRetry,
isRetryableFetchError,
} from './fetchWithProxyRetry.js'
type FetchType = typeof globalThis.fetch
const originalFetch = globalThis.fetch
const originalEnv = {
HTTP_PROXY: process.env.HTTP_PROXY,
HTTPS_PROXY: process.env.HTTPS_PROXY,
}
function restoreEnv(key: 'HTTP_PROXY' | 'HTTPS_PROXY', value: string | undefined): void {
if (value === undefined) {
delete process.env[key]
} else {
process.env[key] = value
}
}
beforeEach(() => {
process.env.HTTP_PROXY = 'http://127.0.0.1:15236'
delete process.env.HTTPS_PROXY
_resetKeepAliveForTesting()
})
afterEach(() => {
globalThis.fetch = originalFetch
restoreEnv('HTTP_PROXY', originalEnv.HTTP_PROXY)
restoreEnv('HTTPS_PROXY', originalEnv.HTTPS_PROXY)
_resetKeepAliveForTesting()
})
test('isRetryableFetchError matches Bun socket-closed failures', () => {
expect(
isRetryableFetchError(
new Error(
'The socket connection was closed unexpectedly. For more information, pass `verbose: true` in the second argument to fetch()',
),
),
).toBe(true)
})
test('fetchWithProxyRetry retries once with keepalive disabled after socket closure', async () => {
const calls: Array<RequestInit | undefined> = []
globalThis.fetch = (async (_input, init) => {
calls.push(init)
if (calls.length === 1) {
throw new Error(
'The socket connection was closed unexpectedly. For more information, pass `verbose: true` in the second argument to fetch()',
)
}
return new Response('ok')
}) as FetchType
const response = await fetchWithProxyRetry('https://example.com/search', {
method: 'POST',
})
expect(await response.text()).toBe('ok')
expect(calls).toHaveLength(2)
expect((calls[0] as RequestInit & { proxy?: string }).proxy).toBe(
'http://127.0.0.1:15236',
)
expect((calls[0] as RequestInit).keepalive).toBeUndefined()
expect((calls[1] as RequestInit).keepalive).toBe(false)
})
test('fetchWithProxyRetry does not retry non-network errors', async () => {
let attempts = 0
globalThis.fetch = (async () => {
attempts += 1
throw new Error('400 bad request')
}) as FetchType
await expect(fetchWithProxyRetry('https://example.com')).rejects.toThrow(
'400 bad request',
)
expect(attempts).toBe(1)
})

View File

@@ -0,0 +1,44 @@
import { disableKeepAlive, getProxyFetchOptions } from '../../utils/proxy.js'
const RETRYABLE_FETCH_ERROR_PATTERN =
/socket connection was closed unexpectedly|ECONNRESET|EPIPE|socket hang up|Connection reset by peer|fetch failed/i
export function isRetryableFetchError(error: unknown): boolean {
if (!(error instanceof Error)) {
return false
}
if (error.name === 'AbortError') {
return false
}
return RETRYABLE_FETCH_ERROR_PATTERN.test(error.message)
}
export async function fetchWithProxyRetry(
input: string | URL | Request,
init?: RequestInit,
options?: { forAnthropicAPI?: boolean; maxAttempts?: number },
): Promise<Response> {
const maxAttempts = Math.max(1, options?.maxAttempts ?? 2)
let lastError: unknown
for (let attempt = 1; attempt <= maxAttempts; attempt++) {
try {
return await fetch(input, {
...init,
...getProxyFetchOptions({
forAnthropicAPI: options?.forAnthropicAPI,
}),
})
} catch (error) {
lastError = error
if (attempt >= maxAttempts || !isRetryableFetchError(error)) {
throw error
}
disableKeepAlive()
}
}
throw lastError instanceof Error
? lastError
: new Error('Fetch failed without an error object')
}

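The retry contract the tests above pin down, as a small usage sketch (the URL and body are illustrative): the first attempt is sent with the current proxy options; on a retryable transport error, disableKeepAlive() flips the shared keep-alive state so the single retry goes out with keepalive: false.

import { fetchWithProxyRetry } from './fetchWithProxyRetry.js'

// Two attempts by default; pass { maxAttempts: 1 } to opt out of the retry.
const response = await fetchWithProxyRetry(
  'https://api.example.test/v1/responses', // illustrative endpoint
  { method: 'POST', body: JSON.stringify({ ping: true }) },
  { maxAttempts: 2 },
)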
View File

@@ -0,0 +1,97 @@
import { expect, test } from 'bun:test'
import {
buildOpenAICompatibilityErrorMessage,
classifyOpenAIHttpFailure,
classifyOpenAINetworkFailure,
extractOpenAICategoryMarker,
formatOpenAICategoryMarker,
} from './openaiErrorClassification.js'
test('classifies localhost ECONNREFUSED as connection_refused', () => {
const error = Object.assign(new TypeError('fetch failed'), {
code: 'ECONNREFUSED',
})
const failure = classifyOpenAINetworkFailure(error, {
url: 'http://localhost:11434/v1/chat/completions',
})
expect(failure.category).toBe('connection_refused')
expect(failure.retryable).toBe(true)
expect(failure.code).toBe('ECONNREFUSED')
expect(failure.hint).toContain('local server is running')
})
test('classifies localhost ENOTFOUND as localhost_resolution_failed', () => {
const error = Object.assign(new TypeError('getaddrinfo ENOTFOUND localhost'), {
code: 'ENOTFOUND',
})
const failure = classifyOpenAINetworkFailure(error, {
url: 'http://localhost:11434/v1/chat/completions',
})
expect(failure.category).toBe('localhost_resolution_failed')
expect(failure.retryable).toBe(true)
expect(failure.code).toBe('ENOTFOUND')
expect(failure.hint).toContain('127.0.0.1')
})
test('classifies model-not-found 404 responses', () => {
const failure = classifyOpenAIHttpFailure({
status: 404,
body: 'The model qwen2.5-coder:7b was not found',
})
expect(failure.category).toBe('model_not_found')
expect(failure.retryable).toBe(false)
})
test('classifies generic 404 responses as endpoint_not_found', () => {
const failure = classifyOpenAIHttpFailure({
status: 404,
body: 'Not Found',
})
expect(failure.category).toBe('endpoint_not_found')
expect(failure.hint).toContain('/v1')
})
test('classifies context-overflow responses', () => {
const failure = classifyOpenAIHttpFailure({
status: 500,
body: 'request too large: maximum context length exceeded',
})
expect(failure.category).toBe('context_overflow')
expect(failure.retryable).toBe(false)
})
test('classifies tool compatibility failures', () => {
const failure = classifyOpenAIHttpFailure({
status: 400,
body: 'tool_calls are not supported by this model',
})
expect(failure.category).toBe('tool_call_incompatible')
})
test('embeds and extracts category markers in formatted messages', () => {
const marker = formatOpenAICategoryMarker('endpoint_not_found')
expect(marker).toBe('[openai_category=endpoint_not_found]')
const formatted = buildOpenAICompatibilityErrorMessage('OpenAI API error 404: Not Found', {
category: 'endpoint_not_found',
hint: 'Confirm OPENAI_BASE_URL includes /v1.',
})
expect(formatted).toContain('[openai_category=endpoint_not_found]')
expect(formatted).toContain('Hint: Confirm OPENAI_BASE_URL includes /v1.')
expect(extractOpenAICategoryMarker(formatted)).toBe('endpoint_not_found')
})
test('ignores unknown category markers during extraction', () => {
const malformed = 'OpenAI API error 500 [openai_category=totally_fake_category]'
expect(extractOpenAICategoryMarker(malformed)).toBeUndefined()
})

View File

@@ -0,0 +1,352 @@
export type OpenAICompatibilityFailureCategory =
| 'connection_refused'
| 'localhost_resolution_failed'
| 'request_timeout'
| 'network_error'
| 'auth_invalid'
| 'rate_limited'
| 'model_not_found'
| 'endpoint_not_found'
| 'context_overflow'
| 'tool_call_incompatible'
| 'malformed_provider_response'
| 'provider_unavailable'
| 'unknown'
export type OpenAICompatibilityFailure = {
source: 'network' | 'http'
category: OpenAICompatibilityFailureCategory
retryable: boolean
message: string
hint?: string
code?: string
status?: number
}
const OPENAI_CATEGORY_MARKER_PREFIX = '[openai_category='
const LOCALHOST_HOSTNAMES = new Set(['localhost', '127.0.0.1', '::1'])
const OPENAI_COMPATIBILITY_FAILURE_CATEGORIES: ReadonlySet<OpenAICompatibilityFailureCategory> =
new Set<OpenAICompatibilityFailureCategory>([
'connection_refused',
'localhost_resolution_failed',
'request_timeout',
'network_error',
'auth_invalid',
'rate_limited',
'model_not_found',
'endpoint_not_found',
'context_overflow',
'tool_call_incompatible',
'malformed_provider_response',
'provider_unavailable',
'unknown',
])
function isOpenAICompatibilityFailureCategory(
value: string,
): value is OpenAICompatibilityFailureCategory {
return OPENAI_COMPATIBILITY_FAILURE_CATEGORIES.has(
value as OpenAICompatibilityFailureCategory,
)
}
function getErrorCode(error: unknown): string | undefined {
let current: unknown = error
const maxDepth = 5
for (let depth = 0; depth < maxDepth; depth++) {
if (
current &&
typeof current === 'object' &&
'code' in current &&
typeof (current as { code?: unknown }).code === 'string'
) {
return (current as { code: string }).code
}
if (
current &&
typeof current === 'object' &&
'cause' in current &&
(current as { cause?: unknown }).cause !== current
) {
current = (current as { cause?: unknown }).cause
continue
}
break
}
return undefined
}
function getHostname(url: string): string | null {
try {
return new URL(url).hostname.toLowerCase()
} catch {
return null
}
}
function isLocalhostLikeHostname(hostname: string | null): boolean {
if (!hostname) return false
if (LOCALHOST_HOSTNAMES.has(hostname)) return true
return /^127\./.test(hostname)
}
function isContextOverflowMessage(body: string): boolean {
const lower = body.toLowerCase()
return (
lower.includes('too many tokens') ||
lower.includes('request too large') ||
lower.includes('context length') ||
lower.includes('maximum context') ||
lower.includes('input length') ||
lower.includes('payload too large') ||
lower.includes('prompt is too long')
)
}
function isToolCompatibilityMessage(body: string): boolean {
const lower = body.toLowerCase()
return (
lower.includes('tool_calls') ||
lower.includes('tool_call') ||
lower.includes('tool_use') ||
lower.includes('tool_result') ||
lower.includes('function calling') ||
lower.includes('function call')
)
}
function isMalformedProviderResponse(body: string): boolean {
const lower = body.toLowerCase()
return (
lower.includes('<!doctype html') ||
lower.includes('<html') ||
lower.includes('invalid json') ||
lower.includes('malformed') ||
lower.includes('unexpected token') ||
lower.includes('cannot parse') ||
lower.includes('not valid json')
)
}
function isModelNotFoundMessage(body: string): boolean {
const lower = body.toLowerCase()
return (
lower.includes('model') &&
(
lower.includes('not found') ||
lower.includes('does not exist') ||
lower.includes('unknown model') ||
lower.includes('unavailable model')
)
)
}
export function formatOpenAICategoryMarker(
category: OpenAICompatibilityFailureCategory,
): string {
return `${OPENAI_CATEGORY_MARKER_PREFIX}${category}]`
}
export function extractOpenAICategoryMarker(
message: string,
): OpenAICompatibilityFailureCategory | undefined {
const match = message.match(/\[openai_category=([a-z_]+)]/)
const category = match?.[1]
if (!category || !isOpenAICompatibilityFailureCategory(category)) {
return undefined
}
return category
}
export function buildOpenAICompatibilityErrorMessage(
baseMessage: string,
failure: Pick<OpenAICompatibilityFailure, 'category' | 'hint'>,
): string {
const marker = formatOpenAICategoryMarker(failure.category)
const hint = failure.hint ? ` Hint: ${failure.hint}` : ''
return `${baseMessage} ${marker}${hint}`
}
export function classifyOpenAINetworkFailure(
error: unknown,
options: { url: string },
): OpenAICompatibilityFailure {
const message = error instanceof Error ? error.message : String(error)
const lowerMessage = message.toLowerCase()
const code = getErrorCode(error)
const hostname = getHostname(options.url)
const isLocalHost = isLocalhostLikeHostname(hostname)
if (
code === 'ETIMEDOUT' ||
code === 'UND_ERR_CONNECT_TIMEOUT' ||
lowerMessage.includes('timeout') ||
lowerMessage.includes('timed out') ||
lowerMessage.includes('aborterror')
) {
return {
source: 'network',
category: 'request_timeout',
retryable: true,
message,
code,
hint: 'The provider took too long to respond. Check local model load time or increase API timeout.',
}
}
if (
isLocalHost &&
(
code === 'ENOTFOUND' ||
code === 'EAI_AGAIN' ||
lowerMessage.includes('getaddrinfo') ||
(code === undefined && lowerMessage.includes('fetch failed'))
)
) {
return {
source: 'network',
category: 'localhost_resolution_failed',
retryable: true,
message,
code,
hint: 'Localhost failed for this request. Retry with 127.0.0.1 and confirm Ollama is serving on the configured port.',
}
}
if (code === 'ECONNREFUSED') {
return {
source: 'network',
category: 'connection_refused',
retryable: true,
message,
code,
hint: isLocalHost
? 'Connection to the local provider was refused. Ensure the local server is running and listening on the configured port.'
: 'Connection was refused by the provider endpoint. Ensure the server is running and the port is correct.',
}
}
return {
source: 'network',
category: 'network_error',
retryable: true,
message,
code,
hint: 'Network transport failed before a provider response was received.',
}
}
export function classifyOpenAIHttpFailure(options: {
status: number
body: string
}): OpenAICompatibilityFailure {
const body = options.body ?? ''
if (options.status === 401 || options.status === 403) {
return {
source: 'http',
category: 'auth_invalid',
retryable: false,
status: options.status,
message: body,
hint: 'Authentication failed. Verify API key, token source, and endpoint-specific auth headers.',
}
}
if (options.status === 429) {
return {
source: 'http',
category: 'rate_limited',
retryable: true,
status: options.status,
message: body,
hint: 'Provider rate-limited the request. Retry after backoff.',
}
}
if (options.status === 404 && isModelNotFoundMessage(body)) {
return {
source: 'http',
category: 'model_not_found',
retryable: false,
status: options.status,
message: body,
hint: 'The selected model is not installed or not available on this endpoint.',
}
}
if (options.status === 404) {
return {
source: 'http',
category: 'endpoint_not_found',
retryable: false,
status: options.status,
message: body,
hint: 'Endpoint was not found. Confirm OPENAI_BASE_URL includes /v1 for OpenAI-compatible local providers.',
}
}
if (
options.status === 413 ||
((options.status === 400 || options.status >= 500) &&
isContextOverflowMessage(body))
) {
return {
source: 'http',
category: 'context_overflow',
retryable: false,
status: options.status,
message: body,
hint: 'Prompt context exceeded model/server limits. Reduce context or increase provider context length.',
}
}
if (options.status === 400 && isToolCompatibilityMessage(body)) {
return {
source: 'http',
category: 'tool_call_incompatible',
retryable: false,
status: options.status,
message: body,
hint: 'Provider/model rejected tool-calling payload. Retry without tools or use a tool-capable model.',
}
}
if (options.status >= 400 && isMalformedProviderResponse(body)) {
return {
source: 'http',
category: 'malformed_provider_response',
retryable: false,
status: options.status,
message: body,
hint: 'Provider returned malformed or non-JSON response where JSON was expected.',
}
}
if (options.status >= 500) {
return {
source: 'http',
category: 'provider_unavailable',
retryable: true,
status: options.status,
message: body,
hint: 'Provider reported a server-side failure. Retry after a short delay.',
}
}
return {
source: 'http',
category: 'unknown',
retryable: false,
status: options.status,
message: body,
}
}

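And the network-side counterpart to the HTTP classifier: a sketch of how a transport wrapper might attach markers to fetch-level failures. The wrapper itself is hypothetical; only the two imported functions come from the module above.

import {
  buildOpenAICompatibilityErrorMessage,
  classifyOpenAINetworkFailure,
} from './openaiErrorClassification.js'

// Hypothetical wrapper: classify transport failures and rethrow with a marker.
async function postWithClassification(url: string, body: unknown): Promise<Response> {
  try {
    return await fetch(url, { method: 'POST', body: JSON.stringify(body) })
  } catch (error) {
    const failure = classifyOpenAINetworkFailure(error, { url })
    // ECONNREFUSED against http://localhost:11434 classifies as
    // 'connection_refused' with a local-server hint (see tests above).
    throw new Error(
      buildOpenAICompatibilityErrorMessage(
        error instanceof Error ? error.message : String(error),
        failure,
      ),
    )
  }
}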
View File

@@ -0,0 +1,317 @@
import { afterEach, beforeEach, expect, mock, test } from 'bun:test'
import { createOpenAIShimClient } from './openaiShim.js'
type FetchType = typeof globalThis.fetch
const originalFetch = globalThis.fetch
const originalEnv = {
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_API_KEY: process.env.OPENAI_API_KEY,
OPENAI_MODEL: process.env.OPENAI_MODEL,
}
// Mock config + autoCompact so the shim sees deterministic state.
const mockState = {
enabled: true,
effectiveWindow: 100_000, // Copilot gpt-4o tier
}
mock.module('../../utils/config.js', () => ({
getGlobalConfig: () => ({
toolHistoryCompressionEnabled: mockState.enabled,
autoCompactEnabled: false,
}),
}))
mock.module('../compact/autoCompact.js', () => ({
getEffectiveContextWindowSize: () => mockState.effectiveWindow,
}))
type OpenAIShimClient = {
beta: {
messages: {
create: (
params: Record<string, unknown>,
options?: Record<string, unknown>,
) => Promise<unknown>
}
}
}
function bigText(n: number): string {
return 'A'.repeat(n)
}
function buildToolExchange(id: number, resultLength: number) {
return [
{
role: 'assistant',
content: [
{
type: 'tool_use',
id: `toolu_${id}`,
name: 'Read',
input: { file_path: `/path/to/file${id}.ts` },
},
],
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: `toolu_${id}`,
content: bigText(resultLength),
},
],
},
]
}
function buildLongConversation(numExchanges: number, resultLength = 5_000) {
const out: Array<{ role: string; content: unknown }> = [
{ role: 'user', content: 'start the work' },
]
for (let i = 0; i < numExchanges; i++) {
out.push(...buildToolExchange(i, resultLength))
}
return out
}
function makeFakeResponse(): Response {
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'gpt-4o',
choices: [
{
message: { role: 'assistant', content: 'done' },
finish_reason: 'stop',
},
],
usage: { prompt_tokens: 8, completion_tokens: 2, total_tokens: 10 },
}),
{ headers: { 'Content-Type': 'application/json' } },
)
}
beforeEach(() => {
process.env.OPENAI_BASE_URL = 'http://example.test/v1'
process.env.OPENAI_API_KEY = 'test-key'
delete process.env.OPENAI_MODEL
mockState.enabled = true
mockState.effectiveWindow = 100_000
})
afterEach(() => {
if (originalEnv.OPENAI_BASE_URL === undefined) delete process.env.OPENAI_BASE_URL
else process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
if (originalEnv.OPENAI_API_KEY === undefined) delete process.env.OPENAI_API_KEY
else process.env.OPENAI_API_KEY = originalEnv.OPENAI_API_KEY
if (originalEnv.OPENAI_MODEL === undefined) delete process.env.OPENAI_MODEL
else process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
globalThis.fetch = originalFetch
})
async function captureRequestBody(
messages: Array<{ role: string; content: unknown }>,
model: string,
): Promise<Record<string, unknown>> {
let captured: Record<string, unknown> | undefined
globalThis.fetch = (async (_input, init) => {
captured = JSON.parse(String(init?.body))
return makeFakeResponse()
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create({
model,
system: 'system prompt',
messages,
})
if (!captured) throw new Error('request not captured')
return captured
}
function getToolMessages(body: Record<string, unknown>): Array<{ content: string }> {
const messages = body.messages as Array<{ role: string; content: string }>
return messages.filter(m => m.role === 'tool')
}
function getAssistantToolCalls(body: Record<string, unknown>): unknown[] {
const messages = body.messages as Array<{
role: string
tool_calls?: unknown[]
}>
return messages
.filter(m => m.role === 'assistant' && Array.isArray(m.tool_calls))
.flatMap(m => m.tool_calls ?? [])
}
// ============================================================================
// BUG REPRO: without compression, full tool history is resent every turn
// ============================================================================
test('BUG REPRO: without compression, all 30 tool results are sent at full size', async () => {
mockState.enabled = false
const messages = buildLongConversation(30, 5_000)
const body = await captureRequestBody(messages, 'gpt-4o')
const toolMessages = getToolMessages(body)
const payloadSize = JSON.stringify(body).length
// All 30 tool results present, none truncated
expect(toolMessages.length).toBe(30)
for (const m of toolMessages) {
expect(m.content.length).toBeGreaterThanOrEqual(5_000)
expect(m.content).not.toContain('[…truncated')
expect(m.content).not.toContain('chars omitted')
}
// Total payload is large (~150KB raw) — this is the cost being paid every turn
expect(payloadSize).toBeGreaterThan(150_000)
})
// ============================================================================
// FIX: with compression, recent kept full, mid truncated, old stubbed
// ============================================================================
test('FIX: with compression on Copilot gpt-4o (tier 5/10/rest), 30 turns shrinks dramatically', async () => {
mockState.enabled = true
mockState.effectiveWindow = 100_000 // 64-128k → recent=5, mid=10
const messages = buildLongConversation(30, 5_000)
const body = await captureRequestBody(messages, 'gpt-4o')
const toolMessages = getToolMessages(body)
const payloadSize = JSON.stringify(body).length
// Structure preserved: still 30 tool messages, no orphan tool_calls
expect(toolMessages.length).toBe(30)
expect(getAssistantToolCalls(body).length).toBe(30)
// Tier breakdown (oldest → newest):
// indices 0..14 → old tier (stubs)
// indices 15..24 → mid tier (truncated)
// indices 25..29 → recent (full)
for (let i = 0; i <= 14; i++) {
expect(toolMessages[i].content).toMatch(/^\[Read args=.*chars omitted\]$/)
}
for (let i = 15; i <= 24; i++) {
expect(toolMessages[i].content).toContain('[…truncated')
}
for (let i = 25; i <= 29; i++) {
expect(toolMessages[i].content.length).toBe(5_000)
expect(toolMessages[i].content).not.toContain('[…truncated')
expect(toolMessages[i].content).not.toContain('chars omitted')
}
// Significant reduction: from ~150KB to <60KB (10 mid×2KB + structure overhead)
expect(payloadSize).toBeLessThan(60_000)
})
// ============================================================================
// FIX: large-context model gets generous tiers — compression effectively inert
// ============================================================================
test('FIX: gpt-4.1 (1M context) with 25 exchanges keeps all full (recent tier=25)', async () => {
mockState.enabled = true
mockState.effectiveWindow = 1_000_000 // ≥500k → recent=25, mid=50
const messages = buildLongConversation(25, 5_000)
const body = await captureRequestBody(messages, 'gpt-4.1')
const toolMessages = getToolMessages(body)
expect(toolMessages.length).toBe(25)
for (const m of toolMessages) {
expect(m.content.length).toBe(5_000)
expect(m.content).not.toContain('[…truncated')
expect(m.content).not.toContain('chars omitted')
}
})
test('FIX: gpt-4.1 (1M context) with 30 exchanges → only first 5 mid-truncated', async () => {
mockState.enabled = true
mockState.effectiveWindow = 1_000_000 // recent=25, mid=50
const messages = buildLongConversation(30, 5_000)
const body = await captureRequestBody(messages, 'gpt-4.1')
const toolMessages = getToolMessages(body)
// 30 total: indices 0..4 mid, indices 5..29 recent
for (let i = 0; i < 5; i++) {
expect(toolMessages[i].content).toContain('[…truncated')
}
for (let i = 5; i < 30; i++) {
expect(toolMessages[i].content.length).toBe(5_000)
}
})
// ============================================================================
// FIX: stub preserves tool name and args — model can re-invoke if needed
// ============================================================================
test('FIX: stub format includes original tool name and arguments', async () => {
mockState.enabled = true
mockState.effectiveWindow = 100_000
const messages = buildLongConversation(30, 5_000)
const body = await captureRequestBody(messages, 'gpt-4o')
const toolMessages = getToolMessages(body)
const oldestStub = toolMessages[0].content
// Format: [<tool_name> args=<json> → <N> chars omitted]
expect(oldestStub).toMatch(/^\[Read /)
expect(oldestStub).toMatch(/file_path/)
expect(oldestStub).toMatch(/→ 5000 chars omitted\]$/)
})
// ============================================================================
// FIX: tool_use blocks (assistant tool_calls) are never modified
// ============================================================================
test('FIX: every tool_call retains its full id, name, and arguments', async () => {
mockState.enabled = true
mockState.effectiveWindow = 100_000
const messages = buildLongConversation(30, 5_000)
const body = await captureRequestBody(messages, 'gpt-4o')
const toolCalls = getAssistantToolCalls(body) as Array<{
id: string
function: { name: string; arguments: string }
}>
expect(toolCalls.length).toBe(30)
for (let i = 0; i < toolCalls.length; i++) {
expect(toolCalls[i].id).toBe(`toolu_${i}`)
expect(toolCalls[i].function.name).toBe('Read')
expect(JSON.parse(toolCalls[i].function.arguments)).toEqual({
file_path: `/path/to/file${i}.ts`,
})
}
})
// ============================================================================
// FIX: small-context provider (Mistral 32k) gets aggressive compression
// ============================================================================
test('FIX: 32k window (Mistral tier) → recent=3 keeps last 3 only', async () => {
mockState.enabled = true
mockState.effectiveWindow = 24_000 // 1632k → recent=3, mid=5
const messages = buildLongConversation(15, 3_000)
const body = await captureRequestBody(messages, 'mistral-large-latest')
const toolMessages = getToolMessages(body)
// 15 total: indices 0..6 old, 7..11 mid, 12..14 recent
for (let i = 0; i <= 6; i++) {
expect(toolMessages[i].content).toContain('chars omitted')
}
for (let i = 7; i <= 11; i++) {
expect(toolMessages[i].content).toContain('[…truncated')
}
for (let i = 12; i <= 14; i++) {
expect(toolMessages[i].content.length).toBe(3_000)
}
})
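
The window→tier mapping these tests pin down (100k → recent=5/mid=10, ≥500k → recent=25/mid=50, 16-32k → recent=3/mid=5) suggests a small lookup like the sketch below. The exact breakpoints and the `CompressionTiers`/`tiersForWindow` names are assumptions read off the test expectations, not the shim's internal code.

// Sketch of the tier table implied by the tests above; names are hypothetical.
type CompressionTiers = { recent: number; mid: number }

function tiersForWindow(effectiveWindow: number): CompressionTiers {
  if (effectiveWindow >= 500_000) return { recent: 25, mid: 50 } // gpt-4.1-class
  if (effectiveWindow > 32_000) return { recent: 5, mid: 10 } // Copilot gpt-4o tier
  return { recent: 3, mid: 5 } // 16-32k local / Mistral tier
}

// Newest `recent` results stay full, the next `mid` are truncated, the rest stubbed.
function classifyToolResult(indexFromNewest: number, tiers: CompressionTiers) {
  if (indexFromNewest < tiers.recent) return 'full'
  if (indexFromNewest < tiers.recent + tiers.mid) return 'truncated'
  return 'stub'
}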

View File

@@ -0,0 +1,286 @@
import { afterEach, expect, mock, test } from 'bun:test'
const originalFetch = globalThis.fetch
const originalEnv = {
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_API_KEY: process.env.OPENAI_API_KEY,
OPENAI_MODEL: process.env.OPENAI_MODEL,
}
function restoreEnv(key: string, value: string | undefined): void {
if (value === undefined) {
delete process.env[key]
} else {
process.env[key] = value
}
}
afterEach(() => {
globalThis.fetch = originalFetch
restoreEnv('OPENAI_BASE_URL', originalEnv.OPENAI_BASE_URL)
restoreEnv('OPENAI_API_KEY', originalEnv.OPENAI_API_KEY)
restoreEnv('OPENAI_MODEL', originalEnv.OPENAI_MODEL)
mock.restore()
})
test('logs classified transport diagnostics with category and code', async () => {
const debugSpy = mock(() => {})
mock.module('../../utils/debug.js', () => ({
logForDebugging: debugSpy,
}))
const nonce = `${Date.now()}-${Math.random()}`
const { createOpenAIShimClient } = await import(`./openaiShim.ts?ts=${nonce}`)
process.env.OPENAI_BASE_URL = 'http://localhost:11434/v1'
process.env.OPENAI_API_KEY = 'ollama'
const transportError = Object.assign(new TypeError('fetch failed'), {
code: 'ECONNREFUSED',
})
globalThis.fetch = mock(async () => {
throw transportError
}) as typeof globalThis.fetch
const client = createOpenAIShimClient({}) as {
beta: {
messages: {
create: (params: Record<string, unknown>) => Promise<unknown>
}
}
}
await expect(
client.beta.messages.create({
model: 'qwen2.5-coder:7b',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
}),
).rejects.toThrow('openai_category=connection_refused')
const transportLog = debugSpy.mock.calls.find(call =>
typeof call?.[0] === 'string' && call[0].includes('transport failure'),
)
expect(transportLog).toBeDefined()
expect(String(transportLog?.[0])).toContain('category=connection_refused')
expect(String(transportLog?.[0])).toContain('code=ECONNREFUSED')
expect(transportLog?.[1]).toEqual({ level: 'warn' })
})
test('redacts credentials in transport diagnostic URL logs', async () => {
const debugSpy = mock(() => {})
mock.module('../../utils/debug.js', () => ({
logForDebugging: debugSpy,
}))
const nonce = `${Date.now()}-${Math.random()}`
const { createOpenAIShimClient } = await import(`./openaiShim.ts?ts=${nonce}`)
process.env.OPENAI_BASE_URL = 'http://user:supersecret@localhost:11434/v1'
process.env.OPENAI_API_KEY = 'supersecret'
const transportError = Object.assign(new TypeError('fetch failed'), {
code: 'ECONNREFUSED',
})
globalThis.fetch = mock(async () => {
throw transportError
}) as typeof globalThis.fetch
const client = createOpenAIShimClient({}) as {
beta: {
messages: {
create: (params: Record<string, unknown>) => Promise<unknown>
}
}
}
await expect(
client.beta.messages.create({
model: 'qwen2.5-coder:7b',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
}),
).rejects.toThrow('openai_category=connection_refused')
const transportLog = debugSpy.mock.calls.find(call =>
typeof call?.[0] === 'string' && call[0].includes('transport failure'),
)
expect(transportLog).toBeDefined()
const logLine = String(transportLog?.[0])
expect(logLine).toContain('url=http://redacted:redacted@localhost:11434/v1/chat/completions')
expect(logLine).not.toContain('user:supersecret')
expect(logLine).not.toContain('supersecret@')
})
test('logs self-heal localhost fallback with redacted from/to URLs', async () => {
const debugSpy = mock(() => {})
mock.module('../../utils/debug.js', () => ({
logForDebugging: debugSpy,
}))
const nonce = `${Date.now()}-${Math.random()}`
const { createOpenAIShimClient } = await import(`./openaiShim.ts?ts=${nonce}`)
process.env.OPENAI_BASE_URL = 'http://user:supersecret@localhost:11434/v1'
process.env.OPENAI_API_KEY = 'supersecret'
globalThis.fetch = mock(async (input: string | Request) => {
const url = typeof input === 'string' ? input : input.url
if (url.includes('localhost')) {
throw Object.assign(new TypeError('fetch failed'), {
code: 'ENOTFOUND',
})
}
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'qwen2.5-coder:7b',
choices: [
{
message: {
role: 'assistant',
content: 'ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 5,
completion_tokens: 2,
total_tokens: 7,
},
}),
{
status: 200,
headers: {
'Content-Type': 'application/json',
},
},
)
}) as typeof globalThis.fetch
const client = createOpenAIShimClient({}) as {
beta: {
messages: {
create: (params: Record<string, unknown>) => Promise<unknown>
}
}
}
await expect(
client.beta.messages.create({
model: 'qwen2.5-coder:7b',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
}),
).resolves.toBeDefined()
const fallbackLog = debugSpy.mock.calls.find(call =>
typeof call?.[0] === 'string' &&
call[0].includes('self-heal retry reason=localhost_resolution_failed'),
)
expect(fallbackLog).toBeDefined()
const logLine = String(fallbackLog?.[0])
expect(logLine).toContain('from=http://redacted:redacted@localhost:11434/v1/chat/completions')
expect(logLine).toContain('to=http://redacted:redacted@127.0.0.1:11434/v1/chat/completions')
expect(logLine).not.toContain('supersecret')
})
test('logs self-heal toolless retry for local tool-call incompatibility', async () => {
const debugSpy = mock(() => {})
mock.module('../../utils/debug.js', () => ({
logForDebugging: debugSpy,
}))
const nonce = `${Date.now()}-${Math.random()}`
const { createOpenAIShimClient } = await import(`./openaiShim.ts?ts=${nonce}`)
process.env.OPENAI_BASE_URL = 'http://localhost:11434/v1'
process.env.OPENAI_API_KEY = 'ollama'
let callCount = 0
globalThis.fetch = mock(async () => {
callCount += 1
if (callCount === 1) {
return new Response('tool_calls are not supported', {
status: 400,
headers: {
'Content-Type': 'text/plain',
},
})
}
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'qwen2.5-coder:7b',
choices: [
{
message: {
role: 'assistant',
content: 'ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 7,
completion_tokens: 3,
total_tokens: 10,
},
}),
{
status: 200,
headers: {
'Content-Type': 'application/json',
},
},
)
}) as typeof globalThis.fetch
const client = createOpenAIShimClient({}) as {
beta: {
messages: {
create: (params: Record<string, unknown>) => Promise<unknown>
}
}
}
await expect(
client.beta.messages.create({
model: 'qwen2.5-coder:7b',
messages: [{ role: 'user', content: 'hello' }],
tools: [
{
name: 'Read',
description: 'Read file',
input_schema: {
type: 'object',
properties: {
filePath: { type: 'string' },
},
required: ['filePath'],
},
},
],
max_tokens: 64,
stream: false,
}),
).resolves.toBeDefined()
const fallbackLog = debugSpy.mock.calls.find(call =>
typeof call?.[0] === 'string' &&
call[0].includes('self-heal retry reason=tool_call_incompatible mode=toolless'),
)
expect(fallbackLog).toBeDefined()
expect(fallbackLog?.[1]).toEqual({ level: 'warn' })
})
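
The redaction these tests assert (userinfo replaced with redacted:redacted, everything else intact) can be expressed with the WHATWG URL API; a minimal sketch under that assumption follows. The helper name is hypothetical.

// Hypothetical helper matching the assertions above: strip credentials, keep shape.
function redactUrlCredentials(rawUrl: string): string {
  try {
    const parsed = new URL(rawUrl)
    if (parsed.username || parsed.password) {
      parsed.username = 'redacted'
      parsed.password = 'redacted'
    }
    return parsed.toString()
  } catch {
    return rawUrl // not a parseable URL; log as-is rather than throw
  }
}

// redactUrlCredentials('http://user:supersecret@localhost:11434/v1')
//   → 'http://redacted:redacted@localhost:11434/v1'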

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,225 @@
import { afterEach, describe, expect, mock, test } from 'bun:test'
import { mkdirSync, mkdtempSync, rmSync, writeFileSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import * as realOs from 'node:os'
function makeJwt(payload: Record<string, unknown>): string {
const header = Buffer.from(JSON.stringify({ alg: 'none', typ: 'JWT' }))
.toString('base64url')
const body = Buffer.from(JSON.stringify(payload)).toString('base64url')
return `${header}.${body}.signature`
}
describe('resolveCodexApiCredentials with secure storage', () => {
afterEach(() => {
mock.restore()
})
test('loads Codex credentials from OpenClaude secure storage', async () => {
mock.module('../../utils/codexCredentials.js', () => ({
isCodexRefreshFailureCoolingDown: () => false,
readCodexCredentials: () => ({
apiKey: 'codex-api-key-token',
accessToken: 'header.payload.signature',
accountId: 'acct_secure',
}),
}))
// @ts-expect-error cache-busting query string for Bun module mocks
const { resolveCodexApiCredentials } = await import(
'./providerConfig.js?codex-secure-storage'
)
const credentials = resolveCodexApiCredentials({} as NodeJS.ProcessEnv)
expect(credentials.apiKey).toBe('codex-api-key-token')
expect(credentials.accountId).toBe('acct_secure')
expect(credentials.source).toBe('secure-storage')
})
test('prefers explicit env credentials over secure storage', async () => {
mock.module('../../utils/codexCredentials.js', () => ({
isCodexRefreshFailureCoolingDown: () => false,
readCodexCredentials: () => ({
accessToken: 'stored-token',
accountId: 'acct_stored',
}),
}))
// @ts-expect-error cache-busting query string for Bun module mocks
const { resolveCodexApiCredentials } = await import(
'./providerConfig.js?codex-env-precedence'
)
const credentials = resolveCodexApiCredentials({
CODEX_API_KEY: 'env-token',
CHATGPT_ACCOUNT_ID: 'acct_env',
} as NodeJS.ProcessEnv)
expect(credentials.apiKey).toBe('env-token')
expect(credentials.accountId).toBe('acct_env')
expect(credentials.source).toBe('env')
})
test('parses nested chatgpt_account_id from a CODEX_API_KEY JWT', async () => {
mock.module('../../utils/codexCredentials.js', () => ({
isCodexRefreshFailureCoolingDown: () => false,
readCodexCredentials: () => undefined,
}))
// @ts-expect-error cache-busting query string for Bun module mocks
const { resolveCodexApiCredentials } = await import(
'./providerConfig.js?codex-env-nested-account'
)
const credentials = resolveCodexApiCredentials({
CODEX_API_KEY: makeJwt({
'https://api.openai.com/auth': {
chatgpt_account_id: 'acct_nested_env',
},
}),
} as NodeJS.ProcessEnv)
expect(credentials.accountId).toBe('acct_nested_env')
expect(credentials.source).toBe('env')
})
test('parses nested chatgpt_account_id from auth.json tokens', async () => {
mock.module('../../utils/codexCredentials.js', () => ({
isCodexRefreshFailureCoolingDown: () => false,
readCodexCredentials: () => undefined,
}))
const tempDir = mkdtempSync(join(tmpdir(), 'openclaude-codex-auth-'))
const authPath = join(tempDir, 'auth.json')
writeFileSync(
authPath,
JSON.stringify({
openai_api_key: makeJwt({
'https://api.openai.com/auth': {
chatgpt_account_id: 'acct_nested_auth_json',
},
}),
}),
'utf8',
)
try {
// @ts-expect-error cache-busting query string for Bun module mocks
const { resolveCodexApiCredentials } = await import(
'./providerConfig.js?codex-auth-json-nested-account'
)
const credentials = resolveCodexApiCredentials({
CODEX_AUTH_JSON_PATH: authPath,
} as NodeJS.ProcessEnv)
expect(credentials.accountId).toBe('acct_nested_auth_json')
expect(credentials.source).toBe('auth.json')
} finally {
rmSync(tempDir, { force: true, recursive: true })
}
})
test('does not read default auth.json when secure storage already has Codex credentials', async () => {
mock.module('../../utils/codexCredentials.js', () => ({
isCodexRefreshFailureCoolingDown: () => false,
readCodexCredentials: () => ({
apiKey: 'codex-api-key-token',
accessToken: 'header.payload.signature',
accountId: 'acct_secure',
}),
}))
// @ts-expect-error cache-busting query string for Bun module mocks
const { resolveCodexApiCredentials } = await import(
'./providerConfig.js?codex-secure-storage-no-auth-io'
)
const credentials = resolveCodexApiCredentials({} as NodeJS.ProcessEnv)
expect(credentials.apiKey).toBe('codex-api-key-token')
expect(credentials.accountId).toBe('acct_secure')
expect(credentials.source).toBe('secure-storage')
})
test('falls back to the default auth.json when stored Codex refresh is cooling down', async () => {
const tempHomeDir = mkdtempSync(join(tmpdir(), 'openclaude-codex-home-'))
const authJson = JSON.stringify({
openai_api_key: makeJwt({
'https://api.openai.com/auth': {
chatgpt_account_id: 'acct_auth_json',
},
}),
})
mkdirSync(join(tempHomeDir, '.codex'), { recursive: true })
writeFileSync(join(tempHomeDir, '.codex', 'auth.json'), authJson, 'utf8')
mock.module('node:os', () => ({
...realOs,
homedir: () => tempHomeDir,
}))
mock.module('../../utils/codexCredentials.js', () => ({
isCodexRefreshFailureCoolingDown: () => true,
readCodexCredentials: () => ({
accessToken: 'stored-token',
refreshToken: 'refresh-stored',
accountId: 'acct_stored',
lastRefreshFailureAt: Date.now(),
}),
}))
// @ts-expect-error cache-busting query string for Bun module mocks
const { resolveCodexApiCredentials } = await import(
'./providerConfig.js?codex-refresh-cooldown-fallback'
)
try {
const credentials = resolveCodexApiCredentials({} as NodeJS.ProcessEnv)
expect(credentials.source).toBe('auth.json')
expect(credentials.accountId).toBe('acct_auth_json')
expect(credentials.apiKey).not.toBe('stored-token')
} finally {
rmSync(tempHomeDir, { force: true, recursive: true })
}
})
test('preserves the stored account id when auth.json fallback lacks one', async () => {
const tempHomeDir = mkdtempSync(join(tmpdir(), 'openclaude-codex-home-'))
const authJson = JSON.stringify({
openai_api_key: 'auth-json-access-token',
})
mkdirSync(join(tempHomeDir, '.codex'), { recursive: true })
writeFileSync(join(tempHomeDir, '.codex', 'auth.json'), authJson, 'utf8')
mock.module('node:os', () => ({
...realOs,
homedir: () => tempHomeDir,
}))
mock.module('../../utils/codexCredentials.js', () => ({
isCodexRefreshFailureCoolingDown: () => true,
readCodexCredentials: () => ({
accessToken: 'stored-token',
refreshToken: 'refresh-stored',
accountId: 'acct_stored',
lastRefreshFailureAt: Date.now(),
}),
}))
// @ts-expect-error cache-busting query string for Bun module mocks
const { resolveCodexApiCredentials } = await import(
'./providerConfig.js?codex-refresh-cooldown-account-id-fallback'
)
try {
const credentials = resolveCodexApiCredentials({} as NodeJS.ProcessEnv)
expect(credentials.source).toBe('auth.json')
expect(credentials.apiKey).toBe('auth-json-access-token')
expect(credentials.accountId).toBe('acct_stored')
} finally {
rmSync(tempHomeDir, { force: true, recursive: true })
}
})
})
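
These tests exercise the nested chatgpt_account_id claim via the unsigned JWTs that makeJwt above produces. A sketch of the decode-and-read step, assuming that base64url payload layout; the helper name is hypothetical and the real claim parsing lives in codexOAuthShared.

// Hypothetical reader for the nested claim exercised by the tests above.
function readNestedChatgptAccountId(token: string): string | undefined {
  const parts = token.split('.')
  if (parts.length < 2) return undefined
  try {
    const payload = JSON.parse(
      Buffer.from(parts[1], 'base64url').toString('utf8'),
    ) as Record<string, unknown>
    const auth = payload['https://api.openai.com/auth']
    if (auth && typeof auth === 'object') {
      const id = (auth as Record<string, unknown>).chatgpt_account_id
      if (typeof id === 'string' && id.trim()) return id.trim()
    }
    return undefined
  } catch {
    return undefined
  }
}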

View File

@@ -0,0 +1,107 @@
import { afterEach, expect, mock, test } from 'bun:test'
const originalEnv = {
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
CLAUDE_CODE_USE_MISTRAL: process.env.CLAUDE_CODE_USE_MISTRAL,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_MODEL: process.env.OPENAI_MODEL,
OPENAI_API_BASE: process.env.OPENAI_API_BASE,
MISTRAL_BASE_URL: process.env.MISTRAL_BASE_URL,
MISTRAL_MODEL: process.env.MISTRAL_MODEL,
}
function restoreEnv(key: string, value: string | undefined): void {
if (value === undefined) {
delete process.env[key]
} else {
process.env[key] = value
}
}
afterEach(() => {
restoreEnv('CLAUDE_CODE_USE_OPENAI', originalEnv.CLAUDE_CODE_USE_OPENAI)
restoreEnv('CLAUDE_CODE_USE_MISTRAL', originalEnv.CLAUDE_CODE_USE_MISTRAL)
restoreEnv('OPENAI_BASE_URL', originalEnv.OPENAI_BASE_URL)
restoreEnv('OPENAI_MODEL', originalEnv.OPENAI_MODEL)
restoreEnv('OPENAI_API_BASE', originalEnv.OPENAI_API_BASE)
restoreEnv('MISTRAL_BASE_URL', originalEnv.MISTRAL_BASE_URL)
restoreEnv('MISTRAL_MODEL', originalEnv.MISTRAL_MODEL)
mock.restore()
})
test('logs a warning when OPENAI_BASE_URL is literal undefined', async () => {
const debugSpy = mock(() => {})
mock.module('../../utils/debug.js', () => ({
logForDebugging: debugSpy,
}))
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL = 'undefined'
process.env.OPENAI_MODEL = 'gpt-4o'
delete process.env.OPENAI_API_BASE
const nonce = `${Date.now()}-${Math.random()}`
const { resolveProviderRequest } = await import(`./providerConfig.ts?ts=${nonce}`)
const resolved = resolveProviderRequest()
expect(resolved.baseUrl).toBe('https://api.openai.com/v1')
const warningCall = debugSpy.mock.calls.find(call =>
typeof call?.[0] === 'string' &&
call[0].includes('OPENAI_BASE_URL') &&
call[0].includes('"undefined"'),
)
expect(warningCall).toBeDefined()
expect(warningCall?.[1]).toEqual({ level: 'warn' })
})
test('does not warn for OPENAI_API_BASE when OPENAI_BASE_URL is active', async () => {
const debugSpy = mock(() => {})
mock.module('../../utils/debug.js', () => ({
logForDebugging: debugSpy,
}))
process.env.CLAUDE_CODE_USE_OPENAI = '1'
delete process.env.CLAUDE_CODE_USE_MISTRAL
process.env.OPENAI_BASE_URL = 'http://127.0.0.1:11434/v1'
process.env.OPENAI_MODEL = 'qwen2.5-coder:7b'
process.env.OPENAI_API_BASE = 'undefined'
const nonce = `${Date.now()}-${Math.random()}`
const { resolveProviderRequest } = await import(`./providerConfig.ts?ts=${nonce}`)
const resolved = resolveProviderRequest()
expect(resolved.baseUrl).toBe('http://127.0.0.1:11434/v1')
const aliasWarning = debugSpy.mock.calls.find(call =>
typeof call?.[0] === 'string' &&
call[0].includes('OPENAI_API_BASE') &&
call[0].includes('"undefined"'),
)
expect(aliasWarning).toBeUndefined()
})
test('uses OPENAI_API_BASE as fallback in mistral mode when MISTRAL_BASE_URL is unset', async () => {
const debugSpy = mock(() => {})
mock.module('../../utils/debug.js', () => ({
logForDebugging: debugSpy,
}))
delete process.env.CLAUDE_CODE_USE_OPENAI
process.env.CLAUDE_CODE_USE_MISTRAL = '1'
delete process.env.MISTRAL_BASE_URL
process.env.MISTRAL_MODEL = 'mistral-medium-latest'
process.env.OPENAI_API_BASE = 'http://127.0.0.1:11434/v1'
const nonce = `${Date.now()}-${Math.random()}`
const { resolveProviderRequest } = await import(`./providerConfig.ts?ts=${nonce}`)
const resolved = resolveProviderRequest()
expect(resolved.baseUrl).toBe('http://127.0.0.1:11434/v1')
expect(debugSpy.mock.calls).toHaveLength(0)
})

View File

@@ -2,8 +2,10 @@ import { afterEach, expect, test } from 'bun:test'
import {
getAdditionalModelOptionsCacheScope,
getLocalProviderRetryBaseUrls,
isLocalProviderUrl,
resolveProviderRequest,
shouldAttemptLocalToollessRetry,
} from './providerConfig.js'
const originalEnv = {
@@ -83,3 +85,42 @@ test('skips local model cache scope for remote openai-compatible providers', ()
expect(getAdditionalModelOptionsCacheScope()).toBeNull()
})
test('derives local retry base URLs with /v1 and loopback fallback candidates', () => {
expect(getLocalProviderRetryBaseUrls('http://localhost:11434')).toEqual([
'http://localhost:11434/v1',
'http://127.0.0.1:11434',
'http://127.0.0.1:11434/v1',
])
})
test('does not derive local retry base URLs for remote providers', () => {
expect(getLocalProviderRetryBaseUrls('https://api.openai.com/v1')).toEqual([])
})
test('enables local toolless retry for likely Ollama endpoints with tools', () => {
expect(
shouldAttemptLocalToollessRetry({
baseUrl: 'http://localhost:11434/v1',
hasTools: true,
}),
).toBe(true)
})
test('disables local toolless retry when no tools are present', () => {
expect(
shouldAttemptLocalToollessRetry({
baseUrl: 'http://localhost:11434/v1',
hasTools: false,
}),
).toBe(false)
})
test('disables local toolless retry for non-Ollama local endpoints', () => {
expect(
shouldAttemptLocalToollessRetry({
baseUrl: 'http://localhost:1234/v1',
hasTools: true,
}),
).toBe(false)
})
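
Taken together with the transport classifier earlier in this diff, these helpers suggest a self-heal loop shaped roughly like the sketch below. The loop itself is illustrative; the real retry lives inside the shim's request path and only continues for retryable categories such as localhost_resolution_failed.

// Illustrative self-heal loop over fallback base URLs; not the shim's actual code.
async function fetchWithLocalFallbacks(
  baseUrl: string,
  doRequest: (url: string) => Promise<Response>,
): Promise<Response> {
  const candidates = [baseUrl, ...getLocalProviderRetryBaseUrls(baseUrl)]
  let lastError: unknown
  for (const candidate of candidates) {
    try {
      return await doRequest(candidate)
    } catch (error) {
      lastError = error
      // A real implementation would classify the failure here and bail out
      // early for non-retryable categories.
    }
  }
  throw lastError
}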

View File

@@ -0,0 +1,107 @@
import { afterEach, expect, mock, test } from 'bun:test'
import { mkdtempSync, rmSync, writeFileSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { resolveRuntimeCodexCredentials } from './providerConfig.js'
afterEach(() => {
mock.restore()
})
function makeJwt(payload: Record<string, unknown>): string {
const header = Buffer.from(JSON.stringify({ alg: 'none', typ: 'JWT' }))
.toString('base64url')
const body = Buffer.from(JSON.stringify(payload)).toString('base64url')
return `${header}.${body}.signature`
}
test('runtime credential resolution honors explicit auth.json over stored secure-storage tokens', () => {
const tempDir = mkdtempSync(join(tmpdir(), 'openclaude-codex-explicit-auth-'))
const authPath = join(tempDir, 'auth.json')
writeFileSync(
authPath,
JSON.stringify({
openai_api_key: makeJwt({
'https://api.openai.com/auth': {
chatgpt_account_id: 'acct_explicit_auth_json',
},
}),
}),
'utf8',
)
try {
const credentials = resolveRuntimeCodexCredentials({
env: {
CODEX_AUTH_JSON_PATH: authPath,
} as NodeJS.ProcessEnv,
storedCredentials: {
apiKey: 'stored-api-key',
accessToken: 'stored-access-token',
accountId: 'acct_stored',
},
})
expect(credentials.source).toBe('auth.json')
expect(credentials.accountId).toBe('acct_explicit_auth_json')
expect(credentials.apiKey).not.toBe('stored-api-key')
} finally {
rmSync(tempDir, { force: true, recursive: true })
}
})
test('runtime credential resolution preserves an explicit auth.json path even when it is missing', () => {
const tempDir = mkdtempSync(join(tmpdir(), 'openclaude-codex-missing-auth-'))
const authPath = join(tempDir, 'missing-auth.json')
try {
const credentials = resolveRuntimeCodexCredentials({
env: {
CODEX_AUTH_JSON_PATH: authPath,
} as NodeJS.ProcessEnv,
storedCredentials: {
apiKey: 'stored-api-key',
accessToken: 'stored-access-token',
accountId: 'acct_stored',
},
})
expect(credentials.source).toBe('none')
expect(credentials.authPath).toBe(authPath)
expect(credentials.apiKey).toBe('')
} finally {
rmSync(tempDir, { force: true, recursive: true })
}
})
test('runtime credential resolution avoids sync secure-storage reads when async credentials are provided', async () => {
let syncReadCalled = false
mock.module('../../utils/codexCredentials.js', () => ({
isCodexRefreshFailureCoolingDown: () => false,
readCodexCredentials: () => {
syncReadCalled = true
throw new Error('sync secure-storage read should not run in runtime resolution')
},
}))
// @ts-expect-error cache-busting query string for Bun module mocks
const { resolveRuntimeCodexCredentials } = await import(
'./providerConfig.js?runtime-no-sync-secure-storage'
)
const credentials = resolveRuntimeCodexCredentials({
env: {} as NodeJS.ProcessEnv,
storedCredentials: {
accessToken: 'stored-access-token',
accountId: 'acct_stored',
},
})
expect(syncReadCalled).toBe(false)
expect(credentials.source).toBe('secure-storage')
expect(credentials.apiKey).toBe('stored-access-token')
expect(credentials.accountId).toBe('acct_stored')
})

View File

@@ -3,13 +3,25 @@ import { isIP } from 'node:net'
import { homedir } from 'node:os'
import { join } from 'node:path'
import {
isCodexRefreshFailureCoolingDown,
readCodexCredentials,
type CodexCredentialBlob,
} from '../../utils/codexCredentials.js'
import { logForDebugging } from '../../utils/debug.js'
import { isEnvTruthy } from '../../utils/envUtils.js'
import {
asTrimmedString,
parseChatgptAccountId,
} from './codexOAuthShared.js'
import { DEFAULT_GEMINI_BASE_URL } from 'src/utils/providerProfile.js'
export const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'
export const DEFAULT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex'
export const DEFAULT_MISTRAL_BASE_URL = 'https://api.mistral.ai/v1'
/** Default GitHub Copilot API model when user selects copilot / github:copilot */
export const DEFAULT_GITHUB_MODELS_API_MODEL = 'gpt-4o'
const warnedUndefinedEnvNames = new Set<string>()
const CODEX_ALIAS_MODELS: Record<
string,
@@ -60,6 +72,8 @@ const CODEX_ALIAS_MODELS: Record<
type CodexAlias = keyof typeof CODEX_ALIAS_MODELS
type ReasoningEffort = 'low' | 'medium' | 'high' | 'xhigh'
const OPENAI_CODEX_SHORTCUT_ALIASES = new Set(['codexplan', 'codexspark'])
export type ProviderTransport = 'chat_completions' | 'codex_responses'
export type ResolvedProviderRequest = {
@@ -76,7 +90,7 @@ export type ResolvedCodexCredentials = {
apiKey: string
accountId?: string
authPath?: string
source: 'env' | 'auth.json' | 'none'
source: 'env' | 'secure-storage' | 'auth.json' | 'none'
}
type ModelDescriptor = {
@@ -112,19 +126,39 @@ function isPrivateIpv6Address(hostname: string): boolean {
return (prefix & 0xfe00) === 0xfc00 || (prefix & 0xffc0) === 0xfe80
}
function asTrimmedString(value: unknown): string | undefined {
if (typeof value !== 'string') return undefined
const trimmed = value.trim()
return trimmed ? trimmed : undefined
}
// Reads an env-var-style string intended as a URL or path, rejecting both
// empty strings and the literal string "undefined" that Windows shells can
// write when a variable is unset-then-referenced without quotes (issue #336).
function asEnvUrl(value: string | undefined): string | undefined {
if (!value) return undefined
const trimmed = value.trim()
if (!trimmed || trimmed === 'undefined') return undefined
if (!trimmed) return undefined
if (trimmed === 'undefined') {
return undefined
}
return trimmed
}
function asNamedEnvUrl(
value: string | undefined,
envName: string,
): string | undefined {
if (!value) return undefined
const trimmed = value.trim()
if (!trimmed) return undefined
if (trimmed === 'undefined') {
if (!warnedUndefinedEnvNames.has(envName)) {
warnedUndefinedEnvNames.add(envName)
logForDebugging(
`[provider-config] Environment variable ${envName} is the literal string "undefined"; ignoring it.`,
{ level: 'warn' },
)
}
return undefined
}
return trimmed
}
@@ -149,23 +183,6 @@ function readNestedString(
return undefined
}
function decodeJwtPayload(token: string): Record<string, unknown> | undefined {
const parts = token.split('.')
if (parts.length < 2) return undefined
try {
const normalized = parts[1].replace(/-/g, '+').replace(/_/g, '/')
const padded = normalized + '='.repeat((4 - (normalized.length % 4)) % 4)
const json = Buffer.from(padded, 'base64').toString('utf8')
const parsed = JSON.parse(json)
return parsed && typeof parsed === 'object'
? (parsed as Record<string, unknown>)
: undefined
} catch {
return undefined
}
}
function parseReasoningEffort(value: string | undefined): ReasoningEffort | undefined {
if (!value) return undefined
const normalized = value.trim().toLowerCase()
@@ -220,6 +237,12 @@ export function isCodexAlias(model: string): boolean {
return base in CODEX_ALIAS_MODELS
}
function isOpenAICodexShortcutAlias(model: string): boolean {
const normalized = model.trim().toLowerCase()
const base = normalized.split('?', 1)[0] ?? normalized
return OPENAI_CODEX_SHORTCUT_ALIASES.has(base)
}
export function shouldUseCodexTransport(
model: string,
baseUrl: string | undefined,
@@ -282,6 +305,101 @@ export function isLocalProviderUrl(baseUrl: string | undefined): boolean {
}
}
function trimTrailingSlash(value: string): string {
return value.replace(/\/+$/, '')
}
function normalizePathWithV1(pathname: string): string {
const trimmed = trimTrailingSlash(pathname)
if (!trimmed || trimmed === '/') {
return '/v1'
}
if (trimmed.toLowerCase().endsWith('/v1')) {
return trimmed
}
return `${trimmed}/v1`
}
function isLikelyOllamaEndpoint(baseUrl: string): boolean {
try {
const parsed = new URL(baseUrl)
const hostname = parsed.hostname.toLowerCase()
const pathname = parsed.pathname.toLowerCase()
if (parsed.port === '11434') {
return true
}
return (
hostname.includes('ollama') ||
pathname.includes('ollama')
)
} catch {
return false
}
}
export function getLocalProviderRetryBaseUrls(baseUrl: string): string[] {
if (!isLocalProviderUrl(baseUrl)) {
return []
}
try {
const parsed = new URL(baseUrl)
const original = trimTrailingSlash(parsed.toString())
const seen = new Set<string>([original])
const candidates: string[] = []
const addCandidate = (hostname: string, pathname: string): void => {
const next = new URL(parsed.toString())
next.hostname = hostname
next.pathname = pathname
next.search = ''
next.hash = ''
const normalized = trimTrailingSlash(next.toString())
if (seen.has(normalized)) {
return
}
seen.add(normalized)
candidates.push(normalized)
}
const v1Pathname = normalizePathWithV1(parsed.pathname)
if (v1Pathname !== trimTrailingSlash(parsed.pathname)) {
addCandidate(parsed.hostname, v1Pathname)
}
const hostname = parsed.hostname.toLowerCase().replace(/^\[|\]$/g, '')
if (hostname === 'localhost' || hostname === '::1') {
addCandidate('127.0.0.1', parsed.pathname || '/')
addCandidate('127.0.0.1', v1Pathname)
}
return candidates
} catch {
return []
}
}
export function shouldAttemptLocalToollessRetry(options: {
baseUrl: string
hasTools: boolean
}): boolean {
if (!options.hasTools) {
return false
}
if (!isLocalProviderUrl(options.baseUrl)) {
return false
}
return isLikelyOllamaEndpoint(options.baseUrl)
}
export function isCodexBaseUrl(baseUrl: string | undefined): boolean {
if (!baseUrl) return false
try {
@@ -359,20 +477,80 @@ export function resolveProviderRequest(options?: {
}): ResolvedProviderRequest {
const isGithubMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const isMistralMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
const isGeminiMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const requestedModel =
options?.model?.trim() ||
(isMistralMode
? process.env.MISTRAL_MODEL?.trim()
: process.env.OPENAI_MODEL?.trim()) ||
(isGeminiMode
? process.env.GEMINI_MODEL?.trim()
: process.env.OPENAI_MODEL?.trim()) ||
options?.fallbackModel?.trim() ||
(isGithubMode ? 'github:copilot' : 'gpt-4o')
const descriptor = parseModelDescriptor(requestedModel)
const rawBaseUrl =
asEnvUrl(options?.baseUrl) ??
asEnvUrl(
isMistralMode ? (process.env.MISTRAL_BASE_URL ?? DEFAULT_MISTRAL_BASE_URL) : process.env.OPENAI_BASE_URL,
) ??
asEnvUrl(process.env.OPENAI_API_BASE)
const explicitBaseUrl = asEnvUrl(options?.baseUrl)
const normalizedMistralEnvBaseUrl = asNamedEnvUrl(
process.env.MISTRAL_BASE_URL,
'MISTRAL_BASE_URL',
)
const normalizedGeminiEnvBaseUrl = asNamedEnvUrl(
process.env.GEMINI_BASE_URL,
'GEMINI_BASE_URL',
)
const primaryEnvBaseUrl = isMistralMode
? normalizedMistralEnvBaseUrl
: isGeminiMode
? normalizedGeminiEnvBaseUrl
: asNamedEnvUrl(process.env.OPENAI_BASE_URL, 'OPENAI_BASE_URL')
// In Mistral mode, a literal "undefined" MISTRAL_BASE_URL is treated as
// misconfiguration and falls back to OPENAI_API_BASE, then
// DEFAULT_MISTRAL_BASE_URL for a safe default endpoint.
const fallbackEnvBaseUrl = isMistralMode
? (primaryEnvBaseUrl === undefined
? asNamedEnvUrl(process.env.OPENAI_API_BASE, 'OPENAI_API_BASE') ?? DEFAULT_MISTRAL_BASE_URL
: undefined)
: isGeminiMode
? (primaryEnvBaseUrl === undefined
? asNamedEnvUrl(process.env.OPENAI_API_BASE, 'OPENAI_API_BASE') ?? DEFAULT_GEMINI_BASE_URL
: undefined)
: (primaryEnvBaseUrl === undefined
? asNamedEnvUrl(process.env.OPENAI_API_BASE, 'OPENAI_API_BASE')
: undefined)
const envBaseUrlRaw =
explicitBaseUrl ??
primaryEnvBaseUrl ??
fallbackEnvBaseUrl
const isCodexModelForGithub = isGithubMode && isCodexAlias(requestedModel)
const envBaseUrl =
isCodexModelForGithub && envBaseUrlRaw && getGithubEndpointType(envBaseUrlRaw) === 'custom'
? undefined
: envBaseUrlRaw
const rawBaseUrl = explicitBaseUrl ?? envBaseUrl
const shellModel = process.env.OPENAI_MODEL?.trim() ?? ''
const envIsCodexShortcut = isOpenAICodexShortcutAlias(shellModel)
const envResolvedCodexModel = envIsCodexShortcut
? parseModelDescriptor(shellModel).baseModel
: null
const requestedMatchesEnvCodexShortcut =
Boolean(options?.model) &&
Boolean(envResolvedCodexModel) &&
descriptor.baseModel === envResolvedCodexModel
const isCodexAliasModel =
isOpenAICodexShortcutAlias(requestedModel) || requestedMatchesEnvCodexShortcut
const hasUserSetBaseUrl = rawBaseUrl && rawBaseUrl !== DEFAULT_OPENAI_BASE_URL
const finalBaseUrl =
!isGithubMode && isCodexAliasModel && !hasUserSetBaseUrl
? DEFAULT_CODEX_BASE_URL
: rawBaseUrl
const githubEndpointType = isGithubMode
? getGithubEndpointType(rawBaseUrl)
@@ -386,7 +564,7 @@ export function resolveProviderRequest(options?: {
: requestedModel
const transport: ProviderTransport =
shouldUseCodexTransport(requestedModel, rawBaseUrl) ||
shouldUseCodexTransport(requestedModel, finalBaseUrl) ||
(isGithubCopilot && shouldUseGithubResponsesApi(githubResolvedModel))
? 'codex_responses'
: 'chat_completions'
@@ -410,7 +588,7 @@ export function resolveProviderRequest(options?: {
requestedModel,
resolvedModel,
baseUrl:
(rawBaseUrl ??
(finalBaseUrl ??
(isGithubCopilot && transport === 'codex_responses'
? GITHUB_COPILOT_BASE_URL
: (isGithubMode
@@ -458,18 +636,6 @@ export function resolveCodexAuthPath(
return join(homedir(), '.codex', 'auth.json')
}
export function parseChatgptAccountId(
token: string | undefined,
): string | undefined {
if (!token) return undefined
const payload = decodeJwtPayload(token)
const fromClaim = asTrimmedString(
payload?.['https://api.openai.com/auth.chatgpt_account_id'],
)
if (fromClaim) return fromClaim
return asTrimmedString(payload?.chatgpt_account_id)
}
function loadCodexAuthJson(
authPath: string,
): Record<string, unknown> | undefined {
@@ -485,8 +651,97 @@ function loadCodexAuthJson(
}
}
export function resolveCodexApiCredentials(
env: NodeJS.ProcessEnv = process.env,
function resolveCodexAuthJsonCredentials(options: {
authJson: Record<string, unknown> | undefined
authPath: string
envAccountId?: string
missingSource?: ResolvedCodexCredentials['source']
}): ResolvedCodexCredentials {
const { authJson, authPath, envAccountId } = options
if (!authJson) {
return {
apiKey: '',
authPath,
source: options.missingSource ?? 'none',
}
}
const apiKey = readNestedString(authJson, [
['openai_api_key'],
['openaiApiKey'],
['access_token'],
['accessToken'],
['tokens', 'access_token'],
['tokens', 'accessToken'],
['auth', 'access_token'],
['auth', 'accessToken'],
['token', 'access_token'],
['token', 'accessToken'],
])
// OIDC identity tokens can carry the ChatGPT account id, but they are not
// valid bearer credentials for Codex API requests.
const idToken = readNestedString(authJson, [
['id_token'],
['idToken'],
['tokens', 'id_token'],
['tokens', 'idToken'],
])
const accountId =
envAccountId ??
readNestedString(authJson, [
['account_id'],
['accountId'],
['tokens', 'account_id'],
['tokens', 'accountId'],
['auth', 'account_id'],
['auth', 'accountId'],
]) ??
parseChatgptAccountId(apiKey) ??
parseChatgptAccountId(idToken)
if (!apiKey) {
return {
apiKey: '',
accountId,
authPath,
source: options.missingSource ?? 'none',
}
}
return {
apiKey,
accountId,
authPath,
source: 'auth.json',
}
}
export function resolveStoredCodexCredentials(options: {
storedCredentials: Pick<
CodexCredentialBlob,
'apiKey' | 'accessToken' | 'idToken' | 'accountId'
>
envAccountId?: string
}): ResolvedCodexCredentials {
const { storedCredentials, envAccountId } = options
return {
apiKey: storedCredentials.apiKey ?? storedCredentials.accessToken,
accountId:
envAccountId ??
storedCredentials.accountId ??
parseChatgptAccountId(storedCredentials.idToken) ??
parseChatgptAccountId(storedCredentials.accessToken),
source: 'secure-storage',
}
}
function resolveEnvOrAuthJsonCodexCredentials(
env: NodeJS.ProcessEnv,
options?: {
explicitAuthPathOnly?: boolean
},
): ResolvedCodexCredentials {
const envApiKey = asTrimmedString(env.CODEX_API_KEY)
const envAccountId =
@@ -501,55 +756,127 @@ export function resolveCodexApiCredentials(
}
}
const explicitAuthPathConfigured = Boolean(
asTrimmedString(env.CODEX_AUTH_JSON_PATH) ?? asTrimmedString(env.CODEX_HOME),
)
if (!explicitAuthPathConfigured && options?.explicitAuthPathOnly) {
return {
apiKey: '',
accountId: envAccountId,
source: 'none',
}
}
const authPath = resolveCodexAuthPath(env)
const authJson = loadCodexAuthJson(authPath)
if (!authJson) {
return {
apiKey: '',
authPath,
source: 'none',
}
}
const apiKey = readNestedString(authJson, [
['access_token'],
['accessToken'],
['tokens', 'access_token'],
['tokens', 'accessToken'],
['auth', 'access_token'],
['auth', 'accessToken'],
['token', 'access_token'],
['token', 'accessToken'],
['tokens', 'id_token'],
['tokens', 'idToken'],
])
const accountId =
envAccountId ??
readNestedString(authJson, [
['account_id'],
['accountId'],
['tokens', 'account_id'],
['tokens', 'accountId'],
['auth', 'account_id'],
['auth', 'accountId'],
]) ??
parseChatgptAccountId(apiKey)
if (!apiKey) {
return {
apiKey: '',
accountId,
authPath,
source: 'none',
}
}
return {
apiKey,
accountId,
return resolveCodexAuthJsonCredentials({
authJson,
authPath,
source: 'auth.json',
envAccountId,
})
}
export function resolveRuntimeCodexCredentials(options?: {
env?: NodeJS.ProcessEnv
storedCredentials?: Pick<
CodexCredentialBlob,
'apiKey' | 'accessToken' | 'idToken' | 'accountId'
>
}): ResolvedCodexCredentials {
const env = options?.env ?? process.env
const explicitCredentials = resolveEnvOrAuthJsonCodexCredentials(env, {
explicitAuthPathOnly: true,
})
const explicitAuthPathConfigured = Boolean(
asTrimmedString(env.CODEX_AUTH_JSON_PATH) ?? asTrimmedString(env.CODEX_HOME),
)
const hasStoredCredentialsOption = Boolean(
options &&
Object.prototype.hasOwnProperty.call(options, 'storedCredentials'),
)
if (
explicitAuthPathConfigured ||
explicitCredentials.source === 'env' ||
explicitCredentials.source === 'auth.json'
) {
return explicitCredentials
}
if (options?.storedCredentials?.accessToken) {
return resolveStoredCodexCredentials({
storedCredentials: options.storedCredentials,
envAccountId:
asTrimmedString(env.CODEX_ACCOUNT_ID) ??
asTrimmedString(env.CHATGPT_ACCOUNT_ID),
})
}
if (hasStoredCredentialsOption) {
return resolveEnvOrAuthJsonCodexCredentials(env)
}
return resolveCodexApiCredentials(env)
}
export function resolveCodexApiCredentials(
env: NodeJS.ProcessEnv = process.env,
): ResolvedCodexCredentials {
const envAccountId =
asTrimmedString(env.CODEX_ACCOUNT_ID) ??
asTrimmedString(env.CHATGPT_ACCOUNT_ID)
const envOrExplicitAuthJsonCredentials = resolveEnvOrAuthJsonCodexCredentials(
env,
{
explicitAuthPathOnly: true,
},
)
if (
envOrExplicitAuthJsonCredentials.source === 'env' ||
envOrExplicitAuthJsonCredentials.source === 'auth.json' ||
envOrExplicitAuthJsonCredentials.authPath
) {
return envOrExplicitAuthJsonCredentials
}
const storedCredentials = readCodexCredentials()
if (storedCredentials?.accessToken) {
const resolvedStoredCredentials = resolveStoredCodexCredentials({
storedCredentials,
envAccountId,
})
const shouldCheckDefaultAuthJson =
!resolvedStoredCredentials.accountId ||
isCodexRefreshFailureCoolingDown(storedCredentials)
if (!shouldCheckDefaultAuthJson) {
return resolvedStoredCredentials
}
const authPath = resolveCodexAuthPath(env)
const authJson = loadCodexAuthJson(authPath)
const resolvedAuthJsonCredentials = resolveCodexAuthJsonCredentials({
authJson,
authPath,
envAccountId,
})
if (resolvedAuthJsonCredentials.apiKey) {
return {
...resolvedAuthJsonCredentials,
accountId:
resolvedAuthJsonCredentials.accountId ??
resolvedStoredCredentials.accountId,
}
}
return resolvedStoredCredentials
}
return resolveEnvOrAuthJsonCodexCredentials(env)
}
export function getReasoningEffortForModel(model: string): ReasoningEffort | undefined {
@@ -559,3 +886,18 @@ export function getReasoningEffortForModel(model: string): ReasoningEffort | und
const aliasConfig = CODEX_ALIAS_MODELS[alias]
return aliasConfig?.reasoningEffort
}
export function supportsCodexReasoningEffort(model: string): boolean {
const normalized = model.trim().toLowerCase()
const base = normalized.split('?', 1)[0] ?? normalized
if (base === 'gpt-5.3-codex-spark' || base === 'codexspark') {
return false
}
if (getReasoningEffortForModel(base) !== undefined) {
return true
}
return /^gpt-5(?:[.-]|$)/.test(base)
}
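
Reading resolveRuntimeCodexCredentials together with resolveCodexApiCredentials, the precedence the tests above lock in is roughly: explicit env key, then an explicitly configured auth.json path, then secure-storage tokens, then the default ~/.codex/auth.json (also consulted when a stored refresh is cooling down or the stored blob lacks an account id). A condensed sketch of that order, with a hypothetical caller:

// Condensed precedence sketch (illustrative; the real functions above carry
// extra fallbacks such as refresh-cooldown and missing-account-id checks):
// 1. CODEX_API_KEY in env                   → source: 'env'
// 2. CODEX_AUTH_JSON_PATH / CODEX_HOME file → source: 'auth.json'
// 3. stored secure-storage access token     → source: 'secure-storage'
// 4. default ~/.codex/auth.json             → source: 'auth.json'
// 5. nothing usable                         → source: 'none'
const creds = resolveRuntimeCodexCredentials({
  env: process.env,
  storedCredentials: undefined, // hypothetical caller with no secure storage
})
if (creds.source === 'none') {
  throw new Error('No Codex credentials found; log in or set CODEX_API_KEY')
}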

View File

@@ -1,46 +0,0 @@
import { describe, expect, test } from 'bun:test'
import {
looksLikeLeakedReasoningPrefix,
shouldBufferPotentialReasoningPrefix,
stripLeakedReasoningPreamble,
} from './reasoningLeakSanitizer.ts'
describe('reasoning leak sanitizer', () => {
test('strips explicit internal reasoning preambles', () => {
const text =
'The user just said "hey" - a simple greeting. I should respond briefly and friendly.\n\nHey! How can I help you today?'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(true)
expect(stripLeakedReasoningPreamble(text)).toBe(
'Hey! How can I help you today?',
)
})
test('does not strip normal user-facing advice that mentions "the user should"', () => {
const text =
'The user should reset their password immediately.\n\nHere are the steps...'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(false)
expect(shouldBufferPotentialReasoningPrefix(text)).toBe(false)
expect(stripLeakedReasoningPreamble(text)).toBe(text)
})
test('does not strip legitimate first-person advice about responding to an incident', () => {
const text =
'I need to respond to this security incident immediately. The system is compromised.\n\nHere are the remediation steps...'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(false)
expect(shouldBufferPotentialReasoningPrefix(text)).toBe(false)
expect(stripLeakedReasoningPreamble(text)).toBe(text)
})
test('does not strip legitimate first-person advice about answering a support ticket', () => {
const text =
'I need to answer the support ticket before end of day. The customer is waiting.\n\nHere is the response I drafted...'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(false)
expect(shouldBufferPotentialReasoningPrefix(text)).toBe(false)
expect(stripLeakedReasoningPreamble(text)).toBe(text)
})
})

View File

@@ -1,54 +0,0 @@
const EXPLICIT_REASONING_START_RE =
/^\s*(i should\b|i need to\b|let me think\b|the task\b|the request\b)/i
const EXPLICIT_REASONING_META_RE =
/\b(user|request|question|prompt|message|task|greeting|small talk|briefly|friendly|concise)\b/i
const USER_META_START_RE =
/^\s*the user\s+(just\s+)?(said|asked|is asking|wants|wanted|mentioned|seems|appears)\b/i
const USER_REASONING_RE =
/^\s*the user\s+(just\s+)?(said|asked|is asking|wants|wanted|mentioned|seems|appears)\b[\s\S]*\b(i should|i need to|let me think|respond|reply|answer|greeting|small talk|briefly|friendly|concise)\b/i
export function shouldBufferPotentialReasoningPrefix(text: string): boolean {
const normalized = text.trim()
if (!normalized) return false
if (looksLikeLeakedReasoningPrefix(normalized)) {
return true
}
const hasParagraphBoundary = /\n\s*\n/.test(normalized)
if (hasParagraphBoundary) {
return false
}
return (
EXPLICIT_REASONING_START_RE.test(normalized) ||
USER_META_START_RE.test(normalized)
)
}
export function looksLikeLeakedReasoningPrefix(text: string): boolean {
const normalized = text.trim()
if (!normalized) return false
return (
(EXPLICIT_REASONING_START_RE.test(normalized) &&
EXPLICIT_REASONING_META_RE.test(normalized)) ||
USER_REASONING_RE.test(normalized)
)
}
export function stripLeakedReasoningPreamble(text: string): string {
const normalized = text.replace(/\r\n/g, '\n')
const parts = normalized.split(/\n\s*\n/)
if (parts.length < 2) return text
const first = parts[0]?.trim() ?? ''
if (!looksLikeLeakedReasoningPrefix(first)) {
return text
}
const remainder = parts.slice(1).join('\n\n').trim()
return remainder || text
}

View File

@@ -0,0 +1,191 @@
import { describe, expect, test } from 'bun:test'
import {
routeModel,
type SmartRoutingConfig,
} from './smartModelRouting.ts'
const ENABLED: SmartRoutingConfig = {
enabled: true,
simpleModel: 'claude-haiku-4-5',
strongModel: 'claude-opus-4-7',
}
describe('routeModel — disabled / misconfigured', () => {
test('disabled config routes to strong', () => {
const decision = routeModel(
{ userText: 'hi' },
{ ...ENABLED, enabled: false },
)
expect(decision.model).toBe('claude-opus-4-7')
expect(decision.complexity).toBe('strong')
expect(decision.reason).toContain('disabled')
})
test('missing simpleModel falls back to strong', () => {
const decision = routeModel(
{ userText: 'hi' },
{ ...ENABLED, simpleModel: '' },
)
expect(decision.model).toBe('claude-opus-4-7')
expect(decision.complexity).toBe('strong')
})
test('simpleModel === strongModel routes to strong (no-op)', () => {
const decision = routeModel(
{ userText: 'hi' },
{ ...ENABLED, simpleModel: 'claude-opus-4-7' },
)
expect(decision.model).toBe('claude-opus-4-7')
expect(decision.complexity).toBe('strong')
})
})
describe('routeModel — simple path', () => {
test('short greeting routes to simple', () => {
const decision = routeModel({ userText: 'thanks!', turnNumber: 5 }, ENABLED)
expect(decision.model).toBe('claude-haiku-4-5')
expect(decision.complexity).toBe('simple')
})
test('empty input routes to simple', () => {
const decision = routeModel({ userText: ' ' }, ENABLED)
expect(decision.model).toBe('claude-haiku-4-5')
expect(decision.complexity).toBe('simple')
})
test('mid-length chatter routes to simple', () => {
const decision = routeModel(
{ userText: 'yep looks good, go ahead', turnNumber: 10 },
ENABLED,
)
expect(decision.complexity).toBe('simple')
})
})
describe('routeModel — strong path', () => {
test('first turn always routes to strong, even when short', () => {
const decision = routeModel(
{ userText: 'fix the bug', turnNumber: 1 },
ENABLED,
)
expect(decision.model).toBe('claude-opus-4-7')
expect(decision.complexity).toBe('strong')
expect(decision.reason).toContain('first turn')
})
test('code fence routes to strong', () => {
const decision = routeModel(
{
userText: 'change this:\n```\nfoo()\n```',
turnNumber: 5,
},
ENABLED,
)
expect(decision.complexity).toBe('strong')
expect(decision.reason).toContain('code')
})
test('inline code span routes to strong', () => {
const decision = routeModel(
{ userText: 'rename `foo` to `bar`', turnNumber: 5 },
ENABLED,
)
expect(decision.complexity).toBe('strong')
})
test('reasoning keyword "plan" routes to strong even when short', () => {
const decision = routeModel(
{ userText: 'plan the refactor', turnNumber: 5 },
ENABLED,
)
expect(decision.complexity).toBe('strong')
expect(decision.reason).toContain('keyword')
})
test('reasoning keyword "debug" routes to strong', () => {
const decision = routeModel(
{ userText: 'debug the test', turnNumber: 5 },
ENABLED,
)
expect(decision.complexity).toBe('strong')
})
test('"root cause" multi-word keyword routes to strong', () => {
const decision = routeModel(
{ userText: 'find the root cause', turnNumber: 5 },
ENABLED,
)
expect(decision.complexity).toBe('strong')
})
test('multi-paragraph input routes to strong', () => {
const decision = routeModel(
{
userText: 'first thought.\n\nsecond thought.',
turnNumber: 5,
},
ENABLED,
)
expect(decision.complexity).toBe('strong')
expect(decision.reason).toContain('multi-paragraph')
})
test('over-long input routes to strong', () => {
const long = 'ok '.repeat(100) // ~300 chars, 100 words
const decision = routeModel(
{ userText: long, turnNumber: 5 },
ENABLED,
)
expect(decision.complexity).toBe('strong')
})
test('exactly at the boundary stays simple', () => {
const text = 'a'.repeat(160)
const decision = routeModel(
{ userText: text, turnNumber: 5 },
{ ...ENABLED, simpleMaxChars: 160, simpleMaxWords: 28 },
)
expect(decision.complexity).toBe('simple')
})
test('one char over the boundary routes to strong', () => {
const text = 'a'.repeat(161)
const decision = routeModel(
{ userText: text, turnNumber: 5 },
{ ...ENABLED, simpleMaxChars: 160, simpleMaxWords: 28 },
)
expect(decision.complexity).toBe('strong')
expect(decision.reason).toContain('160 chars')
})
})
describe('routeModel — config overrides', () => {
test('custom simpleMaxChars is honored', () => {
const decision = routeModel(
{ userText: 'abcdefghijklmnop', turnNumber: 5 },
{ ...ENABLED, simpleMaxChars: 10 },
)
expect(decision.complexity).toBe('strong')
expect(decision.reason).toContain('10 chars')
})
test('custom simpleMaxWords is honored', () => {
const decision = routeModel(
{ userText: 'one two three four five', turnNumber: 5 },
{ ...ENABLED, simpleMaxWords: 3 },
)
expect(decision.complexity).toBe('strong')
expect(decision.reason).toContain('3 words')
})
})
describe('routeModel — reason strings', () => {
test('simple decisions include char + word counts', () => {
const decision = routeModel(
{ userText: 'sounds good', turnNumber: 5 },
ENABLED,
)
expect(decision.reason).toMatch(/\d+ chars, \d+ words/)
})
})

View File

@@ -0,0 +1,215 @@
/**
* Smart model routing — cheap-for-simple, strong-for-hard.
*
* For everyday short chatter ("ok", "thanks", "what does this do?") the
* incremental quality of Opus/GPT-5 over Haiku/Mini is negligible while the
* cost and latency are an order of magnitude worse. Smart routing lets a
* user opt into sending such "obviously simple" turns to a cheaper model
* while keeping the strong model for anything non-trivial.
*
* This module is a pure primitive: it takes a turn description (the user's
* text + light context) and returns which model to use, based on config.
* It never reads env vars or state directly — caller supplies everything.
*
* Off by default. Users opt in via settings.smartRouting.enabled. The intent
* is a small, copy-pasteable config block rather than a hidden heuristic,
* so the tradeoff is visible and the user controls it.
*/
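// A minimal sketch of the opt-in block the docstring describes, using the
// model ids from this repo's tests (the exact settings-file location is
// whatever `settings.smartRouting` resolves to; treat it as illustrative):
//
//   "smartRouting": {
//     "enabled": true,
//     "simpleModel": "claude-haiku-4-5",
//     "strongModel": "claude-opus-4-7",
//     "simpleMaxChars": 160,
//     "simpleMaxWords": 28
//   }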
export type SmartRoutingConfig = {
enabled: boolean
/** Model to use for turns classified as "simple". */
simpleModel: string
/** Model to use for turns classified as "strong" (or when unsure). */
strongModel: string
/** Max characters in user input to qualify as "simple". Default 160. */
simpleMaxChars?: number
/** Max whitespace-separated words to qualify as "simple". Default 28. */
simpleMaxWords?: number
}
export type RoutingDecision = {
model: string
complexity: 'simple' | 'strong'
/** Human-readable reason — useful for the UI indicator and debug logs. */
reason: string
}
export type RoutingInput = {
/** The user's message text for this turn. */
userText: string
/**
* Optional: how many tool-use blocks the assistant has emitted in the
* recent conversation. High values correlate with "continue this work"
* follow-ups that can still be cheap, UNLESS the user also typed code
* or strong-keyword text. Reserved for future heuristics: routeModel
* does not read this field yet.
*/
recentToolUses?: number
/**
* Optional: turn number within the current session (1-indexed). The first
* turn is often task-setup and benefits from the strong model even if
* short — a bare "build X" opens the whole task.
*/
turnNumber?: number
}
const DEFAULT_SIMPLE_MAX_CHARS = 160
const DEFAULT_SIMPLE_MAX_WORDS = 28
// Keywords that strongly suggest reasoning/planning/design work.
// Matching is word-boundary / case-insensitive. Must include enough anchors
// that short prompts like "plan the refactor" route to strong even under
// the char/word cutoff.
const STRONG_KEYWORDS = [
'plan',
'design',
'architect',
'architecture',
'refactor',
'debug',
'investigate',
'analyze',
'analyse',
'implement',
'optimize',
'optimise',
'review',
'audit',
'diagnose',
'root cause',
'root-cause',
'why does',
'why is',
'how should',
'why did',
'propose',
'trace',
'reproduce',
]
const STRONG_KEYWORD_RE = new RegExp(
`\\b(?:${STRONG_KEYWORDS.map(k => k.replace(/[-]/g, '[-\\s]')).join('|')})\\b`,
'i',
)
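// e.g. STRONG_KEYWORD_RE.test('please debug this') === true, while
// STRONG_KEYWORD_RE.test('planning') === false: the \b anchors require
// 'plan' to appear as a whole word.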
const CODE_FENCE_RE = /```[\s\S]*?```|`[^`\n]+`/
function countWords(text: string): number {
const trimmed = text.trim()
if (!trimmed) return 0
return trimmed.split(/\s+/).length
}
function hasMultiParagraph(text: string): boolean {
return /\n\s*\n/.test(text)
}
function hasCode(text: string): boolean {
return CODE_FENCE_RE.test(text)
}
function hasStrongKeyword(text: string): boolean {
return STRONG_KEYWORD_RE.test(text)
}
/**
* Decide whether to route to the simple or strong model based on heuristics.
* Returns the chosen model + a reason. When routing is disabled, a model is
* missing from config, or both models match, the strong model is used
* (safe default).
*/
export function routeModel(
input: RoutingInput,
config: SmartRoutingConfig,
): RoutingDecision {
if (!config.enabled) {
return {
model: config.strongModel,
complexity: 'strong',
reason: 'smart-routing disabled',
}
}
if (!config.simpleModel || !config.strongModel) {
return {
model: config.strongModel,
complexity: 'strong',
reason: 'simpleModel or strongModel missing from config',
}
}
if (config.simpleModel === config.strongModel) {
return {
model: config.strongModel,
complexity: 'strong',
reason: 'simpleModel equals strongModel',
}
}
const text = input.userText ?? ''
const trimmed = text.trim()
if (!trimmed) {
// Empty input (e.g. resuming a tool-use chain) — cheap by default.
return {
model: config.simpleModel,
complexity: 'simple',
reason: 'empty user text',
}
}
// First turn of a session is task-setup — always use strong.
if (input.turnNumber === 1) {
return {
model: config.strongModel,
complexity: 'strong',
reason: 'first turn of session',
}
}
const maxChars = config.simpleMaxChars ?? DEFAULT_SIMPLE_MAX_CHARS
const maxWords = config.simpleMaxWords ?? DEFAULT_SIMPLE_MAX_WORDS
if (hasCode(trimmed)) {
return {
model: config.strongModel,
complexity: 'strong',
reason: 'contains code block or inline code',
}
}
if (hasStrongKeyword(trimmed)) {
return {
model: config.strongModel,
complexity: 'strong',
reason: 'contains reasoning/planning keyword',
}
}
if (hasMultiParagraph(trimmed)) {
return {
model: config.strongModel,
complexity: 'strong',
reason: 'multi-paragraph input',
}
}
if (trimmed.length > maxChars) {
return {
model: config.strongModel,
complexity: 'strong',
reason: `input > ${maxChars} chars`,
}
}
if (countWords(trimmed) > maxWords) {
return {
model: config.strongModel,
complexity: 'strong',
reason: `input > ${maxWords} words`,
}
}
return {
model: config.simpleModel,
complexity: 'simple',
reason: `short (${trimmed.length} chars, ${countWords(trimmed)} words)`,
}
}
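// Usage sketch, assuming a hypothetical call site (`session`, `request` and
// `debugLog` are illustrative names, not this repo's real APIs):
//
//   const decision = routeModel(
//     { userText: prompt, turnNumber: session.turnNumber },
//     settings.smartRouting,
//   )
//   request.model = decision.model
//   debugLog(`smart-routing: ${decision.complexity} (${decision.reason})`)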

View File

@@ -0,0 +1,183 @@
import { describe, expect, test } from 'bun:test'
import {
createThinkTagFilter,
stripThinkTags,
} from './thinkTagSanitizer.ts'
describe('stripThinkTags — whole-text cleanup', () => {
test('strips closed think pair', () => {
expect(stripThinkTags('<think>reasoning</think>Hello')).toBe('Hello')
})
test('strips closed thinking pair', () => {
expect(stripThinkTags('<thinking>x</thinking>Out')).toBe('Out')
})
test('strips closed reasoning pair', () => {
expect(stripThinkTags('<reasoning>x</reasoning>Out')).toBe('Out')
})
test('strips REASONING_SCRATCHPAD pair', () => {
expect(stripThinkTags('<REASONING_SCRATCHPAD>plan</REASONING_SCRATCHPAD>Answer'))
.toBe('Answer')
})
test('is case-insensitive', () => {
expect(stripThinkTags('<THINKING>x</THINKING>out')).toBe('out')
expect(stripThinkTags('<Think>x</Think>out')).toBe('out')
})
test('handles attributes on open tag', () => {
expect(stripThinkTags('<think id="plan-1">reason</think>ok')).toBe('ok')
})
test('strips unterminated open tag at block boundary', () => {
expect(stripThinkTags('<think>reasoning that never closes')).toBe('')
})
test('strips unterminated open tag after newline', () => {
// Block-boundary match consumes the leading newline, same as hermes.
expect(stripThinkTags('Answer: 42\n<think>second-guess myself'))
.toBe('Answer: 42')
})
test('strips orphan close tag', () => {
expect(stripThinkTags('trailing </think>done')).toBe('trailing done')
})
test('strips multiple blocks', () => {
expect(stripThinkTags('<think>a</think>B<think>c</think>D')).toBe('BD')
})
test('handles reasoning mid-response after content', () => {
expect(stripThinkTags('Answer: 42\n<think>double-check</think>\nDone'))
.toBe('Answer: 42\n\nDone')
})
test('handles nested-looking tags (lazy match + orphan cleanup)', () => {
expect(stripThinkTags('<think><think>x</think></think>y')).toBe('y')
})
test('preserves legitimate non-think tags', () => {
expect(stripThinkTags('use <div> and <span>')).toBe('use <div> and <span>')
})
test('preserves text without any tags', () => {
expect(stripThinkTags('Hello, world. I should respond briefly.')).toBe(
'Hello, world. I should respond briefly.',
)
})
test('handles empty input', () => {
expect(stripThinkTags('')).toBe('')
})
})
describe('createThinkTagFilter — streaming state machine', () => {
test('passes through plain text', () => {
const f = createThinkTagFilter()
expect(f.feed('Hello, ')).toBe('Hello, ')
expect(f.feed('world!')).toBe('world!')
expect(f.flush()).toBe('')
})
test('strips a complete think block in one chunk', () => {
const f = createThinkTagFilter()
expect(f.feed('pre<think>reason</think>post')).toBe('prepost')
expect(f.flush()).toBe('')
})
test('handles open tag split across deltas', () => {
const f = createThinkTagFilter()
expect(f.feed('before<th')).toBe('before')
expect(f.feed('ink>reason</think>after')).toBe('after')
expect(f.flush()).toBe('')
})
test('handles close tag split across deltas', () => {
const f = createThinkTagFilter()
expect(f.feed('<think>reason</th')).toBe('')
expect(f.feed('ink>keep')).toBe('keep')
expect(f.flush()).toBe('')
})
test('handles tag split on bare < boundary', () => {
const f = createThinkTagFilter()
expect(f.feed('leading <')).toBe('leading ')
expect(f.feed('think>inner</think>tail')).toBe('tail')
expect(f.flush()).toBe('')
})
test('preserves partial non-tag < at boundary when next char rules it out', () => {
const f = createThinkTagFilter()
// "<d" — 'd' cannot start any of our tag names, so emit immediately
expect(f.feed('pre<d')).toBe('pre<d')
expect(f.feed('iv>rest')).toBe('iv>rest')
expect(f.flush()).toBe('')
})
test('case-insensitive streaming', () => {
const f = createThinkTagFilter()
expect(f.feed('<THINKING>x</THINKING>out')).toBe('out')
expect(f.flush()).toBe('')
})
test('unterminated open tag — flush drops remainder', () => {
const f = createThinkTagFilter()
expect(f.feed('<think>reasoning with no close ')).toBe('')
expect(f.feed('and more reasoning')).toBe('')
expect(f.flush()).toBe('')
expect(f.isInsideBlock()).toBe(false)
})
test('multiple blocks in single feed', () => {
const f = createThinkTagFilter()
expect(f.feed('<think>a</think>B<think>c</think>D')).toBe('BD')
expect(f.flush()).toBe('')
})
test('flush after clean stream emits nothing extra', () => {
const f = createThinkTagFilter()
expect(f.feed('complete message')).toBe('complete message')
expect(f.flush()).toBe('')
})
test('flush of bare < at end emits it (not a tag prefix)', () => {
const f = createThinkTagFilter()
// bare '<' held back; flush emits it since it has no tag-name chars
expect(f.feed('x <')).toBe('x ')
expect(f.flush()).toBe('<')
})
test('flush of partial tag-name prefix at end drops it', () => {
const f = createThinkTagFilter()
expect(f.feed('x <thi')).toBe('x ')
expect(f.flush()).toBe('')
})
test('handles attributes on streaming open tag', () => {
const f = createThinkTagFilter()
expect(f.feed('<think type="plan">reason</think>ok')).toBe('ok')
expect(f.flush()).toBe('')
})
test('mid-delta transition: content, reasoning, content', () => {
const f = createThinkTagFilter()
expect(f.feed('Answer: 42\n<think>')).toBe('Answer: 42\n')
expect(f.feed('double-check')).toBe('')
expect(f.feed('</think>\nDone')).toBe('\nDone')
expect(f.flush()).toBe('')
})
test('orphan close tag mid-stream passes the filter; the safety net strips it', () => {
// Filter alone treats orphan close as "we're not inside", so it emits as-is.
// Safety net (stripThinkTags on final text) removes orphans.
const f = createThinkTagFilter()
const chunk1 = f.feed('trailing ')
const chunk2 = f.feed('</think>done')
const final = chunk1 + chunk2 + f.flush()
// Orphan close appears in stream output; safety net cleans it
expect(stripThinkTags(final)).toBe('trailing done')
})
})

View File

@@ -0,0 +1,162 @@
/**
* Think-tag sanitizer for reasoning content leaks.
*
* Some OpenAI-compatible reasoning models (MiniMax M2.7, GLM-4.5/5, DeepSeek, Kimi K2,
* self-hosted vLLM builds) emit chain-of-thought inline inside the `content` field using
* XML-like tags instead of the separate `reasoning_content` channel. Example:
*
* <think>the user wants foo, let me check bar</think>Here is the answer: ...
*
* This module strips those blocks structurally (tag-based), independent of English
* phrasings. Three layers:
*
* 1. `createThinkTagFilter()` — streaming state machine. Feeds deltas, emits only
* the visible (non-reasoning) portion, and buffers partial tags across chunk
* boundaries so `</th` + `ink>` still parses correctly.
*
* 2. `stripThinkTags()` — whole-text cleanup. Removes closed pairs, unterminated
* opens at block boundaries, and orphan open/close tags. Used for non-streaming
* responses and as a safety net after stream close.
*
* 3. Flush discards buffered partial tags at stream end. Unlike layer 2 this
*    is biased toward over-stripping: prefer losing a partial tag fragment
*    over leaking reasoning.
*/
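// Typical wiring, as a sketch (the delta loop and `emit` are illustrative,
// not this repo's real streaming API):
//
//   const filter = createThinkTagFilter()
//   for await (const delta of textDeltas) emit(filter.feed(delta))
//   emit(filter.flush())
//   finalText = stripThinkTags(finalText) // layer-2 safety net after close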
const TAG_NAMES = [
'think',
'thinking',
'reasoning',
'thought',
'reasoning_scratchpad',
] as const
const TAG_ALT = TAG_NAMES.join('|')
const OPEN_TAG_RE = new RegExp(`<\\s*(?:${TAG_ALT})\\b[^>]*>`, 'i')
const CLOSE_TAG_RE = new RegExp(`<\\s*/\\s*(?:${TAG_ALT})\\s*>`, 'i')
const CLOSED_PAIR_RE_G = new RegExp(
`<\\s*(${TAG_ALT})\\b[^>]*>[\\s\\S]*?<\\s*/\\s*\\1\\s*>`,
'gi',
)
const UNTERMINATED_OPEN_RE = new RegExp(
`(?:^|\\n)[ \\t]*<\\s*(?:${TAG_ALT})\\b[^>]*>[\\s\\S]*$`,
'i',
)
const ORPHAN_TAG_RE_G = new RegExp(
`<\\s*/?\\s*(?:${TAG_ALT})\\b[^>]*>\\s*`,
'gi',
)
const MAX_PARTIAL_TAG = 64
/**
* Remove reasoning/thinking blocks from a complete text body.
*
* Handles:
* - Closed pairs: <think>...</think> (lazy match, anywhere in text)
* - Unterminated open tags at a block boundary: strips from the tag to end of string
* - Orphan open or close tags (no matching partner)
*
* False-negative bias: prefers leaving a few tag characters in rare edge cases over
* stripping legitimate content.
*/
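// e.g. stripThinkTags('<think>plan</think>Answer') === 'Answer'
//      stripThinkTags('Answer: 42\n<think>never closed') === 'Answer: 42'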
export function stripThinkTags(text: string): string {
if (!text) return text
let out = text
out = out.replace(CLOSED_PAIR_RE_G, '')
out = out.replace(UNTERMINATED_OPEN_RE, '')
out = out.replace(ORPHAN_TAG_RE_G, '')
return out
}
export interface ThinkTagFilter {
feed(chunk: string): string
flush(): string
isInsideBlock(): boolean
}
/**
* Streaming state machine. Feed deltas, emits visible (non-reasoning) text.
* Handles tags split across chunk boundaries by holding back a short tail buffer
* whenever the current buffer ends with what looks like a partial tag.
*/
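// e.g. feed('before<th') emits 'before' and holds '<th'; a following
// feed('ink>reason</think>after') completes both tags and emits only 'after'
// (see the split-tag tests).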
export function createThinkTagFilter(): ThinkTagFilter {
let inside = false
let buffer = ''
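// Index of a trailing '<...' run that could still grow into one of our
// tags (e.g. '<', '</', '<thi'); -1 when the tail already contains '>',
// is longer than MAX_PARTIAL_TAG, or provably cannot be a think-tag
// prefix, in which case it is safe to emit as-is.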
function findPartialTagStart(s: string): number {
const lastLt = s.lastIndexOf('<')
if (lastLt === -1) return -1
if (s.indexOf('>', lastLt) !== -1) return -1
const tail = s.slice(lastLt)
if (tail.length > MAX_PARTIAL_TAG) return -1
const m = /^<\s*\/?\s*([a-zA-Z_]\w*)?\s*$/.exec(tail)
if (!m) return -1
const partialName = (m[1] ?? '').toLowerCase()
if (!partialName) return lastLt
if (TAG_NAMES.some(name => name.startsWith(partialName))) return lastLt
return -1
}
function feed(chunk: string): string {
if (!chunk) return ''
buffer += chunk
let out = ''
while (buffer.length > 0) {
if (!inside) {
const open = OPEN_TAG_RE.exec(buffer)
if (open) {
out += buffer.slice(0, open.index)
buffer = buffer.slice(open.index + open[0].length)
inside = true
continue
}
const partialStart = findPartialTagStart(buffer)
if (partialStart === -1) {
out += buffer
buffer = ''
} else {
out += buffer.slice(0, partialStart)
buffer = buffer.slice(partialStart)
}
return out
}
const close = CLOSE_TAG_RE.exec(buffer)
if (close) {
buffer = buffer.slice(close.index + close[0].length)
inside = false
continue
}
const partialStart = findPartialTagStart(buffer)
if (partialStart === -1) {
buffer = ''
} else {
buffer = buffer.slice(partialStart)
}
return out
}
return out
}
function flush(): string {
const held = buffer
const wasInside = inside
buffer = ''
inside = false
if (wasInside) return ''
if (!held) return ''
if (/^<\s*\/?\s*[a-zA-Z_]/.test(held)) return ''
return held
}
return { feed, flush, isInsideBlock: () => inside }
}

View File

@@ -70,7 +70,7 @@ describe('runAutoFixCheck', () => {
test('handles timeout gracefully', async () => {
const result = await runAutoFixCheck({
- lint: 'sleep 10',
+ lint: 'node -e "setTimeout(() => {}, 10000)"',
timeout: 100,
cwd: '/tmp',

Some files were not shown because too many files have changed in this diff.