The test "never returns negative even for unknown 3P models (issue #635)"
asserted that getEffectiveContextWindowSize() returns >= 33_000 for an
unknown 3P model under the OpenAI shim. That specific number assumes
reservedTokensForSummary = 20_000 (MAX_OUTPUT_TOKENS_FOR_SUMMARY), which
holds only when the tengu_otk_slot_v1 GrowthBook flag is disabled.
When the flag is ON — which is the case in CI but not always locally —
getMaxOutputTokensForModel() caps the model's default output at
CAPPED_DEFAULT_MAX_TOKENS (8_000). Then reservedTokensForSummary = 8_000,
floor = 8_000 + 13_000 = 21_000, and the test fails with 21_000 < 33_000.
The test reliably passes locally and reliably fails in CI, manifesting as
the intermittent PR-check failure.
Fix: relax the lower bound to 21_000 (cap-enabled worst case), which is
still well above zero — preserving the anti-regression intent of
issue #635 (no infinite auto-compact from a negative effective window)
without binding the test to GrowthBook flag state.
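The arithmetic, as a minimal sketch (constant names follow this message; `AUTO_COMPACT_BUFFER` is a hypothetical name for the 13k buffer, and the real `getEffectiveContextWindowSize()` does more than this):

```typescript
// Flag-dependent floor for the effective context window.
const CAPPED_DEFAULT_MAX_TOKENS = 8_000; // applied when tengu_otk_slot_v1 is ON
const MAX_OUTPUT_TOKENS_FOR_SUMMARY = 20_000; // default when the flag is OFF
const AUTO_COMPACT_BUFFER = 13_000; // hypothetical name for the 13k buffer

function effectiveFloor(capEnabled: boolean): number {
  const reservedTokensForSummary = capEnabled
    ? CAPPED_DEFAULT_MAX_TOKENS
    : MAX_OUTPUT_TOKENS_FOR_SUMMARY;
  return reservedTokensForSummary + AUTO_COMPACT_BUFFER;
}

// effectiveFloor(true)  === 21_000  (flag ON, as in CI)
// effectiveFloor(false) === 33_000  (flag OFF, as locally)
```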
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
Fixes #774. When tool_result content contains multiple text blocks,
they were serialized as arrays instead of strings, causing DeepSeek
to reject the request with a 400 error.
Changes:
- convertToolResultContent: collapse all-text arrays to joined string
- convertContentBlocks: defensive collapse for user/assistant messages
- Arrays with images are preserved (not collapsed)
Tests: 3 new tests added, 53 pass, 0 fail
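The collapse rule in miniature (hypothetical block shape; the real content-block types are richer):

```typescript
// Simplified content-block union for illustration.
type Block = { type: 'text'; text: string } | { type: 'image'; data: string };

function collapseToolResultContent(content: Block[]): string | Block[] {
  // Only collapse when every block is text; image-bearing arrays are preserved.
  if (content.every((b) => b.type === 'text')) {
    return content.map((b) => (b as { type: 'text'; text: string }).text).join('\n');
  }
  return content;
}
```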
Co-authored-by: nick.mesen <nickmesen@users.noreply.github.com>
Most everyday turns ("ok", "thanks", "yep go ahead", "what does that do?")
get no measurable quality improvement from Opus-tier models over Haiku-tier,
but cost ~10x more and stream slower. Smart routing opts a user into
automatically routing obviously-simple turns to a cheaper model while
keeping the strong model for anything non-trivial.
New module src/services/api/smartModelRouting.ts:
- routeModel(input, config) → { model, complexity, reason }
- Pure primitive: no env reads, no state, caller supplies everything.
- Config is opt-in (enabled: false by default).
Routes to strong (conservative) when ANY of:
- First turn of session (task-setup is worth the quality)
- Code fence or inline code span present
- Reasoning/planning keyword (plan, design, refactor, debug, architect,
investigate, root cause, etc. — 20+ anchors)
- Multi-paragraph input
- Over char/word cutoff (defaults: 160 chars, 28 words; matches hermes)
Routes to simple only for clearly-trivial chatter.
Decision includes a reason string for a future UI indicator that shows
which tier handled the turn.
Integration into query path is intentionally deferred to a follow-up PR so
the heuristics can be reviewed and tuned in isolation first.
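A condensed sketch of the decision shape (the real routeModel has 20+ keyword anchors and richer config; the regexes and types here are illustrative only, with the 160-char / 28-word cutoffs from above):

```typescript
interface RouteInput { text: string; isFirstTurn: boolean }
interface RouteConfig { enabled: boolean; strongModel: string; simpleModel: string }

// Small stand-in for the 20+ reasoning/planning anchors.
const REASONING_ANCHORS = /\b(plan|design|refactor|debug|architect|investigate)\b/i;

function routeModel(input: RouteInput, config: RouteConfig): { model: string; reason: string } {
  if (!config.enabled) return { model: config.strongModel, reason: 'routing disabled' };
  const { text } = input;
  const words = text.trim().split(/\s+/).length;
  if (
    input.isFirstTurn ||            // task setup is worth the quality
    /```|`[^`]+`/.test(text) ||     // code fence or inline code span
    REASONING_ANCHORS.test(text) || // reasoning/planning keyword
    /\n\s*\n/.test(text) ||         // multi-paragraph input
    text.length > 160 ||
    words > 28
  ) {
    return { model: config.strongModel, reason: 'non-trivial turn' };
  }
  return { model: config.simpleModel, reason: 'trivial chatter' };
}
```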
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
* feat(provider): first-class Moonshot (Kimi) direct-API support
Moonshot's direct API (api.moonshot.ai/v1) is OpenAI-compatible and works
today via the generic OpenAI shim, including the reasoning_content channel
that Kimi returns alongside the user-visible content. But the UX was rough:
unknown context window triggered the conservative 128k fallback + a warning,
and the provider displayed as "Local OpenAI-compatible".
Makes Moonshot a recognized provider:
- src/utils/model/openaiContextWindows.ts: add the Kimi K2 family and
moonshot-v1-* variants to both the context-window and max-output tables.
Values from Moonshot's model card — K2.6 and K2-thinking are 256K,
K2/K2-instruct are 128K, moonshot-v1 sizes are embedded in the model id.
- src/utils/providerDiscovery.ts: recognize the api.moonshot.ai hostname
and label it "Moonshot (Kimi)" in the startup banner and provider UI.
Users can now launch with:
CLAUDE_CODE_USE_OPENAI=1 \
OPENAI_BASE_URL=https://api.moonshot.ai/v1 \
OPENAI_API_KEY=sk-... \
OPENAI_MODEL=kimi-k2.6 \
openclaude
and get accurate compaction + correct labeling + correct max_tokens out
of the box.
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
* fix(openai-shim): Moonshot API compatibility — max_tokens + strip store
Moonshot's direct API (api.moonshot.ai and api.moonshot.cn) uses the
classic OpenAI `max_tokens` parameter, not the newer `max_completion_tokens`
that the shim defaults to. It also hasn't published support for `store`
and may reject it on strict-parse — same class of error as Gemini's
"Unknown name 'store': Cannot find field" 400.
- Adds isMoonshotBaseUrl() that recognizes both .ai and .cn hosts.
- Converts max_completion_tokens → max_tokens for Moonshot requests
(alongside GitHub / Mistral / local providers).
- Strips body.store for Moonshot requests (alongside Mistral / Gemini).
Two shim tests cover both the .ai and .cn hostnames.
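Sketch of the adaptation (the helper name is from this message; `adaptBodyForMoonshot` is a hypothetical wrapper around the shim's actual request mutation):

```typescript
// Recognize both Moonshot hosts.
function isMoonshotBaseUrl(baseUrl: string): boolean {
  try {
    const host = new URL(baseUrl).hostname;
    return host === 'api.moonshot.ai' || host === 'api.moonshot.cn';
  } catch {
    return false;
  }
}

function adaptBodyForMoonshot(
  body: Record<string, unknown>,
  baseUrl: string
): Record<string, unknown> {
  if (!isMoonshotBaseUrl(baseUrl)) return body;
  // Moonshot expects the classic max_tokens and may strict-reject `store`.
  const { max_completion_tokens, store, ...rest } = body;
  return max_completion_tokens !== undefined
    ? { ...rest, max_tokens: max_completion_tokens }
    : rest;
}
```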
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
* fix: null-safe access on getCachedMCConfig() in external builds
External builds stub src/services/compact/cachedMicrocompact.ts so
getCachedMCConfig() returns null, but two call sites still dereferenced
config.supportedModels directly. The ?. operator was in the wrong place
(config.supportedModels? instead of config?.supportedModels), so the null
config threw "Cannot read properties of null (reading 'supportedModels')"
on every request.
Reproduces with any external-build provider (notably Kimi/Moonshot just
enabled in the sibling commits, but equally DeepSeek, Mistral, Groq,
Ollama, etc.):
❯ hey
⏺ Cannot read properties of null (reading 'supportedModels')
- prompts.ts: early-return from getFunctionResultClearingSection() when
config is null, before touching .supportedModels.
- claude.ts: guard the debug-log jsonStringify with ?. so the log line
never throws.
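The bug in miniature: `config.supportedModels?` only guards the access *after* `supportedModels`, not the dereference of a null `config` itself. A sketch of the corrected guard:

```typescript
// Simplified stand-in for the cached microcompact config.
type MCConfig = { supportedModels: string[] } | null;

function supportedModelsOf(config: MCConfig): string[] {
  // Wrong:  config.supportedModels?.slice()  — still throws when config is null.
  // Right:  guard config itself first.
  return config?.supportedModels ?? [];
}
```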
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
* fix(startup): show "Moonshot (Kimi)" on the startup banner
The startup-screen provider detector had regex branches for OpenRouter,
DeepSeek, Groq, Together, Azure, etc., but nothing for Moonshot. Remote
Moonshot sessions fell through to the generic "OpenAI" label —
getLocalOpenAICompatibleProviderLabel() only runs for local URLs, and
api.moonshot.ai / api.moonshot.cn are not local.
Adds a Moonshot branch matching /moonshot/ in the base URL OR /kimi/ in
the model id. Launching with:
OPENAI_BASE_URL=https://api.moonshot.ai/v1 OPENAI_MODEL=kimi-k2.6
now displays the Provider row as "Moonshot (Kimi)" instead of "OpenAI".
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
* refactor(provider): sort preset picker alphabetically; Custom at end
The /provider preset picker was in ad-hoc order (Anthropic, Ollama,
OpenAI, then a jumble of third-party / local / codex / Alibaba / custom /
nvidia / minimax). Hard to scan when you know the provider name you want.
Sorts the list alphabetically by label A→Z. Pins "Custom" to the end —
it's the catch-all / escape hatch so it's scanned last, not shuffled into
the alphabetical run where a user looking for a named provider might
grab it by mistake. First-run-only "Skip for now" stays at the very
bottom, after Custom.
Test churn:
- ProviderManager.test.tsx: four tests hardcoded press counts (1 or 3 'j'
presses) that broke when targets moved. Replaces them with a
navigateToPreset(stdin, label) helper driven from a declared
PRESET_ORDER array, so future list edits only update the array.
- ConsoleOAuthFlow.test.tsx: the 13-row test frame only renders the first
~13 providers. "Ollama", "OpenAI", "LM Studio" sentinels moved below
the fold; swap them for alphabetically-early providers still visible
in-frame ("Azure OpenAI", "DeepSeek", "Google Gemini"). Test intent
(picker opened with providers listed) is preserved.
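The sort, as a sketch (hypothetical `Preset` shape; the real picker entries carry more fields):

```typescript
interface Preset { label: string }

function sortPresets(presets: Preset[]): Preset[] {
  const named = presets.filter((p) => p.label !== 'Custom');
  const custom = presets.filter((p) => p.label === 'Custom');
  named.sort((a, b) => a.label.localeCompare(b.label));
  return [...named, ...custom]; // Custom pinned to the end
}
```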
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
---------
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
* fix(security): harden project settings trust boundary + MCP sanitization
- Sanitize MCP tool result text with recursivelySanitizeUnicode() to prevent
Unicode injection via malicious MCP servers (tool definitions and prompts
were already sanitized, but tool call results were not)
- Read sandbox.enabled only from trusted settings sources (user, local, flag,
policy) — exclude projectSettings to prevent malicious repos from silently
disabling the sandbox via .claude/settings.json
- Disable git hooks in plugin marketplace clone/pull/submodule operations
with core.hooksPath=/dev/null to prevent code execution from cloned repos
- Remove ANTHROPIC_FOUNDRY_API_KEY from SAFE_ENV_VARS to prevent credential
injection from project-scoped settings without trust verification
- Add ssrfGuardedLookup to WebFetch HTTP requests to block DNS rebinding
attacks that could reach cloud metadata or internal services
Security: closes trust boundary gap where project settings could override
security-critical configuration. Follows the existing pattern established
by hasAllowBypassPermissionsMode() which already excludes projectSettings.
Co-authored-by: auriti <auriti@users.noreply.github.com>
* fix(security): remove unauthenticated file-based permission polling
Remove the legacy file-based permission polling from useSwarmPermissionPoller
that read from ~/.claude/teams/{name}/permissions/resolved/ — an unauthenticated
directory where any local process could forge approval files to auto-approve
tool uses for swarm teammates.
The file polling was dead code:
- The useSwarmPermissionPoller() hook was never mounted by any component
- resolvePermission() (the file writer) was never imported outside its module
- Permission responses are delivered exclusively via the mailbox system:
Leader: sendPermissionResponseViaMailbox() → writeToMailbox()
Worker: useInboxPoller → processMailboxPermissionResponse()
Changes:
- Remove file polling loop, processResponse(), and React hook imports from
useSwarmPermissionPoller.ts (now a pure callback registry module)
- Mark 7 file-based functions as @deprecated in permissionSync.ts
- Add 4 regression tests verifying the removal
No exported functions removed — only deprecated. All 5 consumer modules
verified: they import only mailbox-based functions that remain unchanged.
---------
Co-authored-by: auriti <auriti@users.noreply.github.com>
* feat(api): compress old tool_result content for small-context providers
Adds a shim-layer pass that tiers tool_result content by age on providers
with small effective context windows (Copilot gpt-4o 128k, Mistral,
Ollama). Recent turns remain full; mid-tier results are truncated to 2k
chars; older results are replaced with a stub that preserves tool name
and arguments so the model can re-invoke if needed.
Tier sizes auto-tune via getEffectiveContextWindowSize, the same
calculation used by auto-compact. Reuses COMPACTABLE_TOOLS and
TOOL_RESULT_CLEARED_MESSAGE to complement (not duplicate) microCompact.
Configurable via /config toolHistoryCompressionEnabled.
Addresses active-session context accumulation on Copilot where
microCompact's time-based trigger never fires, which surfaces as
"tools appearing in a loop" and prompt_too_long errors after ~15
turns.
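The tiering in miniature (window sizes and the stub text are illustrative assumptions; the real pass derives them from getEffectiveContextWindowSize and TOOL_RESULT_CLEARED_MESSAGE):

```typescript
const MID_TIER_TRUNCATE_CHARS = 2_000;

function tierToolResult(
  text: string,
  turnsAgo: number,
  recentWindow: number, // turns kept at full fidelity
  midWindow: number     // turns kept truncated
): string {
  if (turnsAgo <= recentWindow) return text;                        // recent: full
  if (turnsAgo <= midWindow) return text.slice(0, MID_TIER_TRUNCATE_CHARS); // mid: truncated
  return '[tool result cleared to save context]';                   // old: stub only
}
```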
* fix: config tool history
* fix: rename .claude.json to .openclaude.json with legacy fallback
Rename the global config file from ~/.claude.json to ~/.openclaude.json,
following the same migration pattern as the config directory
(~/.claude → ~/.openclaude).
- getGlobalClaudeFile() now prefers .openclaude.json; falls back to
.claude.json only if the legacy file exists and the new one does not
- Add .openclaude.json to filesystem permissions allowlist (keep
.claude.json for legacy file protection)
- Update all comment/string references from ~/.claude.json to
~/.openclaude.json across 12 files
New installs get .openclaude.json from the start. Existing users
continue using .claude.json until they rename it (or a future explicit
migration).
* test: add unit tests for getGlobalClaudeFile migration branches
Covers the three cases:
- new install (neither file exists) → .openclaude.json
- existing user (only legacy .claude.json exists) → .claude.json
- migrated user (both files exist) → .openclaude.json
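The three branches collapse to one preference rule; a sketch with the filesystem check injected for testability (the real getGlobalClaudeFile reads fs directly):

```typescript
function resolveGlobalConfigPath(
  exists: (path: string) => boolean,
  home: string
): string {
  const next = `${home}/.openclaude.json`;
  const legacy = `${home}/.claude.json`;
  // Prefer the new file; fall back to legacy only when it alone exists.
  if (!exists(next) && exists(legacy)) return legacy;
  return next;
}
```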
---------
Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
* feat: native Anthropic API mode for Claude models on GitHub Copilot
When using Claude models through GitHub Copilot, automatically switch from
the OpenAI-compatible shim to Anthropic's native messages API format.
The Copilot proxy (api.githubcopilot.com) supports Anthropic's native API
for Claude models. This enables cache_control blocks to be sent and
honoured, allowing explicit prompt caching control (as opposed to relying
solely on server-side auto-caching).
Changes:
- Add isGithubNativeAnthropicMode() in providers.ts that auto-enables when
the resolved model starts with "claude-" and the GitHub provider is active
- Create a native Anthropic client in client.ts using the GitHub base URL
and Bearer token authentication when native mode is detected
- Enable prompt caching in claude.ts for native GitHub mode so cache_control
blocks are sent (previously only allowed for firstParty/bedrock/vertex)
- CLAUDE_CODE_GITHUB_ANTHROPIC_API=1 env var to force native mode for any
model
Benefits:
- Proper Anthropic message format (no lossy OpenAI translation)
- Explicit cache_control blocks for fine-grained caching control
- Potentially better Claude model behaviour with native format
Related: #515
* fix: scope force flag to Claude models and add isGithubNativeAnthropicMode tests
- CLAUDE_CODE_GITHUB_ANTHROPIC_API=1 now returns false for non-Claude models
(force flag still useful for aliases like 'github:copilot' with no model
resolved yet, where it returns true when model is empty)
- Add 7 focused tests covering mode detection: off without GitHub provider,
auto-detect via OPENAI_MODEL and resolvedModel, non-Claude model rejection,
and force-flag behaviour for claude/non-claude/no-model cases
* fix: detect github:copilot:claude- compound format, remove force flag
OPENAI_MODEL for GitHub Copilot uses the format 'github:copilot:MODEL'
(e.g. 'github:copilot:claude-sonnet-4'), which does not start with 'claude-'.
Auto-detection now handles both bare model names and the compound format.
The CLAUDE_CODE_GITHUB_ANTHROPIC_API force flag is removed: with proper
compound-format detection there is no remaining gap it could fill, and
keeping a broad override flag without a concrete use case invites misuse.
Tests updated to cover the compound format, generic alias (false), and
non-Claude compound model (github:copilot:gpt-4o → false).
* fix: use includes('claude-') for model detection, remove force flag
Detection was broken for the standard GitHub Copilot compound format
'github:copilot:claude-sonnet-4' which does not start with 'claude-'.
Using includes('claude-') handles bare names, compound names, and any
future variants without needing updates.
The CLAUDE_CODE_GITHUB_ANTHROPIC_API force flag is removed as it was
a workaround for the broken detection, not a genuine use case.
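The final detection rule, simplified (parameter names are illustrative; the real check lives in providers.ts):

```typescript
function isGithubNativeAnthropicMode(
  githubProviderActive: boolean,
  resolvedModel: string
): boolean {
  // includes() covers bare names ('claude-sonnet-4') and the compound
  // 'github:copilot:claude-sonnet-4' format alike.
  return githubProviderActive && resolvedModel.includes('claude-');
}
```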
---------
Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
Reasoning models (MiniMax M2.7, GLM-4.5/5, DeepSeek, Kimi K2) inline
chain-of-thought inside <think>...</think> tags in the content field
rather than using the reasoning_content channel. The prior phrase-matching
sanitizer (looksLikeLeakedReasoningPrefix) only caught English-prose
preambles like "I should"/"the user asked", missed tag-based leaks
entirely, and risked false-stripping legitimate assistant output.
Replace with a structural tag-based approach (same pattern as hermes-agent):
- createThinkTagFilter() — streaming state machine that buffers partial
tags across SSE delta boundaries (<th| + |ink>), so tags split mid-chunk
still parse correctly.
- stripThinkTags() — whole-text cleanup for non-streaming responses and
as a safety net. Handles closed pairs, unterminated opens at block
boundaries, and orphan tags.
- Recognizes think, thinking, reasoning, thought, REASONING_SCRATCHPAD
case-insensitively, including tags with attributes.
- False-negative bias: flush() discards buffered partial tags at stream
end rather than leaking them.
Existing phrase-based shim tests updated to exercise the actual <think>
tag leak. Added regression tests confirming legitimate prose starting
with "I should..." is preserved (the old sanitizer's main false-positive).
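A sketch of the whole-text cleanup only (the streaming state machine with cross-chunk buffering is more involved; regex construction here is an assumption about the implementation):

```typescript
const TAG_NAMES = 'think|thinking|reasoning|thought|REASONING_SCRATCHPAD';
// Closed pair, optionally with attributes; backreference keeps open/close matched.
const CLOSED_PAIR = new RegExp(`<(${TAG_NAMES})(\\s[^>]*)?>[\\s\\S]*?</\\1\\s*>`, 'gi');
// Unterminated open at a block boundary: drop everything from the tag onward.
const UNTERMINATED = new RegExp(`<(${TAG_NAMES})(\\s[^>]*)?>[\\s\\S]*$`, 'i');
const ORPHAN_CLOSE = new RegExp(`</(${TAG_NAMES})\\s*>`, 'gi');

function stripThinkTags(text: string): string {
  return text
    .replace(CLOSED_PAIR, '')
    .replace(UNTERMINATED, '')
    .replace(ORPHAN_CLOSE, '')
    .trim();
}
```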
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When set, disables strict schema normalization for non-Gemini providers.
Useful for OpenAI-compatible endpoints that reject MCP tools with complex
optional params (e.g. list[dict]) with "Extra required key ... supplied"
errors.
* fix: remove cached mcpClient in diagnostic tracking to prevent stale references
Resolves TODO comment about not caching the connected mcpClient since it can change.
Changes:
- Remove cached mcpClient field from DiagnosticTrackingService
- Add currentMcpClients storage to track active clients
- Update beforeFileEdited, getNewDiagnostics, and ensureFileOpened to accept client parameter
- Add backward-compatible methods to maintain existing API
- Update all callers to use new methods
- Add comprehensive test coverage
This prevents using stale MCP client references during reconnections,
making diagnostic tracking more reliable.
Fixes #TODO
* docs: add my contributions section to README
Add fork-specific section highlighting:
- Diagnostic tracking enhancement (PR #727)
- Technical skills demonstrated
- Links to original project and my work
- Professional contribution showcase
* revert: remove README.md contributions section to comply with reviewer request
- Remove 'My Fork & Contributions' section from README.md
- Keep README.md focused on original project documentation
- Maintain clean, project-focused README as requested by reviewer
* fix(api): drop orphan tool results to satisfy Mistral/OpenAI strict role sequence
* test: add test for orphan tool results and restore gemini comments
Problem: After auto-compaction with DeepSeek models (e.g., deepseek-chat),
the status line displayed ~16% remaining until next auto-compact, but users
expected ~30% (since compaction reduces usage to roughly half of the full
128k context).
Root cause: calculateTokenWarningState() used the auto-compaction threshold
(effectiveContextWindow - 13k buffer) as the denominator for percentLeft.
For DeepSeek-chat:
- Raw context: 128,000
- Effective: 119,808 (128k - 8,192 output reservation)
- Threshold: 106,808 (effective - 13k buffer)
At 90k usage:
- Old: (106,808 - 90k) / 106,808 ≈ 16%
- Expected: (128,000 - 90k) / 128,000 ≈ 30%
Fix: Change percentLeft calculation to use raw context window from
getContextWindowForModel() as denominator, while keeping threshold-based
warnings/triggers unchanged. This makes the displayed percentage show
remaining capacity relative to the model's full context size.
Impact:
- UI now shows correct % of total context remaining
- Auto-compaction trigger point unchanged (still ~90% of effective window)
- All other threshold calculations unaffected
Testing:
- Manual verification: DeepSeek-chat at 90k tokens shows 30% remaining (was 16%)
- Manual verification: Threshold still triggers at ~106k tokens
- Build succeeds: npm run build
- No breaking changes: Callers only depend on percentLeft for display; threshold logic unchanged
Fixes the user-reported discrepancy for DeepSeek and other OpenAI-compatible models.
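The display-side change in isolation (warnings and triggers keep the old threshold-based denominator):

```typescript
// Percentage of the model's FULL context remaining — display only.
function percentLeft(usedTokens: number, rawContextWindow: number): number {
  return Math.max(0, Math.round(((rawContextWindow - usedTokens) / rawContextWindow) * 100));
}

// DeepSeek-chat at 90k used: percentLeft(90_000, 128_000) → 30
```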
* add mistral and gemini provider type for profile provider field
* load latest locally selected
* env variables take precedence over json save
* add gemini context windows and fix gemini defaulting for env
* load on startup fix
* fix failing tests
* clarify test message
* fix variable mismatches
* fix failing test
* delete keys and set profile.apiKey for mistral and gemini
* switch model as well when switching provider
* set model when adding a new model
* feat: add NVIDIA NIM and MiniMax provider support
- Add nvidia-nim and minimax to --provider CLI flag
- Add model discovery for NVIDIA NIM (160+ models) and MiniMax
- Update /model picker to show provider-specific models
- Fix provider detection in startup banner
- Update .env.example with new provider options
Supported providers:
- NVIDIA NIM: https://integrate.api.nvidia.com/v1
- MiniMax: https://api.minimax.io/v1
* fix: resolve conflict in StartupScreen (keep NVIDIA/MiniMax + add Codex detection)
* fix: resolve providerProfile conflict (add imports from main, keep NVIDIA/MiniMax)
* fix: revert providerSecrets to match main (NVIDIA/MiniMax handled elsewhere)
* fix: add context window entries for NVIDIA NIM and new MiniMax models
* fix: use GLM-5 as NVIDIA NIM default and MiniMax-M2.5 for consistency
* fix: address remaining review items - add GLM/Kimi context entries, max output tokens, fix .env.example, revert to Nemotron default
* fix: filter NVIDIA NIM picker to chat/instruct models only, set provider-specific API keys from saved profiles
* chore: add more NVIDIA NIM context window entries for popular models
* fix: address remaining non-blocking items - fix base model, clear provider API keys on profile switch
* feat: open useful USER_TYPE-gated features to all users
Remove 13 process.env.USER_TYPE === 'ant' gates that restricted useful
features to Anthropic employees. These features work without Anthropic
infrastructure and are now available to all open-build users.
Features opened:
- Agent nesting (sub-agents can spawn sub-agents)
- Effort 'max' persistence in settings
- Plan mode interview phase (controlled by feature flags)
- Sandbox disabled commands (via ~/.claude/feature-flags.json)
- All tips visible to all users (plan mode, feedback, shift-tab)
Simplified:
- Fullscreen defaults to off (use /config to enable)
- Explore agent always uses haiku model
- Plan mode tool uses conservative prompt for all users
Continues the USER_TYPE cleanup from #637 (dead code) and builds
on #639 (local feature flags).
* fix: address Copilot review comments — remove residual dead code
1. bridgeConfig.ts: ungate bridge override functions — return env vars
directly instead of hardcoded undefined
2. bridgeMain.ts + initReplBridge.ts: ungate sessionIngressUrl — read
CLAUDE_BRIDGE_SESSION_INGRESS_URL without USER_TYPE check
3. tools.ts: remove dead ConfigTool/TungstenTool imports, narrow
eslint-disable scope, stub REPLTool/SuggestBackgroundPRTool to null
4. readOnlyValidation.ts: remove orphaned ANT_ONLY_COMMAND_ALLOWLIST
and unused GH_READ_ONLY_COMMANDS import
5. insights.ts: remove entire remote collection plumbing (types,
functions, options, display logic)
6. osc.ts: hardcode supportsTabStatus() to false (internal-only feature)
7. state.ts: simplify addSlowOperation/getSlowOperations to no-ops,
remove dead constants
* fix: address Copilot review on PR #644
1. settings/types.ts: allow 'max' effort level for all users in Zod
schema — was still gated behind USER_TYPE=ant, causing 'max' to be
silently dropped on settings reload
2. shouldUseSandbox.ts: defensively normalize disabledCommands from
feature flag config with Array.isArray() guards
* fix: address second round of Copilot review on PR #644
1. shouldUseSandbox.ts: validate top-level shape of disabledCommands
before accessing properties (handles null/primitive from feature flag)
2. fullscreen.ts: update JSDoc to reflect removal of USER_TYPE default
3. osc.ts: update JSDoc — "Ant-only" → "Currently disabled"
* fix: resolve 12 bugs across API, MCP, agent tools, web search, and context overflow
API fixes:
- Fix Gemini 400 error: delete 'store: false' field for Gemini endpoints
(was globally injected, Gemini rejects unknown fields)
- Fix session timeout 500 errors after ~25min: add 120s idle timeout
on SSE stream readers in openaiShim and codexShim to detect dead
connections and trigger withRetry reconnection
- Fix context overflow 500 errors: add handler in errors.ts for 500
responses caused by oversized conversation context (too many tokens),
surfacing user-friendly message with recovery actions instead of raw
'API Error: 500'
Agent loop fix:
- Fix premature task completion: detect continuation signals like
'so now I have to do it' in assistant text without tool calls and
inject a meta nudge to force the agent to continue
Web search improvements:
- Increase result counts: Bing/Tavily/Exa/Firecrawl from 10→15,
Mojeek/You/Jina from default→10 (explicit), max_uses 8→15
MCP fixes:
- Reduce default tool timeout from ~27.8 hours to 5 minutes
(tools no longer hang indefinitely on unresponsive servers)
- Add retry logic (3 attempts) for tools/list fetch failures
(prevents all MCP tools from silently disappearing on timeout)
- Add abort signal check in URL elicitation retry loop
- Improve MCP error messages with server and tool name context
Agent tool fixes:
- Fix SendMessage race condition: double-check task status before
auto-resuming stopped agents to prevent duplicate registration
- Fix auto-compact circuit breaker gap: when auto-compact fails 3+
consecutive times, proactively block oversized context BEFORE the
API call instead of letting it 500. Clear message with recovery
instructions (/new, /compact, rewind).
Tests: 850 total, 0 failures (25 new bugfix tests)
* fix: address all 4 review blockers + 6 additional issues from PR #674
Blockers (from Vasanthdev2004 review):
1. Continuation nudge infinite loop — no loop guard
Added continuationNudgeCount to State, capped at MAX_CONTINUATION_NUDGES (3).
Counter increments on each nudge, resets on tool execution (next_turn).
2. Continuation signal regexes too broad — high false-positive rate
Tightened all patterns to require explicit action verbs. Added completion
marker check (done/finished/completed/summary). Broad patterns only fire
on messages <80 chars.
3. BUGFIXES.md in repo root — scope contamination
Removed. PR description already contains this info.
4. AgentTool dump state cleanup is comment-only, not a bug fix
Wrapped clearInvokedSkillsForAgent and clearDumpState in individual
try/catch blocks so one failure doesn't prevent the other.
Additional issues:
5+6. readWithTimeout ignores AbortSignal, timer leak on abort
Added optional signal param to openaiStreamToAnthropic,
codexStreamToAnthropic, collectCodexCompletedResponse, readSseEvents.
Added abort listener that clears idle timer so AbortError surfaces
cleanly instead of spurious idle timeout.
7. MCP error format change breaks consumers
Reverted human-readable message to original errorDetails format.
Moved server/tool context to telemetryMessage param only.
10. AgentTool test broken by comment change
Updated test assertions to match new defensive cleanup text + try/catch.
12. Mojeek test regex dangerously broad
Tightened to match searchParams.set('t', '10') specifically.
14. linkup.ts in providerCounts test — no result count field
Removed from providers list (uses depth param, not result count).
15. Error message overlap between errors.ts and query.ts
Prefixed errorDetails with 'Context overflow (500):' to distinguish.
Tests: 851 pass, 0 fail
---------
Co-authored-by: openclaude-bot <bot@openclaude.ai>
Co-authored-by: Fix Bot <fix@openclaude.dev>
* feat: enhance codex provider resolution with shortcut aliases and improved base URL handling
* fix: enhance codex alias resolution to include shell model
* feat: enhance Codex provider resolution to support new aliases and base URL handling
* fix: update base URL resolution logic for Codex models in GitHub mode
* fix: update provider transport logic to enforce Codex responses and adjust base URL handling
* fix: update provider request resolution to respect custom base URLs and adjust transport logic
* fix: restore OPENAI_MODEL environment variable handling in tests and provider config
- Raise context window fallback from 8k to 128k for unknown OpenAI-compat models.
The 8k fallback caused effective context (8k minus output reservation) to go
negative, making auto-compact fire on every single message.
- Add safety floor in getEffectiveContextWindowSize(): effective context is
always at least reservedTokensForSummary + 13k buffer, ensuring the
auto-compact threshold stays positive.
- Add missing MiniMax model entries (M2.5, M2.5-highspeed, M2.1, M2.1-highspeed)
all at 204,800 context / 131,072 max output per MiniMax docs.
- Add tests for MiniMax variants, 128k fallback, and autoCompact floor.
Fixes #635
Co-authored-by: root <root@vm7508.lumadock.com>
* fix: report cache reads in streaming and correct cost calculation
Fix two bugs in how the OpenAI-to-Anthropic shim handles cached tokens:
1. codexShim: streaming message_delta missing cache_read_input_tokens
The codexStreamToAnthropic() function builds the final message_delta
usage object inline (not through makeUsage()), and only included
input_tokens and output_tokens. cache_read_input_tokens was always 0,
so /cost never showed cache reads for Responses API models (GPT-5+).
Also fix makeUsage() to read input_tokens_details.cached_tokens and
prompt_tokens_details.cached_tokens for the non-streaming path.
2. Both shims: cost double-counting from convention mismatch
OpenAI includes cached tokens in input_tokens/prompt_tokens (i.e.,
input_tokens = uncached + cached). Anthropic treats input_tokens as
uncached only. The cost formula was:
cost = input_tokens * inputRate + cache_read * cacheRate
This double-counts cached tokens. Fix by subtracting cached from
input during the conversion:
input_tokens = prompt_tokens - cached_tokens
In practice this was inflating reported costs by ~2x for sessions
with high cache hit rates (which is most sessions, since Copilot
auto-caches server-side).
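The conversion, as a sketch (function name is illustrative; the real change lives in makeUsage() and the inline message_delta builder):

```typescript
// OpenAI's prompt_tokens INCLUDES cached tokens; Anthropic's input_tokens is
// uncached-only. Subtract during conversion so cost isn't double-counted.
function toAnthropicUsage(promptTokens: number, cachedTokens: number, completionTokens: number) {
  return {
    input_tokens: promptTokens - cachedTokens,
    cache_read_input_tokens: cachedTokens,
    output_tokens: completionTokens,
  };
}
```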
Fixes #515
* fix: omit zero cache read/write fields from /cost output
Only show "cache read" and "cache write" in /cost per-model usage when
the value is > 0. Providers like GitHub Copilot never report
cache_creation_input_tokens (the server manages its own cache), so
showing "0 cache write" on every line is misleading — it implies caching
is not working when it actually is.
Before:
claude-haiku: 2.6k input, 151 output, 39.8k cache read, 0 cache write ($0.04)
After:
claude-haiku: 2.6k input, 151 output, 39.8k cache read ($0.04)
---------
Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
Set store: false in the request body for both the Chat Completions path
and the /responses fallback path in openaiShim.ts.
The codexShim (Responses API primary path) already sets store: false.
The Chat Completions path and the /responses fallback in openaiShim were
missing it.
store: false tells the API provider not to persist conversation data for
model training, logging, or other non-operational purposes. This is a
privacy measure — it does not affect caching or functionality.
Note: Whether third-party proxies (e.g. GitHub Copilot) honour this
parameter is provider-dependent, but setting it is a reasonable default
for user privacy.
Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
* Stop canonical Anthropic headers from leaking into 3P shim requests
The remaining blocker from PR #268 was that canonical Anthropic headers such as
`anthropic-version` and `anthropic-beta` could still ride through supported 3P
paths even after the earlier x-anthropic/x-claude scrubber work. This tightens
header filtering inside the shim itself so direct defaultHeaders, env-driven
client setup, providerOverride routing, and per-request header injection all
share the same scrubber.
Constraint: Preserve non-Anthropic custom headers and provider auth while stripping only Anthropic/OpenClaude-internal headers from 3P requests
Rejected: Rely on client.ts filtering alone | direct shim construction and per-request headers would still leave gaps
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep header scrubbing centralized in the shim so new call paths do not reopen 3P leakage bugs
Tested: bun test src/services/api/openaiShim.test.ts src/services/api/client.test.ts src/utils/context.test.ts
Tested: bun run test:provider
Tested: bun run build && node dist/cli.mjs --version
Not-tested: bun run typecheck (repository baseline currently fails in many unrelated files)
* Keep OpenAI client tests from restoring undefined env as strings
The new header-leak regression tests in client.test.ts restored environment
variables via direct assignment, which can leave literal "undefined" strings in
process.env when the original value was unset. This switches the teardown over
to the same restore helper pattern already used in openaiShim.test.ts.
Constraint: Keep the fix limited to test hygiene without altering runtime behavior
Rejected: Restore only the two env vars Copilot called out | using one helper for all test env restores is simpler and less error-prone
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Use restore helpers for env teardown in tests so unset values stay deleted instead of becoming the string "undefined"
Tested: bun test src/services/api/client.test.ts src/services/api/openaiShim.test.ts src/utils/context.test.ts
Not-tested: Full provider suite (unchanged runtime path)
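The restore-helper pattern works like this (a sketch of the idea, not the exact helpers in openaiShim.test.ts):

```typescript
// Snapshot env values before a test, then restore them afterwards.
// The key detail: an unset variable must be deleted on restore, because
// `process.env[k] = undefined` stores the literal string "undefined".
function snapshotEnv(keys: string[]): Map<string, string | undefined> {
  return new Map(keys.map((k) => [k, process.env[k]]));
}

function restoreEnv(snapshot: Map<string, string | undefined>): void {
  for (const [key, value] of snapshot) {
    if (value === undefined) {
      delete process.env[key]; // do NOT assign undefined back
    } else {
      process.env[key] = value;
    }
  }
}
```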
* Prevent GitHub Codex requests from forwarding unsanitized Anthropic headers
A base-sync with upstream exposed a separate GitHub+Codex transport branch
that still merged per-request headers raw before adding Copilot headers.
This keeps the filter aligned across Codex-family paths and adds explicit
regression tests for GitHub Codex routing, including providerOverride.
Constraint: Must not push or modify GitHub state while validating the reviewer concern
Rejected: Leave the GitHub Codex path unchanged | runtime repro showed anthropic-* headers still leaked after the upstream sync
Confidence: high
Scope-risk: narrow
Directive: Keep header scrubbing consistent across every Codex-family transport branch when provider routing changes
Tested: bun test src/services/api/openaiShim.test.ts
Tested: bun test src/services/api/client.test.ts src/services/api/codexShim.test.ts src/services/api/providerConfig.github.test.ts
Tested: bun run build
Not-tested: Full repository test suite
* feat: add AutoFix config schema and reader module
Implements AutoFixConfigSchema (Zod v4) with validation for lint/test
commands, maxRetries (0-10, default 3), and timeout (1000-300000ms,
default 30000). Adds getAutoFixConfig helper that returns null for
disabled or invalid configs. All 9 unit tests pass.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
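The validation rules can be sketched dependency-free (the real module uses Zod v4; field names and the exact invalid-config conditions here are assumptions based on the description above):

```typescript
// Dependency-free sketch of what AutoFixConfigSchema enforces.
interface AutoFixConfig {
  enabled: boolean;
  lintCommand?: string;
  testCommand?: string;
  maxRetries: number; // 0-10, default 3
  timeout: number; // 1000-300000 ms, default 30000
}

function getAutoFixConfig(raw: unknown): AutoFixConfig | null {
  if (typeof raw !== "object" || raw === null) return null;
  const r = raw as Record<string, unknown>;
  if (r.enabled !== true) return null; // disabled configs resolve to null
  const maxRetries = typeof r.maxRetries === "number" ? r.maxRetries : 3;
  const timeout = typeof r.timeout === "number" ? r.timeout : 30000;
  if (maxRetries < 0 || maxRetries > 10) return null;
  if (timeout < 1000 || timeout > 300000) return null;
  const lintCommand = typeof r.lintCommand === "string" ? r.lintCommand : undefined;
  const testCommand = typeof r.testCommand === "string" ? r.testCommand : undefined;
  if (!lintCommand && !testCommand) return null; // enabled with no commands is invalid
  return { enabled: true, lintCommand, testCommand, maxRetries, timeout };
}
```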
* feat: add autoFix runner with lint/test command execution
Implements AutoFixRunner (Task 2) - executes lint and test shell commands
sequentially, short-circuits on lint failure, handles timeouts, and
produces structured AutoFixResult with AI-friendly error summaries.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat: add autoFix field to SettingsSchema with integration tests
Integrates AutoFixConfigSchema into SettingsSchema so autoFix settings
are validated at the settings layer. Adds two integration tests verifying
that valid configs are accepted and invalid configs (enabled with no
commands) are rejected.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat: add autoFix hook integration helpers (Task 4)
Implements shouldRunAutoFix and buildAutoFixContext functions used by
the PostToolUse hook to determine when to run auto-fix and format
errors as AI-readable context for injection.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat: wire autoFix into PostToolUse hook flow (Task 5)
Add auto-fix lint/test check after existing PostToolUse hooks in
runPostToolUseHooks. When autoFix is configured in settings, runs
lint/test commands after file_edit/file_write tools and yields
errors as hook_additional_context for the model to act on.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add /auto-fix slash command
Adds the /auto-fix prompt command that helps users configure autoFix settings
(lint/test commands, maxRetries, timeout) in .claude/settings.json.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix: remove unused imports in autoFixRunner test
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address review feedback — enforce maxRetries, wire abort signal, use cross-platform shell
1. Enforce maxRetries: track auto-fix attempts per query chain in toolHooks.ts
and stop feeding errors back after the configured limit is reached.
2. Wire abort signal to subprocess: subscribe to AbortController signal in
runCommand() and kill the process tree on abort. Uses detached process
groups on Unix to ensure child processes are also terminated.
3. Replace hardcoded bash with shell:true: use Node's cross-platform shell
resolution instead of spawn('bash', ['-c', ...]) so auto-fix commands
work on Windows and non-bash environments.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
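Points 2 and 3 can be sketched together; the function name and exact kill semantics are illustrative, not the runner's real code:

```typescript
import { spawn } from "node:child_process";

// Illustrative runCommand: shell: true lets Node resolve cmd.exe vs /bin/sh,
// and an AbortSignal kills the detached process group on Unix so children die too.
function runCommand(command: string, signal: AbortSignal): Promise<number | null> {
  return new Promise((resolve) => {
    const child = spawn(command, {
      shell: true, // cross-platform shell resolution (no hardcoded bash)
      detached: process.platform !== "win32", // own process group on Unix
    });
    const onAbort = () => {
      if (process.platform !== "win32" && child.pid) {
        try {
          process.kill(-child.pid, "SIGTERM"); // negative pid = whole group
        } catch {}
      } else {
        child.kill();
      }
    };
    signal.addEventListener("abort", onAbort, { once: true });
    child.on("exit", (code) => {
      signal.removeEventListener("abort", onAbort);
      resolve(code);
    });
  });
}
```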
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Fixes #430. In normalizeSchemaForOpenAI(), the strict branch was adding every
property key to required[], including optional ones. This caused providers like
Groq, Azure OpenAI, and others to reject valid tool calls with a 400 /
tool_use_failed error because the model correctly omits optional arguments but
the provider sees them as missing required fields.
Root cause: the strict branch used `[...existingRequired, ...allKeys]` instead
of `existingRequired.filter(k => k in normalizedProps)`. The Gemini branch
already had the correct logic.
Fix: align the strict branch with the Gemini branch — only keep properties that
were already marked required in the original schema. The additionalProperties:
false constraint is preserved as strict-mode providers still require it.
Add regression test covering the Read tool schema (file_path required,
offset/limit/pages optional).
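The before/after can be sketched as follows (names approximate the normalizeSchemaForOpenAI internals described above):

```typescript
// After the fix: only keep keys that were required in the original schema
// and survived normalization. Before, this was
// [...existingRequired, ...Object.keys(normalizedProps)], which marked
// every property (including optional ones) as required.
function buildStrictRequired(
  existingRequired: string[],
  normalizedProps: Record<string, unknown>,
): string[] {
  return existingRequired.filter((k) => k in normalizedProps);
}
```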
* update GitHub Copilot API with official client ID and update model configurations
* test: add unit tests for exchangeForCopilotToken and enhance GitHub model normalization
* remove PAT token feature
* test(api): harden provider tests against env leakage
* Added back trimmed github auth token
* added auto-refresh logic for the auth token, along with a test
* fix: remove forked provider validation in cli.tsx and clear stale provider env vars in /onboard-github
* refactor: streamline environment variable handling in mergeUserSettingsEnv
* fix: clear stale provider env vars to ensure correct GH routing
* Remove internal-only tooling from the external build (#352)
* Remove internal-only tooling without changing external runtime contracts
This trims the lowest-risk internal-only surfaces first: deleted internal
modules are replaced by build-time no-op stubs, the bundled stuck skill is
removed, and the insights S3 upload path now stays local-only. The privacy
verifier is expanded and the remaining bundled internal Slack/Artifactory
strings are neutralized without broad repo-wide renames.
Constraint: Keep the first PR deletion-heavy and avoid mass rewrites of USER_TYPE, tengu, or claude_code identifiers
Rejected: One-shot DMCA cleanup branch | too much semantic risk for a first PR
Confidence: medium
Scope-risk: moderate
Reversibility: clean
Directive: Treat full-repo typecheck as a baseline issue on this upstream snapshot; do not claim this commit introduced the existing non-Phase-A errors without isolating them first
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Not-tested: Full repo typecheck (currently fails on widespread pre-existing upstream errors outside this change set)
* Keep minimal source shims so CI can import Phase A cleanup paths
The first PR removed internal-only source files entirely, but CI provider
and context tests import those modules directly from source rather than
through the build-time no-telemetry stubs. This restores tiny no-op source
shims so tests and local source imports resolve while preserving the same
external runtime behavior.
Constraint: GitHub Actions runs source-level tests in addition to bundled build/privacy checks
Rejected: Revert the entire deletion pass | unnecessary once the import contract is satisfied by small shims
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: For later cleanup phases, treat build-time stubs and source-test imports as separate compatibility surfaces
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (still noisy on this upstream snapshot)
---------
Co-authored-by: anandh8x <test@example.com>
* Reduce internal-only labeling noise in source comments (#355)
This pass rewrites comment-only ANT-ONLY markers to neutral internal-only
language across the source tree without changing runtime strings, flags,
commands, or protocol identifiers. The goal is to lower obvious internal
prose leakage while keeping the diff mechanically safe and easy to review.
Constraint: Phase B is limited to comments/prose only; runtime strings and user-facing labels remain deferred
Rejected: Broad search-and-replace across strings and command descriptions | too risky for a prose-only pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly runtime/user-facing strings and should be handled separately from comment cleanup
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Neutralize internal Anthropic prose in explanatory comments (#357)
This is a small prose-only follow-up that rewrites clearly internal or
explanatory Anthropic comment language to neutral wording in a handful of
high-confidence files. It avoids runtime strings, flags, command labels,
protocol identifiers, and provider-facing references.
Constraint: Keep this pass narrowly scoped to comments/documentation only
Rejected: Broader Anthropic comment sweep across functional API/protocol references | too ambiguous for a safe prose-only PR
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Leave functional Anthropic references (API behavior, SDKs, URLs, provider labels, protocol docs) for separate reviewed passes
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Neutralize remaining internal-only diagnostic labels (#359)
This pass rewrites a small set of ant-only diagnostic and UI labels to
neutral internal wording while leaving command definitions, flags, and
runtime logic untouched. It focuses on internal debug output, dead UI
branches, and noninteractive headings rather than broader product text.
Constraint: Label cleanup only; do not change command semantics or ant-only logic gates
Rejected: Renaming ant-only command descriptions in main.tsx | broader UX surface better handled in a separate reviewed pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly command descriptions and intentionally deferred user-facing strings
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Finish eliminating remaining ANT-ONLY source labels (#360)
This extends the label-only cleanup to the remaining internal-only command,
debug, and heading strings so the source tree no longer contains ANT-ONLY
markers. The pass still avoids logic changes and only renames labels shown
in internal or gated surfaces.
Constraint: Update the existing label-cleanup PR without widening scope into behavior changes
Rejected: Leave the last ANT-ONLY strings for a later pass | low-cost cleanup while the branch is already focused on labels
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: The next phase should move off label cleanup and onto a separately scoped logic or rebrand slice
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Stub internal-only recording and model capability helpers (#377)
This follow-up Phase C-lite slice replaces purely internal helper modules
with stable external no-op surfaces and collapses internal elevated error
logging to a no-op. The change removes additional USER_TYPE-gated helper
behavior without touching product-facing runtime flows.
Constraint: Keep this PR limited to isolated helper modules that are already external no-ops in practice
Rejected: Pulling in broader speculation or logging sink changes | less isolated and easier to debate during review
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Continue Phase C with similarly isolated helpers before moving into mixed behavior files
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Remove internal-only bundled skills and mock helpers (#376)
* Remove internal-only bundled skills and mock rate-limit behavior
This takes the next planned Phase C-lite slice by deleting bundled skills
that only ever registered for internal users and replacing the internal
mock rate-limit helper with a stable no-op external stub. The external
build keeps the same behavior while removing a concentrated block of
USER_TYPE-gated dead code.
Constraint: Limit this PR to isolated internal-only helpers and avoid bridge, oauth, or rebrand behavior
Rejected: Broad USER_TYPE cleanup across mixed runtime surfaces | too risky for the next medium-sized PR
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: The next cleanup pass should continue with similarly isolated USER_TYPE helpers before touching main.tsx or protocol-heavy code
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
* Align internal-only helper removal with remaining user guidance
This follow-up fixes the mock billing stub to be a true no-op and removes
stale user-facing references to /verify and /skillify from the same PR.
It also leaves a clearer paper trail for review: the deleted verify skill
was explicitly ant-gated before removal, and the remaining mock helper
callers still resolve to safe no-op returns in the external build.
Constraint: Keep the PR focused on consistency fixes and reviewer-requested evidence, not new cleanup scope
Rejected: Leave stale guidance for a later PR | would make this branch internally inconsistent after skill removal
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When deleting gated features, always sweep user guidance and coordinator prompts in the same pass
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy; changed-file scan still shows only pre-existing tipRegistry errors outside edited lines)
* Clarify generic workflow wording after skill removal
This removes the last generic verification-skill wording that could still
be read as pointing at a deleted bundled command. The guidance now talks
about project workflows rather than a specific bundled verify skill.
Constraint: Keep the follow-up limited to reviewer-facing wording cleanup on the same PR
Rejected: Leave generic wording as-is | still too easy to misread after the explicit /verify references were removed
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When removing bundled commands, scrub both explicit and generic references in the same branch
Tested: bun run build
Tested: bun run smoke
Not-tested: Additional checks unchanged by wording-only follow-up
---------
Co-authored-by: anandh8x <test@example.com>
* test(api): add GEMINI_AUTH_MODE to environment setup in tests
* test: isolate GitHub/Gemini credential tests with fresh module imports and explicit non-bare env setup to prevent cross-test mock/cache leaks
* fix: update GitHub Copilot base URL and model defaults for improved compatibility
* fix: enhance error handling in OpenAI API response processing
* fix: improve error handling for GitHub Copilot API responses and streamline error body consumption
* fix: enhance response handling in OpenAI API shim for better error reporting and support for streaming responses
* feat: enhance GitHub device flow with fresh module import and token validation improvements
* fix: separate Copilot API routing from GitHub Models, clear stale env vars, honor providerOverride.apiKey
* fix: route GitHub GPT-5/Codex to Copilot API, show all Copilot models in picker, clear stale env vars
* fix GitHub Models API regression
* feat: update GitHub authentication to require OAuth tokens, normalize model handling for Copilot and GitHub Models
* fix: update GitHub token validation to support OAuth tokens and improve endpoint type handling
---------
Co-authored-by: Anandan <anandan.8x@gmail.com>
Co-authored-by: anandh8x <test@example.com>
* fix: strip Anthropic-specific params from 3P provider paths
Three silent failure modes affecting all third-party provider users:
1. Thinking blocks serialized as <thinking> text corrupt multi-turn
context — strip them instead of converting to raw text tags.
2. Unknown models fall through to 200k context window default, so
auto-compact never triggers — use conservative 8k for unknown
3P models with a warning log.
3. Session resume with thinking blocks causes 400 or context corruption
on 3P providers — strip thinking/redacted_thinking content blocks
from deserialized messages when resuming against a non-Anthropic
provider.
Addresses findings 2, 3, and 5 from #248.
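The resume-stripping in point 3 can be sketched like this (types simplified; the real code operates on deserialized session messages):

```typescript
// Strip thinking/redacted_thinking blocks before replaying a resumed
// session against a non-Anthropic provider; drop messages left empty.
type ContentBlock = { type: string; [key: string]: unknown };
type Message = { role: string; content: ContentBlock[] };

function stripThinkingBlocks(messages: Message[]): Message[] {
  return messages
    .map((m) => ({
      ...m,
      content: m.content.filter(
        (b) => b.type !== "thinking" && b.type !== "redacted_thinking",
      ),
    }))
    .filter((m) => m.content.length > 0);
}
```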
* test: align resume stripping expectation with orphan-thinking filter
* test: isolate provider env in conversation recovery tests
* test: move provider-sensitive resume coverage behind module mocks
* test: trim extra blank lines in conversation recovery test
Keep the focused provider-resume test diff clean so the regression branch stays easy to review.
Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>
---------
Co-authored-by: Claude Opus 4.6 <noreply@openclaude.dev>
* fix: restore Grep and Glob reliability on OpenAI paths
Preserve Grep and Glob pattern fields during OpenAI/Codex schema sanitization, and fall back to system ripgrep when the packaged binary is missing. This keeps search tool schemas intact and improves Linux usability for npm/source installs.
Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>
* test: clean up ripgrep fallback test helpers
Remove the unused ripgrepCommand import and normalize mocked builtin ripgrep paths so the test behaves consistently across platforms.
Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>
* test: remove duplicate Codex URI schema case
Drop the duplicated WebFetch URI-format test in codexShim.test.ts so test names stay unique and failures remain easier to read.
Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>
* test: stabilize ripgrep fallback coverage
Avoid fs/module mocking in ripgrep fallback tests by extracting the config selection logic into a pure helper. This preserves the fallback coverage while removing the test interaction that caused the narrowed Bun hang repro.
Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>
* test: tighten ripgrep and schema coverage
Align the ripgrep fallback test with the actual auto-fallback branch, clean up strict typing in schema sanitizer tests, and tighten ripgrep error narrowing for type safety.
Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>
---------
Co-authored-by: Claude Opus 4.6 <noreply@openclaude.dev>
* fix: address code scanning alerts
Parse Gemini hostnames instead of matching raw URL substrings, redact gRPC error logs, and harden the Finder drag-drop test escape helper so the flagged paths are fixed without regressing working behavior.
* Potential fix for pull request finding 'CodeQL / Clear-text logging of sensitive information'
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* fix: restore safe grpc error summaries
A later autofix commit removed the exported gRPC error summarizer while the new regression test still imported it. Restore the safe name/code-only summary so CI stays green without reintroducing clear-text logging.
* fix: keep grpc logging generic
Remove the stale helper/test pair and keep the gRPC startup and stream logs free of error-derived data so the CodeQL clear-text logging alert stays closed while the rest of the security fixes remain intact.
---------
Co-authored-by: OpenClaude Worker 3 <worker-3@openclaude.local>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
## Summary
- Added `isCompactableTool()` helper in `microCompact.ts` that matches
  both the existing COMPACTABLE_TOOLS set and any tool name prefixed with `mcp__`
- MCP tool results were never compacted because the hardcoded allowlist
only contained 9 built-in tools — MCP tools fell through and persisted
in full for the entire session, wasting 10-500K tokens/session
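The helper described above amounts to a set-or-prefix predicate (the set here is abbreviated; the real 9-tool allowlist lives in microCompact.ts):

```typescript
// Sketch: compactable = in the built-in allowlist OR any MCP tool,
// which are always named with an "mcp__" prefix.
const COMPACTABLE_TOOLS = new Set([
  "Bash", "Read", "Grep", "Glob", // ...abbreviated; 9 built-ins in the real set
]);

function isCompactableTool(toolName: string): boolean {
  return COMPACTABLE_TOOLS.has(toolName) || toolName.startsWith("mcp__");
}
```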
## Impact
- user-facing impact: long sessions using MCP servers (GitHub, Slack,
Playwright, etc.) will compact stale MCP tool results, reducing token
usage and delaying autocompact triggers
- developer/maintainer impact: new MCP servers are automatically covered
via prefix match — no need to update the allowlist per-server
## Testing
- [x] `bun run build`
- [x] `bun run smoke`
- [x] focused tests: `bun test src/services/compact/microCompact.test.ts`
- module exports load correctly
- estimateMessageTokens counts MCP tool_use blocks
- microcompactMessages processes MCP tools without error
- microcompactMessages processes mixed built-in and MCP tools
## Notes
- provider/model path tested: n/a (compaction logic is model-agnostic)
- screenshots attached (if UI changed): n/a
- follow-up work or known limitations: subagent results and thinking
blocks are still not compacted (separate RFCs)
https://claude.ai/code/session_01D7kprMn4c66a5WrZscF7rv
Co-authored-by: Claude <noreply@anthropic.com>
* fix: normalize malformed Bash tool arguments from OpenAI-compatible providers
* fix: keep invalid Bash tool args from becoming commands
* fix: preserve malformed Bash JSON literals
* test: stabilize rebased PR 385 checks
* test: isolate provider profile env assertions
* fix: extend tool argument normalization to all tools and harden edge cases
- Extend STRING_ARGUMENT_TOOL_FIELDS to normalize Read, Write, Edit,
Glob, and Grep plain-string arguments (fixes "Invalid tool parameters"
errors reported by VennDev)
- Normalize streaming Bash args regardless of finish_reason, not only
when finish_reason is 'tool_calls'
- Broaden isLikelyStructuredObjectLiteral to catch malformed object-shaped
strings like {command:"pwd"} and {'command':'pwd'} (fixes CR2 from
Vasanthdev2004)
- Apply blank/object-literal guard to all tools, not just Bash
- Extract duplicated JSON repair suffix combinations into shared constant
- Add 32 isolated unit tests for toolArgumentNormalization
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
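The broadened object-literal detection can be sketched as a loose shape check (the real heuristic in toolArgumentNormalization is broader; this regex is illustrative):

```typescript
// Detect malformed object-shaped argument strings like {command:"pwd"} or
// {'command':'pwd'} so they are repaired/rejected rather than run as commands.
function isLikelyStructuredObjectLiteral(raw: string): boolean {
  const s = raw.trim();
  if (!s.startsWith("{") || !s.endsWith("}")) return false;
  // Matches unquoted, single-quoted, and double-quoted leading keys alike.
  return /^\{\s*['"]?\w+['"]?\s*:/.test(s);
}
```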
* fix: skip streaming normalization on finish_reason length
Truncated tool calls (finish_reason: 'length') now preserve the raw
buffer instead of normalizing into executable commands, preventing
incomplete commands from becoming runnable.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: comprehensive tool argument normalization hardening
- Remove all { raw: ... } returns that caused InputValidationError with
z.strictObject schemas — return {} instead for clean Zod errors
- Extend normalizeAtStop buffering to all mapped tools (Read, Write,
Edit, Glob, Grep) so streaming paths also get normalized
- Make repairPossiblyTruncatedObjectJson generic — repair any valid
JSON object, not just ones with a command field
- Export hasToolFieldMapping for streaming normalizeAtStop decision
- Skip normalization on finish_reason: length to preserve raw truncated
buffer
- Update all test expectations to match new behavior
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Fix GLM-5 and other reasoning models appearing to hang via OpenAI shim
Reasoning models like GLM-5 and DeepSeek stream chain-of-thought in
`reasoning_content` while `content` stays empty (""). The OpenAI shim
only read `delta.content`, so it saw empty strings and never emitted
any Anthropic stream events — causing the UI to appear frozen.
- Add `reasoning_content` to streaming chunk and non-streaming response types
- Emit `reasoning_content` as thinking blocks (thinking_delta) in streaming mode
- Properly transition from thinking to text blocks when content phase begins
- Fall back to `reasoning_content` in non-streaming mode when content is null
Fixes #214
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
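The streaming fix boils down to reading both delta fields (event shapes heavily simplified here; the shim emits full Anthropic stream events):

```typescript
// Reasoning models stream chain-of-thought in reasoning_content while
// content stays "". Reading only delta.content made the UI appear frozen.
type OpenAIDelta = { content?: string | null; reasoning_content?: string | null };
type ShimEvent =
  | { type: "thinking_delta"; thinking: string }
  | { type: "text_delta"; text: string };

function deltaToEvents(delta: OpenAIDelta): ShimEvent[] {
  const events: ShimEvent[] = [];
  if (delta.reasoning_content) {
    events.push({ type: "thinking_delta", thinking: delta.reasoning_content });
  }
  if (delta.content) {
    events.push({ type: "text_delta", text: delta.content });
  }
  return events;
}
```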
* Fix non-streaming reasoning_content fallback and add tests
- Use explicit empty-string check instead of || for content fallback
so content: "" doesn't leak reasoning_content as visible text
- Close thinking block before tool call blocks in streaming path
- Add non-streaming and streaming reasoning_content tests
Co-Authored-By: GLM-5.1 <noreply@openclaude.dev>
* Fix flaky Ink reconciler tests caused by react-compiler memoization
Remove hard throw in createTextInstance that crashed when hostContext.isInsideText
was stale due to react-compiler element caching. Add timeout guards to prevent
test hangs when render errors prevent exit() from firing.
Co-Authored-By: Claude GLM-5.1 <noreply@openclaude.dev>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: GLM-5.1 <noreply@openclaude.dev>
* docs(docs): add agent guidance and repository instructions
- Created `AGENTS.md` and `CLAUDE.md` to provide high-signal guidance for AI agents and developers working in the repository.
- Outlined critical developer commands for building, testing, and running diagnostics using `bun`.
- Documented the repository architecture, source entrypoints, and core service logic.
- Defined framework-specific quirks, including module stubbing for internal modules and macro versioning.
- Established style and workflow guidelines regarding telemetry, environment variables, and security scan requirements.
* feat(api): support gemini thought signatures in openai shim
- Added `isGeminiMode` utility to detect Gemini backends via `CLAUDE_CODE_USE_GEMINI` or `OPENAI_BASE_URL`.
- Updated `convertMessages` to extract `thought_signature` from thinking blocks and inject them into tool calls.
- Implemented a fallback mechanism that provides a `skip_thought_signature_validator` string to avoid 400 validation errors when a signature is missing.
- Enhanced `openaiStreamToAnthropic` and `OpenAIShimMessages` to correctly preserve and pass through Gemini-specific metadata in `extra_content`.
* refactor(api): improve gemini metadata handling and remove redundant docs
- Updated `src/services/api/openaiShim.ts` to merge existing `google`-specific metadata within `extra_content` instead of overwriting it.
- Simplified the `thought_signature` assignment logic to use a fallback value of `skip_thought_signature_validator` when no signature is provided.
- Deleted `AGENTS.md` and `CLAUDE.md` files to eliminate redundant agent guidance documentation.
* fix(api): propagate gemini thought signatures to all parallel tool calls
- Removed the index constraint when assigning the `signature` from a `thinkingBlock` to tool calls in `openaiShim.ts`.
- Ensured that the `thought_signature` is applied to every tool call in a parallel set, rather than just the first one.
- Aligned the shim with Gemini API requirements, which mandate that the same signature must be present on every replayed function call part within an assistant turn.
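The parallel-set fix can be sketched as applying one signature to every call, with the fallback string named above (the type shape and helper name here are assumptions, not the shim's real structure):

```typescript
// Apply the captured thought_signature to every tool call in a parallel set;
// fall back to the validator-skip string when no signature was captured.
type ToolCall = {
  id: string;
  extra_content?: { google?: { thought_signature?: string } };
};

function applyThoughtSignature(toolCalls: ToolCall[], signature?: string): ToolCall[] {
  const sig = signature ?? "skip_thought_signature_validator";
  return toolCalls.map((tc) => ({
    ...tc,
    extra_content: {
      ...tc.extra_content,
      // Merge into existing google metadata rather than overwriting it.
      google: { ...tc.extra_content?.google, thought_signature: sig },
    },
  }));
}
```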
Models served through Ollama/vLLM with strict Jinja templates (Devstral,
Mistral, etc.) require strict user↔assistant role alternation and reject
requests with consecutive messages of the same role.
convertMessages() could produce consecutive user or assistant messages in
three scenarios: batched user input, text-only + tool_use assistant turns,
and tool result remainders followed by another user message.
Added a coalescing pass at the end of convertMessages() that merges
consecutive same-role messages (string concat or array concat), preserving
tool_calls on assistant messages. Tool and system messages are excluded
from coalescing as they have their own alternation rules.
Includes regression tests for both user and assistant coalescing.
Fixes #202
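The coalescing pass can be sketched for the string-content case (array content concatenates analogously, and tool_calls survive on the merged assistant message; this is a simplification of the real convertMessages() pass):

```typescript
// Merge consecutive same-role user/assistant messages so strict Jinja
// templates see alternating roles. Tool and system messages are excluded.
type ChatMessage = { role: "user" | "assistant" | "system" | "tool"; content: string };

function coalesceSameRole(messages: ChatMessage[]): ChatMessage[] {
  const out: ChatMessage[] = [];
  for (const msg of messages) {
    const prev = out[out.length - 1];
    const mergeable = msg.role === "user" || msg.role === "assistant";
    if (prev && mergeable && prev.role === msg.role) {
      prev.content = `${prev.content}\n${msg.content}`; // fold into previous turn
    } else {
      out.push({ ...msg });
    }
  }
  return out;
}
```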
* Add local OpenAI-compatible model discovery to /model
* Guard local OpenAI model discovery from Codex routing
* Preserve remote OpenAI Codex alias behavior