* feat(api): compress old tool_result content for small-context providers
Adds a shim-layer pass that tiers tool_result content by age on
providers with small effective context windows (Copilot gpt-4o 128k,
Mistral, Ollama). Recent turns remain full; mid-tier results are
truncated to 2k chars; older results are replaced with a stub that
preserves tool name and arguments so the model can re-invoke if needed.
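A rough sketch of the tiering pass (helper names, types, and tier
boundaries here are illustrative, not the actual implementation):

    // Hypothetical shape of the age-tiered compression pass. Tier
    // boundaries are derived elsewhere from getEffectiveContextWindowSize;
    // the constant below is a placeholder.
    type ToolResult = { toolName: string; args: unknown; content: string };

    const TRUNCATE_CHARS = 2000;

    function compressToolResults(
      results: ToolResult[], fullTurns: number, midTurns: number,
    ): ToolResult[] {
      const n = results.length;
      return results.map((r, i) => {
        const age = n - 1 - i; // 0 = most recent
        if (age < fullTurns) return r; // recent: keep full content
        if (age < fullTurns + midTurns) {
          // mid-tier: truncate to 2k chars
          return { ...r, content: r.content.slice(0, TRUNCATE_CHARS) };
        }
        // old: stub keeps tool name + args so the model can re-invoke
        return {
          ...r,
          content: `[result cleared: re-run ${r.toolName}(${JSON.stringify(r.args)}) if needed]`,
        };
      });
    }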
Tier sizes auto-tune via getEffectiveContextWindowSize, the same
calculation used by auto-compact. Reuses COMPACTABLE_TOOLS and
TOOL_RESULT_CLEARED_MESSAGE to complement (not duplicate) microCompact.
Configurable via /config toolHistoryCompressionEnabled.
Addresses active-session context accumulation on Copilot where
microCompact's time-based trigger never fires, which surfaces as
"tools appearing in a loop" and prompt_too_long errors after ~15
turns.
* fix: strip leaked <think> reasoning tags with a structural filter
Reasoning models (MiniMax M2.7, GLM-4.5/5, DeepSeek, Kimi K2) inline
chain-of-thought inside <think>...</think> tags in the content field
rather than using the reasoning_content channel. The prior phrase-matching
sanitizer (looksLikeLeakedReasoningPrefix) only caught English-prose
preambles like "I should"/"the user asked", missed tag-based leaks
entirely, and risked false-stripping legitimate assistant output.
Replace with a structural tag-based approach (same pattern as hermes-agent):
- createThinkTagFilter() — streaming state machine that buffers partial
tags across SSE delta boundaries (<th| + |ink>), so tags split mid-chunk
still parse correctly.
- stripThinkTags() — whole-text cleanup for non-streaming responses and
as a safety net. Handles closed pairs, unterminated opens at block
boundaries, and orphan tags.
- Recognizes think, thinking, reasoning, thought, REASONING_SCRATCHPAD
case-insensitively, including tags with attributes.
- False-negative bias: flush() discards buffered partial tags at stream
end rather than leaking them.
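A minimal sketch of the whole-text cleanup path, assuming the tag set
listed above (the attribute handling and orphan-tag pass are
illustrative):

    // Strips closed pairs, unterminated opens, and orphan tags.
    const TAGS = ['think', 'thinking', 'reasoning', 'thought', 'REASONING_SCRATCHPAD'].join('|');

    export function stripThinkTags(text: string): string {
      // closed pairs; tags may carry attributes; \1 matches the same tag name
      let out = text.replace(new RegExp(`<(${TAGS})\\b[^>]*>[\\s\\S]*?</\\1\\s*>`, 'gi'), '');
      // unterminated open at a block boundary: drop through end of text
      out = out.replace(new RegExp(`<(${TAGS})\\b[^>]*>[\\s\\S]*$`, 'i'), '');
      // orphan open or close tags with no partner
      out = out.replace(new RegExp(`</?(${TAGS})\\b[^>]*>`, 'gi'), '');
      return out.trim();
    }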
Existing phrase-based shim tests updated to exercise the actual <think>
tag leak. Added regression tests confirming legitimate prose starting
with "I should..." is preserved (the old sanitizer's main false-positive).
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: resolve 12 bugs across API, MCP, agent tools, web search, and context overflow
API fixes:
- Fix Gemini 400 error: delete 'store: false' field for Gemini endpoints
(was globally injected, Gemini rejects unknown fields)
- Fix session timeout 500 errors after ~25min: add a 120s idle timeout
on SSE stream readers in openaiShim and codexShim to detect dead
connections and trigger withRetry reconnection (sketched after this list)
- Fix context overflow 500 errors: add a handler in errors.ts for 500
responses caused by oversized conversation context (too many tokens),
surfacing a user-friendly message with recovery actions instead of a raw
'API Error: 500'
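One plausible shape for the idle-timeout wrapper (names are
illustrative; the PR #674 review fixes below add AbortSignal support):

    // Rejects if no chunk arrives within 120s, so the surrounding
    // withRetry logic reconnects instead of hanging on a dead connection.
    const IDLE_TIMEOUT_MS = 120_000;

    async function readWithTimeout<T>(
      reader: ReadableStreamDefaultReader<T>,
    ): Promise<ReadableStreamReadResult<T>> {
      let timer!: ReturnType<typeof setTimeout>;
      const idle = new Promise<never>((_, reject) => {
        timer = setTimeout(() => reject(new Error('SSE idle timeout')), IDLE_TIMEOUT_MS);
      });
      try {
        return await Promise.race([reader.read(), idle]);
      } finally {
        clearTimeout(timer);
      }
    }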
Agent loop fix:
- Fix premature task completion: detect continuation signals like
'so now I have to do it' in assistant text without tool calls and
inject a meta nudge to force the agent to continue
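Roughly, the detection looks like this (patterns and the nudge text are
illustrative; the PR #674 review fixes below tighten them and add a
loop guard):

    // Fires only when the assistant narrates remaining work but makes
    // no tool call. The real patterns were later tightened to require
    // explicit action verbs.
    const CONTINUATION_SIGNALS = [
      /\bso now I (have|need) to\b/i,
      /\bnext,? I(?:'ll| will)\b/i,
    ];

    function needsContinuationNudge(text: string, toolCallCount: number): boolean {
      if (toolCallCount > 0) return false; // agent is already acting
      return CONTINUATION_SIGNALS.some((re) => re.test(text));
    }
    // When true, a synthetic nudge message is injected so the loop continues.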
Web search improvements:
- Increase result counts: Bing/Tavily/Exa/Firecrawl from 10→15,
Mojeek/You/Jina from default→10 (explicit), max_uses 8→15
MCP fixes:
- Reduce default tool timeout from ~27.8 hours to 5 minutes
(tools no longer hang indefinitely on unresponsive servers)
- Add retry logic (3 attempts) for tools/list fetch failures
(prevents all MCP tools from silently disappearing on timeout;
sketched after this list)
- Add abort signal check in URL elicitation retry loop
- Improve MCP error messages with server and tool name context
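The tools/list retry, sketched (names are hypothetical):

    // One timeout no longer makes every tool from a server disappear.
    async function listToolsWithRetry<T>(
      fetchTools: () => Promise<T>,
      attempts = 3,
    ): Promise<T> {
      let lastError: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          return await fetchTools();
        } catch (err) {
          lastError = err;
        }
      }
      throw lastError;
    }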
Agent tool fixes:
- Fix SendMessage race condition: double-check task status before
auto-resuming stopped agents to prevent duplicate registration
- Fix auto-compact circuit breaker gap: when auto-compact fails 3+
consecutive times, proactively block oversized context BEFORE the
API call instead of letting it 500. Clear message with recovery
instructions (/new, /compact, rewind).
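A sketch of the proactive check (threshold per the description above;
names are illustrative):

    // After 3+ consecutive auto-compact failures, reject oversized
    // context before the request instead of letting the provider 500.
    const MAX_COMPACT_FAILURES = 3;

    function assertContextFits(tokens: number, limit: number, failures: number): void {
      if (failures >= MAX_COMPACT_FAILURES && tokens > limit) {
        throw new Error(
          'Context too large and auto-compact keeps failing. ' +
          'Try /new, /compact, or rewind to an earlier message.',
        );
      }
    }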
Tests: 850 total, 0 failures (25 new bugfix tests)
* fix: address all 4 review blockers + 6 additional issues from PR #674
Blockers (from Vasanthdev2004 review):
1. Continuation nudge infinite loop — no loop guard
Added continuationNudgeCount to State, capped at MAX_CONTINUATION_NUDGES (3).
Counter increments on each nudge, resets on tool execution (next_turn).
2. Continuation signal regexes too broad — high false-positive rate
Tightened all patterns to require explicit action verbs. Added completion
marker check (done/finished/completed/summary). Broad patterns only fire
on messages <80 chars.
3. BUGFIXES.md in repo root — scope contamination
Removed. PR description already contains this info.
4. AgentTool dump state cleanup is comment-only, not a bug fix
Wrapped clearInvokedSkillsForAgent and clearDumpState in individual
try/catch blocks so one failure doesn't prevent the other.
Additional issues:
5+6. readWithTimeout ignores AbortSignal, timer leak on abort
Added optional signal param to openaiStreamToAnthropic,
codexStreamToAnthropic, collectCodexCompletedResponse, readSseEvents.
Added an abort listener that clears the idle timer so AbortError
surfaces cleanly instead of a spurious idle timeout (see the sketch
after this list).
7. MCP error format change breaks consumers
Reverted human-readable message to original errorDetails format.
Moved server/tool context to telemetryMessage param only.
10. AgentTool test broken by comment change
Updated test assertions to match new defensive cleanup text + try/catch.
12. Mojeek test regex dangerously broad
Tightened to match searchParams.set('t', '10') specifically.
14. linkup.ts in providerCounts test — no result count field
Removed from providers list (uses depth param, not result count).
15. Error message overlap between errors.ts and query.ts
Prefixed errorDetails with 'Context overflow (500):' to distinguish.
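The abort-aware reader from items 5+6, sketched (the helper shape is
illustrative):

    // An AbortSignal listener clears the idle timer and rejects with the
    // abort reason, so cancellation no longer surfaces as an idle timeout.
    async function readWithTimeout<T>(
      reader: ReadableStreamDefaultReader<T>,
      idleMs: number,
      signal?: AbortSignal,
    ): Promise<ReadableStreamReadResult<T>> {
      return new Promise((resolve, reject) => {
        const timer = setTimeout(() => reject(new Error('SSE idle timeout')), idleMs);
        const onAbort = () => {
          clearTimeout(timer); // no timer leak on abort
          reject(signal?.reason ?? new DOMException('Aborted', 'AbortError'));
        };
        signal?.addEventListener('abort', onAbort, { once: true });
        reader.read().then(resolve, reject).finally(() => {
          clearTimeout(timer);
          signal?.removeEventListener('abort', onAbort);
        });
      });
    }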
Tests: 851 pass, 0 fail
---------
Co-authored-by: openclaude-bot <bot@openclaude.ai>
Co-authored-by: Fix Bot <fix@openclaude.dev>
* fix: report cache reads in streaming and correct cost calculation
Fix two bugs in how the OpenAI-to-Anthropic shim handles cached tokens:
1. codexShim: streaming message_delta missing cache_read_input_tokens
The codexStreamToAnthropic() function builds the final message_delta
usage object inline (not through makeUsage()), and only included
input_tokens and output_tokens. cache_read_input_tokens was always 0,
so /cost never showed cache reads for Responses API models (GPT-5+).
Also fix makeUsage() to read input_tokens_details.cached_tokens and
prompt_tokens_details.cached_tokens for the non-streaming path.
2. Both shims: cost double-counting from convention mismatch
OpenAI includes cached tokens in input_tokens/prompt_tokens (i.e.,
input_tokens = uncached + cached). Anthropic treats input_tokens as
uncached only. The cost formula was:
    cost = input_tokens * inputRate + cache_read * cacheRate
This double-counts cached tokens. Fix by subtracting cached from
input during the conversion:
    input_tokens = prompt_tokens - cached_tokens
In practice this was inflating reported costs by ~2x for sessions
with high cache hit rates (which is most sessions, since Copilot
auto-caches server-side).
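In code, the corrected conversion looks roughly like this (field names
follow OpenAI's usage object; the wrapper itself is illustrative):

    // OpenAI counts cached tokens inside prompt_tokens; Anthropic counts
    // them separately, so subtract before reporting.
    function toAnthropicUsage(u: {
      prompt_tokens: number;
      completion_tokens: number;
      prompt_tokens_details?: { cached_tokens?: number };
    }) {
      const cached = u.prompt_tokens_details?.cached_tokens ?? 0;
      return {
        input_tokens: u.prompt_tokens - cached, // uncached portion only
        output_tokens: u.completion_tokens,
        cache_read_input_tokens: cached,
      };
    }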
Fixes #515
* fix: omit zero cache read/write fields from /cost output
Only show "cache read" and "cache write" in /cost per-model usage when
the value is > 0. Providers like GitHub Copilot never report
cache_creation_input_tokens (the server manages its own cache), so
showing "0 cache write" on every line is misleading — it implies caching
is not working when it actually is.
Before:
claude-haiku: 2.6k input, 151 output, 39.8k cache read, 0 cache write ($0.04)
After:
claude-haiku: 2.6k input, 151 output, 39.8k cache read ($0.04)
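The formatting rule, sketched (formatter name and rounding are
illustrative):

    // Cache columns are appended only when the value is > 0.
    function formatUsageLine(
      model: string, input: number, output: number,
      cacheRead: number, cacheWrite: number, cost: number,
    ): string {
      const fmt = (n: number) => (n >= 1000 ? `${(n / 1000).toFixed(1)}k` : `${n}`);
      const parts = [`${fmt(input)} input`, `${fmt(output)} output`];
      if (cacheRead > 0) parts.push(`${fmt(cacheRead)} cache read`);
      if (cacheWrite > 0) parts.push(`${fmt(cacheWrite)} cache write`);
      return `${model}: ${parts.join(', ')} ($${cost.toFixed(2)})`;
    }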
---------
Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
* fix: disable experimental API betas by default to prevent 500 errors
Tool search (defer_loading), global cache scope, and context management
betas require internal Anthropic server-side support. External accounts
receive 500 Internal Server Error when these are sent.
Set CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=true by default in the CLI
entrypoint. Users with internal access can opt back in with =false.
Also includes: cache key stability fixes (Sonnet 1M latch, system-before-
messages key ordering, resume fingerprint isMeta skip), sideQuery default
cleanup, and /dream command.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor: standardize API headers to Headers type and enable tengu feature flags by default
* fix: address PR review — dream lock, MCP betas guard, redundant Partial
- Call recordConsolidation() programmatically in /dream instead of
delegating to model prompt (unreliable)
- Add CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS guard to MCP entrypoint
(was only in CLI entrypoint, causing 500s in MCP server mode)
- Remove redundant ? markers from SecretValueSource Partial<{}> type
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1. cli/update.ts: Block the update command for third-party providers.
The update mechanism downloads from Anthropic's GCS bucket, which
would silently replace the OpenClaude build (with the OpenAI shim)
with the upstream Claude Code binary (without it). Now shows an
actionable message directing users to rebuild from source.
2. codexShim.ts: Filter thinking blocks from assistant history, matching
the openaiShim behavior. Without this, thinking blocks were included
as plain text in assistant messages for the Codex transport but
excluded for the OpenAI transport — causing inconsistent history
when switching providers mid-session.
Three targeted fixes:
1. Replace Math.random() with crypto.randomUUID() for message and tool
call IDs in both openaiShim.ts and codexShim.ts. Math.random() is not
cryptographically secure and is predictable in seeded environments.
2. Anchor Azure endpoint detection to the parsed hostname instead of a
raw URL regex. Adds support for Azure AI Foundry (services.ai.azure.com)
alongside the existing cognitiveservices and openai Azure endpoints.
Prevents SSRF-style bypass via path segments (sketched after this list).
3. Surface content safety filter blocks to the user. When Gemini or
Azure returns finish_reason 'content_filter' or 'safety', emit a
visible text block '[Content blocked by provider safety filter]'
instead of silently returning empty/truncated content with
stop_reason 'end_turn'. Applied to both streaming and non-streaming.
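For fix 2, a sketch of hostname-anchored detection (suffix list per the
endpoints named above; helper name is illustrative):

    // Parsing first prevents bypasses like https://evil.com/x.openai.azure.com
    const AZURE_HOST_SUFFIXES = [
      '.openai.azure.com',
      '.cognitiveservices.azure.com',
      '.services.ai.azure.com', // Azure AI Foundry
    ];

    function isAzureEndpoint(baseUrl: string): boolean {
      try {
        const { hostname } = new URL(baseUrl);
        return AZURE_HOST_SUFFIXES.some((s) => hostname.endsWith(s));
      } catch {
        return false; // unparseable URL: treat as non-Azure
      }
    }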
Addresses the most critical remaining issues in the provider shim layer,
building on top of #124 (recursive schema normalization + try/finally).
openaiShim.ts:
- Throw APIError via SDK factory instead of plain Error — enables retry
on 429/503 (was completely broken: zero retries for all 3P providers)
- Guard stop_reason !== null before emitting usage-only message_delta
(Azure/Groq send usage before finish_reason)
- Fix assistant content: join text parts instead of an invalid as-string
cast (Mistral rejects array content on the assistant role; sketched
after this list)
- Expose real HTTP Response in withResponse() for header inspection
- Skip stream_options for local providers (Ollama < 0.5 compatibility)
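The assistant-content join, sketched (the block type is simplified):

    // Flattens content-block arrays to a plain string, since Mistral
    // rejects array content on the assistant role.
    type ContentBlock = { type: string; text?: string };

    function assistantContentToString(content: string | ContentBlock[]): string {
      if (typeof content === 'string') return content;
      return content
        .filter((b) => b.type === 'text' && typeof b.text === 'string')
        .map((b) => b.text ?? '')
        .join('');
    }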
codexShim.ts:
- Throw APIError at all 4 throw sites (HTTP + 3 streaming errors)
- Add tool_choice 'none' mapping (was silently ignored)
- Forward is_error flag with Error: prefix (matching openaiShim)
This commit addresses strict schema validation limitations when running subagents under OpenAI backend shims.
- Drops empty properties from payloads (like Record<string, string>) that break OpenAI's Structured Outputs validation.
- Handles edge cases for automated initial teams when subagents bypass standard creation routines.
- Stops sending unsupported sampling parameters (temperature, top_p) to
GPT-5 derivatives.
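One plausible shape for the empty-properties drop (the recursion and
key handling are assumptions):

    // OpenAI Structured Outputs rejects object schemas whose properties
    // map is empty (e.g. generated from Record<string, string>).
    function dropEmptyProperties(schema: Record<string, unknown>): Record<string, unknown> {
      const out = { ...schema };
      if (
        out.type === 'object' &&
        out.properties &&
        Object.keys(out.properties as object).length === 0
      ) {
        delete out.properties;
        delete out.required;
      }
      for (const [k, v] of Object.entries(out)) {
        if (v && typeof v === 'object' && !Array.isArray(v)) {
          out[k] = dropEmptyProperties(v as Record<string, unknown>);
        }
      }
      return out;
    }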