* feat: activate local-only team memory in open build
Enable the TEAMMEM feature flag and the isTeamMemoryEnabled() gate so
team memory works in local-only mode for all open-build users.
Team memory is a shared memory system scoped per-project, stored at
~/.claude/projects/<project>/memory/team/. The implementation is
already almost entirely local — extraction, UI, prompts, file
detection, and path validation all work on local files.
The cloud sync overlay (OAuth + API) is cleanly separated: the
watcher does an early return when OAuth is unavailable, so the
feature degrades gracefully to local-only storage with no crashes.
What works locally:
- Memory extraction (auto + team, combined prompts)
- Team MEMORY.md loaded into conversation context
- File selector with team memory folder option
- Collapse tracking (read/search/write counts)
- Secret scanning before persistence
- Path validation + symlink protection
What requires OAuth (not available in open build):
- Cloud sync between team members
- Automatic push/pull via file watcher
* fix: preserve opt-out gate for team memory via feature flag
Change isTeamMemoryEnabled() to read tengu_herring_clock with default
true instead of unconditional return true. This enables team memory by
default while preserving user opt-out via ~/.claude/feature-flags.json.
* feat: activate coordinator mode in open build
Enable the COORDINATOR_MODE feature flag and create the missing
src/coordinator/workerAgent.ts module that provides worker agent
definitions for the coordinator.
Coordinator mode is a multi-agent system where a coordinator agent
orchestrates independent workers via AgentTool, SendMessageTool,
and TaskStopTool. The implementation was already 99% complete
(19KB coordinatorMode.ts, 26 gate sites across 15 files) — only
the workerAgent module was missing from the source snapshot.
Workers get the standard built-in agents (general-purpose, explore,
plan). The coordinator system prompt (252 lines) handles all
orchestration logic.
Activate at runtime: CLAUDE_CODE_COORDINATOR_MODE=1
Optional scratchpad: set {"tengu_scratch": true} in
~/.claude/feature-flags.json (#639)
* fix: add worker agent type for coordinator mode
The coordinator system prompt instructs the model to spawn workers with
subagent_type: "worker", but no agent had agentType === 'worker'.
This caused AgentTool to throw "Agent type 'worker' not found" on
every coordinator spawn attempt.
Add a WORKER_AGENT definition that spreads GENERAL_PURPOSE_AGENT with
agentType: 'worker'. Also use the narrower BuiltInAgentDefinition type.
* feat: activate built-in explore and plan agents in open build
Enable BUILTIN_EXPLORE_PLAN_AGENTS so Explore (fast, haiku, read-only)
and Plan (architect, read-only) agents are available to all users in
both normal and coordinator modes.
This resolves the inconsistency flagged in code review: coordinator
workers had access to Explore/Plan agents while normal sessions did not.
The GrowthBook A/B test gate (tengu_amber_stoat) defaults to true via
the no-telemetry stub. Users can disable via feature-flags.json (#639).
* fix: replace broken bun:bundle shim with source pre-processing
The `onResolve`/`onLoad` plugin shim for `bun:bundle` was silently
ineffective in Bun v1.3.9+ — the `bun:` namespace is resolved by
Bun's native C++ resolver before the JS plugin phase runs. This meant
ALL `feature()` flags evaluated to `false` regardless of the
`featureFlags` map in build.ts (including `MONITOR_TOOL: true`).
Replace the shim with a source pre-processing step that:
1. Strips `import { feature } from 'bun:bundle'` from .ts/.tsx files
2. Replaces `feature('FLAG')` calls with boolean literals
3. Restores original files in a `finally` block after Bun.build()
Also extend the missing-module scanner to detect `require()` and
dynamic `import()` calls — not just static `import ... from` — since
modules behind feature() gates become resolvable when flags are enabled.
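The two rewrite steps can be sketched roughly as follows. This is a minimal sketch, not the real build.ts: `preprocessSource` and the example `featureFlags` map are assumptions made for illustration.

```typescript
// Hypothetical sketch of the source pre-processing step. The function
// name and flag map are assumptions; the real build.ts may differ.
const featureFlags: Record<string, boolean> = { MONITOR_TOOL: true };

function preprocessSource(src: string): string {
  // 1. Strip the `import { feature } from 'bun:bundle'` line.
  let out = src.replace(
    /^import\s*\{\s*feature\s*\}\s*from\s*['"]bun:bundle['"];?\s*$/gm,
    '',
  );
  // 2. Replace feature('FLAG') calls with boolean literals so Bun's
  //    constant folder sees plain `true` / `false`.
  out = out.replace(/feature\(\s*['"]([A-Z0-9_]+)['"]\s*\)/g, (_m, flag) =>
    String(featureFlags[flag] ?? false),
  );
  return out;
}
```

Step 3 (restoring the originals in a `finally` block) happens around the Bun.build() call itself.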
* fix: ensure source files are always restored after build
- Add SIGINT/SIGTERM handlers to restore pre-processed source files
on abrupt termination (Ctrl+C, kill)
- Replace process.exit(1) with process.exitCode = 1 so the finally
block runs on build failure
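The restore wiring amounts to something like the sketch below; `restoreSources`, `backups`, and the `build` skeleton are illustrative assumptions, not the real script.

```typescript
// Hypothetical sketch of restore-on-exit handling; names are assumptions.
const backups = new Map<string, string>(); // path -> original contents

function restoreSources(): void {
  for (const [path, original] of backups) {
    // The real script writes `original` back to `path` on disk here.
    console.log(`restoring ${path} (${original.length} bytes)`);
  }
  backups.clear();
}

// Restore on abrupt termination (Ctrl+C, kill).
for (const sig of ['SIGINT', 'SIGTERM'] as const) {
  process.on(sig, () => {
    restoreSources();
    process.exit(130);
  });
}

async function build(): Promise<void> {
  try {
    // ... pre-process sources, run Bun.build() ...
  } catch {
    // process.exitCode instead of process.exit(1) lets finally run.
    process.exitCode = 1;
  } finally {
    restoreSources();
  }
}
```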
The provider profile activation guard in applyActiveProviderProfileFromConfig()
only checked CLAUDE_CODE_USE_* environment flags, which are never set for the
default anthropic provider. This allowed two terminals sharing ~/.claude.json
to overwrite each other's active provider when one was using anthropic and
the other a third-party provider.
Now also checks the OCODE_PROVIDER_PROFILE_APPLIED flag, which is set by all
profiles including anthropic, preventing cross-terminal interference.
Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
* feat: add allowBypassPermissionsMode setting
Allow bypass permissions mode to appear in the mode list via
settings.json without requiring the --allow-dangerously-skip-permissions
CLI flag. The disableBypassPermissionsMode setting retains priority.
* fix: address Copilot review feedback on allowBypassPermissionsMode
- Security: read allowBypassPermissionsMode only from trusted settings
sources (user/local/flag/policy), excluding projectSettings to prevent
a malicious repo from enabling bypass mode
- UX: update error messages to reference the correct CLI flag
(--allow-dangerously-skip-permissions) and the new settings option
- Tests: add schema validation tests for the new field
* feat: enhance codex provider resolution with shortcut aliases and improved base URL handling
* fix: enhance codex alias resolution to include shell model
* feat: enhance Codex provider resolution to support new aliases and base URL handling
* fix: update base URL resolution logic for Codex models in GitHub mode
* fix: update provider transport logic to enforce Codex responses and adjust base URL handling
* fix: update provider request resolution to respect custom base URLs and adjust transport logic
* fix: restore OPENAI_MODEL environment variable handling in tests and provider config
* feat: implement /loop command with fixed and dynamic scheduling modes
Enable cron tools and /loop skill without the AGENT_TRIGGERS build flag
by removing feature guards from tools.ts, REPL.tsx, and skill registration.
The isKairosCronEnabled() runtime gate now enables cron unconditionally for
open builds while preserving the GrowthBook kill switch for ant builds.
The /loop skill supports four modes: fixed-interval with prompt, fixed-interval
maintenance, dynamic-prompt (self-pacing), and dynamic maintenance (bare /loop).
* chore: remove unused DEFAULT_INTERVAL constant from loop skill
* revert: drop infra changes, scope PR to /loop skill rewrite only
The cron activation layer (AGENT_TRIGGERS guard removal, isKairosCronEnabled
hardcode) is covered by an in-flight stack (#633, #639). Scope this PR to
just the loop.ts rewrite and its tests so it can land cleanly on top.
* fix: restore infra changes needed for /loop in open build
Bun's constant folder evaluates feature('AGENT_TRIGGERS') at bundle time
through the bun:bundle shim — even when the flag is flipped to true in
build.ts, the folded value is cached from the previous build and stays false.
This means the feature-gated require() blocks for cron tools, useScheduledTasks,
and loop skill registration all compile to dead code regardless of the flag.
Fix by removing the AGENT_TRIGGERS guards from the specific paths /loop needs:
- tools.ts: cron tools always registered (isEnabled gates visibility)
- REPL.tsx: useScheduledTasks always mounted
- index.ts: registerLoopSkill via static import, called unconditionally
- prompt.ts: isKairosCronEnabled() bypasses feature flag for non-ant builds
* fix: replace backslash line continuations with explicit delimiters in loop prompts
The backslash-newline sequences inside template literals were acting as
line continuations, collapsing newlines and merging prompt content with
surrounding instruction text. Replace with --- BEGIN/END --- markers
for unambiguous delimiting.
Also add tests for trailing "every" clause parsing, human-readable unit
normalization, and the non-interval "check every PR" case.
* fix: remove remaining AGENT_TRIGGERS guards from print.ts and constants/tools.ts
Completes the cron guard removal started in the previous commit.
The cron scheduler in non-interactive (-p) mode was dead because
print.ts still gated cronSchedulerModule/cronGate requires behind
feature('AGENT_TRIGGERS'), which Bun constant-folds to false in open
builds. Similarly, cron tool names were absent from
IN_PROCESS_TEAMMATE_ALLOWED_TOOLS.
Remove all three guards so the scheduler initialises (gated at runtime
by isKairosCronEnabled) and cron tools are allowed for in-process
teammates in all builds.
- Raise context window fallback from 8k to 128k for unknown OpenAI-compat models.
The 8k fallback caused effective context (8k minus output reservation) to go
negative, making auto-compact fire on every single message.
- Add safety floor in getEffectiveContextWindowSize(): effective context is
always at least reservedTokensForSummary + 13k buffer, ensuring the
auto-compact threshold stays positive.
- Add missing MiniMax model entries (M2.5, M2.5-highspeed, M2.1, M2.1-highspeed)
all at 204,800 context / 131,072 max output per MiniMax docs.
- Add tests for MiniMax variants, 128k fallback, and autoCompact floor.
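The fallback and floor interact as sketched below; the real getEffectiveContextWindowSize() signature may differ, so treat this as an illustration of the arithmetic only.

```typescript
// Hedged sketch of the fallback + safety floor described above.
const FALLBACK_CONTEXT_WINDOW = 128_000; // raised from 8k
const FLOOR_BUFFER = 13_000;

function getEffectiveContextWindowSize(
  contextWindow: number | undefined,
  reservedTokensForSummary: number,
): number {
  const total = contextWindow ?? FALLBACK_CONTEXT_WINDOW;
  const effective = total - reservedTokensForSummary;
  // Safety floor: the auto-compact threshold stays positive even for a
  // model that reports a context window smaller than the reservation.
  return Math.max(effective, reservedTokensForSummary + FLOOR_BUFFER);
}
```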
Fixes #635
Co-authored-by: root <root@vm7508.lumadock.com>
The OPENAI_CONTEXT_WINDOWS/OPENAI_MAX_OUTPUT_TOKENS tables only contained
the `github:copilot:<model>` namespaced form used when talking directly to
Copilot via /onboard-github. When OpenClaude is pointed at a LiteLLM proxy
(which routes Copilot using the standard `github_copilot/<model>` convention),
the lookup missed and fell back to the conservative 8k default — causing the
compaction loop to fire repeatedly on every tick and blocking requests
before they left the client with repeated "not in context window table"
warnings on stderr.
Mirror the 11 active Copilot models with LiteLLM-style keys in both tables.
No behavior change for users of /onboard-github since namespaced entries
remain untouched and `lookupByKey` picks exact matches first.
The previous `isPrivateHostname` used a list of regexes against
`URL.hostname`. Several literal-address forms slipped past it:
- IPv4-mapped IPv6 `[::ffff:127.0.0.1]` (WHATWG URL normalizes to
`[::ffff:7f00:1]`, which no regex matched) — lets callers reach
loopback and other private v4 via an IPv6 literal.
- ULA `fc00::/7` (e.g. `[fc00::1]`) — not covered.
- Link-local `fe80::/10` (e.g. `[fe80::1]`) — not covered.
- IPv4 `169.254.0.0/16` (cloud metadata, including 169.254.169.254),
`100.64.0.0/10` (CGNAT), and the full `0.0.0.0/8` — not covered.
- The IPv6 regex `/^\[::1?\]$/` also required brackets, but `URL.hostname`
returns bracketed form anyway, so this part happened to work.
WHATWG `new URL(...)` already normalizes short-form / numeric / hex /
octal IPv4 to dotted-quad before we see it, so those cases were in fact
handled — the remaining gaps were IPv6 and a few missing v4 ranges.
Replace the regex list with:
- a dotted-quad IPv4 parser + int range check covering 0/8, 10/8,
100.64/10, 127/8, 169.254/16, 172.16/12, 192.168/16;
- a small IPv6 parser (handles `::` compression and embedded v4 suffix)
+ a byte-range check covering `::`, `::1`, IPv4-mapped (recursing
into the v4 classifier), IPv4-compatible, `fc00::/7`, `fe80::/10`,
and `fec0::/10`.
Export `isPrivateHostname` and add unit tests covering every bypass
listed above plus public-address negatives.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
`fetchWithRetry` created a fresh `AbortController` per attempt and did:
signal?.addEventListener('abort', () => controller.abort(), { once: true })
The listener was never removed. Consequences:
- On retry, a second listener was attached to the caller's signal,
each closing over a different controller.
- After a successful fetch, the listener remained on the caller's
signal indefinitely, referencing a controller whose work was done.
For a long-lived caller signal this is a slow leak.
- The `{ once: true }` only helps if the signal actually fires — on
non-aborted signals the listener stays attached forever.
Replace the manual controller + timer + listener dance with
`AbortSignal.any([signal, AbortSignal.timeout(ms)])`, which the
codebase already uses elsewhere (see src/services/mcp/xaa.ts). This:
- has no user-code listener to leak,
- gives each attempt a fresh independent timeout,
- cleanly distinguishes caller-initiated abort from timeout via
`signal.aborted` vs `timeoutSignal.aborted` before rewriting the
error as "Custom search timed out after Ns".
Also resets `lastStatus` per attempt so a 5xx on attempt 0 can't leak
into attempt 1's retry decision, and collapses the two redundant
retry branches (`lastStatus >= 500` and `lastStatus === undefined`)
into one.
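The pattern looks roughly like this; only `AbortSignal.any`/`AbortSignal.timeout` come from the commit, the function names and error text framing are illustrative assumptions.

```typescript
// Hedged sketch of the AbortSignal.any replacement.
function combineSignals(
  timeoutMs: number,
  signal?: AbortSignal,
): { combined: AbortSignal; timeoutSignal: AbortSignal } {
  // Fresh, independent timeout per attempt; no user-code listener to leak.
  const timeoutSignal = AbortSignal.timeout(timeoutMs);
  const combined = signal
    ? AbortSignal.any([signal, timeoutSignal])
    : timeoutSignal;
  return { combined, timeoutSignal };
}

async function attemptFetch(
  url: string,
  timeoutMs: number,
  signal?: AbortSignal,
): Promise<Response> {
  const { combined, timeoutSignal } = combineSignals(timeoutMs, signal);
  try {
    return await fetch(url, { signal: combined });
  } catch (err) {
    // Distinguish caller-initiated abort from our own timeout.
    if (timeoutSignal.aborted && !signal?.aborted) {
      throw new Error(`Custom search timed out after ${timeoutMs / 1000}s`);
    }
    throw err;
  }
}
```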
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
React 19's react-reconciler@0.33 mutation path calls commitUpdate with
(instance, type, oldProps, newProps, fiber), but our Ink host config
still expected an updatePayload from prepareUpdate. That left mounted
ink-* nodes with stale onKeyDown, tabIndex, and textStyles, making menu
navigation and highlights appear stuck until remount.
Diff old/new props directly inside commitUpdate and add regression tests
covering in-place updates for ink-box handlers/attributes and ink-text
styles.
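The shape of the fix, in sketch form (the diff helper and the instance shape are assumptions; only the five-argument commitUpdate signature comes from react-reconciler@0.33):

```typescript
// Hedged sketch of diffing old/new props directly inside commitUpdate.
type Props = Record<string, unknown>;

function diffProps(oldProps: Props, newProps: Props): Props {
  const changed: Props = {};
  for (const key of Object.keys({ ...oldProps, ...newProps })) {
    if (oldProps[key] !== newProps[key]) changed[key] = newProps[key];
  }
  return changed;
}

// React 19 mutation path: (instance, type, oldProps, newProps, fiber) —
// no updatePayload from prepareUpdate is passed anymore.
function commitUpdate(
  instance: { props: Props },
  _type: string,
  oldProps: Props,
  newProps: Props,
): void {
  Object.assign(instance.props, diffProps(oldProps, newProps));
}
```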
* fix: report cache reads in streaming and correct cost calculation
Fix two bugs in how the OpenAI-to-Anthropic shim handles cached tokens:
1. codexShim: streaming message_delta missing cache_read_input_tokens
The codexStreamToAnthropic() function builds the final message_delta
usage object inline (not through makeUsage()), and only included
input_tokens and output_tokens. cache_read_input_tokens was always 0,
so /cost never showed cache reads for Responses API models (GPT-5+).
Also fix makeUsage() to read input_tokens_details.cached_tokens and
prompt_tokens_details.cached_tokens for the non-streaming path.
2. Both shims: cost double-counting from convention mismatch
OpenAI includes cached tokens in input_tokens/prompt_tokens (i.e.,
input_tokens = uncached + cached). Anthropic treats input_tokens as
uncached only. The cost formula was:
cost = input_tokens * inputRate + cache_read * cacheRate
This double-counts cached tokens. Fix by subtracting cached from
input during the conversion:
input_tokens = prompt_tokens - cached_tokens
In practice this was inflating reported costs by ~2x for sessions
with high cache hit rates (which is most sessions, since Copilot
auto-caches server-side).
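The conversion in point 2 can be sketched as follows; the interface shapes are trimmed to the fields discussed above, and `makeUsage` here is an illustration rather than the shim's actual implementation.

```typescript
// Hedged sketch of the convention conversion.
interface OpenAIUsage {
  prompt_tokens: number; // uncached + cached (OpenAI convention)
  completion_tokens: number;
  prompt_tokens_details?: { cached_tokens?: number };
}

interface AnthropicUsage {
  input_tokens: number; // uncached only (Anthropic convention)
  output_tokens: number;
  cache_read_input_tokens: number;
}

function makeUsage(u: OpenAIUsage): AnthropicUsage {
  const cached = u.prompt_tokens_details?.cached_tokens ?? 0;
  return {
    // Subtract cached from input so
    //   cost = input_tokens * inputRate + cache_read * cacheRate
    // no longer counts cached tokens twice.
    input_tokens: Math.max(0, u.prompt_tokens - cached),
    output_tokens: u.completion_tokens,
    cache_read_input_tokens: cached,
  };
}
```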
Fixes #515
* fix: omit zero cache read/write fields from /cost output
Only show "cache read" and "cache write" in /cost per-model usage when
the value is > 0. Providers like GitHub Copilot never report
cache_creation_input_tokens (the server manages its own cache), so
showing "0 cache write" on every line is misleading — it implies caching
is not working when it actually is.
Before:
claude-haiku: 2.6k input, 151 output, 39.8k cache read, 0 cache write ($0.04)
After:
claude-haiku: 2.6k input, 151 output, 39.8k cache read ($0.04)
---------
Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
Set store: false in the request body for both the Chat Completions path
and the /responses fallback path in openaiShim.ts.
The codexShim (Responses API primary path) already sets store: false.
The Chat Completions path and the /responses fallback in openaiShim were
missing it.
store: false tells the API provider not to persist conversation data for
model training, logging, or other non-operational purposes. This is a
privacy measure — it does not affect caching or functionality.
Note: Whether third-party proxies (e.g. GitHub Copilot) honour this
parameter is provider-dependent, but setting it is a reasonable default
for user privacy.
Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
Add context_window and max_output_tokens entries for all models available
through the GitHub Copilot proxy (Claude, GPT, Gemini, Grok), sourced from
https://api.githubcopilot.com/models.
Models are namespaced as "github:copilot:<model>" to avoid collisions with
the same model names served by other providers (which may have different
limits). A new lookupByKey() helper and qualified-key lookup in
lookupByModel() ensure the correct limits are selected when
OPENAI_MODEL=github:copilot.
Without this, Claude models on Copilot would use default context/output
limits that may not match the proxy's actual constraints, causing 400 errors
like "max_tokens is too large".
Related: #515
Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
Treat profile-managed env as restart state rather than explicit user intent so saved OpenAI-compatible profiles can replace stale Ollama values on startup and persist correctly across restarts.
Co-authored-by: Claude Opus 4.6 <noreply@openclaude.dev>
* Stop canonical Anthropic headers from leaking into 3P shim requests
The remaining blocker from PR #268 was that canonical Anthropic headers such as
`anthropic-version` and `anthropic-beta` could still ride through supported 3P
paths even after the earlier x-anthropic/x-claude scrubber work. This tightens
header filtering inside the shim itself so direct defaultHeaders, env-driven
client setup, providerOverride routing, and per-request header injection all
share the same scrubber.
Constraint: Preserve non-Anthropic custom headers and provider auth while stripping only Anthropic/OpenClaude-internal headers from 3P requests
Rejected: Rely on client.ts filtering alone | direct shim construction and per-request headers would still leave gaps
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep header scrubbing centralized in the shim so new call paths do not reopen 3P leakage bugs
Tested: bun test src/services/api/openaiShim.test.ts src/services/api/client.test.ts src/utils/context.test.ts
Tested: bun run test:provider
Tested: bun run build && node dist/cli.mjs --version
Not-tested: bun run typecheck (repository baseline currently fails in many unrelated files)
* Keep OpenAI client tests from restoring undefined env as strings
The new header-leak regression tests in client.test.ts restored environment
variables via direct assignment, which can leave literal "undefined" strings in
process.env when the original value was unset. This switches the teardown over
to the same restore helper pattern already used in openaiShim.test.ts.
Constraint: Keep the fix limited to test hygiene without altering runtime behavior
Rejected: Restore only the two env vars Copilot called out | using one helper for all test env restores is simpler and less error-prone
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Use restore helpers for env teardown in tests so unset values stay deleted instead of becoming the string "undefined"
Tested: bun test src/services/api/client.test.ts src/services/api/openaiShim.test.ts src/utils/context.test.ts
Not-tested: Full provider suite (unchanged runtime path)
* Prevent GitHub Codex requests from forwarding unsanitized Anthropic headers
A base-sync with upstream exposed a separate GitHub+Codex transport branch
that still merged per-request headers raw before adding Copilot headers.
This keeps the filter aligned across Codex-family paths and adds explicit
regression tests for GitHub Codex routing, including providerOverride.
Constraint: Must not push or modify GitHub state while validating the reviewer concern
Rejected: Leave the GitHub Codex path unchanged | runtime repro showed anthropic-* headers still leaked after the upstream sync
Confidence: high
Scope-risk: narrow
Directive: Keep header scrubbing consistent across every Codex-family transport branch when provider routing changes
Tested: bun test src/services/api/openaiShim.test.ts
Tested: bun test src/services/api/client.test.ts src/services/api/codexShim.test.ts src/services/api/providerConfig.github.test.ts
Tested: bun run build
Not-tested: Full repository test suite
Treat default select focus as initial state so /theme and first-run previews follow keyboard navigation again.
Co-authored-by: anandh8x <test@example.com>
Add a /cache-probe slash command for debugging prompt caching behaviour
on OpenAI-compatible providers (GitHub Copilot, OpenAI direct).
The command sends two identical API requests in sequence and compares the
raw server response usage stats, showing:
- Input/output token counts
- Cache read tokens (from prompt_tokens_details or input_tokens_details)
- Latency for each request
- Cache hit rate percentage
Usage:
/cache-probe # test default model
/cache-probe claude-sonnet-4 # test specific model
/cache-probe gpt-5.4 --no-key # test without prompt_cache_key
The --no-key flag omits prompt_cache_key/prompt_cache_retention/store to
test whether the server does content-based auto-caching (it does on
GitHub Copilot).
This is a debugging/diagnostic tool, not intended for regular use. It was
instrumental in discovering that:
1. Copilot auto-caches server-side based on content hash
2. prompt_cache_key is ignored by the proxy
3. The streaming path was not reporting cached tokens
Only enabled when the provider is OpenAI or GitHub (not for firstParty
Anthropic which has different caching semantics).
Related: #515
Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
* feat: add AutoFix config schema and reader module
Implements AutoFixConfigSchema (Zod v4) with validation for lint/test
commands, maxRetries (0-10, default 3), and timeout (1000-300000ms,
default 30000). Adds getAutoFixConfig helper that returns null for
disabled or invalid configs. All 9 unit tests pass.
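The real module uses Zod v4; as a dependency-free approximation, the same constraints and the null-on-invalid contract look roughly like this (field names beyond maxRetries/timeout are assumptions):

```typescript
// Hedged, Zod-free sketch of the AutoFix config contract.
interface AutoFixConfig {
  enabled: boolean;
  lintCommand?: string;
  testCommand?: string;
  maxRetries: number; // 0-10, default 3
  timeout: number;    // 1000-300000 ms, default 30000
}

function getAutoFixConfig(raw: any): AutoFixConfig | null {
  if (!raw || typeof raw !== 'object' || raw.enabled !== true) return null;
  const maxRetries = raw.maxRetries ?? 3;
  const timeout = raw.timeout ?? 30000;
  if (!Number.isInteger(maxRetries) || maxRetries < 0 || maxRetries > 10) return null;
  if (!Number.isInteger(timeout) || timeout < 1000 || timeout > 300000) return null;
  // Enabled with no commands at all is an invalid config.
  if (!raw.lintCommand && !raw.testCommand) return null;
  return {
    enabled: true,
    lintCommand: raw.lintCommand,
    testCommand: raw.testCommand,
    maxRetries,
    timeout,
  };
}
```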
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat: add autoFix runner with lint/test command execution
Implements AutoFixRunner (Task 2) - executes lint and test shell commands
sequentially, short-circuits on lint failure, handles timeouts, and
produces structured AutoFixResult with AI-friendly error summaries.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat: add autoFix field to SettingsSchema with integration tests
Integrates AutoFixConfigSchema into SettingsSchema so autoFix settings
are validated at the settings layer. Adds two integration tests verifying
that valid configs are accepted and invalid configs (enabled with no
commands) are rejected.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat: add autoFix hook integration helpers (Task 4)
Implements shouldRunAutoFix and buildAutoFixContext functions used by
the PostToolUse hook to determine when to run auto-fix and format
errors as AI-readable context for injection.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat: wire autoFix into PostToolUse hook flow (Task 5)
Add auto-fix lint/test check after existing PostToolUse hooks in
runPostToolUseHooks. When autoFix is configured in settings, runs
lint/test commands after file_edit/file_write tools and yields
errors as hook_additional_context for the model to act on.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add /auto-fix slash command
Adds the /auto-fix prompt command that helps users configure autoFix settings
(lint/test commands, maxRetries, timeout) in .claude/settings.json.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix: remove unused imports in autoFixRunner test
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address review feedback — enforce maxRetries, wire abort signal, use cross-platform shell
1. Enforce maxRetries: track auto-fix attempts per query chain in toolHooks.ts
and stop feeding errors back after the configured limit is reached.
2. Wire abort signal to subprocess: subscribe to AbortController signal in
runCommand() and kill the process tree on abort. Uses detached process
groups on Unix to ensure child processes are also terminated.
3. Replace hardcoded bash with shell:true: use Node's cross-platform shell
resolution instead of spawn('bash', ['-c', ...]) so auto-fix commands
work on Windows and non-bash environments.
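Points 2 and 3 together look roughly like the sketch below; the function name and return contract are assumptions, and only the `shell: true` / detached-process-group pattern comes from the commit text.

```typescript
import { spawn } from 'node:child_process';

// Hedged sketch of abort-aware, cross-platform command execution.
function runCommand(
  command: string,
  signal: AbortSignal,
  timeoutMs: number,
): Promise<number | null> {
  return new Promise((resolve) => {
    const child = spawn(command, {
      shell: true, // Node resolves cmd.exe on Windows, /bin/sh elsewhere
      detached: process.platform !== 'win32', // own process group on Unix
    });
    const onAbort = () => {
      if (process.platform !== 'win32' && child.pid) {
        // Negative pid targets the whole process group, so child
        // processes spawned by the command are terminated too.
        try { process.kill(-child.pid, 'SIGTERM'); } catch {}
      } else {
        child.kill();
      }
    };
    signal.addEventListener('abort', onAbort, { once: true });
    const timer = setTimeout(onAbort, timeoutMs);
    child.on('exit', (code) => {
      clearTimeout(timer);
      signal.removeEventListener('abort', onAbort);
      resolve(code);
    });
  });
}
```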
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix: custom web search — WEB_URL_TEMPLATE not recognized, timeout too short, silent native fallback
1. custom.ts: Add WEB_URL_TEMPLATE to isConfigured() so the custom provider
is recognized when configured via URL template alone.
2. custom.ts: Bump DEFAULT_TIMEOUT_SECONDS from 15s to 120s.
Self-hosted search APIs (SearXNG, internal) commonly need 30-90s.
3. WebSearchTool.ts: When an explicit adapter is selected via
WEB_SEARCH_PROVIDER=custom, do not silently fall through to the
native Anthropic path on adapter errors or 0-hit results.
- 0 hits: return directly (no fallback)
- Error: throw the real error (no fallback)
- Auto mode: existing fallback behavior preserved
* fix: tighten auto-mode adapter fallback — only swallow transient errors
Address review feedback: in auto mode, only fall through to native on
transient errors (network failure, timeout, HTTP 5xx). Config and
guardrail errors (SSRF, HTTPS, bad URL, header allowlist, etc.) now
surface properly instead of being silently swallowed.
---------
Co-authored-by: FluxLuFFy <fluxluffy@users.noreply.github.com>
* refactor: provider adapter system + 7 new search providers
Architecture:
- Each search backend is a small adapter implementing SearchProvider
- 12 providers: custom, tavily, exa, you, jina, bing, mojeek, linkup, firecrawl, duckduckgo + native
- WEB_SEARCH_PROVIDER controls selection: auto (fallback chain) or specific provider
- Auth always in headers, never in query strings
Bug fixes from review feedback:
- Fix applyDomainFilters catch block: keep hits with malformed URLs on blocked_domains
(can't confirm blocked), drop on allowed_domains (can't confirm allowed)
- Add safeHostname() helper: safely extract hostname from URLs without throwing
- Replace unsafe new URL(r.url).hostname in 7 providers with safeHostname()
- Remove dead code: buildAllHeaders, buildAuthHeaders, parseExtraHeaders from types.ts
- Fix WEB_PARMS typo: consistently use WEB_QUERY_PARAM everywhere
- AbortSignal forwarded to fetch() in all 12 providers
- DuckDuckGo: wrap dynamic import in try/catch for graceful error
- Exa: remove double domain filtering (server-side already)
- runSearch(): aggregate all provider errors instead of throwing only the last one
- Retry logic: check numeric status code directly, retry 5xx/network, skip 4xx
Test coverage (44 tests, all passing):
- types.test.ts: safeHostname, normalizeHit, applyDomainFilters (20 tests)
- index.test.ts: getProviderMode, getProviderChain, getAvailableProviders (13 tests)
- custom.test.ts: extractHits flexible response parsing (11 tests)
Co-authored-by: FluxLuFFy <195792511+FluxLuFFy@users.noreply.github.com>
* security: add guardrails to custom search provider (Option B)
- HTTPS-only by default (opt-out: WEB_CUSTOM_ALLOW_HTTP=true)
- Private/localhost IPs blocked by default (opt-out: WEB_CUSTOM_ALLOW_PRIVATE=true)
- Header allowlist: only known-safe headers allowed unless WEB_CUSTOM_ALLOW_ARBITRARY_HEADERS=true
- Configurable timeout in seconds (WEB_CUSTOM_TIMEOUT_SEC, default 15)
- Configurable POST body limit (WEB_CUSTOM_MAX_BODY_KB, default 300)
- Removed max URL size restriction
- Audit log warning on first custom search call
- Updated .env.example and README_SEARCH_PROVIDERS.md with all new options
* fix: remove custom provider from auto chain (Option 1)
Remove customProvider from the auto fallback chain so it is only
available when WEB_SEARCH_PROVIDER=custom is explicitly selected.
Changes:
- Remove customProvider from ALL_PROVIDERS array in providers/index.ts
- Add 3 new tests verifying custom is excluded from auto chain
- Update README_SEARCH_PROVIDERS.md: auto priority, mode table, note
- Update .env.example: auto priority comment, custom mode annotation
All 47 tests pass (44 existing + 3 new).
Co-Authored-By: @Vasanthdev2004
* fix: address review blockers (routing, abort, config check, domain matching)
1. Native/Codex routing precedence in auto mode
shouldUseAdapterProvider() now checks if native/first-party/vertex/foundry
or Codex paths are available before falling back to adapter providers.
Auto mode: native paths take precedence; adapter is fallback only.
2. AbortError stops provider chain immediately
runSearch() now checks for AbortError/aborted signal before continuing
the fallback chain. Cancelled searches don't create extra outbound requests.
3. Explicit provider mode fails fast on missing credentials
runSearch() validates isConfigured() for explicit modes before attempting
requests. Throws clear error: 'Search provider "X" is not configured.'
4. Domain filter exact-or-subdomain matching (fixes suffix collision)
New hostMatchesDomain() helper: exact match or .subdomain match.
badexample.com no longer matches example.com.
5. Tests: 56 pass (9 new) covering all 4 fixes
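The matcher from point 4 is small enough to sketch in full; the signature is an assumption.

```typescript
// Hedged sketch of exact-or-subdomain matching: badexample.com must not
// match example.com, but api.example.com should.
function hostMatchesDomain(host: string, domain: string): boolean {
  const h = host.toLowerCase();
  const d = domain.toLowerCase();
  return h === d || h.endsWith('.' + d);
}
```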
Co-Authored-By: @Vasanthdev2004
---------
Co-authored-by: Claude Fix <fix@openclaude.local>
Co-authored-by: FluxLuFFy <195792511+FluxLuFFy@users.noreply.github.com>
Co-authored-by: bot <bot@openclaw.ai>
Root cause: IMAGE_MAX_WIDTH and IMAGE_MAX_HEIGHT were set to 2000, exactly the API's many-image dimension limit. Images resized to exactly 2000px would get rejected when the conversation accumulated enough images to trigger the API's many-image mode.
Fix: Changed both constants from 2000 to 1568 in src/constants/apiLimits.ts:42-43. This is the resolution the API internally downscales to anyway (documented in the API's encoding/full_encoding.py), so there is zero effective quality loss. All images are now safely below the many-image threshold.
export const IMAGE_MAX_WIDTH = 1568
export const IMAGE_MAX_HEIGHT = 1568
Impact: The single constant change propagates everywhere — imageResizer.ts uses IMAGE_MAX_WIDTH/IMAGE_MAX_HEIGHT for all resize decisions, and the error messages reference these constants dynamically. No other files need changes.
The select cursor highlight was broken because isDeepStrictEqual in
use-select-navigation.ts and use-multi-select-state.ts would fail when
options contained identity-unstable properties (JSX label elements,
function onChange callbacks, computed disabled booleans). This caused
the reset logic to fire on every re-render, resetting focusedValue
back to the first option.
Replace isDeepStrictEqual with optionsNavigateEqual which only compares
properties that affect navigation behavior: value, disabled, and type.
ReactNode labels and function callbacks are intentionally excluded as
they are identity-unstable but don't change navigation semantics.
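The comparison described above can be sketched as follows; the option shape beyond value/disabled/type is an assumption.

```typescript
// Hedged sketch of optionsNavigateEqual: compare only the fields that
// affect navigation, ignoring identity-unstable labels and callbacks.
interface NavOption {
  value: string;
  disabled?: boolean;
  type?: string;
  label?: unknown;    // ReactNode — intentionally ignored
  onChange?: unknown; // function — intentionally ignored
}

function optionsNavigateEqual(a: NavOption[], b: NavOption[]): boolean {
  if (a.length !== b.length) return false;
  return a.every(
    (opt, i) =>
      opt.value === b[i].value &&
      opt.disabled === b[i].disabled &&
      opt.type === b[i].type,
  );
}
```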
Fixes #472
Co-authored-by: OpenClaude Worker 3 <worker-3@openclaude.local>