* feat(provider): add Bankr LLM Gateway support
Add Bankr as an OpenAI-compatible provider preset with dedicated env vars:
- BNKR_API_KEY, BANKR_BASE_URL, BANKR_MODEL
- Uses X-API-Key header instead of Authorization Bearer
- Base URL: https://llm.bankr.bot/v1
- Default model: claude-opus-4.6
Changes:
- Add 'bankr' to VALID_PROVIDERS and provider flag handling
- Add buildBankrProfileEnv() with env key registration
- Add Bankr detection in startup screen and provider discovery
- Map Bankr env vars to OpenAI-compatible vars in shim
- Add Bankr preset to ProviderManager (alphabetical order)
- Update PRESET_ORDER test to include Bankr
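As a sketch, the auth difference amounts to (helper name illustrative):

    // Bankr expects X-API-Key rather than an Authorization: Bearer header.
    function bankrHeaders(apiKey: string): Record<string, string> {
      return {
        'X-API-Key': apiKey,
        'Content-Type': 'application/json',
      };
    }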
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
* fixup(provider): address Bankr PR review feedback
1. Map BNKR_API_KEY → OPENAI_API_KEY in providerFlag.ts so
--provider bankr works with BNKR_API_KEY in non-interactive startup.
2. Remove unconditional BANKR_MODEL read from model.ts; it maps to
OPENAI_MODEL via providerFlag.ts and openaiShim.ts, preventing
cross-provider leakage.
3. Use X-API-Key for Bankr model discovery in openaiModelDiscovery.ts
and providerDiscovery.ts, matching chat request auth.
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
---------
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
- Bump Codex provider defaults from gpt-5.4 to gpt-5.5 across all ModelConfigs
- Update codexplan alias to resolve to gpt-5.5
- Add gpt-5.5 and gpt-5.5-mini to model picker with reasoning effort mappings
- Add context window and max output token specs for gpt-5.5 family
- Add gpt-5.5 entries to COPILOT_MODELS registry
- Keep official OpenAI API preset at gpt-5.4 (API availability pending)
- Update codexShim tests to expect gpt-5.5 from codexplan alias
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
* fix: make OpenAI fallback context window configurable and support external lookup table
Unknown OpenAI-compatible models fell back to a hardcoded 128k constant,
causing auto-compact to fire prematurely on models with larger windows
(issue #635 follow-up). Three escape hatches are added without touching the
built-in table:
- CLAUDE_CODE_OPENAI_FALLBACK_CONTEXT_WINDOW (number): overrides the 128k
default for all unknown models.
- CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS (JSON object): per-model overrides that
take precedence over the built-in OPENAI_CONTEXT_WINDOWS table; supports
the same provider-qualified and prefix-matching lookup as the built-in path.
- CLAUDE_CODE_OPENAI_MAX_OUTPUT_TOKENS (JSON object): same pattern for output
token limits.
This lets operators deploy new or private models without patching
openaiContextWindows.ts on every model release.
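A minimal sketch of the resolution order described above, assuming a
lookupByModel(model, table) helper; exact names in openaiContextWindows.ts
may differ:

    function parseJsonEnv(name: string): Record<string, number> | null {
      const raw = process.env[name];
      if (!raw) return null;
      try {
        return JSON.parse(raw) as Record<string, number>;
      } catch {
        return null; // malformed JSON falls back to the built-in table
      }
    }

    function resolveContextWindow(model: string): number {
      // 1. Per-model JSON overrides beat the built-in table.
      const overrides = parseJsonEnv('CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS');
      const fromOverride = overrides ? lookupByModel(model, overrides) : undefined;
      if (fromOverride !== undefined) return fromOverride;
      // 2. Built-in OPENAI_CONTEXT_WINDOWS table.
      const builtIn = lookupByModel(model, OPENAI_CONTEXT_WINDOWS);
      if (builtIn !== undefined) return builtIn;
      // 3. Configurable fallback, then the 128k default.
      const fallback = Number(process.env.CLAUDE_CODE_OPENAI_FALLBACK_CONTEXT_WINDOW);
      return Number.isFinite(fallback) && fallback > 0 ? fallback : 128_000;
    }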
* docs: add new OpenAI context window env vars to .env.example
Document CLAUDE_CODE_OPENAI_FALLBACK_CONTEXT_WINDOW,
CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS, and
CLAUDE_CODE_OPENAI_MAX_OUTPUT_TOKENS with usage examples.
Addresses reviewer feedback on PR #861.
---------
Co-authored-by: opencode <dev@example.com>
- Add exponential backoff retry to DuckDuckGo adapter (3 attempts with
jitter) to handle transient rate-limiting and connection errors.
- Add native fetch() fallback in WebFetch when axios hangs with custom
DNS lookup in bundled contexts.
- Prevent broken native-path fallback for web search on OpenAI shim
providers (minimax, moonshot, nvidia-nim, etc.) that do not support
Anthropic's web_search_20250305 tool.
- Cherry-pick existing fixes:
- a48bd56: cover codex/minimax/nvidia-nim in getSmallFastModel()
- 31f0b68: 45s budget + raw-markdown fallback for secondary model
- 446c1e8: sparse Codex /responses payload parsing
- ae3f0b2: echo reasoning_content on assistant tool-call messages
- Fix domainCheck.test.ts mock modules to include isFirstPartyAnthropicBaseUrl
and isGithubNativeAnthropicMode exports.
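The retry shape, as a sketch (3 attempts with exponential backoff plus
jitter; helper name and delays are illustrative):

    async function withBackoff<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
      let lastError: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          return await fn();
        } catch (err) {
          lastError = err; // transient rate-limit or connection error
          if (i === attempts - 1) break;
          const baseMs = 500 * 2 ** i;           // 500ms, 1s, ...
          const jitter = Math.random() * baseMs; // spread retries apart
          await new Promise((r) => setTimeout(r, baseMs + jitter));
        }
      }
      throw lastError;
    }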
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
getUserSpecifiedModelSetting() decides which env var to consult based on
the active provider. The check included openai and github but omitted
codex, nvidia-nim, and minimax — even though all three use the OpenAI
shim transport and get their model routing via CLAUDE_CODE_USE_OPENAI=1
+ OPENAI_MODEL (set by applyProviderProfileToProcessEnv).
Concrete failure: user switches from Moonshot profile (which persisted
settings.model='kimi-k2.6') to the Codex profile. The new profile
correctly writes OPENAI_MODEL=codexplan + base URL to
chatgpt.com/backend-api/codex. Startup banner reflects Codex / gpt-5.4
correctly. But at request time getUserSpecifiedModelSetting() returns
early for provider='codex' (not in the env-consult list), falls through
to the stale settings.model='kimi-k2.6', and the Codex API rejects:
API Error 400: "The 'kimi-k2.6' model is not supported when using
Codex with a ChatGPT account."
Fix: extract an isOpenAIShimProvider flag covering openai|codex|github|
nvidia-nim|minimax — all providers that set OPENAI_MODEL as their model
env var. The Gemini and Mistral branches stay as-is (they use
GEMINI_MODEL / MISTRAL_MODEL).
Five regression tests pin the fix for each OpenAI-shim provider plus
guard tests for openai and github that already worked.
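A sketch of the fix (provider list from this commit; the real function
also falls back to settings.model and handles Anthropic env vars):

    // All OpenAI-shim providers route their model through OPENAI_MODEL.
    const isOpenAIShimProvider = (provider: string): boolean =>
      ['openai', 'codex', 'github', 'nvidia-nim', 'minimax'].includes(provider);

    function getUserSpecifiedModelSetting(provider: string): string | undefined {
      if (isOpenAIShimProvider(provider)) return process.env.OPENAI_MODEL;
      if (provider === 'gemini') return process.env.GEMINI_MODEL;
      if (provider === 'mistral') return process.env.MISTRAL_MODEL;
      return undefined;
    }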
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
* feat(provider): first-class Moonshot (Kimi) direct-API support
Moonshot's direct API (api.moonshot.ai/v1) is OpenAI-compatible and works
today via the generic OpenAI shim, including the reasoning_content channel
that Kimi returns alongside the user-visible content. But the UX was rough:
unknown context window triggered the conservative 128k fallback + a warning,
and the provider displayed as "Local OpenAI-compatible".
Makes Moonshot a recognized provider:
- src/utils/model/openaiContextWindows.ts: add the Kimi K2 family and
moonshot-v1-* variants to both the context-window and max-output tables.
Values from Moonshot's model card — K2.6 and K2-thinking are 256K,
K2/K2-instruct are 128K, moonshot-v1 sizes are embedded in the model id.
- src/utils/providerDiscovery.ts: recognize the api.moonshot.ai hostname
and label it "Moonshot (Kimi)" in the startup banner and provider UI.
Users can now launch with:
CLAUDE_CODE_USE_OPENAI=1 \
OPENAI_BASE_URL=https://api.moonshot.ai/v1 \
OPENAI_API_KEY=sk-... \
OPENAI_MODEL=kimi-k2.6 \
openclaude
and get accurate compaction + correct labeling + correct max_tokens out
of the box.
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
* fix(openai-shim): Moonshot API compatibility — max_tokens + strip store
Moonshot's direct API (api.moonshot.ai and api.moonshot.cn) uses the
classic OpenAI `max_tokens` parameter, not the newer `max_completion_tokens`
that the shim defaults to. It also hasn't published support for `store`
and may reject it on strict-parse — same class of error as Gemini's
"Unknown name 'store': Cannot find field" 400.
- Adds isMoonshotBaseUrl() that recognizes both .ai and .cn hosts.
- Converts max_completion_tokens → max_tokens for Moonshot requests
(alongside GitHub / Mistral / local providers).
- Strips body.store for Moonshot requests (alongside Mistral / Gemini).
Two shim tests cover both the .ai and .cn hostnames.
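A sketch of both changes (body shape simplified):

    // Recognize both Moonshot hosts.
    function isMoonshotBaseUrl(baseUrl: string): boolean {
      try {
        const host = new URL(baseUrl).hostname;
        return host === 'api.moonshot.ai' || host === 'api.moonshot.cn';
      } catch {
        return false;
      }
    }

    // Convert max_completion_tokens -> max_tokens and drop the
    // unsupported `store` field before the request goes out.
    function adaptBodyForMoonshot(body: Record<string, unknown>): void {
      if (body.max_completion_tokens !== undefined) {
        body.max_tokens = body.max_completion_tokens;
        delete body.max_completion_tokens;
      }
      delete body.store;
    }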
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
* fix: null-safe access on getCachedMCConfig() in external builds
External builds stub src/services/compact/cachedMicrocompact.ts so
getCachedMCConfig() returns null, but two call sites still dereferenced
config.supportedModels directly. The ?. operator was in the wrong place
(config.supportedModels? instead of config?.supportedModels), so the null
config threw "Cannot read properties of null (reading 'supportedModels')"
on every request.
Reproduces with any external-build provider (notably Kimi/Moonshot just
enabled in the sibling commits, but equally DeepSeek, Mistral, Groq,
Ollama, etc.):
❯ hey
⏺ Cannot read properties of null (reading 'supportedModels')
- prompts.ts: early-return from getFunctionResultClearingSection() when
config is null, before touching .supportedModels.
- claude.ts: guard the debug-log jsonStringify with ?. so the log line
never throws.
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
* fix(startup): show "Moonshot (Kimi)" on the startup banner
The startup-screen provider detector had regex branches for OpenRouter,
DeepSeek, Groq, Together, Azure, etc., but nothing for Moonshot. Remote
Moonshot sessions fell through to the generic "OpenAI" label —
getLocalOpenAICompatibleProviderLabel() only runs for local URLs, and
api.moonshot.ai / api.moonshot.cn are not local.
Adds a Moonshot branch matching /moonshot/ in the base URL OR /kimi/ in
the model id. Launching with
OPENAI_BASE_URL=https://api.moonshot.ai/v1 OPENAI_MODEL=kimi-k2.6
now displays the Provider row as "Moonshot (Kimi)" instead of "OpenAI".
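The added branch, roughly (detector structure simplified to a sketch):

    // Match Moonshot by base URL host or Kimi model id.
    function detectMoonshot(baseUrl: string, model: string): string | null {
      return /moonshot/i.test(baseUrl) || /kimi/i.test(model)
        ? 'Moonshot (Kimi)'
        : null;
    }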
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
* refactor(provider): sort preset picker alphabetically; Custom at end
The /provider preset picker was in ad-hoc order (Anthropic, Ollama,
OpenAI, then a jumble of third-party / local / codex / Alibaba / custom /
nvidia / minimax). Hard to scan when you know the provider name you want.
Sorts the list alphabetically by label A→Z. Pins "Custom" to the end —
it's the catch-all / escape hatch so it's scanned last, not shuffled into
the alphabetical run where a user looking for a named provider might
grab it by mistake. First-run-only "Skip for now" stays at the very
bottom, after Custom.
Test churn:
- ProviderManager.test.tsx: four tests hardcoded press counts (1 or 3 'j'
presses) that broke when targets moved. Replaces them with a
navigateToPreset(stdin, label) helper driven from a declared
PRESET_ORDER array, so future list edits only update the array.
- ConsoleOAuthFlow.test.tsx: the 13-row test frame only renders the first
~13 providers. "Ollama", "OpenAI", "LM Studio" sentinels moved below
the fold; swap them for alphabetically-early providers still visible
in-frame ("Azure OpenAI", "DeepSeek", "Google Gemini"). Test intent
(picker opened with providers listed) is preserved.
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
---------
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
* feat: add model caching and benchmarking utilities
- Add modelCache.ts for disk caching of model lists
- Add benchmark.ts for testing model speed/quality
* fix: address review feedback - async fs, multi-provider support, error handling
* feat: add /benchmark slash command and unit tests
Updates both the model config mappings (configs.ts) and the runtime
fallback in getDefaultOpusModel() (model.ts) so Gemini mode no longer
falls back to the discontinued preview model when GEMINI_MODEL is unset.
Fixes #398
* feat: native Anthropic API mode for Claude models on GitHub Copilot
When using Claude models through GitHub Copilot, automatically switch from
the OpenAI-compatible shim to Anthropic's native messages API format.
The Copilot proxy (api.githubcopilot.com) supports Anthropic's native API
for Claude models. This enables cache_control blocks to be sent and
honoured, allowing explicit prompt caching control (as opposed to relying
solely on server-side auto-caching).
Changes:
- Add isGithubNativeAnthropicMode() in providers.ts that auto-enables when
the resolved model starts with "claude-" and the GitHub provider is active
- Create a native Anthropic client in client.ts using the GitHub base URL
and Bearer token authentication when native mode is detected
- Enable prompt caching in claude.ts for native GitHub mode so cache_control
blocks are sent (previously only allowed for firstParty/bedrock/vertex)
- CLAUDE_CODE_GITHUB_ANTHROPIC_API=1 env var to force native mode for any
model
Benefits:
- Proper Anthropic message format (no lossy OpenAI translation)
- Explicit cache_control blocks for fine-grained caching control
- Potentially better Claude model behaviour with native format
Related: #515
* fix: scope force flag to Claude models and add isGithubNativeAnthropicMode tests
- CLAUDE_CODE_GITHUB_ANTHROPIC_API=1 now returns false for non-Claude models
(force flag still useful for aliases like 'github:copilot' with no model
resolved yet, where it returns true when model is empty)
- Add 7 focused tests covering mode detection: off without GitHub provider,
auto-detect via OPENAI_MODEL and resolvedModel, non-Claude model rejection,
and force-flag behaviour for claude/non-claude/no-model cases
* fix: detect github:copilot:claude- compound format, remove force flag
OPENAI_MODEL for GitHub Copilot uses the format 'github:copilot:MODEL'
(e.g. 'github:copilot:claude-sonnet-4'), which does not start with 'claude-'.
Auto-detection now handles both bare model names and the compound format.
The CLAUDE_CODE_GITHUB_ANTHROPIC_API force flag is removed: with proper
compound-format detection there is no remaining gap it could fill, and
keeping a broad override flag without a concrete use case invites misuse.
Tests updated to cover the compound format, generic alias (false), and
non-Claude compound model (github:copilot:gpt-4o → false).
* fix: use includes('claude-') for model detection, remove force flag
Detection was broken for the standard GitHub Copilot compound format
'github:copilot:claude-sonnet-4' which does not start with 'claude-'.
Using includes('claude-') handles bare names, compound names, and any
future variants without needing updates.
The CLAUDE_CODE_GITHUB_ANTHROPIC_API force flag is removed as it was
a workaround for the broken detection, not a genuine use case.
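The final detection shape, as a sketch (the real check also gates on the
GitHub provider being active):

    // Handles bare names ('claude-sonnet-4') and the compound Copilot
    // format ('github:copilot:claude-sonnet-4').
    function isGithubNativeAnthropicMode(model: string, githubActive: boolean): boolean {
      return githubActive && model.includes('claude-');
    }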
---------
Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
* add Mistral and Gemini provider types for the profile provider field
* load the latest locally selected profile
* env variables take precedence over the saved JSON
* add Gemini context windows and fix Gemini defaulting from env
* fix profile loading on startup
* fix failing tests
* clarify test message
* fix variable mismatches
* fix failing test
* delete keys and set profile.apiKey for mistral and gemini
* switch model as well when switching provider
* set model when adding a new model
* test: add tests for provider model env updates and multi-model profiles
Add comprehensive tests covering:
- OPENAI_MODEL/ANTHROPIC_MODEL env updates on provider activation
- Cross-provider type switches (openai ↔ anthropic) clearing stale env
- Multi-model profile activation using only the first model for env vars
- Model options cache population from comma-separated model lists
- getProfileModelOptions generating correct ModelOption arrays
* feat: multi-model provider support and model auto-switch
Support comma-separated model names in provider profiles (e.g.
"glm-4.7, glm-4.7-flash"). The first model is used as default on
activation; all models appear in the /model picker for easy switching.
When switching active providers, the session model now automatically
updates to the new provider's first model. The multi-model list is
preserved across switches and /model selections.
Changes:
- Add parseModelList, getPrimaryModel, hasMultipleModels utilities
with full test coverage (19 tests)
- Use getPrimaryModel when applying profiles to process.env so only
the primary model is set in OPENAI_MODEL/ANTHROPIC_MODEL
- Update ProviderManager UI to hint at multi-model syntax and show
model count in provider list summaries
- Populate model options cache from multi-model profiles on activation
so all models appear in /model picker regardless of base URL type
- Guard persistActiveProviderProfileModel against overwriting
comma-separated lists: models already in the profile are session
selections, not profile edits
- Set AppState.mainLoopModel to the actual model string on provider
switch so Anthropic profiles use the configured model instead of
falling back to the built-in default
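A sketch of the utilities named above (trimming behavior assumed):

    // Split a comma-separated profile model string into clean names.
    function parseModelList(models: string): string[] {
      return models.split(',').map((m) => m.trim()).filter(Boolean);
    }

    // First model is the default written to OPENAI_MODEL/ANTHROPIC_MODEL.
    function getPrimaryModel(models: string): string | undefined {
      return parseModelList(models)[0];
    }

    function hasMultipleModels(models: string): boolean {
      return parseModelList(models).length > 1;
    }

    // e.g. getPrimaryModel('glm-4.7, glm-4.7-flash') === 'glm-4.7'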
* fix: only show profile models when provider profile env is applied
Guard the profile model picker options behind a
PROFILE_ENV_APPLIED check. getActiveProviderProfile() has a
?? profiles[0] fallback that returns the first profile even when
no profile is explicitly active, causing users with inactive
profiles to lose all standard model options (Opus, Haiku, etc.)
from the /model picker.
* fix: show all model names for profiles with 3 or fewer models
Instead of a summary format for multi-model profiles, display all
model names when there are 3 or fewer. Only use the "+ N more"
format for profiles with 4+ models.
* fix: preserve standard model options in picker alongside profile models
The previous implementation used an early return that replaced all
standard picker options (Opus, Haiku, Sonnet for Anthropic; Codex/GPT
models for OpenAI) with only the profile's custom models.
Changes:
- Collect profile models into a shared array instead of early returning
- Append profile models to firstParty path (Opus + Haiku + Sonnet + custom)
- Append profile models to PAYG 3P path (Codex + Sonnet + Opus + Haiku + custom)
- Guard collection behind PROFILE_ENV_APPLIED to avoid ?? profiles[0] fallback
Fixes review feedback: standard models are no longer hidden when a
provider profile with custom models is active. Users see both the
standard options and their profile's models.
---------
Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
* feat: add NVIDIA NIM and MiniMax provider support
- Add nvidia-nim and minimax to --provider CLI flag
- Add model discovery for NVIDIA NIM (160+ models) and MiniMax
- Update /model picker to show provider-specific models
- Fix provider detection in startup banner
- Update .env.example with new provider options
Supported providers:
- NVIDIA NIM: https://integrate.api.nvidia.com/v1
- MiniMax: https://api.minimax.io/v1
* fix: resolve conflict in StartupScreen (keep NVIDIA/MiniMax + add Codex detection)
* fix: resolve providerProfile conflict (add imports from main, keep NVIDIA/MiniMax)
* fix: revert providerSecrets to match main (NVIDIA/MiniMax handled elsewhere)
* fix: add context window entries for NVIDIA NIM and new MiniMax models
* fix: use GLM-5 as NVIDIA NIM default and MiniMax-M2.5 for consistency
* fix: address remaining review items - add GLM/Kimi context entries, max output tokens, fix .env.example, revert to Nemotron default
* fix: filter NVIDIA NIM picker to chat/instruct models only, set provider-specific API keys from saved profiles
* chore: add more NVIDIA NIM context window entries for popular models
* fix: address remaining non-blocking items - fix base model, clear provider API keys on profile switch
- Raise context window fallback from 8k to 128k for unknown OpenAI-compat models.
The 8k fallback caused effective context (8k minus output reservation) to go
negative, making auto-compact fire on every single message.
- Add safety floor in getEffectiveContextWindowSize(): effective context is
always at least reservedTokensForSummary + 13k buffer, ensuring the
auto-compact threshold stays positive.
- Add missing MiniMax model entries (M2.5, M2.5-highspeed, M2.1, M2.1-highspeed)
all at 204,800 context / 131,072 max output per MiniMax docs.
- Add tests for MiniMax variants, 128k fallback, and autoCompact floor.
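The floor, as a sketch (names simplified; the 13k buffer is from this
commit):

    // Effective context never drops below the summary reservation plus a
    // 13k buffer, so the auto-compact threshold stays positive even when
    // the reported window is tiny or unknown.
    function getEffectiveContextWindowSize(
      contextWindow: number,
      reservedTokensForSummary: number,
    ): number {
      const floor = reservedTokensForSummary + 13_000;
      return Math.max(contextWindow - reservedTokensForSummary, floor);
    }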
Fixes #635
Co-authored-by: root <root@vm7508.lumadock.com>
The OPENAI_CONTEXT_WINDOWS/OPENAI_MAX_OUTPUT_TOKENS tables only contained
the `github:copilot:<model>` namespaced form used when talking directly to
Copilot via /onboard-github. When OpenClaude is pointed at a LiteLLM proxy
(which routes Copilot using the standard `github_copilot/<model>` convention),
the lookup missed and fell back to the conservative 8k default — causing the
compaction loop to fire repeatedly on every tick and blocking requests
before they left the client, with repeated "not in context window table"
warnings on stderr.
Mirror the 11 active Copilot models with LiteLLM-style keys in both tables.
No behavior change for users of /onboard-github since namespaced entries
remain untouched and `lookupByKey` picks exact matches first.
Add context_window and max_output_tokens entries for all models available
through the GitHub Copilot proxy (Claude, GPT, Gemini, Grok), sourced from
https://api.githubcopilot.com/models.
Models are namespaced as "github:copilot:<model>" to avoid collisions with
the same model names served by other providers (which may have different
limits). A new lookupByKey() helper and qualified-key lookup in
lookupByModel() ensure the correct limits are selected when
OPENAI_MODEL=github:copilot.
Without this, Claude models on Copilot would use default context/output
limits that may not match the proxy's actual constraints, causing 400 errors
like "max_tokens is too large".
Related: #515
Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
* update GitHub Copilot API with official client ID and update model configurations
* test: add unit tests for exchangeForCopilotToken and enhance GitHub model normalization
* remove PAT token feature
* test(api): harden provider tests against env leakage
* Added back trimmed GitHub auth token
* Added auto-refresh logic for the auth token, along with a test
* fix: remove forked provider validation in cli.tsx and clear stale provider env vars in /onboard-github
* refactor: streamline environment variable handling in mergeUserSettingsEnv
* fix: clear stale provider env vars to ensure correct GH routing
* Remove internal-only tooling from the external build (#352)
* Remove internal-only tooling without changing external runtime contracts
This trims the lowest-risk internal-only surfaces first: deleted internal
modules are replaced by build-time no-op stubs, the bundled stuck skill is
removed, and the insights S3 upload path now stays local-only. The privacy
verifier is expanded and the remaining bundled internal Slack/Artifactory
strings are neutralized without broad repo-wide renames.
Constraint: Keep the first PR deletion-heavy and avoid mass rewrites of USER_TYPE, tengu, or claude_code identifiers
Rejected: One-shot DMCA cleanup branch | too much semantic risk for a first PR
Confidence: medium
Scope-risk: moderate
Reversibility: clean
Directive: Treat full-repo typecheck as a baseline issue on this upstream snapshot; do not claim this commit introduced the existing non-Phase-A errors without isolating them first
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Not-tested: Full repo typecheck (currently fails on widespread pre-existing upstream errors outside this change set)
* Keep minimal source shims so CI can import Phase A cleanup paths
The first PR removed internal-only source files entirely, but CI provider
and context tests import those modules directly from source rather than
through the build-time no-telemetry stubs. This restores tiny no-op source
shims so tests and local source imports resolve while preserving the same
external runtime behavior.
Constraint: GitHub Actions runs source-level tests in addition to bundled build/privacy checks
Rejected: Revert the entire deletion pass | unnecessary once the import contract is satisfied by small shims
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: For later cleanup phases, treat build-time stubs and source-test imports as separate compatibility surfaces
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (still noisy on this upstream snapshot)
---------
Co-authored-by: anandh8x <test@example.com>
* Reduce internal-only labeling noise in source comments (#355)
This pass rewrites comment-only ANT-ONLY markers to neutral internal-only
language across the source tree without changing runtime strings, flags,
commands, or protocol identifiers. The goal is to lower obvious internal
prose leakage while keeping the diff mechanically safe and easy to review.
Constraint: Phase B is limited to comments/prose only; runtime strings and user-facing labels remain deferred
Rejected: Broad search-and-replace across strings and command descriptions | too risky for a prose-only pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly runtime/user-facing strings and should be handled separately from comment cleanup
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Neutralize internal Anthropic prose in explanatory comments (#357)
This is a small prose-only follow-up that rewrites clearly internal or
explanatory Anthropic comment language to neutral wording in a handful of
high-confidence files. It avoids runtime strings, flags, command labels,
protocol identifiers, and provider-facing references.
Constraint: Keep this pass narrowly scoped to comments/documentation only
Rejected: Broader Anthropic comment sweep across functional API/protocol references | too ambiguous for a safe prose-only PR
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Leave functional Anthropic references (API behavior, SDKs, URLs, provider labels, protocol docs) for separate reviewed passes
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Neutralize remaining internal-only diagnostic labels (#359)
This pass rewrites a small set of ant-only diagnostic and UI labels to
neutral internal wording while leaving command definitions, flags, and
runtime logic untouched. It focuses on internal debug output, dead UI
branches, and noninteractive headings rather than broader product text.
Constraint: Label cleanup only; do not change command semantics or ant-only logic gates
Rejected: Renaming ant-only command descriptions in main.tsx | broader UX surface better handled in a separate reviewed pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly command descriptions and intentionally deferred user-facing strings
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Finish eliminating remaining ANT-ONLY source labels (#360)
This extends the label-only cleanup to the remaining internal-only command,
debug, and heading strings so the source tree no longer contains ANT-ONLY
markers. The pass still avoids logic changes and only renames labels shown
in internal or gated surfaces.
Constraint: Update the existing label-cleanup PR without widening scope into behavior changes
Rejected: Leave the last ANT-ONLY strings for a later pass | low-cost cleanup while the branch is already focused on labels
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: The next phase should move off label cleanup and onto a separately scoped logic or rebrand slice
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Stub internal-only recording and model capability helpers (#377)
This follow-up Phase C-lite slice replaces purely internal helper modules
with stable external no-op surfaces and collapses internal elevated error
logging to a no-op. The change removes additional USER_TYPE-gated helper
behavior without touching product-facing runtime flows.
Constraint: Keep this PR limited to isolated helper modules that are already external no-ops in practice
Rejected: Pulling in broader speculation or logging sink changes | less isolated and easier to debate during review
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Continue Phase C with similarly isolated helpers before moving into mixed behavior files
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Remove internal-only bundled skills and mock helpers (#376)
* Remove internal-only bundled skills and mock rate-limit behavior
This takes the next planned Phase C-lite slice by deleting bundled skills
that only ever registered for internal users and replacing the internal
mock rate-limit helper with a stable no-op external stub. The external
build keeps the same behavior while removing a concentrated block of
USER_TYPE-gated dead code.
Constraint: Limit this PR to isolated internal-only helpers and avoid bridge, oauth, or rebrand behavior
Rejected: Broad USER_TYPE cleanup across mixed runtime surfaces | too risky for the next medium-sized PR
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: The next cleanup pass should continue with similarly isolated USER_TYPE helpers before touching main.tsx or protocol-heavy code
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
* Align internal-only helper removal with remaining user guidance
This follow-up fixes the mock billing stub to be a true no-op and removes
stale user-facing references to /verify and /skillify from the same PR.
It also leaves a clearer paper trail for review: the deleted verify skill
was explicitly ant-gated before removal, and the remaining mock helper
callers still resolve to safe no-op returns in the external build.
Constraint: Keep the PR focused on consistency fixes and reviewer-requested evidence, not new cleanup scope
Rejected: Leave stale guidance for a later PR | would make this branch internally inconsistent after skill removal
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When deleting gated features, always sweep user guidance and coordinator prompts in the same pass
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy; changed-file scan still shows only pre-existing tipRegistry errors outside edited lines)
* Clarify generic workflow wording after skill removal
This removes the last generic verification-skill wording that could still
be read as pointing at a deleted bundled command. The guidance now talks
about project workflows rather than a specific bundled verify skill.
Constraint: Keep the follow-up limited to reviewer-facing wording cleanup on the same PR
Rejected: Leave generic wording as-is | still too easy to misread after the explicit /verify references were removed
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When removing bundled commands, scrub both explicit and generic references in the same branch
Tested: bun run build
Tested: bun run smoke
Not-tested: Additional checks unchanged by wording-only follow-up
---------
Co-authored-by: anandh8x <test@example.com>
* test(api): add GEMINI_AUTH_MODE to environment setup in tests
* test: isolate GitHub/Gemini credential tests with fresh module imports and explicit non-bare env setup to prevent cross-test mock/cache leaks
* fix: update GitHub Copilot base URL and model defaults for improved compatibility
* fix: enhance error handling in OpenAI API response processing
* fix: improve error handling for GitHub Copilot API responses and streamline error body consumption
* fix: enhance response handling in OpenAI API shim for better error reporting and support for streaming responses
* feat: enhance GitHub device flow with fresh module import and token validation improvements
* fix: separate Copilot API routing from GitHub Models, clear stale env vars, honor providerOverride.apiKey
* fix: route GitHub GPT-5/Codex to Copilot API, show all Copilot models in picker, clear stale env vars
* fix GitHub Models API regression
* feat: update GitHub authentication to require OAuth tokens, normalize model handling for Copilot and GitHub Models
* fix: update GitHub token validation to support OAuth tokens and improve endpoint type handling
---------
Co-authored-by: Anandan <anandan.8x@gmail.com>
Co-authored-by: anandh8x <test@example.com>
* fix: normalize malformed Bash tool arguments from OpenAI-compatible providers
* fix: keep invalid Bash tool args from becoming commands
* fix: preserve malformed Bash JSON literals
* test: stabilize rebased PR 385 checks
* test: isolate provider profile env assertions
* fix: extend tool argument normalization to all tools and harden edge cases
- Extend STRING_ARGUMENT_TOOL_FIELDS to normalize Read, Write, Edit,
Glob, and Grep plain-string arguments (fixes "Invalid tool parameters"
errors reported by VennDev)
- Normalize streaming Bash args regardless of finish_reason, not only
when finish_reason is 'tool_calls'
- Broaden isLikelyStructuredObjectLiteral to catch malformed object-shaped
strings like {command:"pwd"} and {'command':'pwd'} (fixes CR2 from
Vasanthdev2004)
- Apply blank/object-literal guard to all tools, not just Bash
- Extract duplicated JSON repair suffix combinations into shared constant
- Add 32 isolated unit tests for toolArgumentNormalization
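The broadened detection, as a sketch (heuristic assumed from the examples
above):

    // Catch malformed object-shaped strings such as {command:"pwd"} or
    // {'command':'pwd'}: not valid JSON, but clearly structured.
    function isLikelyStructuredObjectLiteral(raw: string): boolean {
      const s = raw.trim();
      if (!s.startsWith('{') || !s.endsWith('}')) return false;
      // Unquoted or single-quoted keys followed by a colon suggest a
      // hand-written object literal rather than a plain command string.
      return /[{,]\s*['"]?[A-Za-z_][\w-]*['"]?\s*:/.test(s);
    }

    // isLikelyStructuredObjectLiteral('{command:"pwd"}') === true
    // isLikelyStructuredObjectLiteral('pwd')             === false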
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: skip streaming normalization on finish_reason length
Truncated tool calls (finish_reason: 'length') now preserve the raw
buffer instead of normalizing into executable commands, preventing
incomplete commands from becoming runnable.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: comprehensive tool argument normalization hardening
- Remove all { raw: ... } returns that caused InputValidationError with
z.strictObject schemas — return {} instead for clean Zod errors
- Extend normalizeAtStop buffering to all mapped tools (Read, Write,
Edit, Glob, Grep) so streaming paths also get normalized
- Make repairPossiblyTruncatedObjectJson generic — repair any valid
JSON object, not just ones with a command field
- Export hasToolFieldMapping for streaming normalizeAtStop decision
- Skip normalization on finish_reason: length to preserve raw truncated
buffer
- Update all test expectations to match new behavior
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Add local OpenAI-compatible model discovery to /model
* Guard local OpenAI model discovery from Codex routing
* Preserve remote OpenAI Codex alias behavior
Models not in the lookup table fall through to a 200k default, causing
auto-compact to never trigger for models with smaller actual context
windows. Users hit hard context_window_exceeded errors instead.
Added to both context window and max output token tables:
- o1, o1-mini, o1-preview, o1-pro (OpenAI reasoning models)
- llama3.2:1b, qwen3:8b, codestral (common Ollama models)
Relates to #248
Two provider routing bugs that cause silent wrong-model failures:
1. model.ts: getUserSpecifiedModelSetting() read ANTHROPIC_MODEL ||
GEMINI_MODEL || OPENAI_MODEL with no provider check. A user
switching from Anthropic to OpenAI with ANTHROPIC_MODEL still set
would silently send the Anthropic model name to the OpenAI API.
Now gates each env var behind the active provider from
getAPIProvider().
2. providers.ts: isCodexModel() maintained a hardcoded list of 8 model
names that was missing gpt-5.4-mini and gpt-5.2 from the canonical
CODEX_ALIAS_MODELS table in providerConfig.ts. This caused a
split-brain: getAPIProvider() returned 'openai' while
resolveProviderRequest() selected 'codex_responses' transport.
Now delegates to the exported isCodexAlias() to keep both detection
systems in sync.
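The second fix, as a sketch (isCodexAlias is the exported helper from
providerConfig.ts; signature assumed):

    import { isCodexAlias } from './providerConfig';

    // Delegate to the canonical CODEX_ALIAS_MODELS check instead of a
    // hardcoded list that drifts out of sync.
    export function isCodexModel(model: string): boolean {
      return isCodexAlias(model);
    }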
* feat: fix open-source build and add Ollama model picker
- Fix build failures by stubbing 62+ missing Anthropic-internal modules
with a catch-all plugin in scripts/build.ts
- Add runtime shim exports (isReplBridgeActive, getReplBridgeHandle) in
bootstrap/state.ts for feature-gated code references
- Add /model picker support for Ollama: fetches available models from
Ollama server at startup and displays them in the model selection menu
- Add Ollama model validation against cached server model list
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address PR review feedback for Ollama integration
- Move Ollama validation before enterprise allowlist check in validateModel
- Truncate model list in error messages to first 5 entries
- Fix isOllamaProvider() to detect OLLAMA_BASE_URL-only configurations
- Reuse getOllamaApiBaseUrl() from providerDiscovery instead of duplicating
- Reset fetchPromise on failure to allow retry in prefetchOllamaModels
- Include Default option in Ollama model picker, prevent Claude model fallthrough
- Add file existence check for src/tasks/ stubs in build script
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: use pre-scanned exact-match resolvers to avoid Bun bundler corruption
Bun's onResolve plugin corrupts the module graph even when returning null
for non-matching imports. This caused lodash-es memoize and zod's util
namespace to be incorrectly tree-shaken, producing runtime ReferenceErrors.
Replace all pattern-based onResolve hooks with a pre-build scan that
identifies missing modules upfront, then registers exact-match resolvers
only for confirmed missing imports. This avoids touching any valid module
resolution paths.
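A sketch of the approach (Bun's bundler plugin API mirrors esbuild's
onResolve/onLoad; the pre-build scan is reduced here to a known set):

    import type { BunPlugin } from 'bun';

    // Register exact-match resolvers only for modules the pre-build scan
    // confirmed missing, so valid resolution paths are never intercepted.
    function stubMissingModules(missing: Set<string>): BunPlugin {
      return {
        name: 'stub-missing-internal-modules',
        setup(build) {
          for (const specifier of missing) {
            const escaped = specifier.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
            build.onResolve({ filter: new RegExp(`^${escaped}$`) }, () => ({
              path: specifier,
              namespace: 'noop-stub',
            }));
          }
          build.onLoad({ filter: /.*/, namespace: 'noop-stub' }, () => ({
            contents: 'export default {};',
            loader: 'js',
          }));
        },
      };
    }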
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: move Ollama model prefetch outside startup throttle gate
prefetchOllamaModels() was inside the skipStartupPrefetches condition,
so it would be skipped on subsequent launches due to the bgRefresh
throttle timestamp. Ollama model fetch targets a local/remote server
and is fast & cheap, so it should always run at startup.
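A sketch of the prefetch (Ollama's standard /api/tags endpoint lists
installed models; caching shape simplified):

    let fetchPromise: Promise<string[]> | null = null;

    // Cached so concurrent callers share one request; reset on failure so
    // a later call can retry.
    export function prefetchOllamaModels(baseUrl: string): Promise<string[]> {
      fetchPromise ??= fetch(`${baseUrl}/api/tags`)
        .then((res) => res.json())
        .then((data: { models: Array<{ name: string }> }) =>
          data.models.map((m) => m.name),
        )
        .catch((err) => {
          fetchPromise = null;
          throw err;
        });
      return fetchPromise;
    }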
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Two fixes in openaiContextWindows.ts:
1. Sort lookup keys by length descending in lookupByModel() so the most
specific prefix always wins. Without this, 'gpt-4-turbo-preview'
could match 'gpt-4' (8k) instead of 'gpt-4-turbo' (128k) depending
on V8's object key iteration order.
2. Update Llama 3.1/3.2/3.3 context windows from 8,192 to 128,000.
These models support 128k context natively (Meta official specs).
The previous 8k value was Ollama's default num_ctx, not the model's
actual capability, causing premature auto-compact warnings.
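The deterministic lookup, as a sketch (table shape from
openaiContextWindows.ts; helper simplified):

    // Exact key first; otherwise the longest matching prefix wins, so
    // 'gpt-4-turbo-preview' resolves via 'gpt-4-turbo', never 'gpt-4'.
    function lookupByModel(
      model: string,
      table: Record<string, number>,
    ): number | undefined {
      if (model in table) return table[model];
      const prefixes = Object.keys(table)
        .filter((key) => model.startsWith(key))
        .sort((a, b) => b.length - a.length); // longest first
      return prefixes.length > 0 ? table[prefixes[0]] : undefined;
    }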
Replace raw === '1' || === 'true' comparisons with isEnvTruthy() in
context.ts for consistency with getAPIProvider() in providers.ts.
This also covers the newly added CLAUDE_CODE_USE_GITHUB provider.
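For reference, the helper's shape (the real implementation may accept
more spellings):

    // Treat '1' and 'true' (any case) as enabled; everything else is off.
    function isEnvTruthy(value: string | undefined): boolean {
      return value === '1' || value?.toLowerCase() === 'true';
    }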
Add native Gemini model entries (without google/ prefix) to both
context window and max output token tables. Corrects gemini-2.5-pro
and gemini-2.5-flash max output tokens to 65,536 (was 8,192/32,768).
- Introduced environment variable CLAUDE_CODE_USE_GITHUB to enable GitHub Models.
- Added checks for GITHUB_TOKEN or GH_TOKEN for authentication.
- Updated base URL handling to include GitHub Models default.
- Enhanced provider detection and error handling for GitHub Models.
- Updated relevant functions and components to accommodate the new provider.
DeepSeek V3 documentation specifies 128k context window for both
deepseek-chat and deepseek-reasoner. The previous 64k value caused
premature compaction and underutilization of available context.
Relates to #39
Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
Without this fix, getContextWindowForModel() returns 200k for all OpenAI
models (the Claude default), causing two problems:
1. Auto-compact/warnings trigger at wrong thresholds (200k instead of 128k)
2. getModelMaxOutputTokens() returns 32k causing 400 errors from APIs that
cap output tokens lower (gpt-4o supports max 16384)
Fix:
- Add openaiContextWindows.ts with known context window sizes and max output
token limits for 30+ OpenAI-compatible models (OpenAI, DeepSeek, Groq,
Mistral, Ollama, LM Studio)
- Hook into getContextWindowForModel() so correct input limits are used
- Hook into getModelMaxOutputTokens() so correct output limits are sent,
preventing 400 "max_tokens is too large" errors
All existing warning, blocking, and auto-compact infrastructure works
automatically once the correct limits are returned.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds Google Gemini as a first-class provider using Gemini's OpenAI-compatible
endpoint, supporting gemini-2.0-flash, gemini-2.5-pro, and gemini-2.0-flash-lite
across all three model tiers (opus/sonnet/haiku).
- Add 'gemini' to APIProvider type with CLAUDE_CODE_USE_GEMINI env detection
- Map all 11 model configs to appropriate Gemini models per tier
- Route Gemini through existing OpenAI shim (generativelanguage.googleapis.com)
- Support GEMINI_API_KEY and GOOGLE_API_KEY for authentication
- Fix model display name to show actual Gemini model instead of Claude fallback
- Add Gemini support to provider-launch, provider-bootstrap, system-check scripts
- Add dev:gemini npm script for local development
Bootstrap: bun run profile:init -- --provider gemini --api-key <key>
Launch: bun run dev:gemini
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When using OpenAI provider, getPublicModelDisplayName() was incorrectly
returning "Opus 4.6" because CLAUDE_OPUS_4_6_CONFIG.openai maps to 'gpt-4o',
causing a false match in the switch statement. Now returns null for OpenAI
provider so the raw model name (e.g. 'gpt-4o') is displayed directly.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds a new 'openai' API provider that translates Anthropic SDK calls to
OpenAI chat completions format, enabling Claude Code's full tool system
(bash, file read/write/edit, grep, glob, agents) with any OpenAI-compatible
model: GPT-4o, DeepSeek, Gemini, Llama, Ollama, OpenRouter, and 200+ more.
Set CLAUDE_CODE_USE_OPENAI=1, OPENAI_API_KEY, and OPENAI_MODEL to use.
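A minimal sketch of the translation direction (the real shim also handles
tools, streaming, and multi-part content; types simplified):

    type AnthropicMsg = { role: 'user' | 'assistant'; content: string };
    type OpenAIMsg = { role: 'system' | 'user' | 'assistant'; content: string };

    // Anthropic keeps the system prompt top-level; OpenAI chat completions
    // expect it as the first message.
    function toOpenAIMessages(system: string, msgs: AnthropicMsg[]): OpenAIMsg[] {
      return [
        { role: 'system', content: system },
        ...msgs.map((m) => ({ role: m.role, content: m.content })),
      ];
    }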
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Squash the current repository state back into one baseline commit while
preserving the README reframing and repository contents.
Constraint: User explicitly requested a single squashed commit with subject "asdf"
Confidence: high
Scope-risk: broad
Reversibility: clean
Directive: This commit intentionally rewrites published history; coordinate before future force-pushes
Tested: git status clean; local history rewritten to one commit; force-pushed main to origin and instructkr
Not-tested: Fresh clone verification after push