Commit Graph

378 Commits

Author SHA1 Message Date
Kevin Codex
1ee2ce931a Merge pull request #117 from auriti/fix/context-isenvtruthy-mismatch
fix: use isEnvTruthy() for provider detection in context window lookup
2026-04-02 21:01:15 +08:00
Kevin Codex
bc2a4bcdd5 Merge pull request #121 from Vasanthdev2004/provider-setup-wizard
feat: add guided /provider setup for saved profiles
2026-04-02 21:00:41 +08:00
Vasanthdev2004
118b0793e0 fix: move slash suggestion highlight with selection 2026-04-02 18:25:52 +05:30
Vasanthdev2004
5ccda35941 fix: highlight selected slash suggestion 2026-04-02 18:18:48 +05:30
Juan Camilo
f385740bd6 fix: use isEnvTruthy() for provider detection in context window lookup
Replace raw === '1' || === 'true' comparisons with isEnvTruthy() in
context.ts for consistency with getAPIProvider() in providers.ts.
This also covers the newly added CLAUDE_CODE_USE_GITHUB provider.

Add native Gemini model entries (without google/ prefix) to both
context window and max output token tables. Corrects gemini-2.5-pro
and gemini-2.5-flash max output tokens to 65,536 (was 8,192/32,768).
2026-04-02 14:43:03 +02:00
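The helper this commit standardizes on can be sketched as follows. `isEnvTruthy` is named in the commit; its exact semantics in providers.ts are an assumption here (case-insensitive `"1"`/`"true"`, whitespace trimmed):

```typescript
// Sketch of an isEnvTruthy()-style helper (actual semantics in
// providers.ts may differ): accepts "1" and "true" case-insensitively,
// ignoring surrounding whitespace.
function isEnvTruthy(value: string | undefined): boolean {
  if (value === undefined) return false;
  const v = value.trim().toLowerCase();
  return v === "1" || v === "true";
}
```

Raw comparisons like `env === '1' || env === 'true'` miss variants such as `"TRUE"` or `" 1 "`; centralizing the rule keeps context.ts and getAPIProvider() in agreement, including for the new CLAUDE_CODE_USE_GITHUB flag.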
Juan Camilo
f4818dc213 fix: shim reliability and protocol compliance overhaul
Addresses the most critical remaining issues in the provider shim layer,
building on top of #124 (recursive schema normalization + try/finally).

openaiShim.ts:
- Throw APIError via SDK factory instead of plain Error — enables retry
  on 429/503 (was completely broken: zero retries for all 3P providers)
- Guard stop_reason !== null before emitting usage-only message_delta
  (Azure/Groq send usage before finish_reason)
- Fix assistant content: join text parts instead of invalid as-string cast
  (Mistral rejects array content on assistant role)
- Expose real HTTP Response in withResponse() for header inspection
- Skip stream_options for local providers (Ollama < 0.5 compatibility)

codexShim.ts:
- Throw APIError at all 4 throw sites (HTTP + 3 streaming errors)
- Add tool_choice 'none' mapping (was silently ignored)
- Forward is_error flag with Error: prefix (matching openaiShim)
2026-04-02 14:41:40 +02:00
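One of the openaiShim.ts items above — joining assistant text parts instead of casting the array as a string — can be sketched like this. The types are simplified and the join separator is an assumption, not the shim's actual choice:

```typescript
// Sketch of the assistant-content fix: providers such as Mistral reject
// array content on the assistant role, so text parts are flattened into
// a single string rather than cast as-is. Simplified content-part type.
type ContentPart = { type: string; text?: string };

function toAssistantContent(content: string | ContentPart[]): string {
  if (typeof content === "string") return content;
  return content
    .filter((p) => p.type === "text" && typeof p.text === "string")
    .map((p) => p.text)
    .join("\n"); // separator is an assumption
}
```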
Vasanthdev2004
71a3f36e95 Merge origin/main into provider-setup-wizard 2026-04-02 18:03:44 +05:30
Kevin Codex
3d72d9e5e2 Merge pull request #137 from gnanam1990/feat/mcp-doctor
feat(mcp): add doctor diagnostics command
2026-04-02 20:25:41 +08:00
Kevin Codex
4260f5bcd7 Merge pull request #123 from auriti/fix/assert-min-version-provider-guard
fix: skip assertMinVersion for third-party providers
2026-04-02 20:24:37 +08:00
Kevin Codex
49b9c043f5 Merge pull request #120 from auriti/fix/migration-provider-guard
fix: skip Anthropic model migration for third-party providers
2026-04-02 20:22:50 +08:00
Kevin Codex
903a30916a Merge pull request #107 from rithulkamesh/main
feat: GitHub Models provider + interactive onboard (keychain-backed)
2026-04-02 20:14:51 +08:00
Kevin Codex
6b7c0e5339 Merge pull request #74 from Vect0rM/feature/atomic-chat-integration
feat: add support for Atomic Chat provider
2026-04-02 20:13:37 +08:00
skfallin
0c88dea247 Strip incompatible JSON Schema keywords from tool schemas 2026-04-02 13:50:47 +02:00
erdemozyol
cec3629017 fix: support codex web tools and non-git agents 2026-04-02 14:08:22 +03:00
Rithul Kamesh
0a42839475 fix(github): address PR feedback for onboarding flow
- Set competing provider flags to undefined in updateSettingsForSource to ensure clean GitHub boot
- Fix resolveProviderRequest to default to github:copilot when OPENAI_MODEL is unset
- Hydrate secure tokens and managed settings in system-check.ts to prevent false negatives
- Add models:read scope to GitHub device flow
2026-04-02 15:38:54 +05:30
gnanam1990
fb27164ddf fix(mcp): await failed transport cleanup on Windows
Wait for failed MCP transport cleanup before command exit so targeted live checks do not crash on Windows.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-04-02 14:55:05 +05:30
gnanam1990
ad1f328672 feat(mcp): add doctor command
Add the MCP doctor subcommand with text and JSON output, config-only mode, and scope filtering so users can diagnose MCP issues from the CLI.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-04-02 14:55:05 +05:30
gnanam1990
001f89f62c feat: add MCP doctor diagnostics service
Add the diagnostics core and report model for MCP health, scope, and config analysis. This creates the structured report used by both text and JSON doctor output.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-04-02 14:55:04 +05:30
Kevin Codex
5cd95f4bb1 Merge pull request #116 from Aarondio/fix/tolerant-json-parser
fix(shim): implement tolerant bracket balancer for truncated tool JSON
2026-04-02 17:12:44 +08:00
Juan Camilo
6c4225f6f4 fix: skip assertMinVersion for third-party providers
The version kill-switch calls Anthropic's GrowthBook endpoint to
enforce a minimum version. This is currently safe for 3P users only
because isAnalyticsDisabled() returns true (disabling GrowthBook).
Adding an explicit provider guard makes this safety independent of the
analytics stub, preventing 3P users from being blocked by Anthropic's
version requirements in case of future upstream merges.
2026-04-02 11:09:20 +02:00
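The explicit guard described above amounts to a one-line provider check. The `firstParty` label and provider union below mirror names used elsewhere in this log, but the exact signatures are assumptions:

```typescript
// Sketch of the provider guard: only first-party (Anthropic) sessions
// consult the GrowthBook kill-switch; third-party providers skip the
// minimum-version check so upstream requirements cannot block them.
type Provider = "firstParty" | "openai" | "gemini" | "ollama" | "github";

function shouldEnforceMinVersion(provider: Provider): boolean {
  return provider === "firstParty";
}
```

The same shape applies to the sibling fix in #120, which guards migrateSonnet1mToSonnet45() behind the identical check.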
Juan Camilo
7a7437b309 fix: skip Anthropic model migration for third-party providers
Add provider guard to migrateSonnet1mToSonnet45() so it only runs for
firstParty (Anthropic) users. Without this, a 3P user with
model='sonnet[1m]' would have it rewritten to an Anthropic-specific
alias that is invalid for OpenAI/Gemini/Ollama providers.
2026-04-02 11:09:18 +02:00
Kevin Codex
c94f9e18c3 Merge pull request #124 from salmanrajz/fix/recursive-schema-normalization
fix: make normalizeSchemaForOpenAI recursive for nested objects
2026-04-02 17:03:37 +08:00
salmanrajz
14de9cf0fb refactor: address code review feedback
- Make getProviderLabel() switch exhaustive with explicit openai/gemini
  arms instead of falling through to env-var checks in default
- Add clarifying comment on additionalProperties override in schema
  normalization
2026-04-02 12:36:05 +04:00
Raj Rasane
7f969200fb Add exit reason types and improve graceful shutdown handling 2026-04-02 14:00:32 +05:30
salmanrajz
e494015e9a fix: wrap streaming reader in try/finally to release lock and prevent resource leaks
Partially addresses #112. The streaming reader in openaiStreamToAnthropic
had no error handling - if an error occurred during streaming, the reader
lock was never released. Wrapped the while loop in try/finally to ensure
reader.releaseLock() is always called.
2026-04-02 12:12:24 +04:00
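The try/finally pattern described above can be sketched with a minimal stream drain; the real openaiStreamToAnthropic does SSE parsing inside the loop, which is elided here:

```typescript
// Minimal sketch of the fix: release the reader's lock even when the
// read loop throws mid-stream. Without the finally, an error during
// read() leaves the stream locked and the underlying resources leak.
async function drainStream(stream: ReadableStream<Uint8Array>): Promise<number> {
  const reader = stream.getReader();
  let bytes = 0;
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      bytes += value.byteLength;
    }
  } finally {
    reader.releaseLock();
  }
  return bytes;
}
```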
salmanrajz
5b20fe783d fix: make CostThresholdDialog provider-aware instead of hardcoding Anthropic
Partially addresses #39. The cost threshold dialog hardcoded
'Anthropic API' in the title, which is misleading for users on
OpenAI, Gemini, Ollama, or other providers. Now detects the active
provider via getAPIProvider() and shows the correct label.
2026-04-02 12:00:07 +04:00
salmanrajz
6aec8416cc fix: make normalizeSchemaForOpenAI recursive for nested objects
Fixes #111. normalizeSchemaForOpenAI only processed the top-level
object schema, leaving nested objects untouched. OpenAI strict mode
rejects schemas where nested objects have properties not listed in
their required array, causing 400 errors on tools with nested params.

Now recurses into properties, items, and anyOf/oneOf/allOf combinators
(matching the pattern used by enforceStrictSchema in codexShim.ts).
Also adds additionalProperties: false to nested objects in strict mode.

Build verified passing.
2026-04-02 11:51:04 +04:00
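The recursion described above can be sketched as follows. Field handling is simplified relative to the real normalizeSchemaForOpenAI, but the walk order (properties, items, combinators) matches the commit message:

```typescript
// Sketch of recursive strict-mode normalization: recurse into properties,
// items, and anyOf/oneOf/allOf; promote every property key into required[]
// and set additionalProperties: false on each object level, so nested
// objects no longer trigger 400 errors from OpenAI strict mode.
type Schema = {
  type?: string;
  properties?: Record<string, Schema>;
  items?: Schema;
  required?: string[];
  additionalProperties?: boolean;
  anyOf?: Schema[];
  oneOf?: Schema[];
  allOf?: Schema[];
};

function normalizeForStrictMode(schema: Schema): Schema {
  const out: Schema = { ...schema };
  if (out.type === "object" && out.properties) {
    out.properties = Object.fromEntries(
      Object.entries(out.properties).map(([k, v]) => [k, normalizeForStrictMode(v)]),
    );
    out.required = Object.keys(out.properties); // strict mode: all required
    out.additionalProperties = false;
  }
  if (out.items) out.items = normalizeForStrictMode(out.items);
  for (const key of ["anyOf", "oneOf", "allOf"] as const) {
    if (out[key]) out[key] = out[key]!.map(normalizeForStrictMode);
  }
  return out;
}
```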
Vasanthdev2004
08f0b6030e feat: add guided /provider setup 2026-04-02 13:13:50 +05:30
Misha Skvortsov
577e654ae7 feat: add support for Atomic Chat provider
- Introduced a new provider profile for Atomic Chat, allowing it to be used alongside existing providers.
- Updated `package.json` to include a new development script for launching Atomic Chat.
- Modified `smart_router.py` to recognize Atomic Chat as a local provider that does not require an API key.
- Enhanced provider discovery and launch scripts to handle Atomic Chat, including model listing and connection checks.
- Added tests to ensure proper environment setup and behavior for Atomic Chat profiles.

This update expands the functionality of the application to support local LLMs via Atomic Chat, improving versatility for users.
2026-04-02 10:37:54 +03:00
Aarondio
d156aed32d fix(shim): implement tolerant bracket balancer for truncated tool JSON 2026-04-02 08:14:52 +01:00
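The idea behind a tolerant bracket balancer can be sketched as follows. This illustrates the technique named in the commit title, not the shim's actual implementation: track open braces and brackets (ignoring ones inside string literals), then append the missing closers so the truncated prefix parses:

```typescript
// Sketch of a tolerant bracket balancer for truncated tool-call JSON:
// walk the input tracking string/escape state and a stack of expected
// closers, then append whatever is still open.
function balanceTruncatedJson(input: string): string {
  const stack: string[] = [];
  let inString = false;
  let escaped = false;
  for (const ch of input) {
    if (escaped) { escaped = false; continue; }
    if (ch === "\\") { escaped = inString; continue; }
    if (ch === '"') { inString = !inString; continue; }
    if (inString) continue;
    if (ch === "{") stack.push("}");
    else if (ch === "[") stack.push("]");
    else if (ch === "}" || ch === "]") stack.pop();
  }
  let out = input;
  if (inString) out += '"'; // close a dangling string literal
  while (stack.length) out += stack.pop();
  return out;
}
```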
Rithul Kamesh
25c5987276 feat: add support for GitHub Models provider
- Introduced environment variable CLAUDE_CODE_USE_GITHUB to enable GitHub Models.
- Added checks for GITHUB_TOKEN or GH_TOKEN for authentication.
- Updated base URL handling to include GitHub Models default.
- Enhanced provider detection and error handling for GitHub Models.
- Updated relevant functions and components to accommodate the new provider.
2026-04-02 11:25:28 +05:30
Kevin Codex
1059915c84 Merge pull request #105 from rajrasane/fix/third-party-provider-compatibility
fix: Improve session title handling and Docker compatibility
2026-04-02 13:50:18 +08:00
Kevin Codex
fcb1b82d9b Merge pull request #104 from slx618/fix/azure-max-completion-tokens
fix Azure OpenAI max token parameter
2026-04-02 13:40:23 +08:00
Kevin Codex
e54c39e3cb Merge pull request #100 from Vasanthdev2004/ripgrep-install-hint
fix: add clearer ripgrep install guidance
2026-04-02 13:39:52 +08:00
Kevin Codex
a6ba34a3de Merge pull request #99 from gigachad80/main
Update resume command in gracefulShutdown message
2026-04-02 13:36:45 +08:00
Raj Rasane
f340b199c8 refactor: simplify session title fallback to static 'Open Claude' 2026-04-02 11:04:35 +05:30
Raj Rasane
63546dcd9c chore: rename default terminal title to Open Claude 2026-04-02 11:04:35 +05:30
Raj Rasane
302d9d4e44 fix: enable session title generation for non-firstParty providers 2026-04-02 11:04:35 +05:30
Raj Rasane
310f1d344a fix: provide local session title fallback for 3P providers
When using non-Anthropic providers (Ollama, Gemini, Codex), the
underlying call to queryHaiku for session title generation fails.
Previously, this caused the catch block to return null, leaving the
terminal tab permanently stuck on 'Claude Code'.

Now, when the API call fails, we gracefully derive a title locally from
the user's first message (first 7 words, sentence-cased), ensuring
users still see a meaningful session title in their terminal tab.
2026-04-02 11:04:35 +05:30
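The local fallback described above (first 7 words, sentence-cased) can be sketched like this. The word count and casing rule come from the commit message; the tokenization and the decision to leave later words untouched are assumptions:

```typescript
// Sketch of the local session-title fallback: take the first 7 words of
// the user's first message and capitalize the leading letter. Exact
// sentence-casing rules in the real implementation may differ.
function deriveLocalTitle(firstMessage: string): string {
  const title = firstMessage.trim().split(/\s+/).slice(0, 7).join(" ");
  return title.charAt(0).toUpperCase() + title.slice(1);
}
```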
Vasanthdev2004
2bade922ef fix: add clearer ripgrep install guidance 2026-04-02 10:19:36 +05:30
Dark Yagami
4918caa22b Update resume command in gracefulShutdown message 2026-04-02 10:18:27 +05:30
Vasanthdev2004
ffbc1f8f6e fix: support CSI-u printable input on Windows 2026-04-02 10:05:16 +05:30
Alex
f3ebd7d256 fix: convert max_tokens to max_completion_tokens for Azure OpenAI
Azure OpenAI API rejects the max_tokens parameter and requires
max_completion_tokens instead. This change ensures the conversion
is robust by validating that max_tokens is a positive number before
using it, preventing edge cases like null or "null" string values
from being incorrectly sent.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-02 12:01:01 +08:00
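The conversion described above can be sketched as follows; the request shape is simplified and the function name is hypothetical:

```typescript
// Sketch of the Azure parameter conversion: forward max_tokens as
// max_completion_tokens only when it is a positive finite number, so
// edge cases like null or the string "null" are dropped rather than sent.
function toAzureParams(body: Record<string, unknown>): Record<string, unknown> {
  const { max_tokens, ...rest } = body;
  const n = typeof max_tokens === "number" ? max_tokens : Number(max_tokens);
  if (Number.isFinite(n) && n > 0) {
    return { ...rest, max_completion_tokens: n };
  }
  return rest; // invalid or missing: send neither parameter
}
```

Note that `Number(null)` is `0` and `Number("null")` is `NaN`, so both fall through the positive-number check.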
gnanam1990
47b19c9a00 fix: style version number in startup screen accent orange
Apply the existing ACCENT colour (rgb 240 148 100) to the version
string so it stands out against the dim label, matching the warm
orange used throughout the startup screen for stars and status text.

Requested in #95.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 09:11:12 +05:30
gnanam1990
8c6a10517f fix: show correct version in startup screen
StartupScreen.ts was reading the version via globalThis['MACRO_DISPLAY_VERSION']
which is never populated — the Bun bundler inlines it as MACRO.DISPLAY_VERSION
(dot notation), not as a globalThis key.

Result: startup screen always showed the hardcoded fallback 'v0.1.4' regardless
of the installed version.

Fix: use MACRO.DISPLAY_VERSION ?? MACRO.VERSION directly, consistent with
cli.tsx, main.tsx, and logoV2Utils.ts.

Fixes #95

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 09:05:00 +05:30
Kevin Codex
085ba9206e Merge pull request #80 from gnanam1990/fix/azure-cognitive-services-endpoint
fix: support Azure Cognitive Services and Azure OpenAI endpoints
2026-04-02 11:08:19 +08:00
Kevin Codex
0f34a8eadb Merge pull request #93 from gnanam1990/fix/gemini-schema-required-validation
fix: make schema normalization provider-aware for Gemini compatibility
2026-04-02 11:08:02 +08:00
Kevin Codex
10a5444241 Merge pull request #94 from kevincodex1/feature/removed-telemetry-noise
removed telemetry noise, unnecessary packets sent to anthropic
2026-04-02 11:04:29 +08:00
Kevin Codex
42e614dfb3 removed telemetry noise, unnecessary packets sent to anthropic 2026-04-02 11:01:14 +08:00
gnanam1990
ab911d1ed1 fix: make schema normalization provider-aware for Gemini compatibility
Two bugs in convertTools() caused Gemini's OpenAI-compatible endpoint
to reject tool schemas with 400 "schema requires unspecified property":

1. The Agent tool patch unconditionally pushed 'message' into required[]
   even though 'message' is not a property of the Agent schema. Gemini
   strictly validates that every key in required[] exists in properties.

2. normalizeSchemaForOpenAI() added all property keys to required[] for
   OpenAI strict mode, but this conflicts with Gemini's stricter schema
   validation which rejects required keys absent from properties.

Fix:
- Agent tool patch now only adds a key to required[] if it exists in
  schema.properties (fixes the 'message' 400 error on Gemini)
- normalizeSchemaForOpenAI() accepts a strict flag: true for OpenAI
  (promotes all property keys into required[]), false for Gemini
  (filters required[] to only keys present in properties)
- convertTools() detects CLAUDE_CODE_USE_GEMINI and passes strict=false

Fixes #82

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 08:28:07 +05:30
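The strict-flag behavior described in this commit can be sketched in isolation, reduced to the one field the message describes (real normalizeSchemaForOpenAI handles far more):

```typescript
// Sketch of the provider-aware required[] handling: strict=true (OpenAI)
// promotes every property key into required[]; strict=false (Gemini)
// filters required[] down to keys that actually exist in properties,
// since Gemini rejects required entries absent from the schema.
type ToolSchema = { properties?: Record<string, unknown>; required?: string[] };

function normalizeRequired(schema: ToolSchema, strict: boolean): string[] {
  const keys = Object.keys(schema.properties ?? {});
  if (strict) return keys;
  return (schema.required ?? []).filter((k) => keys.includes(k));
}
```

With this rule, the Agent tool's spurious `'message'` entry is dropped in Gemini mode, which is exactly the 400-error case the commit fixes.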