Commit Graph

360 Commits

Author SHA1 Message Date
Kevin Codex
5cd95f4bb1 Merge pull request #116 from Aarondio/fix/tolerant-json-parser
fix(shim): implement tolerant bracket balancer for truncated tool JSON
2026-04-02 17:12:44 +08:00
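The tolerant balancer this commit describes can be sketched as below. This is an illustrative reconstruction, not the repo's actual shim code: it tracks unclosed brackets and string state, then appends whatever closers a truncated tool-call JSON payload is missing.

```typescript
// Sketch of a tolerant bracket balancer for truncated tool-call JSON.
// Function name and details are assumptions, not the repo's real code.
function balanceTruncatedJson(input: string): string {
  const closers: string[] = [];
  let inString = false;
  let escaped = false;
  for (const ch of input) {
    if (escaped) { escaped = false; continue; }
    if (inString) {
      if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
      continue;
    }
    if (ch === '"') inString = true;
    else if (ch === "{") closers.push("}");
    else if (ch === "[") closers.push("]");
    else if (ch === "}" || ch === "]") closers.pop();
  }
  let out = input;
  if (escaped) out += "\\"; // complete a dangling escape sequence
  if (inString) out += '"'; // terminate an unclosed string literal
  while (closers.length) out += closers.pop()!; // append missing closers
  return out;
}
```

For example, `balanceTruncatedJson('{"a": [1, 2')` yields `{"a": [1, 2]}`, which `JSON.parse` accepts.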
Juan Camilo
6c4225f6f4 fix: skip assertMinVersion for third-party providers
The version kill-switch calls Anthropic's GrowthBook endpoint to
enforce a minimum version. This is currently safe for 3P users only
because isAnalyticsDisabled() returns true (which disables GrowthBook).
An explicit provider guard makes that safety independent of the
analytics stub, so future upstream merges cannot leave 3P users
blocked by Anthropic's version requirements.
2026-04-02 11:09:20 +02:00
Juan Camilo
7a7437b309 fix: skip Anthropic model migration for third-party providers
Add provider guard to migrateSonnet1mToSonnet45() so it only runs for
firstParty (Anthropic) users. Without this, a 3P user with
model='sonnet[1m]' would have it rewritten to an Anthropic-specific
alias that is invalid for OpenAI/Gemini/Ollama providers.
2026-04-02 11:09:18 +02:00
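The provider-guard pattern shared by the two commits above can be sketched as follows. The `"firstParty"` check and the `"sonnet-45-alias"` target are illustrative placeholders; the commit does not name the actual Anthropic alias.

```typescript
// Sketch of the provider guard: third-party providers must keep their
// model string untouched. The alias string is a placeholder, not the
// repo's real value.
function migrateSonnet1mToSonnet45(model: string, provider: string): string {
  // 3P providers (OpenAI/Gemini/Ollama) pass through unchanged.
  if (provider !== "firstParty") return model;
  return model === "sonnet[1m]" ? "sonnet-45-alias" : model;
}
```

The same early-return shape guards the assertMinVersion kill-switch: check the provider first, and skip the Anthropic-specific behavior for everyone else.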
Kevin Codex
c94f9e18c3 Merge pull request #124 from salmanrajz/fix/recursive-schema-normalization
fix: make normalizeSchemaForOpenAI recursive for nested objects
2026-04-02 17:03:37 +08:00
salmanrajz
14de9cf0fb refactor: address code review feedback
- Make getProviderLabel() switch exhaustive with explicit openai/gemini
  arms instead of falling through to env-var checks in default
- Add clarifying comment on additionalProperties override in schema
  normalization
2026-04-02 12:36:05 +04:00
Raj Rasane
7f969200fb Add exit reason types and improve graceful shutdown handling 2026-04-02 14:00:32 +05:30
salmanrajz
e494015e9a fix: wrap streaming reader in try/finally to release lock and prevent resource leaks
Partially addresses #112. The streaming reader in openaiStreamToAnthropic
had no error handling: if an error occurred during streaming, the reader
lock was never released. Wrapped the while loop in try/finally to ensure
reader.releaseLock() is always called.
2026-04-02 12:12:24 +04:00
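The try/finally pattern this commit describes looks roughly like this. It is a minimal sketch; the real openaiStreamToAnthropic does per-chunk SSE translation rather than a bare callback.

```typescript
// Minimal sketch of the try/finally pattern: release the reader lock on
// every exit path, including errors thrown by read() or the callback.
async function drainStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (chunk: Uint8Array) => void,
): Promise<void> {
  const reader = stream.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      onChunk(value);
    }
  } finally {
    // Without this, an error mid-stream would leave the stream locked
    // forever, so it could never be cancelled or read again.
    reader.releaseLock();
  }
}
```

After the fix, `stream.locked` is false whether the loop completes or throws.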
salmanrajz
5b20fe783d fix: make CostThresholdDialog provider-aware instead of hardcoding Anthropic
Partially addresses #39. The cost threshold dialog hardcoded
'Anthropic API' in the title, which is misleading for users on
OpenAI, Gemini, Ollama, or other providers. Now detects the active
provider via getAPIProvider() and shows the correct label.
2026-04-02 12:00:07 +04:00
salmanrajz
6aec8416cc fix: make normalizeSchemaForOpenAI recursive for nested objects
Fixes #111. normalizeSchemaForOpenAI only processed the top-level
object schema, leaving nested objects untouched. OpenAI strict mode
rejects schemas where nested objects have properties not listed in
their required array, causing 400 errors on tools with nested params.

Now recurses into properties, items, and anyOf/oneOf/allOf combinators
(matching the pattern used by enforceStrictSchema in codexShim.ts).
Also adds additionalProperties: false to nested objects in strict mode.

Build verified passing.
2026-04-02 11:51:04 +04:00
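The recursion this commit describes can be sketched as below. Names and the exact `Schema` shape are assumptions; the real normalizeSchemaForOpenAI lives in the repo and mirrors enforceStrictSchema in codexShim.ts.

```typescript
// Sketch of recursive strict-mode normalization: every nested object gets
// all its property keys promoted into required[] plus
// additionalProperties: false, and the walk descends through items and
// the anyOf/oneOf/allOf combinators.
type Schema = { [key: string]: any };

function normalizeForStrictMode(schema: Schema): Schema {
  const out: Schema = { ...schema };
  if (out.type === "object" && out.properties) {
    out.properties = Object.fromEntries(
      Object.entries(out.properties).map(([k, v]) => [
        k,
        normalizeForStrictMode(v as Schema),
      ]),
    );
    // OpenAI strict mode requires every property to be listed in
    // required[] and forbids undeclared properties.
    out.required = Object.keys(out.properties);
    out.additionalProperties = false;
  }
  if (out.items) out.items = normalizeForStrictMode(out.items);
  for (const comb of ["anyOf", "oneOf", "allOf"]) {
    if (Array.isArray(out[comb])) {
      out[comb] = out[comb].map((s: Schema) => normalizeForStrictMode(s));
    }
  }
  return out;
}
```

The top-level-only version left nested objects without `required`/`additionalProperties`, which is exactly what triggered the 400 errors on tools with nested params.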
Vasanthdev2004
08f0b6030e feat: add guided /provider setup 2026-04-02 13:13:50 +05:30
Misha Skvortsov
577e654ae7 feat: add support for Atomic Chat provider
- Introduced a new provider profile for Atomic Chat, allowing it to be used alongside existing providers.
- Updated `package.json` to include a new development script for launching Atomic Chat.
- Modified `smart_router.py` to recognize Atomic Chat as a local provider that does not require an API key.
- Enhanced provider discovery and launch scripts to handle Atomic Chat, including model listing and connection checks.
- Added tests to ensure proper environment setup and behavior for Atomic Chat profiles.

This expands the application to support local LLMs via Atomic Chat.
2026-04-02 10:37:54 +03:00
Aarondio
d156aed32d fix(shim): implement tolerant bracket balancer for truncated tool JSON 2026-04-02 08:14:52 +01:00
Rithul Kamesh
25c5987276 feat: add support for GitHub Models provider
- Introduced environment variable CLAUDE_CODE_USE_GITHUB to enable GitHub Models.
- Added checks for GITHUB_TOKEN or GH_TOKEN for authentication.
- Updated base URL handling to include GitHub Models default.
- Enhanced provider detection and error handling for GitHub Models.
- Updated relevant functions and components to accommodate the new provider.
2026-04-02 11:25:28 +05:30
Kevin Codex
1059915c84 Merge pull request #105 from rajrasane/fix/third-party-provider-compatibility
fix: Improve session title handling and Docker compatibility
2026-04-02 13:50:18 +08:00
Kevin Codex
fcb1b82d9b Merge pull request #104 from slx618/fix/azure-max-completion-tokens
fix Azure OpenAI max token parameter
2026-04-02 13:40:23 +08:00
Kevin Codex
e54c39e3cb Merge pull request #100 from Vasanthdev2004/ripgrep-install-hint
fix: add clearer ripgrep install guidance
2026-04-02 13:39:52 +08:00
Kevin Codex
a6ba34a3de Merge pull request #99 from gigachad80/main
Update resume command in gracefulShutdown message
2026-04-02 13:36:45 +08:00
Raj Rasane
f340b199c8 refactor: simplify session title fallback to static 'Open Claude' 2026-04-02 11:04:35 +05:30
Raj Rasane
63546dcd9c chore: rename default terminal title to Open Claude 2026-04-02 11:04:35 +05:30
Raj Rasane
302d9d4e44 fix: enable session title generation for non-firstParty providers 2026-04-02 11:04:35 +05:30
Raj Rasane
310f1d344a fix: provide local session title fallback for 3P providers
When using non-Anthropic providers (Ollama, Gemini, Codex), the
underlying call to queryHaiku for session title generation fails.
Previously, this caused the catch block to return null, leaving the
terminal tab permanently stuck on 'Claude Code'.

Now, when the API call fails, we gracefully derive a title locally from
the user's first message (first 7 words, sentence-cased), ensuring
users still see a meaningful session title in their terminal tab.
2026-04-02 11:04:35 +05:30
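The local fallback this commit describes (first 7 words, sentence-cased) can be sketched as below. The helper name is an assumption; the `'Open Claude'` static fallback matches the sibling commits above.

```typescript
// Sketch of the local title fallback: take the first 7 words of the
// user's first message and uppercase the leading letter. The rest of the
// text is left as-is so identifiers like "CI" are not mangled.
function deriveLocalTitle(firstMessage: string): string {
  const words = firstMessage.trim().split(/\s+/).slice(0, 7).join(" ");
  if (!words) return "Open Claude"; // static fallback for empty input
  return words.charAt(0).toUpperCase() + words.slice(1);
}
```

This runs only in the catch path, after queryHaiku fails for a non-Anthropic provider.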
Vasanthdev2004
2bade922ef fix: add clearer ripgrep install guidance 2026-04-02 10:19:36 +05:30
Dark Yagami
4918caa22b Update resume command in gracefulShutdown message 2026-04-02 10:18:27 +05:30
Vasanthdev2004
ffbc1f8f6e fix: support CSI-u printable input on Windows 2026-04-02 10:05:16 +05:30
Alex
f3ebd7d256 fix: convert max_tokens to max_completion_tokens for Azure OpenAI
Azure OpenAI API rejects the max_tokens parameter and requires
max_completion_tokens instead. This change ensures the conversion
is robust by validating that max_tokens is a positive number before
using it, preventing edge cases like null or "null" string values
from being incorrectly sent.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-02 12:01:01 +08:00
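The validated conversion this commit describes can be sketched as below. The function itself is illustrative; the parameter names mirror the OpenAI request shape.

```typescript
// Sketch of the max_tokens -> max_completion_tokens conversion, with the
// validation described above: only a finite positive number is forwarded,
// so null / "null" / 0 are dropped instead of being sent to Azure.
function toAzureTokenParams(
  params: Record<string, unknown>,
): Record<string, unknown> {
  const { max_tokens, ...rest } = params;
  if (
    typeof max_tokens === "number" &&
    Number.isFinite(max_tokens) &&
    max_tokens > 0
  ) {
    return { ...rest, max_completion_tokens: max_tokens };
  }
  // Invalid values are omitted entirely rather than forwarded.
  return rest;
}
```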
gnanam1990
47b19c9a00 fix: style version number in startup screen accent orange
Apply the existing ACCENT colour (rgb 240 148 100) to the version
string so it stands out against the dim label, matching the warm
orange used throughout the startup screen for stars and status text.

Requested in #95.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 09:11:12 +05:30
gnanam1990
8c6a10517f fix: show correct version in startup screen
StartupScreen.ts was reading the version via globalThis['MACRO_DISPLAY_VERSION']
which is never populated — the Bun bundler inlines it as MACRO.DISPLAY_VERSION
(dot notation), not as a globalThis key.

Result: startup screen always showed the hardcoded fallback 'v0.1.4' regardless
of the installed version.

Fix: use MACRO.DISPLAY_VERSION ?? MACRO.VERSION directly, consistent with
cli.tsx, main.tsx, and logoV2Utils.ts.

Fixes #95

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 09:05:00 +05:30
Kevin Codex
085ba9206e Merge pull request #80 from gnanam1990/fix/azure-cognitive-services-endpoint
fix: support Azure Cognitive Services and Azure OpenAI endpoints
2026-04-02 11:08:19 +08:00
Kevin Codex
0f34a8eadb Merge pull request #93 from gnanam1990/fix/gemini-schema-required-validation
fix: make schema normalization provider-aware for Gemini compatibility
2026-04-02 11:08:02 +08:00
Kevin Codex
10a5444241 Merge pull request #94 from kevincodex1/feature/removed-telemetry-noise
removed telemetry noise, unnecessary packets sent to anthropic
2026-04-02 11:04:29 +08:00
Kevin Codex
42e614dfb3 removed telemetry noise, unnecessary packets sent to anthropic 2026-04-02 11:01:14 +08:00
gnanam1990
ab911d1ed1 fix: make schema normalization provider-aware for Gemini compatibility
Two bugs in convertTools() caused Gemini's OpenAI-compatible endpoint
to reject tool schemas with 400 "schema requires unspecified property":

1. The Agent tool patch unconditionally pushed 'message' into required[]
   even though 'message' is not a property of the Agent schema. Gemini
   strictly validates that every key in required[] exists in properties.

2. normalizeSchemaForOpenAI() added all property keys to required[] for
   OpenAI strict mode, but this conflicts with Gemini's stricter schema
   validation which rejects required keys absent from properties.

Fix:
- Agent tool patch now only adds a key to required[] if it exists in
  schema.properties (fixes the 'message' 400 error on Gemini)
- normalizeSchemaForOpenAI() accepts a strict flag: true for OpenAI
  (promotes all property keys into required[]), false for Gemini
  (filters required[] to only keys present in properties)
- convertTools() detects CLAUDE_CODE_USE_GEMINI and passes strict=false

Fixes #82

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 08:28:07 +05:30
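The strict flag this commit describes can be sketched as below. This isolates just the required[] handling; the real normalizeSchemaForOpenAI does more.

```typescript
// Sketch of the provider-aware required[] handling: strict=true (OpenAI)
// promotes every property key into required[]; strict=false (Gemini)
// filters required[] down to keys that actually exist in properties, so
// Gemini never sees a required key it considers "unspecified".
function normalizeRequired(
  schema: { properties?: Record<string, unknown>; required?: string[] },
  strict: boolean,
): string[] {
  const keys = Object.keys(schema.properties ?? {});
  if (strict) return keys;
  return (schema.required ?? []).filter((k) => keys.includes(k));
}
```

With strict=false, a schema whose required[] contains a stray 'message' key (as in the Agent tool patch bug) simply has it filtered out.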
Kevin Codex
e524be7e22 Merge pull request #50 from auriti/fix/status-panel-openai-provider
fix: show OpenAI/Gemini provider info in /status panel
2026-04-02 10:50:16 +08:00
gnanam1990
ac2ea6aeb2 test: align codexShim test with strict schema normalization
Update the stale test expectation to match current behavior where
normalizeSchemaForOpenAI() promotes all properties into required[]
and marks the schema as strict: true.

Same fix as PR #72 — included here so PR #80 passes CI independently.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 08:16:51 +05:30
nusquama
537ac24a9c fix: use max_completion_tokens instead of max_tokens for OpenAI-compatible APIs
Azure OpenAI and newer OpenAI models (o1, o3, o4...) reject `max_tokens`
with a 400 error and require `max_completion_tokens` instead.

Maps `params.max_tokens` → `max_completion_tokens` in the request body,
which is the current standard across OpenAI-compatible providers.
2026-04-02 08:36:01 +08:00
Kevin Codex
01246f98bd Merge pull request #51 from auriti/fix/proxy-wss-default-port
fix: use correct default port for wss:// in NO_PROXY matching
2026-04-02 08:29:39 +08:00
Kevin Codex
1ce19b9a39 Merge pull request #59 from Vasanthdev2004/gpt4o-max-tokens-test
test: cover OpenAI max token caps for gpt-4o and GPT-5.4
2026-04-02 08:24:25 +08:00
Kevin Codex
2a8f6fc242 Merge pull request #75 from tunnckoCore/feat/disable-coauthor-and-openclaude-pr-branding
feat: support disabling commit co-author attribution
2026-04-02 07:51:02 +08:00
Vasanthdev2004
fd6f4e6632 test: align Codex strict schema expectation 2026-04-02 01:37:30 +05:30
Vasanthdev2004
c22045e3e4 fix: skip Anthropic setup flow for third-party providers 2026-04-02 01:32:38 +05:30
gnanam1990
4c9b9f0d5d fix: support Azure Cognitive Services and Azure OpenAI endpoints
Azure endpoints require two changes vs standard OpenAI:
1. Auth header: `api-key: {key}` instead of `Authorization: Bearer {key}`
2. URL path: `/openai/deployments/{model}/chat/completions?api-version={version}`
   instead of `/chat/completions`

Detection is automatic when OPENAI_BASE_URL contains
`cognitiveservices.azure.com` or `openai.azure.com`.

The api-version defaults to `2024-12-01-preview` and can be overridden
via the AZURE_OPENAI_API_VERSION env var.

Handles all common Azure base URL formats:
- https://{resource}.cognitiveservices.azure.com/
- https://{resource}.cognitiveservices.azure.com/openai/v1
- https://{resource}.openai.azure.com/openai/v1
- https://{resource}.cognitiveservices.azure.com/openai/deployments/{model}/v1

Fixes #79

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 01:32:30 +05:30
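The detection and request shaping this commit describes can be sketched as below. The hostname substrings, header name, URL path, and default api-version come from the commit message; the function itself and the origin-based normalization are assumptions (the real code may parse the listed base-URL formats differently).

```typescript
// Sketch of Azure endpoint handling: detect Azure hosts by substring,
// switch to the api-key header, and rebuild the deployment-scoped path.
// In the real code the version default can be overridden via the
// AZURE_OPENAI_API_VERSION env var; here it is a parameter.
function buildAzureRequest(
  baseUrl: string,
  model: string,
  apiKey: string,
  apiVersion: string = "2024-12-01-preview",
): { url: string; headers: Record<string, string> } {
  const isAzure =
    baseUrl.includes("cognitiveservices.azure.com") ||
    baseUrl.includes("openai.azure.com");
  if (!isAzure) {
    // Standard OpenAI-compatible endpoint: Bearer auth, plain path.
    return {
      url: `${baseUrl.replace(/\/+$/, "")}/chat/completions`,
      headers: { Authorization: `Bearer ${apiKey}` },
    };
  }
  // Collapse any of the supported base-URL formats down to the origin,
  // then rebuild the deployment path Azure expects.
  const origin = new URL(baseUrl).origin;
  return {
    url: `${origin}/openai/deployments/${model}/chat/completions?api-version=${apiVersion}`,
    headers: { "api-key": apiKey },
  };
}
```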
tunnckoCore
8466fc138e test: align Codex strict schema expectation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 22:32:32 +03:00
tunnckoCore
217a864ba0 feat: support disabling commit co-author attribution
Add an env var to suppress the default Co-Authored-By trailer and rebrand PR attribution text to OpenClaude.
2026-04-01 21:43:29 +03:00
Kevin Codex
b204ae722f Merge pull request #71 from Vasanthdev2004/pr-checks
ci: add automated PR smoke and provider checks
2026-04-02 02:33:25 +08:00
Kevin Codex
80df0c57bd Merge pull request #48 from auriti/fix/empty-string-content-delta
fix: handle empty string delta.content in OpenAI streaming
2026-04-02 02:31:11 +08:00
Vasanthdev2004
9951da5397 ci: add PR smoke and provider test checks 2026-04-02 00:00:12 +05:30
Kevin Codex
18e24a75f1 Merge pull request #70 from gnanam1990/feat/gradient-startup-screen
feat: gradient startup screen with provider info
2026-04-02 02:30:00 +08:00
gnanam1990
9d464f3488 feat: add gradient startup screen and remove old OPEN box logo
Adds a new startup screen with filled-block text logo and sunset
gradient, printed to stdout before the Ink UI loads. Removes the
old OPEN box logo from the chat UI since the new screen replaces it.

Changes:
- src/components/StartupScreen.ts (NEW) — gradient OPEN CLAUDE logo
  with provider info box (Provider, Model, Endpoint). Auto-detects
  active provider from env vars (OpenAI, Gemini, DeepSeek, Ollama,
  Groq, Mistral, Azure, LM Studio, Anthropic). Skipped in CI and
  non-TTY environments.
- src/entrypoints/cli.tsx — calls printStartupScreen() at startup
  before Ink renders
- src/components/Messages.tsx — removes <LogoV2 /> from LogoHeader
  so the old OPEN box logo no longer appears in the chat UI

Addresses #55.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 23:57:45 +05:30
Vasanthdev2004
3491dc3cba fix: preserve Gemini thought signatures for tools 2026-04-01 23:54:17 +05:30
Kevin Codex
b78db9568a Merge pull request #63 from step325/fix/codex-multi-agent-compatibility
Fix/codex multi agent compatibility
2026-04-02 02:20:39 +08:00