Commit Graph

312 Commits

Juan Camilo
409e90c510 fix: use correct default port for wss:// in NO_PROXY matching
The proxy bypass logic assigned port 80 to any non-https protocol,
including wss:// whose default port is 443. A NO_PROXY entry like
example.com:443 would not match wss://example.com because the port
was incorrectly resolved to 80.
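
A minimal sketch of the corrected resolution, assuming a helper along
these lines (name illustrative):

    function defaultPortFor(protocol: string): number {
      switch (protocol) {
        case "https:":
        case "wss:":
          return 443; // wss:// previously fell through to the port-80 default
        default:
          return 80;  // http:, ws:, and other schemes
      }
    }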

Relates to #40

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 17:05:45 +02:00
Juan Camilo
481e608903 fix: show OpenAI/Gemini provider info in /status panel
The /status panel showed 'undefined' for the API provider label when
using OpenAI or Gemini providers, and did not display the base URL or
model name. Added provider labels and property sections for both.
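
A hedged sketch of the label lookup this implies (map contents
illustrative):

    function providerLabel(provider: string): string {
      const labels: Record<string, string> = {
        anthropic: "Anthropic",
        openai: "OpenAI",
        gemini: "Gemini",
      };
      return labels[provider] ?? provider; // never renders 'undefined'
    }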

Relates to #39

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 17:04:42 +02:00
Juan Camilo
39d9616ed7 fix: update DeepSeek context window from 64k to 128k
DeepSeek V3 documentation specifies a 128k context window for both
deepseek-chat and deepseek-reasoner. The previous 64k value caused
premature compaction and underutilization of the available context.
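
Sketch of the corrected constant, assuming a per-model config map
(shape illustrative):

    // Applies to both "deepseek-chat" and "deepseek-reasoner".
    const DEEPSEEK_CONTEXT_WINDOW = 128_000; // was 64_000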

Relates to #39

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 17:03:57 +02:00
Juan Camilo
788cfa3e9a fix: handle empty string delta.content in OpenAI streaming
Some providers send an empty string as the first delta to signal
streaming start. The falsy check `if (delta.content)` treated "" as
absent, skipping content_block_start. The next delta with actual
content was then emitted without a preceding content_block_start,
violating the Anthropic streaming protocol.

Changed to `delta.content != null` to distinguish between absent field
and empty string.
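
In context, the change looks roughly like this (surrounding names
illustrative):

    if (delta.content != null) {    // was: if (delta.content), which drops ""
      if (!contentBlockStarted) {
        emit({ type: "content_block_start", index: 0,
               content_block: { type: "text", text: "" } });
        contentBlockStarted = true; // an empty first delta still opens the block
      }
      if (delta.content.length > 0) {
        emit({ type: "content_block_delta", index: 0,
               delta: { type: "text_delta", text: delta.content } });
      }
    }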

Relates to #42

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 17:03:28 +02:00
gnanam1990
6c46974bf9 fix: normalize tool schemas so required ⊇ properties for OpenAI/Codex
OpenAI and Codex enforce strict JSON Schema validation — every key in
`properties` must also appear in `required`. Anthropic schemas often
mark fields as optional (omitted from `required`), which causes 400
errors on OpenAI/Codex endpoints.

Example: the Agent tool has `subagent_type` in `properties` but not
in `required`, producing:
  "Invalid schema for function 'Agent': Missing 'subagent_type'
   in required array"

Fix: add `normalizeSchemaForOpenAI()` in `convertTools()` that ensures
`required` is a superset of all `properties` keys before the schema is
sent to the API. Existing `required` entries are preserved; missing
ones are appended. Schemas without `properties` pass through unchanged.
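
A sketch matching this description, with the JSON Schema type
simplified for illustration:

    type JSONSchema = { properties?: Record<string, unknown>; required?: string[] };

    function normalizeSchemaForOpenAI(schema: JSONSchema): JSONSchema {
      if (!schema.properties) return schema;            // pass through unchanged
      const required = new Set(schema.required ?? []);  // preserve existing entries
      for (const key of Object.keys(schema.properties)) {
        required.add(key);                              // append missing keys
      }
      return { ...schema, required: [...required] };
    }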

Fixes #46.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 20:26:21 +05:30
Sahil
a44f45e951 first commit 2026-04-01 22:55:56 +08:00
Daniel
372ba31c17 feat: enhance tool conversion to support strict mode based on schema validation 2026-04-01 22:55:56 +08:00
Vasanthdev2004
20b4176f83 docs: note minimum Bun version for Windows builds 2026-04-01 20:12:04 +05:30
Kevin Codex
8750f84464 Merge pull request #44 from gnanam1990/fix/auth-ci-crash
fix: skip Anthropic credential check in CI for 3P providers
2026-04-01 22:35:16 +08:00
gnanam1990
1278967223 fix: skip Anthropic credential check in CI for 3P providers
In CI mode, auth.ts throws when neither ANTHROPIC_API_KEY nor
CLAUDE_CODE_OAUTH_TOKEN is set, even when using
CLAUDE_CODE_USE_OPENAI=1 or CLAUDE_CODE_USE_GEMINI=1.
This crashes any OpenAI/Gemini/Ollama CI pipeline immediately.

Fix: guard the throw with !isUsing3PServices() so non-Anthropic
providers skip the check entirely.

Also added CLAUDE_CODE_USE_GEMINI to isUsing3PServices() which
was missing — Gemini users were excluded from the 3P detection
used elsewhere in the same function.
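
Sketch of both changes, using the names above; isCI and
hasAnthropicCredentials() are illustrative stand-ins:

    function isUsing3PServices(): boolean {
      return (
        !!process.env.CLAUDE_CODE_USE_OPENAI ||
        !!process.env.CLAUDE_CODE_USE_GEMINI // newly added to the detection
        // ...plus the other 3P flags the real function checks (e.g. Ollama)
      );
    }

    if (isCI && !hasAnthropicCredentials() && !isUsing3PServices()) {
      throw new Error("ANTHROPIC_API_KEY or CLAUDE_CODE_OAUTH_TOKEN required in CI");
    }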

Fixes #40.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 20:00:42 +05:30
Kevin Codex
82e7168349 Merge pull request #38 from Vasanthdev2004/deepseek-max-tokens
test: cover DeepSeek max token limits
2026-04-01 22:27:20 +08:00
Vasanthdev2004
7ef085c605 test: cover DeepSeek max token limits 2026-04-01 19:17:58 +05:30
Juan Camilo
dda553e281 fix: define MACRO.PACKAGE_URL and MACRO.NATIVE_PACKAGE_URL in build
These macros are used in ~10 files (autoUpdater, localInstaller,
nativeInstaller, update CLI) but were not defined in the build script's
`define` block. At runtime, they resolve to `undefined`, causing
commands like `npm install undefined` and `npm view undefined` to fail
silently during auto-update checks.

Sets MACRO.PACKAGE_URL to the published npm package name and
MACRO.NATIVE_PACKAGE_URL to undefined (no native binary distribution).
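
A hedged sketch of the build-script change, assuming a Bun.build()-style
`define` block; the package name is a placeholder:

    await Bun.build({
      entrypoints: ["./src/entrypoints/cli.tsx"], // illustrative entrypoint
      define: {
        "MACRO.PACKAGE_URL": JSON.stringify("<published-npm-package-name>"),
        "MACRO.NATIVE_PACKAGE_URL": "undefined",  // no native binary distribution
      },
    });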

Relates to #29

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 15:35:10 +02:00
Kevin Codex
00744a814b Merge pull request #31 from auriti/fix/double-slash-import
fix: correct double-slash in import path (structuredIO.ts)
2026-04-01 21:35:03 +08:00
Juan Camilo
fd5e954990 fix: restrict .openclaude-profile.json permissions to owner-only (0600)
The profile file may contain API keys (OPENAI_API_KEY, CODEX_API_KEY,
GEMINI_API_KEY) in plain text. Without explicit permissions, writeFileSync
uses the process umask — on systems with permissive umask (0022), the file
is world-readable (0644), exposing credentials to other users.
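
The hardened write is roughly (path and payload illustrative):

    import { writeFileSync, chmodSync } from "node:fs";

    writeFileSync(profilePath, JSON.stringify(profile, null, 2), { mode: 0o600 });
    chmodSync(profilePath, 0o600); // mode only applies on create; fix existing files too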

Relates to #24

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 15:34:37 +02:00
Kevin Codex
11a3553055 Merge pull request #5 from Vasanthdev2004/codex/provider-profile-recommendations
feat: add intelligent provider profile recommendation
2026-04-01 21:34:30 +08:00
Juan Camilo
598f59e546 fix: map tool_choice 'none' in OpenAI shim
The Anthropic-to-OpenAI tool_choice mapping handled 'auto', 'any', and
'tool' but not 'none'. When 'none' was passed, the request was sent
without tool_choice, defaulting to 'auto' — the opposite of the
intended behavior (disable tool use).
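
Sketch of the completed mapping (function name illustrative):

    function mapToolChoice(tc: { type: "auto" | "any" | "tool" | "none"; name?: string }) {
      switch (tc.type) {
        case "auto": return "auto";
        case "any":  return "required"; // OpenAI: the model must call some tool
        case "tool": return { type: "function" as const, function: { name: tc.name ?? "" } };
        case "none": return "none";     // previously unmapped, so requests defaulted to auto
      }
    }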

Relates to #30

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 15:34:08 +02:00
Juan Camilo
99543a2aae fix: correct double-slash in import path for structuredIO
The import `src//types/message.js` contains a double slash that may cause
unpredictable module resolution depending on OS and bundler behavior.

Relates to #29

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 15:33:40 +02:00
Vasanthdev2004
ce45bd080e Merge origin/main into provider-profile-recommendations 2026-04-01 18:38:59 +05:30
Kevin Codex
0192dc0fa0 Merge pull request #21 from gnanam1990/fix/openai-stream-duplicate-response
fix: prevent duplicate responses in OpenAI streaming
2026-04-01 20:57:52 +08:00
gnanam1990
cb86f73c06 fix: prevent duplicate responses in OpenAI streaming
When certain OpenAI-compatible APIs (LM Studio, some proxies) send
multiple stream chunks with finish_reason set, the finish block ran
multiple times — emitting content_block_stop and message_delta for
each one. Each content_block_stop caused claude.ts to create and yield
a new assistant message, making every response appear twice in the UI.

Fix: add hasProcessedFinishReason flag (same pattern as the existing
hasEmittedFinalUsage flag) so the finish block only executes once per
response regardless of how many chunks contain finish_reason.
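
Roughly, with the emit helper as a placeholder:

    async function forwardFinish(
      stream: AsyncIterable<{ choices?: { finish_reason?: string }[] }>,
      emit: (event: unknown) => void,
    ): Promise<void> {
      let hasProcessedFinishReason = false;
      for await (const chunk of stream) {
        const finish = chunk.choices?.[0]?.finish_reason;
        if (finish && !hasProcessedFinishReason) {
          hasProcessedFinishReason = true; // the finish block runs once per response
          emit({ type: "content_block_stop", index: 0 });
          emit({ type: "message_delta", delta: { stop_reason: "end_turn" } });
        }
      }
    }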

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 18:14:41 +05:30
Kevin Codex
1431522ecd Merge pull request #19 from SenaxxZz/patch-1
Fix Windows ESM import by using file URL
2026-04-01 20:24:13 +08:00
gnanam1990
4ca94b2454 feat: add context window guard for OpenAI-compatible models
Without this fix, getContextWindowForModel() returns 200k for all OpenAI
models (the Claude default), causing two problems:
  1. Auto-compact/warnings trigger at wrong thresholds (200k instead of 128k)
  2. getModelMaxOutputTokens() returns 32k causing 400 errors from APIs that
     cap output tokens lower (gpt-4o supports max 16384)

Fix:
- Add openaiContextWindows.ts with known context window sizes and max output
  token limits for 30+ OpenAI-compatible models (OpenAI, DeepSeek, Groq,
  Mistral, Ollama, LM Studio)
- Hook into getContextWindowForModel() so correct input limits are used
- Hook into getModelMaxOutputTokens() so correct output limits are sent,
  preventing 400 "max_tokens is too large" errors

All existing warning, blocking, and auto-compact infrastructure works
automatically once the correct limits are returned.
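
A minimal sketch of the lookup and hooks; the gpt-4o limits are from
this message, the DeepSeek entry is an assumed example:

    const OPENAI_MODEL_LIMITS: Record<string, { contextWindow: number; maxOutput: number }> = {
      "gpt-4o":        { contextWindow: 128_000, maxOutput: 16_384 },
      "deepseek-chat": { contextWindow: 128_000, maxOutput: 8_192 }, // assumed
    };

    function getContextWindowForModel(model: string): number {
      return OPENAI_MODEL_LIMITS[model]?.contextWindow ?? 200_000; // Claude default
    }

    function getModelMaxOutputTokens(model: string): number {
      return OPENAI_MODEL_LIMITS[model]?.maxOutput ?? 32_000;
    }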

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 17:42:04 +05:30
Kevin Codex
e7c600de3b chore: release 0.1.4 2026-04-01 20:10:12 +08:00
gnanam1990
a3d8ab0fec feat: add native Gemini provider for Google AI models
Adds Google Gemini as a first-class provider using Gemini's OpenAI-compatible
endpoint, supporting gemini-2.0-flash, gemini-2.5-pro, and gemini-2.0-flash-lite
across all three model tiers (opus/sonnet/haiku).

- Add 'gemini' to APIProvider type with CLAUDE_CODE_USE_GEMINI env detection
- Map all 11 model configs to appropriate Gemini models per tier
- Route Gemini through existing OpenAI shim (generativelanguage.googleapis.com)
- Support GEMINI_API_KEY and GOOGLE_API_KEY for authentication
- Fix model display name to show actual Gemini model instead of Claude fallback
- Add Gemini support to provider-launch, provider-bootstrap, system-check scripts
- Add dev:gemini npm script for local development

Bootstrap: bun run profile:init -- --provider gemini --api-key <key>
Launch: bun run dev:gemini
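
Sketch of the detection and auth resolution (variable names illustrative;
the endpoint is Gemini's OpenAI-compatible base URL):

    const usingGemini = !!process.env.CLAUDE_CODE_USE_GEMINI;
    const geminiBaseURL = "https://generativelanguage.googleapis.com/v1beta/openai/";
    const geminiApiKey = process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY;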

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 17:38:30 +05:30
Vasanthdev2004
f51cd3aa15 Merge origin/main into codex/provider-profile-recommendations
Preserve provider recommendation workflows while integrating Codex profile support, safer launch isolation, and updated docs/scripts from upstream main.
2026-04-01 17:33:07 +05:30
Kevin Codex
6b6407018d Merge pull request #17 from Kartvya69/fix/tui-freeze-style-rerender
Fix TUI freeze without dropping prompt styles
2026-04-01 19:54:12 +08:00
SenaxZz
2ee43e010b Fix Windows ESM import by using file URL 2026-04-01 13:53:36 +02:00
Kartvya69
9ee20cfd4a fix: preserve tui styles while fixing freeze 2026-04-01 11:33:08 +00:00
Kevin Codex
c1317ef544 Merge pull request #16 from Vasanthdev2004/codex/fix-open-build-missing-stubs
fix: stub internal-only modules in open build
2026-04-01 19:32:53 +08:00
Kevin Codex
24f1b52918 Merge pull request #14 from umairinayat/fix/openai-stream-usage-accounting
Fix missing usage accounting for final OpenAI stream chunks
2026-04-01 19:31:50 +08:00
Kevin Codex
833c90fbc7 Merge pull request #12 from salmanrajz/fix/supply-chain-safety-and-build-docs
security: remove runtime require of unverified modifiers-napi package
2026-04-01 19:30:07 +08:00
Kevin Codex
2e70fa1bde Merge pull request #11 from strato-space/feat/codexplan-codexspark
Add Codex plan/spark provider support
2026-04-01 19:28:55 +08:00
Kevin Codex
b16d46d61e Merge pull request #10 from Vasanthdev2004/codex/fix-windows-esm-launch
fix: support Windows launcher import paths
2026-04-01 19:26:50 +08:00
Kevin Codex
b7bc3f361c Merge pull request #9 from gnanam1990/feat/ollama-provider
fix: resolve frozen terminal for OpenAI/3P provider users (#3)
2026-04-01 18:48:37 +08:00
Vasanthdev2004
5175dba6da fix: stub internal-only modules in open build 2026-04-01 15:26:55 +05:30
salmanrajz
cb24750cb7 security: remove runtime require of unverified modifiers-napi package
Fixes #7. The modifiers-napi package is an Anthropic-internal native
addon, but a package with the same name exists on npm and could be a
supply chain attack vector. The build script already stubs it, but
the source code had live require() calls that would execute when
running without the bundler (e.g. bun dev, ts-node).

Replaced both functions with safe no-ops since modifier key detection
is not needed in the open-source build. Build verified passing.
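
The replacement is roughly (names and shapes illustrative; the second
helper is stubbed the same way):

    // Before: const modifiers = require("modifiers-napi"); // untrusted npm name
    export function isModifierKeyPressed(): boolean {
      return false; // safe no-op: modifier detection unused in the open build
    }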
2026-04-01 12:10:31 +04:00
gnanam1990
6cf95f5b1d fix: show actual OpenAI model name in welcome screen UI
When using OpenAI provider, getPublicModelDisplayName() was incorrectly
returning "Opus 4.6" because CLAUDE_OPUS_4_6_CONFIG.openai maps to 'gpt-4o',
causing a false match in the switch statement. Now returns null for OpenAI
provider so the raw model name (e.g. 'gpt-4o') is displayed directly.
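
Sketch of the early return (switch contents illustrative):

    function getPublicModelDisplayName(model: string, provider: string): string | null {
      if (provider === "openai") return null; // caller shows the raw name, e.g. 'gpt-4o'
      switch (model) {
        case "claude-opus-4-6": return "Opus 4.6"; // illustrative entry
        default: return null;
      }
    }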

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 13:23:48 +05:30
vp
cbeed0f76f Add Codex plan/spark provider support 2026-04-01 10:44:35 +03:00
umairinayat
0a5827c0b6 fix(openai-shim): preserve final streaming usage chunks
Handle OpenAI-compatible SSE responses that send usage in a trailing empty-choices chunk so token accounting and budget enforcement stay correct.
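
Roughly (delta handling elided):

    async function pumpStream(
      stream: AsyncIterable<{ choices?: { delta?: unknown }[]; usage?: unknown }>,
    ): Promise<unknown> {
      let usage: unknown;
      for await (const chunk of stream) {
        if (chunk.usage) usage = chunk.usage; // may arrive in a trailing chunk
        const choice = chunk.choices?.[0];
        if (!choice) continue;                // empty-choices chunk carries usage only
        // ...translate choice.delta as usual
      }
      return usage;
    }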
2026-04-01 12:33:06 +05:00
Vasanthdev2004
4ce7dcf91e fix: support Windows launcher import paths 2026-04-01 12:43:14 +05:30
gnanam1990
d1267393a9 fix: resolve frozen terminal for OpenAI/3P provider users (#3)
The showSetupScreens() early return for CLAUDE_CODE_USE_OPENAI skipped
all trust state initialization (setSessionTrustAccepted, GrowthBook,
getSystemContext), causing downstream config lookups to fail silently.
This prevented the REPL component tree from mounting correctly —
useInput never fired, stdin stayed in cooked mode, and the terminal
appeared frozen.

Fix: skip only the UI dialogs (onboarding, trust, MCP approval) for
OpenAI provider while still running the critical state initialization
that the REPL depends on.
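
Control-flow sketch, using the helpers named above; the dialog
functions are illustrative:

    async function showSetupScreens(): Promise<void> {
      // Critical state initialization the REPL depends on always runs.
      setSessionTrustAccepted();
      await initGrowthBook();   // illustrative wrapper
      await getSystemContext();

      if (process.env.CLAUDE_CODE_USE_OPENAI) return; // skip only the UI dialogs

      await showOnboardingDialog();
      await showTrustDialog();
      await showMcpApprovalDialog();
    }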

Closes #3

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 12:17:31 +05:30
Vasanthdev2004
8fe03cba57 fix: harden provider recommendation safety 2026-04-01 11:55:24 +05:30
Vasanthdev2004
174eb8ad3b feat: add intelligent provider profile recommendation 2026-04-01 11:16:49 +05:30
Kevin Codex
2d7aa9c841 feat: rebrand as Open Claude and harden OpenAI REPL 2026-04-01 13:31:18 +08:00
Kevin Codex
55098bf9b7 Merge pull request #2 from gnanam1990/feat/smart-auto-router
feat: intelligent auto-router — routes to fastest/cheapest provider automatically
2026-04-01 12:51:13 +08:00
Kevin Codex
5eb7ab4ecc Merge pull request #1 from gnanam1990/feat/ollama-provider
feat: add native Ollama provider for local LLM support
2026-04-01 12:50:37 +08:00
gnanam1990
6b163e2e7e feat: add intelligent auto-router with latency/cost scoring 2026-04-01 10:19:32 +05:30
gnanam1990
752c71c30b feat: add native Ollama provider for local LLM support 2026-04-01 10:05:27 +05:30
Kevin
c957d495ac fix: prevent interactive stream crash on node removal 2026-04-01 11:23:47 +08:00