Commit Graph

87 Commits

Author SHA1 Message Date
Vasanthdev2004
936107f569 test: align Codex strict schema expectation 2026-04-02 00:11:42 +05:30
Kevin Codex
b204ae722f Merge pull request #71 from Vasanthdev2004/pr-checks
ci: add automated PR smoke and provider checks
2026-04-02 02:33:25 +08:00
Kevin Codex
80df0c57bd Merge pull request #48 from auriti/fix/empty-string-content-delta
fix: handle empty string delta.content in OpenAI streaming
2026-04-02 02:31:11 +08:00
Vasanthdev2004
9951da5397 ci: add PR smoke and provider test checks 2026-04-02 00:00:12 +05:30
Kevin Codex
18e24a75f1 Merge pull request #70 from gnanam1990/feat/gradient-startup-screen
feat: gradient startup screen with provider info
2026-04-02 02:30:00 +08:00
gnanam1990
9d464f3488 feat: add gradient startup screen and remove old OPEN box logo
Adds a new startup screen with filled-block text logo and sunset
gradient, printed to stdout before the Ink UI loads. Removes the
old OPEN box logo from the chat UI since the new screen replaces it.

Changes:
- src/components/StartupScreen.ts (NEW) — gradient OPEN CLAUDE logo
  with provider info box (Provider, Model, Endpoint). Auto-detects
  active provider from env vars (OpenAI, Gemini, DeepSeek, Ollama,
  Groq, Mistral, Azure, LM Studio, Anthropic). Skipped in CI and
  non-TTY environments.
- src/entrypoints/cli.tsx — calls printStartupScreen() at startup
  before Ink renders
- src/components/Messages.tsx — removes <LogoV2 /> from LogoHeader
  so the old OPEN box logo no longer appears in the chat UI

Addresses #55.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 23:57:45 +05:30
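The CI/non-TTY skip behavior described above can be sketched as follows. This is a hypothetical reconstruction — the commit only states that the screen is "skipped in CI and non-TTY environments", so the function name and exact checks are assumptions:

```typescript
// Hypothetical sketch: print the startup screen only in interactive
// TTY sessions outside CI, as the commit message describes.
function shouldPrintStartupScreen(
  isTTY: boolean,
  env: Record<string, string | undefined>,
): boolean {
  if (env.CI) return false; // skipped in CI
  return isTTY;             // skipped in non-TTY environments
}
```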
Vasanthdev2004
3491dc3cba fix: preserve Gemini thought signatures for tools 2026-04-01 23:54:17 +05:30
Kevin Codex
b78db9568a Merge pull request #63 from step325/fix/codex-multi-agent-compatibility
Fix/codex multi agent compatibility
2026-04-02 02:20:39 +08:00
Kevin Codex
f8d9dbeda9 Merge pull request #66 from tunnckoCore/fix/skills-menu-namespace-sort
fix: sort skills menu by namespace
2026-04-02 02:08:12 +08:00
Kevin Codex
43042ede56 Merge pull request #61 from Vasanthdev2004/ctrl-o-expand-crash
fix: guard ctrl-o transcript sandbox subscription
2026-04-02 02:05:41 +08:00
Charlike Mike Reagent
e8dd3d6289 fix: sort skills menu by namespace 2026-04-01 21:04:02 +03:00
Kevin Codex
e5db3033ad Merge pull request #65 from tunnckoCore/fix/skills-menu-nested-labels
fix: clarify nested skill labels in skills menu
2026-04-02 02:01:45 +08:00
Charlike Mike Reagent
1d82022978 fix: clarify nested skill labels in skills menu 2026-04-01 20:58:53 +03:00
step325
66f5981c45 fix(codex): Support Multi-Agent framework schemas for OpenAI/Codex endpoints
This commit addresses strict schema validation limitations when running subagents under OpenAI backend shims.

- Drops empty properties from payloads (like Record<string, string>) that break OpenAI's Structured Outputs validation.

- Handles edge cases for automated initial teams when subagents bypass standard creation routines.

- Stops sending backend-unsupported sampling parameters such as temperature and top_p for GPT-5 derivatives.
2026-04-01 19:47:26 +02:00
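Two of the payload fix-ups above can be sketched as below. The function names are assumptions; only the behaviors (dropping empty `properties` objects that fail OpenAI's Structured Outputs validation, and stripping temperature/top_p for GPT-5 derivatives) come from the commit message:

```typescript
type JsonSchema = {
  type?: string;
  properties?: Record<string, unknown>;
  required?: string[];
};

// Drop `properties: {}` (e.g. produced by Record<string, string> types),
// which strict Structured Outputs validation rejects.
function dropEmptyProperties(schema: JsonSchema): JsonSchema {
  if (schema.properties && Object.keys(schema.properties).length === 0) {
    const copy = { ...schema };
    delete copy.properties;
    delete copy.required;
    return copy;
  }
  return schema;
}

// Strip sampling parameters that GPT-5 derivatives reject.
function stripUnsupportedParams(
  body: Record<string, unknown>,
  model: string,
): Record<string, unknown> {
  if (!model.startsWith("gpt-5")) return body;
  const copy = { ...body };
  delete copy.temperature;
  delete copy.top_p;
  return copy;
}
```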
Kevin Codex
4221b453c7 Merge pull request #32 from auriti/fix/tool-choice-none
fix: map tool_choice 'none' in OpenAI shim
2026-04-02 01:42:00 +08:00
Kevin Codex
d4b24483a6 Merge pull request #49 from auriti/fix/deepseek-context-window
fix: update DeepSeek context window from 64k to 128k
2026-04-02 01:41:10 +08:00
Kevin Codex
a26844ac7e Merge pull request #64 from tunnckoCore/feat/nested-skills-support
fix: support nested skill directories
2026-04-02 01:40:39 +08:00
Kevin Codex
732633cdf8 Merge pull request #62 from gnanam1990/fix/gemini-auth-login-screen
fix: add OpenAI and Gemini to /login 3rd-party platform screen
2026-04-02 01:32:07 +08:00
Charlike Mike Reagent
63adb95e8d fix: support nested skill directories
Load nested SKILL.md files from .claude/skills and namespace them with colons so category-based skill layouts work in Claude Code clients.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 20:20:13 +03:00
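The colon-namespacing described above can be sketched as follows. The function name and exact path handling are assumptions; the commit only states that nested SKILL.md files under .claude/skills are namespaced with colons:

```typescript
// Hypothetical sketch: derive a colon-namespaced skill name from a
// nested SKILL.md path relative to .claude/skills.
function skillNameFromPath(relativePath: string): string {
  // e.g. "frontend/react/SKILL.md" -> "frontend:react"
  const parts = relativePath.split("/");
  if (parts[parts.length - 1] === "SKILL.md") parts.pop();
  return parts.join(":");
}
```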
gnanam1990
802cb4ea36 fix: add OpenAI and Gemini to /login 3rd-party platform screen
The /login platform_setup screen only listed Amazon Bedrock,
Microsoft Foundry, and Vertex AI — OpenAI-compatible providers
and Gemini were completely absent, leaving users with no guidance
on how to use OpenClaude's main feature.

Changes:
- Selector label: "Amazon Bedrock, Microsoft Foundry, or Vertex AI"
  → "OpenAI, Gemini, Bedrock, Ollama, and more"
- Description updated to mention OpenAI-compatible providers and Gemini
- Added OpenAI and Gemini env var instructions to the docs list

Fixes #43 (login screen confusion for Gemini users).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 22:43:55 +05:30
Vasanthdev2004
c3ddc83ed6 fix: type PrBadge props 2026-04-01 22:17:40 +05:30
Vasanthdev2004
6c35f4e52e fix: guard transcript sandbox subscription 2026-04-01 22:12:45 +05:30
Kevin Codex
5f774cfe5d Merge pull request #47 from gnanam1990/fix/agent-tool-schema-openai
fix: normalize tool schemas so required ⊇ properties for OpenAI/Codex
2026-04-02 00:31:20 +08:00
Kevin Codex
8da5614110 Merge pull request #57 from strato-space/fix/codex-post-merge
Fix Codex launcher and prompt rerender follow-ups
2026-04-02 00:27:08 +08:00
vp
c8a780a9bd fix: follow up Codex launcher and input handling 2026-04-01 19:15:37 +03:00
Kevin Codex
b8ea6f8a6e Merge pull request #56 from gnanam1990/fix/gemini-auth-login-screen
fix: add CLAUDE_CODE_USE_GEMINI to is3P check to prevent login screen
2026-04-02 00:11:07 +08:00
gnanam1990
c3db3d882d fix: add CLAUDE_CODE_USE_GEMINI to is3P check in isAnthropicAuthEnabled
CLAUDE_CODE_USE_GEMINI was missing from the is3P check in
isAnthropicAuthEnabled(), causing Gemini users to see the
Anthropic login screen at startup even with GEMINI_API_KEY set.

isAnthropicAuthEnabled() returns true when is3P is false, which
triggers the OAuth/login flow. Since CLAUDE_CODE_USE_GEMINI was
not included, Gemini was not treated as a 3P provider here,
showing the gcloud/Anthropic login prompt unexpectedly.

Fix: add CLAUDE_CODE_USE_GEMINI to the is3P check, consistent
with how CLAUDE_CODE_USE_OPENAI is handled in the same block.

Fixes #43.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 21:29:34 +05:30
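The fix above amounts to one more flag in the is3P check. A minimal sketch, assuming the real function consults more flags than shown here:

```typescript
// Sketch of isAnthropicAuthEnabled() as described in the commit:
// treat Gemini as a 3P provider, consistent with OpenAI handling.
function isAnthropicAuthEnabled(
  env: Record<string, string | undefined>,
): boolean {
  const is3P =
    env.CLAUDE_CODE_USE_OPENAI === "1" ||
    env.CLAUDE_CODE_USE_GEMINI === "1"; // previously missing
  // Returning true triggers the OAuth/login flow.
  return !is3P;
}
```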
Kevin Codex
5a3573f9c3 Merge pull request #54 from kevincodex1/feature/updated-branding
Feature/updated branding
2026-04-01 23:42:07 +08:00
Kevin Codex
58009bcb1c removed unnecessary changes 2026-04-01 23:39:35 +08:00
Kevin Codex
65af73910c improved startup screen 2026-04-01 23:32:38 +08:00
Juan Camilo
39d9616ed7 fix: update DeepSeek context window from 64k to 128k
DeepSeek V3 documentation specifies 128k context window for both
deepseek-chat and deepseek-reasoner. The previous 64k value caused
premature compaction and underutilization of available context.

Relates to #39

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 17:03:57 +02:00
Juan Camilo
788cfa3e9a fix: handle empty string delta.content in OpenAI streaming
Some providers send an empty string as the first delta to signal
streaming start. The falsy check `if (delta.content)` treated "" as
absent, skipping content_block_start. The next delta with actual
content was emitted without it, violating the Anthropic protocol.

Changed to `delta.content != null` to distinguish between absent field
and empty string.

Relates to #42

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 17:03:28 +02:00
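The falsy-check pitfall described above can be reduced to a two-line comparison. A minimal sketch (the `Delta` shape is assumed from the OpenAI streaming format):

```typescript
interface Delta {
  content?: string | null;
}

// Buggy: "" is falsy, so the empty first delta is treated as absent
// and content_block_start is skipped.
function hasContentBuggy(delta: Delta): boolean {
  return Boolean(delta.content);
}

// Fixed: distinguish absent/null from empty string.
function hasContentFixed(delta: Delta): boolean {
  return delta.content != null;
}
```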
gnanam1990
6c46974bf9 fix: normalize tool schemas so required ⊇ properties for OpenAI/Codex
OpenAI and Codex enforce strict JSON Schema validation — every key in
`properties` must also appear in `required`. Anthropic schemas often
mark fields as optional (omitted from `required`), which causes 400
errors on OpenAI/Codex endpoints.

Example: the Agent tool has `subagent_type` in `properties` but not
in `required`, producing:
  "Invalid schema for function 'Agent': Missing 'subagent_type'
   in required array"

Fix: add `normalizeSchemaForOpenAI()` in `convertTools()` that ensures
`required` is a superset of all `properties` keys before the schema is
sent to the API. Existing `required` entries are preserved; missing
ones are appended. Schemas without `properties` pass through unchanged.

Fixes #46.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 20:26:21 +05:30
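The normalization described above can be sketched as below, following the behavior stated in the commit (existing `required` entries preserved, missing ones appended, schemas without `properties` passed through). The exact signature is an assumption:

```typescript
type ToolSchema = {
  type: string;
  properties?: Record<string, unknown>;
  required?: string[];
};

// Ensure `required` is a superset of all `properties` keys, as strict
// OpenAI/Codex JSON Schema validation demands.
function normalizeSchemaForOpenAI(schema: ToolSchema): ToolSchema {
  if (!schema.properties) return schema; // pass through unchanged
  const existing = schema.required ?? [];
  const missing = Object.keys(schema.properties).filter(
    (key) => !existing.includes(key),
  );
  return { ...schema, required: [...existing, ...missing] };
}
```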
Sahil
a44f45e951 first commit 2026-04-01 22:55:56 +08:00
Daniel
372ba31c17 feat: enhance tool conversion to support strict mode based on schema validation 2026-04-01 22:55:56 +08:00
Kevin Codex
8750f84464 Merge pull request #44 from gnanam1990/fix/auth-ci-crash
fix: skip Anthropic credential check in CI for 3P providers
2026-04-01 22:35:16 +08:00
gnanam1990
1278967223 fix: skip Anthropic credential check in CI for 3P providers
In CI mode, auth.ts throws if ANTHROPIC_API_KEY or
CLAUDE_CODE_OAUTH_TOKEN are missing — even when using
CLAUDE_CODE_USE_OPENAI=1 or CLAUDE_CODE_USE_GEMINI=1.
This crashes any OpenAI/Gemini/Ollama CI pipeline immediately.

Fix: guard the throw with !isUsing3PServices() so non-Anthropic
providers skip the check entirely.

Also added CLAUDE_CODE_USE_GEMINI to isUsing3PServices() which
was missing — Gemini users were excluded from the 3P detection
used elsewhere in the same function.

Fixes #40.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 20:00:42 +05:30
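The guard described above can be sketched as follows. The `checkCiCredentials` wrapper is hypothetical; `isUsing3PServices` and the env-var names come from the commit message:

```typescript
function isUsing3PServices(env: Record<string, string | undefined>): boolean {
  return (
    env.CLAUDE_CODE_USE_OPENAI === "1" ||
    env.CLAUDE_CODE_USE_GEMINI === "1" // previously missing
  );
}

// Only enforce Anthropic credentials in CI when no 3P provider is set.
function checkCiCredentials(env: Record<string, string | undefined>): void {
  const hasAnthropicAuth =
    Boolean(env.ANTHROPIC_API_KEY) || Boolean(env.CLAUDE_CODE_OAUTH_TOKEN);
  if (!hasAnthropicAuth && !isUsing3PServices(env)) {
    throw new Error("Missing Anthropic credentials in CI");
  }
}
```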
Kevin Codex
82e7168349 Merge pull request #38 from Vasanthdev2004/deepseek-max-tokens
test: cover DeepSeek max token limits
2026-04-01 22:27:20 +08:00
Vasanthdev2004
7ef085c605 test: cover deepseek max token limits 2026-04-01 19:17:58 +05:30
Kevin Codex
00744a814b Merge pull request #31 from auriti/fix/double-slash-import
fix: correct double-slash in import path (structuredIO.ts)
2026-04-01 21:35:03 +08:00
Kevin Codex
11a3553055 Merge pull request #5 from Vasanthdev2004/codex/provider-profile-recommendations
feat: add intelligent provider profile recommendation
2026-04-01 21:34:30 +08:00
Juan Camilo
598f59e546 fix: map tool_choice 'none' in OpenAI shim
The Anthropic-to-OpenAI tool_choice mapping handled 'auto', 'any', and
'tool' but not 'none'. When 'none' was passed, the request was sent
without tool_choice, defaulting to 'auto' — the opposite of the
intended behavior (disable tool use).

Relates to #30

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 15:34:08 +02:00
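The mapping described above, with the previously missing 'none' case, can be sketched as:

```typescript
type AnthropicToolChoice =
  | { type: "auto" }
  | { type: "any" }
  | { type: "tool"; name: string }
  | { type: "none" };

type OpenAIToolChoice =
  | "auto"
  | "required"
  | "none"
  | { type: "function"; function: { name: string } };

function mapToolChoice(choice: AnthropicToolChoice): OpenAIToolChoice {
  switch (choice.type) {
    case "auto":
      return "auto";
    case "any":
      return "required";
    case "tool":
      return { type: "function", function: { name: choice.name } };
    case "none":
      return "none"; // previously unmapped, so requests defaulted to 'auto'
  }
}
```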
Juan Camilo
99543a2aae fix: correct double-slash in import path for structuredIO
The import `src//types/message.js` contains a double slash that may cause
unpredictable module resolution depending on OS and bundler behavior.

Relates to #29

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 15:33:40 +02:00
Vasanthdev2004
ce45bd080e Merge origin/main into provider-profile-recommendations 2026-04-01 18:38:59 +05:30
Kevin Codex
0192dc0fa0 Merge pull request #21 from gnanam1990/fix/openai-stream-duplicate-response
fix: prevent duplicate responses in OpenAI streaming
2026-04-01 20:57:52 +08:00
gnanam1990
cb86f73c06 fix: prevent duplicate responses in OpenAI streaming
When certain OpenAI-compatible APIs (LM Studio, some proxies) send
multiple stream chunks with finish_reason set, the finish block ran
multiple times — emitting content_block_stop and message_delta for
each one. Each content_block_stop caused claude.ts to create and yield
a new assistant message, making every response appear twice in the UI.

Fix: add hasProcessedFinishReason flag (same pattern as the existing
hasEmittedFinalUsage flag) so the finish block only executes once per
response regardless of how many chunks contain finish_reason.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 18:14:41 +05:30
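The one-shot guard described above (mirroring the existing hasEmittedFinalUsage pattern) can be sketched in isolation:

```typescript
// Returns a handler that reports true only for the FIRST chunk carrying
// a finish_reason; later finish chunks are ignored, so content_block_stop
// and message_delta are emitted exactly once per response.
function makeFinishGuard(): (finishReason: string | null) => boolean {
  let hasProcessedFinishReason = false;
  return (finishReason) => {
    if (finishReason == null) return false;
    if (hasProcessedFinishReason) return false;
    hasProcessedFinishReason = true;
    return true;
  };
}
```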
Kevin Codex
1431522ecd Merge pull request #19 from SenaxxZz/patch-1
Fix Windows ESM import by using file URL
2026-04-01 20:24:13 +08:00
gnanam1990
4ca94b2454 feat: add context window guard for OpenAI-compatible models
Without this fix, getContextWindowForModel() returns 200k for all OpenAI
models (the Claude default), causing two problems:
  1. Auto-compact/warnings trigger at wrong thresholds (200k instead of 128k)
  2. getModelMaxOutputTokens() returns 32k causing 400 errors from APIs that
     cap output tokens lower (gpt-4o supports max 16384)

Fix:
- Add openaiContextWindows.ts with known context window sizes and max output
  token limits for 30+ OpenAI-compatible models (OpenAI, DeepSeek, Groq,
  Mistral, Ollama, LM Studio)
- Hook into getContextWindowForModel() so correct input limits are used
- Hook into getModelMaxOutputTokens() so correct output limits are sent,
  preventing 400 "max_tokens is too large" errors

All existing warning, blocking, and auto-compact infrastructure works
automatically once the correct limits are returned.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 17:42:04 +05:30
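The lookup described above can be sketched as below. Only the gpt-4o values (128k context, 16384 max output) and the 200k Claude default come from the commit message; the real table in openaiContextWindows.ts covers 30+ models:

```typescript
const CLAUDE_DEFAULT_CONTEXT = 200_000;

const openaiContextWindows: Record<
  string,
  { contextWindow: number; maxOutputTokens: number }
> = {
  "gpt-4o": { contextWindow: 128_000, maxOutputTokens: 16_384 },
  // ... 30+ more OpenAI-compatible models in the real table
};

// Fall back to the Claude default for unknown models, as before the fix.
function getContextWindowForModel(model: string): number {
  return openaiContextWindows[model]?.contextWindow ?? CLAUDE_DEFAULT_CONTEXT;
}
```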
Kevin Codex
e7c600de3b chore: release 0.1.4 2026-04-01 20:10:12 +08:00
gnanam1990
a3d8ab0fec feat: add native Gemini provider for Google AI models
Adds Google Gemini as a first-class provider using Gemini's OpenAI-compatible
endpoint, supporting gemini-2.0-flash, gemini-2.5-pro, and gemini-2.0-flash-lite
across all three model tiers (opus/sonnet/haiku).

- Add 'gemini' to APIProvider type with CLAUDE_CODE_USE_GEMINI env detection
- Map all 11 model configs to appropriate Gemini models per tier
- Route Gemini through existing OpenAI shim (generativelanguage.googleapis.com)
- Support GEMINI_API_KEY and GOOGLE_API_KEY for authentication
- Fix model display name to show actual Gemini model instead of Claude fallback
- Add Gemini support to provider-launch, provider-bootstrap, system-check scripts
- Add dev:gemini npm script for local development

Bootstrap: bun run profile:init -- --provider gemini --api-key <key>
Launch: bun run dev:gemini

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 17:38:30 +05:30
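The detection and routing described above can be sketched as follows. The function name and the `/v1beta/openai/` path are assumptions (the commit names only the generativelanguage.googleapis.com host); the env vars come from the commit message:

```typescript
// Detect Gemini mode and resolve the OpenAI-shim connection settings.
function resolveGeminiConfig(
  env: Record<string, string | undefined>,
): { baseURL: string; apiKey: string } | null {
  if (env.CLAUDE_CODE_USE_GEMINI !== "1") return null;
  // Both GEMINI_API_KEY and GOOGLE_API_KEY are supported for auth.
  const apiKey = env.GEMINI_API_KEY ?? env.GOOGLE_API_KEY;
  if (!apiKey) return null;
  return {
    baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
    apiKey,
  };
}
```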