Commit Graph

499 Commits

Kevin Codex
00744a814b Merge pull request #31 from auriti/fix/double-slash-import
fix: correct double-slash in import path (structuredIO.ts)
2026-04-01 21:35:03 +08:00
Juan Camilo
fd5e954990 fix: restrict .openclaude-profile.json permissions to owner-only (0600)
The profile file may contain API keys (OPENAI_API_KEY, CODEX_API_KEY,
GEMINI_API_KEY) in plain text. Without explicit permissions, writeFileSync
uses the process umask — on systems with permissive umask (0022), the file
is world-readable (644), exposing credentials to other users.

Relates to #24

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 15:34:37 +02:00
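The fix above can be sketched as follows — a hypothetical helper, not the actual openclaude source, showing how an explicit `mode` keeps the profile file owner-only regardless of the process umask:

```typescript
import { writeFileSync, chmodSync, statSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

function writeProfileSecurely(path: string, profile: object): void {
  // An explicit mode applies at file creation and is not loosened by the
  // umask; chmod afterwards covers the case where the file already exists
  // with permissive bits from an earlier write.
  writeFileSync(path, JSON.stringify(profile, null, 2), { mode: 0o600 });
  chmodSync(path, 0o600);
}

const profilePath = join(tmpdir(), ".openclaude-profile.example.json");
writeProfileSecurely(profilePath, { OPENAI_API_KEY: "redacted" });
```

On POSIX systems the resulting mode is 0600 (owner read/write only) even under a 0022 umask.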
Kevin Codex
11a3553055 Merge pull request #5 from Vasanthdev2004/codex/provider-profile-recommendations
feat: add intelligent provider profile recommendation
2026-04-01 21:34:30 +08:00
Juan Camilo
598f59e546 fix: map tool_choice 'none' in OpenAI shim
The Anthropic-to-OpenAI tool_choice mapping handled 'auto', 'any', and
'tool' but not 'none'. When 'none' was passed, the request was sent
without tool_choice, defaulting to 'auto' — the opposite of the
intended behavior (disable tool use).

Relates to #30

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 15:34:08 +02:00
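The mapping the commit describes can be sketched like this (simplified shapes, not the real shim code) — the key point being that `'none'` needs an explicit branch, since an omitted `tool_choice` defaults to `'auto'` on the OpenAI side:

```typescript
type AnthropicToolChoice =
  | { type: "auto" }
  | { type: "any" }
  | { type: "tool"; name: string }
  | { type: "none" };

type OpenAIToolChoice =
  | "auto"
  | "required"
  | "none"
  | { type: "function"; function: { name: string } };

function mapToolChoice(tc: AnthropicToolChoice): OpenAIToolChoice {
  switch (tc.type) {
    case "auto":
      return "auto";
    case "any":
      return "required"; // Anthropic 'any' = the model must call some tool
    case "tool":
      return { type: "function", function: { name: tc.name } };
    case "none":
      return "none"; // previously unmapped, so requests fell back to 'auto'
  }
}
```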
Juan Camilo
99543a2aae fix: correct double-slash in import path for structuredIO
The import `src//types/message.js` contains a double slash that may cause
unpredictable module resolution depending on OS and bundler behavior.

Relates to #29

Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
2026-04-01 15:33:40 +02:00
Vasanthdev2004
ce45bd080e Merge origin/main into provider-profile-recommendations 2026-04-01 18:38:59 +05:30
Kevin Codex
0192dc0fa0 Merge pull request #21 from gnanam1990/fix/openai-stream-duplicate-response
fix: prevent duplicate responses in OpenAI streaming
2026-04-01 20:57:52 +08:00
gnanam1990
cb86f73c06 fix: prevent duplicate responses in OpenAI streaming
When certain OpenAI-compatible APIs (LM Studio, some proxies) send
multiple stream chunks with finish_reason set, the finish block ran
multiple times — emitting content_block_stop and message_delta for
each one. Each content_block_stop caused claude.ts to create and yield
a new assistant message, making every response appear twice in the UI.

Fix: add hasProcessedFinishReason flag (same pattern as the existing
hasEmittedFinalUsage flag) so the finish block only executes once per
response regardless of how many chunks contain finish_reason.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 18:14:41 +05:30
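The guard pattern the commit names can be sketched as below (hypothetical shape, not the actual shim internals): a latch flag so the finish block fires once no matter how many chunks carry `finish_reason`:

```typescript
interface Chunk {
  finishReason?: string;
}

// Collect the stop events a stream of chunks would emit. Without the flag,
// two finish_reason chunks would emit two content_block_stop events and
// duplicate the assistant message downstream.
function collectStopEvents(chunks: Chunk[]): string[] {
  const events: string[] = [];
  let hasProcessedFinishReason = false;
  for (const chunk of chunks) {
    if (chunk.finishReason && !hasProcessedFinishReason) {
      hasProcessedFinishReason = true;
      events.push("content_block_stop", "message_delta");
    }
  }
  return events;
}
```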
Kevin Codex
1431522ecd Merge pull request #19 from SenaxxZz/patch-1
Fix Windows ESM import by using file URL
2026-04-01 20:24:13 +08:00
gnanam1990
4ca94b2454 feat: add context window guard for OpenAI-compatible models
Without this fix, getContextWindowForModel() returns 200k for all OpenAI
models (the Claude default), causing two problems:
  1. Auto-compact/warnings trigger at wrong thresholds (200k instead of 128k)
  2. getModelMaxOutputTokens() returns 32k causing 400 errors from APIs that
     cap output tokens lower (gpt-4o supports max 16384)

Fix:
- Add openaiContextWindows.ts with known context window sizes and max output
  token limits for 30+ OpenAI-compatible models (OpenAI, DeepSeek, Groq,
  Mistral, Ollama, LM Studio)
- Hook into getContextWindowForModel() so correct input limits are used
- Hook into getModelMaxOutputTokens() so correct output limits are sent,
  preventing 400 "max_tokens is too large" errors

All existing warning, blocking, and auto-compact infrastructure works
automatically once the correct limits are returned.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 17:42:04 +05:30
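A limits table of the kind the commit describes might look like this (a sketch, not the actual openaiContextWindows.ts; the gpt-4o numbers come from the commit message, other entries should be checked against each provider's docs):

```typescript
interface ModelLimits {
  contextWindow: number; // max input context, in tokens
  maxOutputTokens: number; // hard cap the API enforces on max_tokens
}

const OPENAI_CONTEXT_WINDOWS: Record<string, ModelLimits> = {
  "gpt-4o": { contextWindow: 128_000, maxOutputTokens: 16_384 },
  "gpt-4o-mini": { contextWindow: 128_000, maxOutputTokens: 16_384 },
};

// The Claude default the commit mentions: 200k context / 32k output.
const CLAUDE_DEFAULT: ModelLimits = {
  contextWindow: 200_000,
  maxOutputTokens: 32_000,
};

function getLimits(model: string): ModelLimits {
  return OPENAI_CONTEXT_WINDOWS[model] ?? CLAUDE_DEFAULT;
}
```

With the lookup in place, auto-compact thresholds and `max_tokens` requests derive from the correct per-model values instead of the Claude defaults.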
Kevin Codex
e7c600de3b chore: release 0.1.4 2026-04-01 20:10:12 +08:00
gnanam1990
a3d8ab0fec feat: add native Gemini provider for Google AI models
Adds Google Gemini as a first-class provider using Gemini's OpenAI-compatible
endpoint, supporting gemini-2.0-flash, gemini-2.5-pro, and gemini-2.0-flash-lite
across all three model tiers (opus/sonnet/haiku).

- Add 'gemini' to APIProvider type with CLAUDE_CODE_USE_GEMINI env detection
- Map all 11 model configs to appropriate Gemini models per tier
- Route Gemini through existing OpenAI shim (generativelanguage.googleapis.com)
- Support GEMINI_API_KEY and GOOGLE_API_KEY for authentication
- Fix model display name to show actual Gemini model instead of Claude fallback
- Add Gemini support to provider-launch, provider-bootstrap, system-check scripts
- Add dev:gemini npm script for local development

Bootstrap: bun run profile:init -- --provider gemini --api-key <key>
Launch: bun run dev:gemini

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 17:38:30 +05:30
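The env detection and key fallback described above can be sketched as (function names illustrative, not the actual source):

```typescript
type APIProvider = "anthropic" | "openai" | "gemini";
type Env = Record<string, string | undefined>;

function detectProvider(env: Env): APIProvider {
  if (env.CLAUDE_CODE_USE_GEMINI) return "gemini";
  if (env.CLAUDE_CODE_USE_OPENAI) return "openai";
  return "anthropic";
}

// GEMINI_API_KEY takes precedence; GOOGLE_API_KEY is the fallback.
function geminiApiKey(env: Env): string | undefined {
  return env.GEMINI_API_KEY ?? env.GOOGLE_API_KEY;
}
```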
Vasanthdev2004
f51cd3aa15 Merge origin/main into codex/provider-profile-recommendations
Preserve provider recommendation workflows while integrating Codex profile support, safer launch isolation, and updated docs/scripts from upstream main.
2026-04-01 17:33:07 +05:30
Kevin Codex
6b6407018d Merge pull request #17 from Kartvya69/fix/tui-freeze-style-rerender
Fix TUI freeze without dropping prompt styles
2026-04-01 19:54:12 +08:00
SenaxZz
2ee43e010b Fix Windows ESM import by using file URL 2026-04-01 13:53:36 +02:00
Kartvya69
9ee20cfd4a fix: preserve tui styles while fixing freeze 2026-04-01 11:33:08 +00:00
Kevin Codex
c1317ef544 Merge pull request #16 from Vasanthdev2004/codex/fix-open-build-missing-stubs
fix: stub internal-only modules in open build
2026-04-01 19:32:53 +08:00
Kevin Codex
24f1b52918 Merge pull request #14 from umairinayat/fix/openai-stream-usage-accounting
Fix missing usage accounting for final OpenAI stream chunks
2026-04-01 19:31:50 +08:00
Kevin Codex
833c90fbc7 Merge pull request #12 from salmanrajz/fix/supply-chain-safety-and-build-docs
security: remove runtime require of unverified modifiers-napi package
2026-04-01 19:30:07 +08:00
Kevin Codex
2e70fa1bde Merge pull request #11 from strato-space/feat/codexplan-codexspark
Add Codex plan/spark provider support
2026-04-01 19:28:55 +08:00
Kevin Codex
b16d46d61e Merge pull request #10 from Vasanthdev2004/codex/fix-windows-esm-launch
fix: support Windows launcher import paths
2026-04-01 19:26:50 +08:00
Kevin Codex
b7bc3f361c Merge pull request #9 from gnanam1990/feat/ollama-provider
fix: resolve frozen terminal for OpenAI/3P provider users (#3)
2026-04-01 18:48:37 +08:00
Vasanthdev2004
5175dba6da fix: stub internal-only modules in open build 2026-04-01 15:26:55 +05:30
salmanrajz
cb24750cb7 security: remove runtime require of unverified modifiers-napi package
Fixes #7. The modifiers-napi package is an Anthropic-internal native
addon, but a package with the same name exists on npm and could be a
supply chain attack vector. The build script already stubs it, but
the source code had live require() calls that would execute when
running without the bundler (e.g. bun dev, ts-node).

Replaced both functions with safe no-ops since modifier key detection
is not needed in the open-source build. Build verified passing.
2026-04-01 12:10:31 +04:00
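The no-op replacement pattern the commit describes might look like this (function names are illustrative, not the real modifiers-napi surface): instead of a live `require()` of the native addon, export inert functions with the same shape so callers keep working in the open build:

```typescript
// Before (unsafe outside the bundler, which stubs the package):
//   const napi = require("modifiers-napi");
//   napi.watchModifierKeys(onChange);

// After: safe no-ops — modifier key detection is unused in the open build.
function watchModifierKeys(_onChange: (mods: string[]) => void): () => void {
  return () => {}; // no-op unsubscribe
}

function getModifierState(): string[] {
  return []; // report no modifiers held
}
```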
gnanam1990
6cf95f5b1d fix: show actual OpenAI model name in welcome screen UI
When using OpenAI provider, getPublicModelDisplayName() was incorrectly
returning "Opus 4.6" because CLAUDE_OPUS_4_6_CONFIG.openai maps to 'gpt-4o',
causing a false match in the switch statement. Now returns null for OpenAI
provider so the raw model name (e.g. 'gpt-4o') is displayed directly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 13:23:48 +05:30
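The fix can be sketched as below (simplified; the model id and function shape are illustrative, not the real config): returning null for the OpenAI provider lets callers fall back to the raw model name instead of matching a Claude config that happens to map to the same id:

```typescript
function getPublicModelDisplayNameSketch(
  provider: "anthropic" | "openai",
  model: string
): string | null {
  // For OpenAI, show the raw model name (e.g. 'gpt-4o') directly rather
  // than risk a false match against a Claude config's mapped model id.
  if (provider === "openai") return null;
  switch (model) {
    case "claude-opus-4-6": // illustrative id
      return "Opus 4.6";
    default:
      return null;
  }
}
```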
vp
cbeed0f76f Add Codex plan/spark provider support 2026-04-01 10:44:35 +03:00
umairinayat
0a5827c0b6 fix(openai-shim): preserve final streaming usage chunks
Handle OpenAI-compatible SSE responses that send usage in a trailing empty-choices chunk so token accounting and budget enforcement stay correct.
2026-04-01 12:33:06 +05:00
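The trailing-chunk case the commit handles can be sketched as follows (shapes simplified from the real API): some OpenAI-compatible servers send a final SSE chunk whose `choices` array is empty and whose only payload is `usage`, so usage accounting must not skip empty-choices chunks:

```typescript
interface StreamChunk {
  choices: { delta?: { content?: string } }[];
  usage?: { prompt_tokens: number; completion_tokens: number };
}

function accumulateUsage(chunks: StreamChunk[]) {
  let usage = { prompt_tokens: 0, completion_tokens: 0 };
  for (const chunk of chunks) {
    // Check usage before (or regardless of) inspecting choices: the final
    // usage often arrives in a chunk with choices: [].
    if (chunk.usage) usage = chunk.usage;
  }
  return usage;
}
```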
Vasanthdev2004
4ce7dcf91e fix: support Windows launcher import paths 2026-04-01 12:43:14 +05:30
gnanam1990
d1267393a9 fix: resolve frozen terminal for OpenAI/3P provider users (#3)
The showSetupScreens() early return for CLAUDE_CODE_USE_OPENAI skipped
all trust state initialization (setSessionTrustAccepted, GrowthBook,
getSystemContext), causing downstream config lookups to fail silently.
This prevented the REPL component tree from mounting correctly —
useInput never fired, stdin stayed in cooked mode, and the terminal
appeared frozen.

Fix: skip only the UI dialogs (onboarding, trust, MCP approval) for
OpenAI provider while still running the critical state initialization
that the REPL depends on.

Closes #3

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 12:17:31 +05:30
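The split the commit describes can be sketched as below (all names hypothetical): run the state initialization unconditionally, and gate only the interactive dialogs on the provider check:

```typescript
function runSetupSketch(
  useOpenAI: boolean,
  deps: {
    initTrustState: () => void; // e.g. setSessionTrustAccepted
    initSystemContext: () => void; // e.g. GrowthBook / getSystemContext
    showDialogs: () => void; // onboarding, trust dialog, MCP approval
  }
): void {
  // Always run: the REPL's downstream config lookups depend on this state.
  deps.initTrustState();
  deps.initSystemContext();
  // Skip only the interactive UI when using the OpenAI shim.
  if (!useOpenAI) deps.showDialogs();
}
```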
Vasanthdev2004
8fe03cba57 fix: harden provider recommendation safety 2026-04-01 11:55:24 +05:30
Vasanthdev2004
174eb8ad3b feat: add intelligent provider profile recommendation 2026-04-01 11:16:49 +05:30
Kevin Codex
2d7aa9c841 feat: rebrand as Open Claude and harden OpenAI REPL 2026-04-01 13:31:18 +08:00
Kevin Codex
55098bf9b7 Merge pull request #2 from gnanam1990/feat/smart-auto-router
feat: intelligent auto-router — routes to fastest/cheapest provider automatically
2026-04-01 12:51:13 +08:00
Kevin Codex
5eb7ab4ecc Merge pull request #1 from gnanam1990/feat/ollama-provider
feat: add native Ollama provider for local LLM support
2026-04-01 12:50:37 +08:00
gnanam1990
6b163e2e7e feat: add intelligent smart auto-router with latency/cost scoring 2026-04-01 10:19:32 +05:30
gnanam1990
752c71c30b feat: add native Ollama provider for local LLM support 2026-04-01 10:05:27 +05:30
Kevin
c957d495ac fix: prevent interactive stream crash on node removal 2026-04-01 11:23:47 +08:00
Kevin
ba5e1f07af fix: suppress OpenAI startup warning and account banner 2026-04-01 11:02:56 +08:00
Kevin
958f8c1869 chore: publish @gitlawb/openclaude package 2026-04-01 10:52:59 +08:00
Kevin
770e16dadb fix: restore interactive OpenAI REPL on React 18 2026-04-01 09:52:00 +08:00
Reservieren
e69cf0917e feat: enhance provider-launch script with fast mode and improved argument parsing 2026-03-31 22:12:00 -03:00
Reservieren
009c29d318 refactor: update import paths for react/compiler-runtime to react-compiler-runtime
feat: add OpenClaude local agent playbook for setup and usage instructions

chore: implement provider bootstrap script for profile initialization

chore: create provider launch script to manage provider execution

chore: add system check script for runtime diagnostics and validation

feat: implement useEffectEventCompat hook for React 18 compatibility
2026-03-31 22:09:56 -03:00
Kevin
747be9c2f3 fix: restore interactive OpenAI REPL startup 2026-04-01 05:16:40 +08:00
Kevin
651335f682 fix: skip auth and onboarding for OpenAI provider in interactive mode
The interactive REPL was hanging because isAnthropicAuthEnabled() didn't
recognize CLAUDE_CODE_USE_OPENAI as a 3rd party provider, triggering
OAuth flows. Also skip setup screens (onboarding, trust dialog) when
using the OpenAI shim.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 03:34:17 +08:00
did:key:z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr
db04c3e0c0 fix: bypass version gate by setting MACRO.VERSION to 99.0.0
The original code checks GrowthBook for a minimum version and blocks
startup if the build version is too low. Setting to 99.0.0 ensures
OpenClaude always passes the check.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 02:48:10 +08:00
did:key:z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr
3e652cafdf feat: add build system, stubs, and npm packaging — openclaude is now runnable
- package.json with all 70+ dependencies
- Bun build script with feature flag shims, native module stubs, otel externals
- Stubs for ~15 missing source files (snapshot gaps)
- tsconfig.json for TypeScript
- bin/openclaude entry point
- Builds to single 19MB dist/cli.mjs
- Verified: --version and --help work

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 02:36:07 +08:00
did:key:z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr
fd108243eb docs: rewrite README for OpenClaude — setup instructions for all providers
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 23:16:40 +08:00
did:key:z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr
619b5fb603 feat: add OpenAI-compatible provider shim — use any LLM with Claude Code
Adds a new 'openai' API provider that translates Anthropic SDK calls to
OpenAI chat completions format, enabling Claude Code's full tool system
(bash, file read/write/edit, grep, glob, agents) with any OpenAI-compatible
model: GPT-4o, DeepSeek, Gemini, Llama, Ollama, OpenRouter, and 200+ more.

Set CLAUDE_CODE_USE_OPENAI=1, OPENAI_API_KEY, and OPENAI_MODEL to use.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 23:12:43 +08:00
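Based on the variables named above, a typical invocation might look like this (the model and key values are placeholders; adjust for your provider):

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY="sk-your-key-here"
export OPENAI_MODEL="gpt-4o"
# then launch the CLI, e.g.: openclaude
```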
did:key:z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr
d2542c9a62 asdf
Squash the current repository state back into one baseline commit while
preserving the README reframing and repository contents.

Constraint: User explicitly requested a single squashed commit with subject "asdf"
Confidence: high
Scope-risk: broad
Reversibility: clean
Directive: This commit intentionally rewrites published history; coordinate before future force-pushes
Tested: git status clean; local history rewritten to one commit; force-pushed main to origin and instructkr
Not-tested: Fresh clone verification after push
2026-03-31 03:34:03 -07:00