Compare commits

...

14 Commits

Author SHA1 Message Date
github-actions[bot]
c0b8a59a23 chore(main): release 0.5.1 (#776)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-20 12:47:40 +08:00
Kevin Codex
aab489055c fix: require trusted approval for sandbox override (#778) 2026-04-20 12:01:44 +08:00
Kevin Codex
7002cb302b fix: enforce Bash path constraints after sandbox allow (#777) 2026-04-20 11:46:24 +08:00
Kevin Codex
739b8d1f40 fix: enforce MCP OAuth callback state before errors (#775) 2026-04-20 09:36:05 +08:00
github-actions[bot]
f166ec1a4e chore(main): release 0.5.0 (#758)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-20 08:30:58 +08:00
Kevin Codex
13e9f22a83 feat: mask provider api key input (#772) 2026-04-20 08:25:22 +08:00
Kevin Codex
f828171ef1 fix: allow provider recovery during startup (#765) 2026-04-20 06:46:05 +08:00
Allan Almeida
e6e8d9a248 feat: add OPENCLAUDE_DISABLE_STRICT_TOOLS env var to opt out of strict MCP tool schema normalization (#770)
When set, disables strict schema normalization for non-Gemini providers.
Useful for OpenAI-compatible endpoints that reject MCP tools with complex
optional params (e.g. list[dict]) with "Extra required key ... supplied"
errors.
2026-04-20 06:45:01 +08:00
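As a rough illustration of the opt-out described above (the helper names and accepted truthy values below are assumptions for the sketch, not the repository's actual code):

```typescript
// Illustrative sketch: decide whether strict schema normalization applies.
function isEnvTruthy(value: string | undefined): boolean {
  if (!value) return false
  return ['1', 'true', 'yes', 'on'].includes(value.trim().toLowerCase())
}

function shouldUseStrictSchemas(
  isGemini: boolean,
  env: Record<string, string | undefined>,
): boolean {
  // Gemini keeps its own schema handling; other providers can opt out
  // of strict normalization via the environment variable.
  return !isGemini && !isEnvTruthy(env.OPENCLAUDE_DISABLE_STRICT_TOOLS)
}
```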
Sreedhar Busanelli
2c98be7002 fix: remove cached mcpClient in diagnostic tracking to prevent stale references (#727)
* fix: remove cached mcpClient in diagnostic tracking to prevent stale references

Resolves TODO comment about not caching the connected mcpClient since it can change.

Changes:
- Remove cached mcpClient field from DiagnosticTrackingService
- Add currentMcpClients storage to track active clients
- Update beforeFileEdited, getNewDiagnostics, and ensureFileOpened to accept client parameter
- Add backward-compatible methods to maintain existing API
- Update all callers to use new methods
- Add comprehensive test coverage

This prevents using stale MCP client references during reconnections,
making diagnostic tracking more reliable.

Fixes #TODO

* docs: add my contributions section to README

Add fork-specific section highlighting:
- Diagnostic tracking enhancement (PR #727)
- Technical skills demonstrated
- Links to original project and my work
- Professional contribution showcase

* revert: remove README.md contributions section to comply with reviewer request

- Remove 'My Fork & Contributions' section from README.md
- Keep README.md focused on original project documentation
- Maintain clean, project-focused README as requested by reviewer
2026-04-19 09:02:52 +08:00
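The bullet list above boils down to one pattern: store the current client list per query and resolve the active client on demand, with `*Compat` wrappers preserving the old zero-argument API. A heavily simplified sketch of that shape (all names below are hypothetical, not the project's `DiagnosticTrackingService`):

```typescript
type McpClient = { name: string; connected: boolean }

class DiagnosticTracker {
  private currentClients: McpClient[] = []
  private baselines = new Map<string, number>()

  // Called at query start: refresh the client list instead of caching one client.
  handleQueryStart(clients: McpClient[]): void {
    this.currentClients = clients
  }

  private activeClient(): McpClient | undefined {
    return this.currentClients.find(c => c.connected)
  }

  // New API: the caller passes the client it currently holds.
  beforeFileEdited(file: string, client: McpClient | undefined): void {
    if (!client) return // no IDE client: nothing to baseline
    this.baselines.set(file, Date.now())
  }

  // Backward-compatible API: resolve the client from current state.
  beforeFileEditedCompat(file: string): void {
    this.beforeFileEdited(file, this.activeClient())
  }

  hasBaseline(file: string): boolean {
    return this.baselines.has(file)
  }

  shutdown(): void {
    this.currentClients = []
    this.baselines.clear()
  }
}
```

Because the client is looked up (or passed in) at call time, a reconnection that swaps the underlying client can never leave a stale reference behind.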
3kin0x
b786b765f0 fix(api): drop orphan tool results to satisfy strict role sequence (#745)
* fix(api): drop orphan tool results to satisfy Mistral/OpenAI strict role sequence

* test: add test for orphan tool results and restore gemini comments
2026-04-19 08:57:14 +08:00
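The idea behind the fix can be sketched as a filter that only keeps tool results whose IDs were registered by an earlier `tool_use` block (types simplified for illustration; the real change lives in the OpenAI shim's message conversion):

```typescript
type Block =
  | { type: 'tool_use'; id: string }
  | { type: 'tool_result'; tool_use_id: string }

function dropOrphanToolResults(blocks: Block[]): Block[] {
  const known = new Set<string>()
  const kept: Block[] = []
  for (const block of blocks) {
    if (block.type === 'tool_use') {
      known.add(block.id)
      kept.push(block)
    } else if (known.has(block.tool_use_id)) {
      kept.push(block) // matched result: the strict role sequence stays valid
    }
    // Orphan tool_result (e.g. left over after a user interrupt) is dropped
    // instead of triggering a Mistral/OpenAI role-sequence error.
  }
  return kept
}
```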
bpawnzz
55c5f262a9 fix: use raw context window for auto-compact percentage display (#748)
Problem: After auto-compaction with DeepSeek models (e.g., deepseek-chat),
the status line displayed ~16% remaining until next auto-compact, but users
expected ~30% (since compaction reduces usage to roughly half of the full
128k context).

Root cause: calculateTokenWarningState() used the auto-compaction threshold
(effectiveContextWindow - 13k buffer) as the denominator for percentLeft.
For DeepSeek-chat:
- Raw context: 128,000
- Effective: 119,808 (128k - 8,192 output reservation)
- Threshold: 106,808 (effective - 13k buffer)
At 90k usage:
  - Old: (106,808 - 90k) / 106,808 ≈ 16%
  - Expected: (128,000 - 90k) / 128,000 ≈ 30%

Fix: Change percentLeft calculation to use raw context window from
getContextWindowForModel() as denominator, while keeping threshold-based
warnings/triggers unchanged. This makes the displayed percentage show
remaining capacity relative to the model's full context size.

Impact:
- UI now shows correct % of total context remaining
- Auto-compaction trigger point unchanged (still ~90% of effective window)
- All other threshold calculations unaffected

Testing:
- Manual verification: DeepSeek-chat at 90k tokens shows 30% remaining (was 16%)
- Manual verification: Threshold still triggers at ~106k tokens
- Build succeeds: npm run build
- No breaking changes: Callers only depend on percentLeft for display; threshold logic unchanged

Fixes the user-reported discrepancy for DeepSeek and other OpenAI-compatible models.
2026-04-19 08:55:41 +08:00
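The before/after arithmetic from the message above can be reproduced directly (the helper name is illustrative; the real logic lives in `calculateTokenWarningState()`):

```typescript
// Percentage of context remaining, relative to a chosen denominator.
function percentLeft(tokenUsage: number, denominator: number): number {
  return Math.max(0, Math.round(((denominator - tokenUsage) / denominator) * 100))
}

const rawContext = 128_000 // deepseek-chat raw context window
const threshold = 106_808 // effective window (119,808) minus the 13k buffer

const oldDisplay = percentLeft(90_000, threshold) // ≈ 16, what users saw
const newDisplay = percentLeft(90_000, rawContext) // ≈ 30, what users expected
```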
Kagura
002a8f1f6d fix(mcp): sync required array with properties in tool schemas (#754)
* fix(mcp): sync required array with properties in tool schemas

MCP servers can emit schemas where the required array contains keys
not present in properties. This causes API 400 errors:
"Extra required key 'X' supplied."

- Add sanitizeSchemaRequired() to filter required arrays
- Apply it to MCP tool inputJSONSchema before sending to API
- Also fix filterSwarmFieldsFromSchema to update required after
  removing properties

Fixes #525

* test: add MCP schema required sanitization test
2026-04-19 06:44:25 +08:00
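A minimal sketch of the sanitizer described above, assuming a plain top-level JSON-Schema object (the project's `sanitizeSchemaRequired()` may handle nesting and more edge cases):

```typescript
type JsonSchema = {
  properties?: Record<string, unknown>
  required?: string[]
}

function sanitizeSchemaRequired(schema: JsonSchema): JsonSchema {
  if (!schema.required) return schema
  const known = new Set(Object.keys(schema.properties ?? {}))
  // Keep only required keys that actually exist in properties, avoiding
  // "Extra required key 'X' supplied." API 400 errors.
  return { ...schema, required: schema.required.filter(key => known.has(key)) }
}
```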
dhenuh
3d1979ff06 fix(help): prevent /help tab crash from undefined descriptions (#732)
- Guard formatDescriptionWithSource() so missing command descriptions become ''
- Harden truncate helpers to accept undefined text/path safely
- Add regression tests covering undefined input cases
2026-04-19 06:38:44 +08:00
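A toy version of the hardening described above: coalesce missing text to `''` before formatting or truncating (the helper below is hypothetical; the actual fix touches `formatDescriptionWithSource()` and the truncate helpers):

```typescript
// Guard pattern: undefined input degrades to '' instead of crashing.
function safeTruncate(text: string | undefined, max: number): string {
  const value = text ?? ''
  return value.length > max ? `${value.slice(0, max - 1)}…` : value
}
```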
lunamonke
b0d9fe7112 Provider loading fix (#623)
* add mistral and gemini provider type for profile provider field

* load latest locally selected

* env variables take precedence over json save

* add gemini context windows and fix gemini defaulting for env

* load on startup fix

* fix failing tests

* clarify test message

* fix variable mismatches

* fix failing test

* delete keys and set profile.apiKey for mistral and gemini

* switch model as well when switching provider

* set model when adding a new model
2026-04-18 01:46:20 +08:00
45 changed files with 1345 additions and 248 deletions


@@ -1,3 +1,3 @@
 {
-  ".": "0.4.0"
+  ".": "0.5.1"
 }


@@ -1,5 +1,32 @@
 # Changelog
+
+## [0.5.1](https://github.com/Gitlawb/openclaude/compare/v0.5.0...v0.5.1) (2026-04-20)
+
+### Bug Fixes
+
+* enforce Bash path constraints after sandbox allow ([#777](https://github.com/Gitlawb/openclaude/issues/777)) ([7002cb3](https://github.com/Gitlawb/openclaude/commit/7002cb302b78ea2a19da3f26226de24e2903fa1d))
+* enforce MCP OAuth callback state before errors ([#775](https://github.com/Gitlawb/openclaude/issues/775)) ([739b8d1](https://github.com/Gitlawb/openclaude/commit/739b8d1f40fde0e401a5cbd2b9a55d88bd5124ad))
+* require trusted approval for sandbox override ([#778](https://github.com/Gitlawb/openclaude/issues/778)) ([aab4890](https://github.com/Gitlawb/openclaude/commit/aab489055c53dd64369414116fe93226d2656273))
+
+## [0.5.0](https://github.com/Gitlawb/openclaude/compare/v0.4.0...v0.5.0) (2026-04-20)
+
+### Features
+
+* add OPENCLAUDE_DISABLE_STRICT_TOOLS env var to opt out of strict MCP tool schema normalization ([#770](https://github.com/Gitlawb/openclaude/issues/770)) ([e6e8d9a](https://github.com/Gitlawb/openclaude/commit/e6e8d9a24897e4c9ef08b72df20fabbf8ef27f38))
+* mask provider api key input ([#772](https://github.com/Gitlawb/openclaude/issues/772)) ([13e9f22](https://github.com/Gitlawb/openclaude/commit/13e9f22a83a2b0f85f557b1e12c9442ba61241e4))
+
+### Bug Fixes
+
+* allow provider recovery during startup ([#765](https://github.com/Gitlawb/openclaude/issues/765)) ([f828171](https://github.com/Gitlawb/openclaude/commit/f828171ef1ab94e2acf73a28a292799e4e26cc0d))
+* **api:** drop orphan tool results to satisfy strict role sequence ([#745](https://github.com/Gitlawb/openclaude/issues/745)) ([b786b76](https://github.com/Gitlawb/openclaude/commit/b786b765f01f392652eaf28ed3579a96b7260a53))
+* **help:** prevent /help tab crash from undefined descriptions ([#732](https://github.com/Gitlawb/openclaude/issues/732)) ([3d1979f](https://github.com/Gitlawb/openclaude/commit/3d1979ff066db32415e0c8321af916d81f5f2621))
+* **mcp:** sync required array with properties in tool schemas ([#754](https://github.com/Gitlawb/openclaude/issues/754)) ([002a8f1](https://github.com/Gitlawb/openclaude/commit/002a8f1f6de2fcfc917165d828501d3047bad61f))
+* remove cached mcpClient in diagnostic tracking to prevent stale references ([#727](https://github.com/Gitlawb/openclaude/issues/727)) ([2c98be7](https://github.com/Gitlawb/openclaude/commit/2c98be700274a4241963b5f43530bf3bd8f8963f))
+* use raw context window for auto-compact percentage display ([#748](https://github.com/Gitlawb/openclaude/issues/748)) ([55c5f26](https://github.com/Gitlawb/openclaude/commit/55c5f262a9a5a8be0aa9ae8dc6c7dafc465eb2c6))
+
 ## [0.4.0](https://github.com/Gitlawb/openclaude/compare/v0.3.0...v0.4.0) (2026-04-17)


@@ -331,7 +331,8 @@ For larger changes, open an issue first so the scope is clear before implementat
 - `bun run build`
 - `bun run test:coverage`
 - `bun run smoke`
-- focused `bun test ...` runs for touched areas
+- focused `bun test ...` runs for files and flows you changed
 ## Disclaimer


@@ -1,6 +1,6 @@
 {
   "name": "@gitlawb/openclaude",
-  "version": "0.4.0",
+  "version": "0.5.1",
   "description": "Claude Code opened to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models",
   "type": "module",
   "bin": {

src/commands.test.ts (new file, +30 lines)

@@ -0,0 +1,30 @@
import { formatDescriptionWithSource } from './commands.js'
describe('formatDescriptionWithSource', () => {
test('returns empty text for prompt commands missing a description', () => {
const command = {
name: 'example',
type: 'prompt',
source: 'builtin',
description: undefined,
} as any
expect(formatDescriptionWithSource(command)).toBe('')
})
test('formats plugin commands with missing description safely', () => {
const command = {
name: 'example',
type: 'prompt',
source: 'plugin',
description: undefined,
pluginInfo: {
pluginManifest: {
name: 'MyPlugin',
},
},
} as any
expect(formatDescriptionWithSource(command)).toBe('(MyPlugin) ')
})
})


@@ -740,23 +740,23 @@ export function getCommand(commandName: string, commands: Command[]): Command {
  */
 export function formatDescriptionWithSource(cmd: Command): string {
   if (cmd.type !== 'prompt') {
-    return cmd.description
+    return cmd.description ?? ''
   }
   if (cmd.kind === 'workflow') {
-    return `${cmd.description} (workflow)`
+    return `${cmd.description ?? ''} (workflow)`
   }
   if (cmd.source === 'plugin') {
     const pluginName = cmd.pluginInfo?.pluginManifest.name
     if (pluginName) {
-      return `(${pluginName}) ${cmd.description}`
+      return `(${pluginName}) ${cmd.description ?? ''}`
     }
-    return `${cmd.description} (plugin)`
+    return `${cmd.description ?? ''} (plugin)`
   }
   if (cmd.source === 'builtin' || cmd.source === 'mcp') {
-    return cmd.description
+    return cmd.description ?? ''
   }
   if (cmd.source === 'bundled') {


@@ -401,7 +401,7 @@ test('buildCodexProfileEnv derives oauth source from secure storage when no expl
   })
 })

-test('applySavedProfileToCurrentSession switches the current env to the saved Codex profile', async () => {
+test('explicitly declared env takes precedence over applySavedProfileToCurrentSession', async () => {
   // @ts-expect-error cache-busting query string for Bun module mocks
   const { applySavedProfileToCurrentSession } = await import(
     '../../utils/providerProfile.js?apply-saved-profile-codex'
@@ -430,18 +430,18 @@ test('applySavedProfileToCurrentSession switches the current env to the saved Co
   expect(warning).toBeNull()
   expect(processEnv.CLAUDE_CODE_USE_OPENAI).toBe('1')
-  expect(processEnv.OPENAI_MODEL).toBe('codexplan')
+  expect(processEnv.OPENAI_MODEL).toBe('gpt-4o')
   expect(processEnv.OPENAI_BASE_URL).toBe(
-    'https://chatgpt.com/backend-api/codex',
+    "https://api.openai.com/v1",
   )
-  expect(processEnv.CODEX_API_KEY).toBe('codex-live')
-  expect(processEnv.CHATGPT_ACCOUNT_ID).toBe('acct_codex')
-  expect(processEnv.OPENAI_API_KEY).toBeUndefined()
+  expect(processEnv.CODEX_API_KEY).toBeUndefined()
+  expect(processEnv.CHATGPT_ACCOUNT_ID).toBeUndefined()
+  expect(processEnv.OPENAI_API_KEY).toBe("sk-openai")
   expect(processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED).toBeUndefined()
   expect(processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID).toBeUndefined()
 })

-test('applySavedProfileToCurrentSession ignores stale Codex env overrides for OAuth-backed profiles', async () => {
+test('explicitly declared env takes precedence over applySavedProfileToCurrentSession', async () => {
   // @ts-expect-error cache-busting query string for Bun module mocks
   const { applySavedProfileToCurrentSession } = await import(
     '../../utils/providerProfile.js?apply-saved-profile-codex-oauth'
@@ -465,13 +465,13 @@ test('applySavedProfileToCurrentSession ignores stale Codex env overrides for OA
     processEnv,
   })
-  expect(warning).toBeNull()
-  expect(processEnv.OPENAI_MODEL).toBe('codexplan')
+  expect(warning).not.toBeUndefined()
+  expect(processEnv.OPENAI_MODEL).toBe('gpt-4o')
   expect(processEnv.OPENAI_BASE_URL).toBe(
-    'https://chatgpt.com/backend-api/codex',
+    "https://api.openai.com/v1",
   )
-  expect(processEnv.CODEX_API_KEY).toBeUndefined()
-  expect(processEnv.CHATGPT_ACCOUNT_ID).not.toBe('acct_stale')
+  expect(processEnv.CODEX_API_KEY).toBe("stale-codex-key")
+  expect(processEnv.CHATGPT_ACCOUNT_ID).toBe('acct_stale')
   expect(processEnv.CHATGPT_ACCOUNT_ID).toBeTruthy()
 })
@@ -487,8 +487,8 @@ test('buildCurrentProviderSummary redacts poisoned model and endpoint values', (
   })
   expect(summary.providerLabel).toBe('OpenAI-compatible')
-  expect(summary.modelLabel).toBe('sk-...5678')
-  expect(summary.endpointLabel).toBe('sk-...5678')
+  expect(summary.modelLabel).toBe('sk-...678')
+  expect(summary.endpointLabel).toBe('sk-...678')
 })

 test('buildCurrentProviderSummary labels generic local openai-compatible providers', () => {


@@ -3,6 +3,7 @@ import * as React from 'react'
 import { DEFAULT_CODEX_BASE_URL } from '../services/api/providerConfig.js'
 import { Box, Text } from '../ink.js'
 import { useKeybinding } from '../keybindings/useKeybinding.js'
+import { useSetAppState } from '../state/AppState.js'
 import type { ProviderProfile } from '../utils/config.js'
 import {
   clearCodexCredentials,
@@ -581,6 +582,11 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
       return
     }
+    setAppState(prev => ({
+      ...prev,
+      mainLoopModel: GITHUB_PROVIDER_DEFAULT_MODEL,
+      mainLoopModelForSession: null,
+    }))
     refreshProfiles()
     setAppState(prev => ({
       ...prev,
@@ -609,6 +615,11 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
     }))
     providerLabel = active.name
+    setAppState(prev => ({
+      ...prev,
+      mainLoopModel: active.model,
+      mainLoopModelForSession: null,
+    }))
     const settingsOverrideError =
       clearStartupProviderOverrideFromUserSettings()
     const isActiveCodexOAuth = isCodexOAuthProfile(
@@ -801,6 +812,13 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
     }
     const isActiveSavedProfile = getActiveProviderProfile()?.id === saved.id
+    if (isActiveSavedProfile) {
+      setAppState(prev => ({
+        ...prev,
+        mainLoopModel: saved.model,
+        mainLoopModelForSession: null,
+      }))
+    }
     const settingsOverrideError = isActiveSavedProfile
       ? clearStartupProviderOverrideFromUserSettings()
       : null
@@ -1132,6 +1150,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
           focus={true}
           showCursor={true}
           placeholder={`${currentStep.placeholder}${figures.ellipsis}`}
+          mask={currentStepKey === 'apiKey' ? '*' : undefined}
           columns={80}
           cursorOffset={cursorOffset}
           onChangeCursorOffset={setCursorOffset}


@@ -6,6 +6,7 @@ import stripAnsi from 'strip-ansi'
 import { createRoot } from '../ink.js'
 import { AppStateProvider } from '../state/AppState.js'
+import { maskTextWithVisibleEdges } from '../utils/Cursor.js'
 import TextInput from './TextInput.js'
 import VimTextInput from './VimTextInput.js'
@@ -199,6 +200,13 @@ test('TextInput renders typed characters before delayed parent value commits', a
   expect(output).not.toContain('Type here...')
 })

+test('maskTextWithVisibleEdges preserves only the first and last three chars', () => {
+  expect(maskTextWithVisibleEdges('sk-secret-12345678', '*')).toBe(
+    'sk-************678',
+  )
+  expect(maskTextWithVisibleEdges('abcdef', '*')).toBe('******')
+})
+
 test('VimTextInput preserves rapid typed characters before delayed parent value commits', async () => {
   const { stdout, stdin, getOutput } = createTestStreams()
   const root = await createRoot({


@@ -5,7 +5,7 @@ import {
 } from '../utils/providerProfile.js'
 import {
   getProviderValidationError,
-  validateProviderEnvOrExit,
+  validateProviderEnvForStartupOrExit,
 } from '../utils/providerValidation.js'

 // OpenClaude: polyfill globalThis.File for Node < 20.
@@ -132,7 +132,7 @@ async function main(): Promise<void> {
     hydrateGithubModelsTokenFromSecureStorage()
   }
-  await validateProviderEnvOrExit()
+  await validateProviderEnvForStartupOrExit()

   // Print the gradient startup screen before the Ink UI loads
   const { printStartupScreen } = await import('../components/StartupScreen.js')


@@ -114,8 +114,8 @@ export const SandboxSettingsSchema = lazySchema(() =>
       .boolean()
       .optional()
       .describe(
-        'Allow commands to run outside the sandbox via the dangerouslyDisableSandbox parameter. ' +
-        'When false, the dangerouslyDisableSandbox parameter is completely ignored and all commands must run sandboxed. ' +
+        'Allow trusted, user-initiated commands to run outside the sandbox. ' +
+        'When false, sandbox override requests are ignored and all commands must run sandboxed. ' +
         'Default: true.',
       ),
     network: SandboxNetworkConfigSchema(),


@@ -2856,3 +2856,91 @@ test('classifies chat-completions endpoint 404 failures with endpoint_not_found
    }),
  ).rejects.toThrow('openai_category=endpoint_not_found')
})
test('preserves valid tool_result and drops orphan tool_result', async () => {
let requestBody: Record<string, unknown> | undefined
globalThis.fetch = (async (_input, init) => {
requestBody = JSON.parse(String(init?.body))
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'mistral-large-latest',
choices: [
{
message: {
role: 'assistant',
content: 'done',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 12,
completion_tokens: 4,
total_tokens: 16,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create({
model: 'mistral-large-latest',
system: 'test system',
messages: [
{ role: 'user', content: 'Search and then I will interrupt' },
{
role: 'assistant',
content: [
{
type: 'tool_use',
id: 'valid_call_1',
name: 'Search',
input: { query: 'openclaude' },
},
],
},
{
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'valid_call_1',
content: 'Found it!',
},
{
type: 'tool_result',
tool_use_id: 'orphan_call_2',
content: 'Interrupted result',
},
{
role: 'user',
content: 'What happened?',
}
],
},
],
max_tokens: 64,
stream: false,
})
const messages = requestBody?.messages as Array<Record<string, unknown>>
// Should have: system, user, assistant (tool_use), tool (valid_call_1), user
// Should NOT have: tool (orphan_call_2)
const toolMessages = messages.filter(m => m.role === 'tool')
expect(toolMessages.length).toBe(1)
expect(toolMessages[0].tool_call_id).toBe('valid_call_1')
const orphanMessage = toolMessages.find(m => m.tool_call_id === 'orphan_call_2')
expect(orphanMessage).toBeUndefined()
})


@@ -349,6 +349,7 @@ function convertMessages(
   system: unknown,
 ): OpenAIMessage[] {
   const result: OpenAIMessage[] = []
+  const knownToolCallIds = new Set<string>()

   // System message first
   const sysText = convertSystemPrompt(system)
@@ -368,13 +369,21 @@
     const toolResults = content.filter((b: { type?: string }) => b.type === 'tool_result')
     const otherContent = content.filter((b: { type?: string }) => b.type !== 'tool_result')

-    // Emit tool results as tool messages
+    // Emit tool results as tool messages, but ONLY if we have a matching tool_use ID.
+    // Mistral/OpenAI strictly require tool messages to follow an assistant message with tool_calls.
+    // If the user interrupted (ESC) and a synthetic tool_result was generated without a recorded tool_use,
+    // emitting it here would cause a "role must alternate" or "unexpected role" error.
     for (const tr of toolResults) {
+      const id = tr.tool_use_id ?? 'unknown'
+      if (knownToolCallIds.has(id)) {
         result.push({
           role: 'tool',
-          tool_call_id: tr.tool_use_id ?? 'unknown',
+          tool_call_id: id,
           content: convertToolResultContent(tr.content, tr.is_error),
         })
+      } else {
+        logForDebugging(`Dropping orphan tool_result for ID: ${id} to prevent API error`)
+      }
     }

     // Emit remaining user content
@@ -415,9 +424,11 @@
       input?: unknown
       extra_content?: Record<string, unknown>
       signature?: string
-    }, index) => {
+    }) => {
+      const id = tu.id ?? `call_${crypto.randomUUID().replace(/-/g, '')}`
+      knownToolCallIds.add(id)
       const toolCall: NonNullable<OpenAIMessage['tool_calls']>[number] = {
-        id: tu.id ?? `call_${crypto.randomUUID().replace(/-/g, '')}`,
+        id,
         type: 'function' as const,
         function: {
           name: tu.name ?? 'unknown',
@@ -442,7 +453,6 @@
       // Merge into existing google-specific metadata if present
       const existingGoogle = (toolCall.extra_content?.google as Record<string, unknown>) ?? {}
-
       toolCall.extra_content = {
         ...toolCall.extra_content,
         google: {
@@ -597,7 +607,10 @@
       function: {
         name: t.name,
         description: t.description ?? '',
-        parameters: normalizeSchemaForOpenAI(schema, !isGemini),
+        parameters: normalizeSchemaForOpenAI(
+          schema,
+          !isGemini && !isEnvTruthy(process.env.OPENCLAUDE_DISABLE_STRICT_TOOLS),
+        ),
       },
     }
   })


@@ -14,6 +14,7 @@ import {
   asTrimmedString,
   parseChatgptAccountId,
 } from './codexOAuthShared.js'
+import { DEFAULT_GEMINI_BASE_URL } from 'src/utils/providerProfile.js'

 export const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'
 export const DEFAULT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex'
@@ -381,11 +382,15 @@
 }): ResolvedProviderRequest {
   const isGithubMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
   const isMistralMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
+  const isGeminiMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
   const requestedModel =
     options?.model?.trim() ||
     (isMistralMode
       ? process.env.MISTRAL_MODEL?.trim()
       : process.env.OPENAI_MODEL?.trim()) ||
+    (isGeminiMode
+      ? process.env.GEMINI_MODEL?.trim()
+      : process.env.OPENAI_MODEL?.trim()) ||
     options?.fallbackModel?.trim() ||
     (isGithubMode ? 'github:copilot' : 'gpt-4o')
   const descriptor = parseModelDescriptor(requestedModel)
@@ -396,14 +401,25 @@
     'MISTRAL_BASE_URL',
   )
+  const normalizedGeminiEnvBaseUrl = asNamedEnvUrl(
+    process.env.GEMINI_BASE_URL,
+    'GEMINI_BASE_URL',
+  )
   const primaryEnvBaseUrl = isMistralMode
     ? normalizedMistralEnvBaseUrl
+    : isGeminiMode
+      ? normalizedGeminiEnvBaseUrl
     : asNamedEnvUrl(process.env.OPENAI_BASE_URL, 'OPENAI_BASE_URL')
   const fallbackEnvBaseUrl = isMistralMode
     ? (primaryEnvBaseUrl === undefined
       ? asNamedEnvUrl(process.env.OPENAI_API_BASE, 'OPENAI_API_BASE') ?? DEFAULT_MISTRAL_BASE_URL
      : undefined)
+    : isGeminiMode
+      ? (primaryEnvBaseUrl === undefined
+        ? asNamedEnvUrl(process.env.OPENAI_API_BASE, 'OPENAI_API_BASE') ?? DEFAULT_GEMINI_BASE_URL
+        : undefined)
     : (primaryEnvBaseUrl === undefined
       ? asNamedEnvUrl(process.env.OPENAI_API_BASE, 'OPENAI_API_BASE')
       : undefined)


@@ -110,9 +110,14 @@ export function calculateTokenWarningState(
     ? autoCompactThreshold
     : getEffectiveContextWindowSize(model)

+  // Use the raw context window (without output reservation) for the percentage
+  // display, so users see remaining context relative to the model's full capacity.
+  // The threshold (which subtracts buffer) should only affect when we warn/compact,
+  // not what percentage we display.
+  const rawContextWindow = getContextWindowForModel(model, getSdkBetas())
   const percentLeft = Math.max(
     0,
-    Math.round(((threshold - tokenUsage) / threshold) * 100),
+    Math.round(((rawContextWindow - tokenUsage) / rawContextWindow) * 100),
   )

   const warningThreshold = threshold - WARNING_THRESHOLD_BUFFER_TOKENS


@@ -0,0 +1,152 @@
import { describe, test, expect, beforeEach, afterEach } from 'bun:test'
import { DiagnosticTrackingService } from './diagnosticTracking.js'
import type { MCPServerConnection } from './mcp/types.js'
// Mock the IDE client utility
const mockGetConnectedIdeClient = (clients: MCPServerConnection[]) =>
clients.find(client => client.type === 'connected')
describe('DiagnosticTrackingService', () => {
let service: DiagnosticTrackingService
let mockClients: MCPServerConnection[]
let mockIdeClient: MCPServerConnection
beforeEach(() => {
// Get fresh instance for each test
service = DiagnosticTrackingService.getInstance()
// Setup mock clients
mockIdeClient = {
type: 'connected',
name: 'test-ide',
capabilities: {},
config: {},
cleanup: async () => {},
client: {
request: async () => ({}),
setNotificationHandler: () => {},
close: async () => {},
},
} as unknown as MCPServerConnection
mockClients = [
{ type: 'disconnected', name: 'test-disconnected', config: {} } as unknown as MCPServerConnection,
mockIdeClient,
]
})
afterEach(async () => {
await service.shutdown()
})
describe('handleQueryStart', () => {
test('should store MCP clients and initialize service', async () => {
await service.handleQueryStart(mockClients)
// Service should be initialized
expect(service).toBeDefined()
// Should be able to get IDE client from stored clients
// We can't directly test private methods, but we can test the behavior
const result = await service.getNewDiagnosticsCompat()
expect(result).toEqual([]) // Should return empty when no diagnostics
})
test('should reset service if already initialized', async () => {
// Initialize first
await service.handleQueryStart(mockClients)
// Call again - should reset without error
await service.handleQueryStart(mockClients)
// Should still work
const result = await service.getNewDiagnosticsCompat()
expect(result).toEqual([])
})
})
describe('backward-compatible methods', () => {
beforeEach(async () => {
await service.handleQueryStart(mockClients)
})
test('beforeFileEditedCompat should work without explicit client', async () => {
// Should not throw error and should return undefined when no IDE client
const result = await service.beforeFileEditedCompat('/test/file.ts')
expect(result).toBeUndefined()
})
test('getNewDiagnosticsCompat should work without explicit client', async () => {
const result = await service.getNewDiagnosticsCompat()
expect(Array.isArray(result)).toBe(true)
})
test('ensureFileOpenedCompat should work without explicit client', async () => {
const result = await service.ensureFileOpenedCompat('/test/file.ts')
expect(result).toBeUndefined()
})
})
describe('new explicit client methods', () => {
test('beforeFileEdited should require client parameter', async () => {
// Should not work without client
const result = await service.beforeFileEdited('/test/file.ts', undefined as any)
expect(result).toBeUndefined()
})
test('getNewDiagnostics should require client parameter', async () => {
// Should not work without client
const result = await service.getNewDiagnostics(undefined as any)
expect(result).toEqual([])
})
test('ensureFileOpened should require client parameter', async () => {
// Should not work without client
const result = await service.ensureFileOpened('/test/file.ts', undefined as any)
expect(result).toBeUndefined()
})
})
describe('shutdown', () => {
test('should clear stored clients on shutdown', async () => {
await service.handleQueryStart(mockClients)
// Verify service is working
const beforeResult = await service.getNewDiagnosticsCompat()
expect(Array.isArray(beforeResult)).toBe(true)
// Shutdown
await service.shutdown()
// After shutdown, compat methods should return empty results
const afterResult = await service.getNewDiagnosticsCompat()
expect(afterResult).toEqual([])
})
})
describe('integration with existing functionality', () => {
test('should maintain existing diagnostic tracking behavior', async () => {
await service.handleQueryStart(mockClients)
// Test baseline tracking
await service.beforeFileEditedCompat('/test/file.ts')
// Test getting new diagnostics (should be empty since no IDE client is actually connected)
const newDiagnostics = await service.getNewDiagnosticsCompat()
expect(Array.isArray(newDiagnostics)).toBe(true)
})
test('should handle missing IDE client gracefully', async () => {
// Test with no connected clients
const noIdeClients = [
{ type: 'disconnected', name: 'test-disconnected-2', config: {} } as unknown as MCPServerConnection,
]
await service.handleQueryStart(noIdeClients)
// Should handle gracefully
const result = await service.getNewDiagnosticsCompat()
expect(result).toEqual([])
})
})
})

View File

@@ -32,7 +32,7 @@ export class DiagnosticTrackingService {
   private baseline: Map<string, Diagnostic[]> = new Map()
   private initialized = false
-  private mcpClient: MCPServerConnection | undefined
+  private currentMcpClients: MCPServerConnection[] = []
   // Track when files were last processed/fetched
   private lastProcessedTimestamps: Map<string, number> = new Map()
@@ -48,18 +48,17 @@ export class DiagnosticTrackingService {
     return DiagnosticTrackingService.instance
   }
-  initialize(mcpClient: MCPServerConnection) {
+  initialize() {
     if (this.initialized) {
       return
     }
-    // TODO: Do not cache the connected mcpClient since it can change.
-    this.mcpClient = mcpClient
     this.initialized = true
   }
   async shutdown(): Promise<void> {
     this.initialized = false
+    this.currentMcpClients = []
     this.baseline.clear()
     this.rightFileDiagnosticsState.clear()
     this.lastProcessedTimestamps.clear()
@@ -75,6 +74,46 @@ export class DiagnosticTrackingService {
     this.lastProcessedTimestamps.clear()
   }
+  /**
+   * Get the current IDE client from stored MCP clients
+   */
+  private getCurrentIdeClient(): MCPServerConnection | undefined {
+    return getConnectedIdeClient(this.currentMcpClients)
+  }
+  /**
+   * Backward-compatible method that uses stored IDE client
+   */
+  async beforeFileEditedCompat(filePath: string): Promise<void> {
+    const ideClient = this.getCurrentIdeClient()
+    if (!ideClient) {
+      return
+    }
+    return await this.beforeFileEdited(filePath, ideClient)
+  }
+  /**
+   * Backward-compatible method that uses stored IDE client
+   */
+  async getNewDiagnosticsCompat(): Promise<DiagnosticFile[]> {
+    const ideClient = this.getCurrentIdeClient()
+    if (!ideClient) {
+      return []
+    }
+    return await this.getNewDiagnostics(ideClient)
+  }
+  /**
+   * Backward-compatible method that uses stored IDE client
+   */
+  async ensureFileOpenedCompat(fileUri: string): Promise<void> {
+    const ideClient = this.getCurrentIdeClient()
+    if (!ideClient) {
+      return
+    }
+    return await this.ensureFileOpened(fileUri, ideClient)
+  }
   private normalizeFileUri(fileUri: string): string {
     // Remove our protocol prefixes
     const protocolPrefixes = [
@@ -100,11 +139,11 @@ export class DiagnosticTrackingService {
    * Ensure a file is opened in the IDE before processing.
    * This is important for language services like diagnostics to work properly.
    */
-  async ensureFileOpened(fileUri: string): Promise<void> {
+  async ensureFileOpened(fileUri: string, mcpClient: MCPServerConnection): Promise<void> {
     if (
       !this.initialized ||
-      !this.mcpClient ||
-      this.mcpClient.type !== 'connected'
+      !mcpClient ||
+      mcpClient.type !== 'connected'
     ) {
       return
     }
@@ -121,7 +160,7 @@ export class DiagnosticTrackingService {
           selectToEndOfLine: false,
           makeFrontmost: false,
         },
-        this.mcpClient,
+        mcpClient,
       )
     } catch (error) {
       logError(error as Error)
@@ -132,11 +171,11 @@ export class DiagnosticTrackingService {
    * Capture baseline diagnostics for a specific file before editing.
    * This is called before editing a file to ensure we have a baseline to compare against.
    */
-  async beforeFileEdited(filePath: string): Promise<void> {
+  async beforeFileEdited(filePath: string, mcpClient: MCPServerConnection): Promise<void> {
     if (
       !this.initialized ||
-      !this.mcpClient ||
-      this.mcpClient.type !== 'connected'
+      !mcpClient ||
+      mcpClient.type !== 'connected'
     ) {
       return
     }
@@ -147,7 +186,7 @@ export class DiagnosticTrackingService {
       const result = await callIdeRpc(
        'getDiagnostics',
        { uri: `file://${filePath}` },
-        this.mcpClient,
+        mcpClient,
      )
      const diagnosticFile = this.parseDiagnosticResult(result)[0]
      if (diagnosticFile) {
@@ -185,11 +224,11 @@ export class DiagnosticTrackingService {
    * Get new diagnostics from file://, _claude_fs_right, and _claude_fs_ URIs that aren't in the baseline.
    * Only processes diagnostics for files that have been edited.
    */
-  async getNewDiagnostics(): Promise<DiagnosticFile[]> {
+  async getNewDiagnostics(mcpClient: MCPServerConnection): Promise<DiagnosticFile[]> {
     if (
       !this.initialized ||
-      !this.mcpClient ||
-      this.mcpClient.type !== 'connected'
+      !mcpClient ||
+      mcpClient.type !== 'connected'
     ) {
       return []
     }
@@ -200,7 +239,7 @@ export class DiagnosticTrackingService {
      const result = await callIdeRpc(
        'getDiagnostics',
        {}, // Empty params fetches all diagnostics
-        this.mcpClient,
+        mcpClient,
      )
      allDiagnosticFiles = this.parseDiagnosticResult(result)
    } catch (_error) {
@@ -328,13 +367,16 @@ export class DiagnosticTrackingService {
   * @param shouldQuery Whether a query is actually being made (not just a command)
   */
  async handleQueryStart(clients: MCPServerConnection[]): Promise<void> {
+    // Store the current MCP clients for later use
+    this.currentMcpClients = clients
    // Only proceed if we should query and have clients
    if (!this.initialized) {
      // Find the connected IDE client
      const connectedIdeClient = getConnectedIdeClient(clients)
      if (connectedIdeClient) {
-        this.initialize(connectedIdeClient)
+        this.initialize()
      }
    } else {
      // Reset diagnostic tracking for new query loops

View File

@@ -0,0 +1,61 @@
import assert from 'node:assert/strict'
import test from 'node:test'
import { validateOAuthCallbackParams } from './auth.js'
test('OAuth callback rejects error parameters before state validation can be bypassed', () => {
const result = validateOAuthCallbackParams(
{
error: 'access_denied',
error_description: 'denied by provider',
},
'expected-state',
)
assert.deepEqual(result, { type: 'state_mismatch' })
})
test('OAuth callback accepts provider errors only when state matches', () => {
const result = validateOAuthCallbackParams(
{
state: 'expected-state',
error: 'access_denied',
error_description: 'denied by provider',
error_uri: 'https://example.test/error',
},
'expected-state',
)
assert.deepEqual(result, {
type: 'error',
error: 'access_denied',
errorDescription: 'denied by provider',
errorUri: 'https://example.test/error',
message:
'OAuth error: access_denied - denied by provider (See: https://example.test/error)',
})
})
test('OAuth callback accepts authorization codes only when state matches', () => {
assert.deepEqual(
validateOAuthCallbackParams(
{
state: 'expected-state',
code: 'auth-code',
},
'expected-state',
),
{ type: 'code', code: 'auth-code' },
)
assert.deepEqual(
validateOAuthCallbackParams(
{
state: 'wrong-state',
code: 'auth-code',
},
'expected-state',
),
{ type: 'state_mismatch' },
)
})

View File

@@ -124,6 +124,74 @@ function redactSensitiveUrlParams(url: string): string {
  }
}
type OAuthCallbackParamValue = string | string[] | null | undefined
type OAuthCallbackValidationResult =
| { type: 'code'; code: string }
| {
type: 'error'
error: string
errorDescription: string
errorUri: string
message: string
}
| { type: 'missing_result' }
| { type: 'state_mismatch' }
function getFirstOAuthCallbackParam(
value: OAuthCallbackParamValue,
): string | undefined {
if (Array.isArray(value)) {
return value.find(item => item.length > 0)
}
return value && value.length > 0 ? value : undefined
}
export function validateOAuthCallbackParams(
params: {
code?: OAuthCallbackParamValue
state?: OAuthCallbackParamValue
error?: OAuthCallbackParamValue
error_description?: OAuthCallbackParamValue
error_uri?: OAuthCallbackParamValue
},
oauthState: string,
): OAuthCallbackValidationResult {
const code = getFirstOAuthCallbackParam(params.code)
const state = getFirstOAuthCallbackParam(params.state)
const error = getFirstOAuthCallbackParam(params.error)
const errorDescription =
getFirstOAuthCallbackParam(params.error_description) ?? ''
const errorUri = getFirstOAuthCallbackParam(params.error_uri) ?? ''
if (state !== oauthState) {
return { type: 'state_mismatch' }
}
if (error) {
let message = `OAuth error: ${error}`
if (errorDescription) {
message += ` - ${errorDescription}`
}
if (errorUri) {
message += ` (See: ${errorUri})`
}
return {
type: 'error',
error,
errorDescription,
errorUri,
message,
}
}
if (code) {
return { type: 'code', code }
}
return { type: 'missing_result' }
}
/**
 * Some OAuth servers (notably Slack) return HTTP 200 for all responses,
 * signaling errors via the JSON body instead. The SDK's executeTokenRequest
@@ -1058,30 +1126,31 @@ export async function performMCPOAuthFlow(
     options.onWaitingForCallback((callbackUrl: string) => {
       try {
         const parsed = new URL(callbackUrl)
-        const code = parsed.searchParams.get('code')
-        const state = parsed.searchParams.get('state')
-        const error = parsed.searchParams.get('error')
-
-        if (error) {
-          const errorDescription =
-            parsed.searchParams.get('error_description') || ''
-          cleanup()
-          rejectOnce(
-            new Error(`OAuth error: ${error} - ${errorDescription}`),
-          )
-          return
-        }
-        if (!code) {
-          // Not a valid callback URL, ignore so the user can try again
-          return
-        }
-        if (state !== oauthState) {
-          cleanup()
-          rejectOnce(
-            new Error('OAuth state mismatch - possible CSRF attack'),
-          )
-          return
-        }
+        const result = validateOAuthCallbackParams(
+          {
+            code: parsed.searchParams.get('code'),
+            state: parsed.searchParams.get('state'),
+            error: parsed.searchParams.get('error'),
+            error_description:
+              parsed.searchParams.get('error_description'),
+            error_uri: parsed.searchParams.get('error_uri'),
+          },
+          oauthState,
+        )
+        if (result.type === 'state_mismatch') {
+          // Ignore so a stray or malicious URL cannot cancel an active flow.
+          return
+        }
+        if (result.type === 'missing_result') {
+          // Not a valid callback URL, ignore so the user can try again.
+          return
+        }
+        if (result.type === 'error') {
+          cleanup()
+          rejectOnce(new Error(result.message))
+          return
+        }
@@ -1090,7 +1159,7 @@ export async function performMCPOAuthFlow(
           `Received auth code via manual callback URL`,
         )
         cleanup()
-        resolveOnce(code)
+        resolveOnce(result.code)
       } catch {
         // Invalid URL, ignore so the user can try again
       }
@@ -1101,53 +1170,49 @@ export async function performMCPOAuthFlow(
       const parsedUrl = parse(req.url || '', true)
       if (parsedUrl.pathname === '/callback') {
-        const code = parsedUrl.query.code as string
-        const state = parsedUrl.query.state as string
-        const error = parsedUrl.query.error
-        const errorDescription = parsedUrl.query.error_description as string
-        const errorUri = parsedUrl.query.error_uri as string
+        const result = validateOAuthCallbackParams(
+          parsedUrl.query,
+          oauthState,
+        )
         // Validate OAuth state to prevent CSRF attacks
-        if (!error && state !== oauthState) {
+        if (result.type === 'state_mismatch') {
           res.writeHead(400, { 'Content-Type': 'text/html' })
           res.end(
             `<h1>Authentication Error</h1><p>Invalid state parameter. Please try again.</p><p>You can close this window.</p>`,
           )
-          cleanup()
-          rejectOnce(new Error('OAuth state mismatch - possible CSRF attack'))
           return
         }
-        if (error) {
+        if (result.type === 'missing_result') {
+          res.writeHead(400, { 'Content-Type': 'text/html' })
+          res.end(
+            `<h1>Authentication Error</h1><p>Missing OAuth result. Please try again.</p><p>You can close this window.</p>`,
+          )
+          return
+        }
+        if (result.type === 'error') {
           res.writeHead(200, { 'Content-Type': 'text/html' })
           // Sanitize error messages to prevent XSS
-          const sanitizedError = xss(String(error))
-          const sanitizedErrorDescription = errorDescription
-            ? xss(String(errorDescription))
+          const sanitizedError = xss(result.error)
+          const sanitizedErrorDescription = result.errorDescription
+            ? xss(result.errorDescription)
             : ''
           res.end(
             `<h1>Authentication Error</h1><p>${sanitizedError}: ${sanitizedErrorDescription}</p><p>You can close this window.</p>`,
           )
           cleanup()
-          let errorMessage = `OAuth error: ${error}`
-          if (errorDescription) {
-            errorMessage += ` - ${errorDescription}`
-          }
-          if (errorUri) {
-            errorMessage += ` (See: ${errorUri})`
-          }
-          rejectOnce(new Error(errorMessage))
+          rejectOnce(new Error(result.message))
           return
         }
-        if (code) {
-          res.writeHead(200, { 'Content-Type': 'text/html' })
-          res.end(
-            `<h1>Authentication Successful</h1><p>You can close this window. Return to Claude Code.</p>`,
-          )
-          cleanup()
-          resolveOnce(code)
-        }
+        res.writeHead(200, { 'Content-Type': 'text/html' })
+        res.end(
+          `<h1>Authentication Successful</h1><p>You can close this window. Return to Claude Code.</p>`,
+        )
+        cleanup()
+        resolveOnce(result.code)
       }
     })
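One detail worth calling out: the `getFirstOAuthCallbackParam` helper above exists because Node's `querystring`-based parsing (the `parse(req.url, true)` path) yields `string | string[]` for repeated query keys. A standalone sketch of that coercion, under a hypothetical `firstParam` name (not the project's actual export):

```typescript
// Repeated query keys like ?code=a&code=b parse to arrays; a lone key
// parses to a string. Pick the first non-empty value in either case.
function firstParam(
  value: string | string[] | null | undefined,
): string | undefined {
  if (Array.isArray(value)) {
    return value.find(item => item.length > 0)
  }
  return value && value.length > 0 ? value : undefined
}

console.log(firstParam(['', 'auth-code'])) // 'auth-code'
console.log(firstParam('auth-code'))       // 'auth-code'
console.log(firstParam(''))                // undefined
```

Normalizing before validation means an attacker cannot smuggle a second `state` or `error` value past the check by repeating the parameter.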

View File

@@ -240,21 +240,28 @@ For commands that are harder to parse at a glance (piped commands, obscure flags
 - curl -s url | jq '.data[]' → "Fetch JSON from URL and extract data array elements"`),
   run_in_background: semanticBoolean(z.boolean().optional()).describe(`Set to true to run this command in the background. Use Read to read the output later.`),
   dangerouslyDisableSandbox: semanticBoolean(z.boolean().optional()).describe('Set this to true to dangerously override sandbox mode and run commands without sandboxing.'),
+  _dangerouslyDisableSandboxApproved: z.boolean().optional().describe('Internal: user-approved sandbox override'),
   _simulatedSedEdit: z.object({
     filePath: z.string(),
     newContent: z.string()
   }).optional().describe('Internal: pre-computed sed edit result from preview')
 }));
-// Always omit _simulatedSedEdit from the model-facing schema. It is an internal-only
-// field set by SedEditPermissionRequest after the user approves a sed edit preview.
-// Exposing it in the schema would let the model bypass permission checks and the
-// sandbox by pairing an innocuous command with an arbitrary file write.
+// Always omit internal-only fields from the model-facing schema.
+// _simulatedSedEdit is set by SedEditPermissionRequest after the user approves a
+// sed edit preview; exposing it would let the model bypass permission checks and
+// the sandbox by pairing an innocuous command with an arbitrary file write.
+// dangerouslyDisableSandbox is also omitted because sandbox escape must be tied
+// to trusted user/internal provenance, not model-controlled tool input.
 // Also conditionally remove run_in_background when background tasks are disabled.
 const inputSchema = lazySchema(() => isBackgroundTasksDisabled ? fullInputSchema().omit({
   run_in_background: true,
+  dangerouslyDisableSandbox: true,
+  _dangerouslyDisableSandboxApproved: true,
   _simulatedSedEdit: true
 }) : fullInputSchema().omit({
+  dangerouslyDisableSandbox: true,
+  _dangerouslyDisableSandboxApproved: true,
   _simulatedSedEdit: true
 }));
 type InputSchema = ReturnType<typeof inputSchema>;

View File

@@ -0,0 +1,59 @@
import { afterEach, expect, test } from 'bun:test'
import { getEmptyToolPermissionContext } from '../../Tool.js'
import { SandboxManager } from '../../utils/sandbox/sandbox-adapter.js'
import { bashToolHasPermission } from './bashPermissions.js'
const originalSandboxMethods = {
isSandboxingEnabled: SandboxManager.isSandboxingEnabled,
isAutoAllowBashIfSandboxedEnabled:
SandboxManager.isAutoAllowBashIfSandboxedEnabled,
areUnsandboxedCommandsAllowed: SandboxManager.areUnsandboxedCommandsAllowed,
getExcludedCommands: SandboxManager.getExcludedCommands,
}
afterEach(() => {
SandboxManager.isSandboxingEnabled =
originalSandboxMethods.isSandboxingEnabled
SandboxManager.isAutoAllowBashIfSandboxedEnabled =
originalSandboxMethods.isAutoAllowBashIfSandboxedEnabled
SandboxManager.areUnsandboxedCommandsAllowed =
originalSandboxMethods.areUnsandboxedCommandsAllowed
SandboxManager.getExcludedCommands = originalSandboxMethods.getExcludedCommands
})
function makeToolUseContext() {
const toolPermissionContext = getEmptyToolPermissionContext()
return {
abortController: new AbortController(),
options: {
isNonInteractiveSession: false,
},
getAppState() {
return {
toolPermissionContext,
}
},
} as never
}
test('sandbox auto-allow still enforces Bash path constraints', async () => {
;(globalThis as unknown as { MACRO: { VERSION: string } }).MACRO = {
VERSION: 'test',
}
SandboxManager.isSandboxingEnabled = () => true
SandboxManager.isAutoAllowBashIfSandboxedEnabled = () => true
SandboxManager.areUnsandboxedCommandsAllowed = () => true
SandboxManager.getExcludedCommands = () => []
const result = await bashToolHasPermission(
{ command: 'cat ../../../../../etc/passwd' },
makeToolUseContext(),
)
expect(result.behavior).toBe('ask')
expect(result.message).toContain('was blocked')
expect(result.message).toContain('/etc/passwd')
})

View File

@@ -1814,7 +1814,10 @@ export async function bashToolHasPermission(
       input,
       appState.toolPermissionContext,
     )
-    if (sandboxAutoAllowResult.behavior !== 'passthrough') {
+    if (
+      sandboxAutoAllowResult.behavior === 'deny' ||
+      sandboxAutoAllowResult.behavior === 'ask'
+    ) {
       return sandboxAutoAllowResult
     }
   }
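The behavior change in this hunk: previously any non-passthrough result (including a sandbox auto-allow) returned early and skipped the remaining path checks; now only `deny` and `ask` short-circuit. A reduced sketch of the control flow, with hypothetical names standing in for the real permission pipeline:

```typescript
type Behavior = 'allow' | 'deny' | 'ask' | 'passthrough'
type PermissionResult = { behavior: Behavior; message?: string }

// Only terminal negative results short-circuit; an auto-allow from the
// sandbox check still falls through to the later path-constraint checks.
function resolvePermission(
  sandboxResult: PermissionResult,
  pathCheck: () => PermissionResult,
): PermissionResult {
  if (sandboxResult.behavior === 'deny' || sandboxResult.behavior === 'ask') {
    return sandboxResult
  }
  return pathCheck()
}

const blockedPath: PermissionResult = { behavior: 'ask', message: 'path was blocked' }
// A sandbox auto-allow no longer masks a failing path check:
console.log(resolvePermission({ behavior: 'allow' }, () => blockedPath).behavior) // 'ask'
```

This matches the new test above, where `cat ../../../../../etc/passwd` yields `ask` even with auto-allow enabled.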

View File

@@ -179,9 +179,6 @@ function getSimpleSandboxSection(): string {
   const networkRestrictionConfig = SandboxManager.getNetworkRestrictionConfig()
   const allowUnixSockets = SandboxManager.getAllowUnixSockets()
   const ignoreViolations = SandboxManager.getIgnoreViolations()
-  const allowUnsandboxedCommands =
-    SandboxManager.areUnsandboxedCommandsAllowed()
   // Replace the per-UID temp dir literal (e.g. /private/tmp/claude-1001/) with
   // "$TMPDIR" so the prompt is identical across users — avoids busting the
   // cross-user global prompt cache. The sandbox already sets $TMPDIR at runtime.
@@ -225,38 +222,9 @@ function getSimpleSandboxSection(): string {
     )
   }
-  const sandboxOverrideItems: Array<string | string[]> =
-    allowUnsandboxedCommands
-      ? [
-          'You should always default to running commands within the sandbox. Do NOT attempt to set `dangerouslyDisableSandbox: true` unless:',
-          [
-            'The user *explicitly* asks you to bypass sandbox',
-            'A specific command just failed and you see evidence of sandbox restrictions causing the failure. Note that commands can fail for many reasons unrelated to the sandbox (missing files, wrong arguments, network issues, etc.).',
-          ],
-          'Evidence of sandbox-caused failures includes:',
-          [
-            '"Operation not permitted" errors for file/network operations',
-            'Access denied to specific paths outside allowed directories',
-            'Network connection failures to non-whitelisted hosts',
-            'Unix socket connection errors',
-          ],
-          'When you see evidence of sandbox-caused failure:',
-          [
-            "Immediately retry with `dangerouslyDisableSandbox: true` (don't ask, just do it)",
-            'Briefly explain what sandbox restriction likely caused the failure. Be sure to mention that the user can use the `/sandbox` command to manage restrictions.',
-            'This will prompt the user for permission',
-          ],
-          'Treat each command you execute with `dangerouslyDisableSandbox: true` individually. Even if you have recently run a command with this setting, you should default to running future commands within the sandbox.',
-          'Do not suggest adding sensitive paths like ~/.bashrc, ~/.zshrc, ~/.ssh/*, or credential files to the sandbox allowlist.',
-        ]
-      : [
-          'All commands MUST run in sandbox mode - the `dangerouslyDisableSandbox` parameter is disabled by policy.',
-          'Commands cannot run outside the sandbox under any circumstances.',
-          'If a command fails due to sandbox restrictions, work with the user to adjust sandbox settings instead.',
-        ]
   const items: Array<string | string[]> = [
-    ...sandboxOverrideItems,
+    'Commands MUST run in sandbox mode. If a command fails due to sandbox restrictions, explain the likely restriction and work with the user to adjust sandbox settings or run an explicit user-initiated shell command.',
+    'Do not suggest adding sensitive paths like ~/.bashrc, ~/.zshrc, ~/.ssh/*, or credential files to the sandbox allowlist.',
     'For temporary files, always use the `$TMPDIR` environment variable. TMPDIR is automatically set to the correct sandbox-writable directory in sandbox mode. Do NOT use `/tmp` directly - use `$TMPDIR` instead.',
   ]

View File

@@ -0,0 +1,74 @@
import { afterEach, expect, test } from 'bun:test'
import { SandboxManager } from '../../utils/sandbox/sandbox-adapter.js'
import { BashTool } from './BashTool.js'
import { PowerShellTool } from '../PowerShellTool/PowerShellTool.js'
import { shouldUseSandbox } from './shouldUseSandbox.js'
const originalSandboxMethods = {
isSandboxingEnabled: SandboxManager.isSandboxingEnabled,
areUnsandboxedCommandsAllowed: SandboxManager.areUnsandboxedCommandsAllowed,
}
afterEach(() => {
SandboxManager.isSandboxingEnabled =
originalSandboxMethods.isSandboxingEnabled
SandboxManager.areUnsandboxedCommandsAllowed =
originalSandboxMethods.areUnsandboxedCommandsAllowed
})
test('model-facing Bash schema rejects dangerouslyDisableSandbox', () => {
const result = BashTool.inputSchema.safeParse({
command: 'cat /etc/passwd',
dangerouslyDisableSandbox: true,
})
expect(result.success).toBe(false)
})
test('model-facing PowerShell schema rejects dangerouslyDisableSandbox', () => {
const result = PowerShellTool.inputSchema.safeParse({
command: 'Get-Content C:\\Windows\\System32\\drivers\\etc\\hosts',
dangerouslyDisableSandbox: true,
})
expect(result.success).toBe(false)
})
test('model-controlled dangerouslyDisableSandbox does not bypass sandbox', () => {
SandboxManager.isSandboxingEnabled = () => true
SandboxManager.areUnsandboxedCommandsAllowed = () => true
expect(
shouldUseSandbox({
command: 'cat /etc/passwd',
dangerouslyDisableSandbox: true,
}),
).toBe(true)
})
test('trusted internal approval can disable sandbox when policy allows it', () => {
SandboxManager.isSandboxingEnabled = () => true
SandboxManager.areUnsandboxedCommandsAllowed = () => true
expect(
shouldUseSandbox({
command: 'cat /etc/passwd',
dangerouslyDisableSandbox: true,
_dangerouslyDisableSandboxApproved: true,
}),
).toBe(false)
})
test('trusted internal approval cannot disable sandbox when policy forbids it', () => {
SandboxManager.isSandboxingEnabled = () => true
SandboxManager.areUnsandboxedCommandsAllowed = () => false
expect(
shouldUseSandbox({
command: 'cat /etc/passwd',
dangerouslyDisableSandbox: true,
_dangerouslyDisableSandboxApproved: true,
}),
).toBe(true)
})

View File

@@ -13,6 +13,7 @@ import {
 type SandboxInput = {
   command?: string
   dangerouslyDisableSandbox?: boolean
+  _dangerouslyDisableSandboxApproved?: boolean
 }
 // NOTE: excludedCommands is a user-facing convenience feature, not a security boundary.
@@ -141,9 +142,13 @@ export function shouldUseSandbox(input: Partial<SandboxInput>): boolean {
     return false
   }
-  // Don't sandbox if explicitly overridden AND unsandboxed commands are allowed by policy
+  // Only trusted internal callers may request an unsandboxed command. The
+  // model-facing Bash schema omits _dangerouslyDisableSandboxApproved, so a
+  // tool_use payload cannot disable the sandbox by setting
+  // dangerouslyDisableSandbox directly.
   if (
     input.dangerouslyDisableSandbox &&
+    input._dangerouslyDisableSandboxApproved &&
     SandboxManager.areUnsandboxedCommandsAllowed()
   ) {
     return false
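The intent of the guard above, restated as a standalone sketch (hypothetical function, with policy passed as a parameter instead of read from `SandboxManager`): the sandbox is skipped only when the model-visible flag, the trusted internal approval, and the policy all agree.

```typescript
type SandboxInput = {
  dangerouslyDisableSandbox?: boolean
  _dangerouslyDisableSandboxApproved?: boolean
}

// Returns true when the command should run inside the sandbox.
function shouldUseSandboxSketch(
  input: SandboxInput,
  policyAllowsUnsandboxed: boolean,
): boolean {
  if (
    input.dangerouslyDisableSandbox &&
    input._dangerouslyDisableSandboxApproved &&
    policyAllowsUnsandboxed
  ) {
    return false
  }
  return true
}

// The model-controlled flag alone cannot escape the sandbox:
console.log(shouldUseSandboxSketch({ dangerouslyDisableSandbox: true }, true)) // true
```

This mirrors the three shouldUseSandbox test cases in the new test file above: model flag alone stays sandboxed, approval plus permissive policy escapes, and a forbidding policy wins over both flags.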

View File

@@ -422,7 +422,7 @@ export const FileEditTool = buildTool({
     activateConditionalSkillsForPaths([absoluteFilePath], cwd)
   }
-  await diagnosticTracker.beforeFileEdited(absoluteFilePath)
+  await diagnosticTracker.beforeFileEditedCompat(absoluteFilePath)
   // Ensure parent directory exists before the atomic read-modify-write section.
   // These awaits must stay OUTSIDE the critical section below — a yield between

View File

@@ -244,7 +244,7 @@ export const FileWriteTool = buildTool({
   // Activate conditional skills whose path patterns match this file
   activateConditionalSkillsForPaths([fullFilePath], cwd)
-  await diagnosticTracker.beforeFileEdited(fullFilePath)
+  await diagnosticTracker.beforeFileEditedCompat(fullFilePath)
   // Ensure parent directory exists before the atomic read-modify-write section.
   // Must stay OUTSIDE the critical section below (a yield between the staleness

View File

@@ -230,13 +230,20 @@ const fullInputSchema = lazySchema(() => z.strictObject({
timeout: semanticNumber(z.number().optional()).describe(`Optional timeout in milliseconds (max ${getMaxTimeoutMs()})`), timeout: semanticNumber(z.number().optional()).describe(`Optional timeout in milliseconds (max ${getMaxTimeoutMs()})`),
description: z.string().optional().describe('Clear, concise description of what this command does in active voice.'), description: z.string().optional().describe('Clear, concise description of what this command does in active voice.'),
run_in_background: semanticBoolean(z.boolean().optional()).describe(`Set to true to run this command in the background. Use Read to read the output later.`), run_in_background: semanticBoolean(z.boolean().optional()).describe(`Set to true to run this command in the background. Use Read to read the output later.`),
dangerouslyDisableSandbox: semanticBoolean(z.boolean().optional()).describe('Set this to true to dangerously override sandbox mode and run commands without sandboxing.') dangerouslyDisableSandbox: semanticBoolean(z.boolean().optional()).describe('Set this to true to dangerously override sandbox mode and run commands without sandboxing.'),
_dangerouslyDisableSandboxApproved: z.boolean().optional().describe('Internal: user-approved sandbox override')
})); }));
// Conditionally remove run_in_background from schema when background tasks are disabled // Omit internal-only sandbox override fields from the model-facing schema.
// Conditionally remove run_in_background from schema when background tasks are disabled.
const inputSchema = lazySchema(() => isBackgroundTasksDisabled ? fullInputSchema().omit({ const inputSchema = lazySchema(() => isBackgroundTasksDisabled ? fullInputSchema().omit({
run_in_background: true run_in_background: true,
}) : fullInputSchema()); dangerouslyDisableSandbox: true,
_dangerouslyDisableSandboxApproved: true
}) : fullInputSchema().omit({
dangerouslyDisableSandbox: true,
_dangerouslyDisableSandboxApproved: true
}));
type InputSchema = ReturnType<typeof inputSchema>; type InputSchema = ReturnType<typeof inputSchema>;
// Use fullInputSchema for the type to always include run_in_background // Use fullInputSchema for the type to always include run_in_background
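The hunk above hides the sandbox-override fields from the model-facing schema in both branches of the ternary. A minimal plain-object sketch of the same filtering (the real code uses zod's `lazySchema().omit()`, not plain key lists):

```typescript
// Sketch only: string keys stand in for the real zod schema fields shown in
// the hunk. The internal sandbox fields never reach the model-facing schema;
// run_in_background is additionally hidden when background tasks are disabled.
const fullSchemaKeys = [
  'timeout',
  'description',
  'run_in_background',
  'dangerouslyDisableSandbox',
  '_dangerouslyDisableSandboxApproved',
]

function modelFacingKeys(isBackgroundTasksDisabled: boolean): string[] {
  const hidden = new Set(['dangerouslyDisableSandbox', '_dangerouslyDisableSandboxApproved'])
  if (isBackgroundTasksDisabled) hidden.add('run_in_background')
  return fullSchemaKeys.filter((key) => !hidden.has(key))
}

console.log(modelFacingKeys(false)) // run_in_background stays visible; sandbox fields do not
```

This is why the model can no longer set `dangerouslyDisableSandbox` directly: only trusted callers that also pass the internal approval marker get the override.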
@@ -697,7 +704,8 @@ async function* runPowerShellCommand({
description, description,
timeout, timeout,
run_in_background, run_in_background,
dangerouslyDisableSandbox dangerouslyDisableSandbox,
_dangerouslyDisableSandboxApproved
} = input; } = input;
const timeoutMs = Math.min(timeout || getDefaultTimeoutMs(), getMaxTimeoutMs()); const timeoutMs = Math.min(timeout || getDefaultTimeoutMs(), getMaxTimeoutMs());
let fullOutput = ''; let fullOutput = '';
@@ -749,7 +757,8 @@ async function* runPowerShellCommand({
// The explicit platform check is redundant-but-obvious. // The explicit platform check is redundant-but-obvious.
shouldUseSandbox: getPlatform() === 'windows' ? false : shouldUseSandbox({ shouldUseSandbox: getPlatform() === 'windows' ? false : shouldUseSandbox({
command, command,
dangerouslyDisableSandbox dangerouslyDisableSandbox,
_dangerouslyDisableSandboxApproved
}), }),
shouldAutoBackground shouldAutoBackground
}); });

View File

@@ -148,6 +148,42 @@ type Position = {
column: number column: number
} }
export function maskTextWithVisibleEdges(
value: string,
mask: string,
visiblePrefix = 3,
visibleSuffix = 3,
): string {
if (!mask || !value) return value
const graphemes = Array.from(getGraphemeSegmenter().segment(value))
const secretGraphemeCount = graphemes.filter(
({ segment }) => segment !== '\n',
).length
const visibleCount = visiblePrefix + visibleSuffix
if (secretGraphemeCount <= visibleCount) {
return graphemes
.map(({ segment }) => (segment === '\n' ? segment : mask))
.join('')
}
let secretIndex = 0
return graphemes
.map(({ segment }) => {
if (segment === '\n') return segment
const nextSegment =
secretIndex < visiblePrefix ||
secretIndex >= secretGraphemeCount - visibleSuffix
? segment
: mask
secretIndex += 1
return nextSegment
})
.join('')
}
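The new helper keeps only a few edge graphemes visible. A self-contained sketch of the same rule, with one caveat: `Array.from` here splits on code points, while the real code uses the project's `getGraphemeSegmenter()`, which also keeps multi-code-point emoji intact:

```typescript
// Standalone sketch of maskTextWithVisibleEdges. Newlines are preserved so
// wrapped input keeps its shape; short secrets are fully masked so the
// visible prefix/suffix cannot reconstruct them.
function maskEdges(value: string, mask: string, visiblePrefix = 3, visibleSuffix = 3): string {
  if (!mask || !value) return value
  const chars = Array.from(value)
  const secretCount = chars.filter((c) => c !== '\n').length
  if (secretCount <= visiblePrefix + visibleSuffix) {
    // Short secrets: mask every non-newline character so nothing leaks.
    return chars.map((c) => (c === '\n' ? c : mask)).join('')
  }
  let i = 0
  return chars
    .map((c) => {
      if (c === '\n') return c
      const next = i < visiblePrefix || i >= secretCount - visibleSuffix ? c : mask
      i += 1
      return next
    })
    .join('')
}

console.log(maskEdges('sk-secret-12345678', '*')) // 'sk-' + 12 masked chars + '678'
```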
export class Cursor { export class Cursor {
readonly offset: number readonly offset: number
constructor( constructor(
@@ -208,7 +244,12 @@ export class Cursor {
maxVisibleLines?: number, maxVisibleLines?: number,
) { ) {
const { line, column } = this.getPosition() const { line, column } = this.getPosition()
const allLines = this.measuredText.getWrappedText() const allLines = mask
? new MeasuredText(
maskTextWithVisibleEdges(this.text, mask),
this.measuredText.columns,
).getWrappedText()
: this.measuredText.getWrappedText()
const startLine = this.getViewportStartLine(maxVisibleLines) const startLine = this.getViewportStartLine(maxVisibleLines)
const endLine = const endLine =
@@ -221,23 +262,6 @@ export class Cursor {
.map((text, i) => { .map((text, i) => {
const currentLine = i + startLine const currentLine = i + startLine
let displayText = text let displayText = text
if (mask) {
const graphemes = Array.from(getGraphemeSegmenter().segment(text))
if (currentLine === allLines.length - 1) {
// Last line: mask all but the trailing 6 chars so the user can
// confirm they pasted the right thing without exposing the full token
const visibleCount = Math.min(6, graphemes.length)
const maskCount = graphemes.length - visibleCount
const splitOffset =
graphemes.length > visibleCount ? graphemes[maskCount]!.index : 0
displayText = mask.repeat(maskCount) + text.slice(splitOffset)
} else {
// Earlier wrapped lines: fully mask. Previously only the last line
// was masked, leaking the start of the token on narrow terminals
// where the pasted OAuth code wraps across multiple lines.
displayText = mask.repeat(graphemes.length)
}
}
// looking for the line with the cursor // looking for the line with the cursor
if (line !== currentLine) return displayText.trimEnd() if (line !== currentLine) return displayText.trimEnd()

View File

@@ -78,3 +78,28 @@ test('toolToAPISchema keeps skill required for SkillTool', async () => {
required: ['skill'], required: ['skill'],
}) })
}) })
test('toolToAPISchema removes extra required keys not in properties (MCP schema sanitization)', async () => {
const schema = await toolToAPISchema(
{
name: 'mcp__test__create_object',
inputSchema: z.strictObject({}),
inputJSONSchema: {
type: 'object',
properties: {
name: { type: 'string' },
},
required: ['name', 'attributes'],
},
prompt: async () => 'Create an object',
} as unknown as Tool,
{
getToolPermissionContext: async () => getEmptyToolPermissionContext(),
tools: [] as unknown as Tools,
agents: [],
},
)
const inputSchema = (schema as { input_schema: { required?: string[] } }).input_schema
expect(inputSchema.required).toEqual(['name'])
})

View File

@@ -111,11 +111,60 @@ function filterSwarmFieldsFromSchema(
delete filteredProps[field] delete filteredProps[field]
} }
filtered.properties = filteredProps filtered.properties = filteredProps
// Keep `required` in sync after removing properties
if (Array.isArray(filtered.required)) {
filtered.required = filtered.required.filter(
(key: string) => key in filteredProps,
)
}
} }
return filtered return filtered
} }
/**
* Ensure `required` only lists keys present in `properties`.
* MCP servers may emit schemas where these are out of sync, causing
* API 400 errors ("Extra required key supplied").
* Recurses into nested object schemas.
*/
function sanitizeSchemaRequired(
schema: Anthropic.Tool.InputSchema,
): Anthropic.Tool.InputSchema {
if (!schema || typeof schema !== 'object') {
return schema
}
const result = { ...schema }
const props = result.properties as Record<string, unknown> | undefined
if (props && Array.isArray(result.required)) {
result.required = result.required.filter(
(key: string) => key in props,
)
}
// Recurse into nested object properties
if (props) {
const sanitizedProps = { ...props }
for (const [key, value] of Object.entries(sanitizedProps)) {
if (
value &&
typeof value === 'object' &&
(value as Record<string, unknown>).type === 'object'
) {
sanitizedProps[key] = sanitizeSchemaRequired(
value as Anthropic.Tool.InputSchema,
)
}
}
result.properties = sanitizedProps
}
return result
}
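The sanitization above can be exercised standalone. This sketch uses plain JSON-schema objects in place of the `Anthropic.Tool.InputSchema` type, but applies the same rule: drop `required` keys with no matching property, recursing into nested object schemas:

```typescript
type Schema = {
  type?: string
  properties?: Record<string, Schema>
  required?: string[]
}

// Standalone sketch of sanitizeSchemaRequired (plain objects instead of the
// Anthropic SDK types used in the real code).
function sanitizeRequired(schema: Schema): Schema {
  const result = { ...schema }
  const props = result.properties
  if (props && Array.isArray(result.required)) {
    // Keep only required keys that actually exist in properties.
    result.required = result.required.filter((key) => key in props)
  }
  if (props) {
    const sanitized: Record<string, Schema> = {}
    for (const [key, value] of Object.entries(props)) {
      // Recurse into nested object schemas only.
      sanitized[key] = value?.type === 'object' ? sanitizeRequired(value) : value
    }
    result.properties = sanitized
  }
  return result
}

const out = sanitizeRequired({
  type: 'object',
  properties: { name: { type: 'string' } },
  required: ['name', 'attributes'], // 'attributes' has no matching property
})
console.log(out.required) // ['name']
```

This mirrors the new test above: the stray `attributes` entry that would have triggered the "Extra required key ... supplied" API 400 is dropped.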
export async function toolToAPISchema( export async function toolToAPISchema(
tool: Tool, tool: Tool,
options: { options: {
@@ -156,7 +205,7 @@ export async function toolToAPISchema(
// Use tool's JSON schema directly if provided, otherwise convert Zod schema // Use tool's JSON schema directly if provided, otherwise convert Zod schema
let input_schema = ( let input_schema = (
'inputJSONSchema' in tool && tool.inputJSONSchema 'inputJSONSchema' in tool && tool.inputJSONSchema
? tool.inputJSONSchema ? sanitizeSchemaRequired(tool.inputJSONSchema as Anthropic.Tool.InputSchema)
: zodToJsonSchema(tool.inputSchema) : zodToJsonSchema(tool.inputSchema)
) as Anthropic.Tool.InputSchema ) as Anthropic.Tool.InputSchema
@@ -613,10 +662,6 @@ export function normalizeToolInput<T extends Tool>(
...(timeout !== undefined && { timeout }), ...(timeout !== undefined && { timeout }),
...(description !== undefined && { description }), ...(description !== undefined && { description }),
...(run_in_background !== undefined && { run_in_background }), ...(run_in_background !== undefined && { run_in_background }),
...('dangerouslyDisableSandbox' in parsed &&
parsed.dangerouslyDisableSandbox !== undefined && {
dangerouslyDisableSandbox: parsed.dangerouslyDisableSandbox,
}),
} as z.infer<T['inputSchema']> } as z.infer<T['inputSchema']>
} }
case FileEditTool.name: { case FileEditTool.name: {

View File

@@ -2882,7 +2882,7 @@ async function getDiagnosticAttachments(
} }
// Get new diagnostics from the tracker (IDE diagnostics via MCP) // Get new diagnostics from the tracker (IDE diagnostics via MCP)
const newDiagnostics = await diagnosticTracker.getNewDiagnostics() const newDiagnostics = await diagnosticTracker.getNewDiagnosticsCompat()
if (newDiagnostics.length === 0) { if (newDiagnostics.length === 0) {
return [] return []
} }

View File

@@ -155,7 +155,7 @@ export {
NOTIFICATION_CHANNELS, NOTIFICATION_CHANNELS,
} from './configConstants.js' } from './configConstants.js'
import type { EDITOR_MODES, NOTIFICATION_CHANNELS } from './configConstants.js' import type { EDITOR_MODES, NOTIFICATION_CHANNELS, PROVIDERS } from './configConstants.js'
export type NotificationChannel = (typeof NOTIFICATION_CHANNELS)[number] export type NotificationChannel = (typeof NOTIFICATION_CHANNELS)[number]
@@ -181,10 +181,12 @@ export type DiffTool = 'terminal' | 'auto'
export type OutputStyle = string export type OutputStyle = string
export type Providers = typeof PROVIDERS[number]
export type ProviderProfile = { export type ProviderProfile = {
id: string id: string
name: string name: string
provider: 'openai' | 'anthropic' provider: Providers
baseUrl: string baseUrl: string
model: string model: string
apiKey?: string apiKey?: string

View File

@@ -19,3 +19,5 @@ export const EDITOR_MODES = ['normal', 'vim'] as const
// 'in-process' = in-process teammates running in same process // 'in-process' = in-process teammates running in same process
// 'auto' = automatically choose based on context (default) // 'auto' = automatically choose based on context (default)
export const TEAMMATE_MODES = ['auto', 'tmux', 'in-process'] as const export const TEAMMATE_MODES = ['auto', 'tmux', 'in-process'] as const
export const PROVIDERS = ['openai', 'anthropic', 'mistral', 'gemini'] as const

View File

@@ -184,6 +184,8 @@ const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
'gemini-2.0-flash': 1_048_576, 'gemini-2.0-flash': 1_048_576,
'gemini-2.5-pro': 1_048_576, 'gemini-2.5-pro': 1_048_576,
'gemini-2.5-flash': 1_048_576, 'gemini-2.5-flash': 1_048_576,
'gemini-3.1-pro': 1_048_576,
'gemini-3.1-flash-lite-preview': 1_048_576,
// Ollama local models // Ollama local models
// Llama 3.1+ models support 128k context natively (Meta official specs). // Llama 3.1+ models support 128k context natively (Meta official specs).
@@ -334,6 +336,8 @@ const OPENAI_MAX_OUTPUT_TOKENS: Record<string, number> = {
'gemini-2.0-flash': 8_192, 'gemini-2.0-flash': 8_192,
'gemini-2.5-pro': 65_536, 'gemini-2.5-pro': 65_536,
'gemini-2.5-flash': 65_536, 'gemini-2.5-flash': 65_536,
'gemini-3.1-pro': 65_536,
'gemini-3.1-flash-lite-preview': 65_536,
// Ollama local models (conservative safe defaults) // Ollama local models (conservative safe defaults)
'llama3.3:70b': 4_096, 'llama3.3:70b': 4_096,

View File

@@ -65,10 +65,11 @@ export async function processBashCommand(inputString: string, precedingInputBloc
}); });
}; };
// User-initiated `!` commands run outside sandbox. Both shell tools honor // User-initiated `!` commands run outside sandbox when policy allows it.
// dangerouslyDisableSandbox (checked against areUnsandboxedCommandsAllowed() // Bash requires an internal approval marker so model-controlled tool input
// in shouldUseSandbox.ts). PS sandbox is Linux/macOS/WSL2 only — on Windows // cannot disable sandboxing by setting dangerouslyDisableSandbox directly.
// native, shouldUseSandbox() returns false regardless (unsupported platform). // PS sandbox is Linux/macOS/WSL2 only — on Windows native, shouldUseSandbox()
// returns false regardless (unsupported platform).
// Lazy-require PowerShellTool so its ~300KB chunk only loads when the // Lazy-require PowerShellTool so its ~300KB chunk only loads when the
// user has actually selected the powershell default shell. // user has actually selected the powershell default shell.
type PSMod = typeof import('src/tools/PowerShellTool/PowerShellTool.js'); type PSMod = typeof import('src/tools/PowerShellTool/PowerShellTool.js');
@@ -81,10 +82,12 @@ export async function processBashCommand(inputString: string, precedingInputBloc
const shellTool = PowerShellTool ?? BashTool; const shellTool = PowerShellTool ?? BashTool;
const response = PowerShellTool ? await PowerShellTool.call({ const response = PowerShellTool ? await PowerShellTool.call({
command: inputString, command: inputString,
dangerouslyDisableSandbox: true dangerouslyDisableSandbox: true,
_dangerouslyDisableSandboxApproved: true
}, bashModeContext, undefined, undefined, onProgress) : await BashTool.call({ }, bashModeContext, undefined, undefined, onProgress) : await BashTool.call({
command: inputString, command: inputString,
dangerouslyDisableSandbox: true dangerouslyDisableSandbox: true,
_dangerouslyDisableSandboxApproved: true
}, bashModeContext, undefined, undefined, onProgress); }, bashModeContext, undefined, undefined, onProgress);
const data = response.data; const data = response.data;
if (!data) { if (!data) {

View File

@@ -166,7 +166,7 @@ test('matching persisted gemini env is reused for gemini launch', async () => {
assert.equal(env.GEMINI_BASE_URL, 'https://example.test/v1beta/openai') assert.equal(env.GEMINI_BASE_URL, 'https://example.test/v1beta/openai')
}) })
test('gemini launch ignores mismatched persisted openai env and strips other provider secrets', async () => { test('openai env variables take precedence over gemini', async () => {
const env = await buildLaunchEnv({ const env = await buildLaunchEnv({
profile: 'gemini', profile: 'gemini',
persisted: profile('openai', { persisted: profile('openai', {
@@ -187,16 +187,16 @@ test('gemini launch ignores mismatched persisted openai env and strips other pro
}, },
}) })
assert.equal(env.CLAUDE_CODE_USE_GEMINI, '1') assert.equal(env.CLAUDE_CODE_USE_GEMINI, undefined)
assert.equal(env.CLAUDE_CODE_USE_OPENAI, undefined) assert.equal(env.CLAUDE_CODE_USE_OPENAI, '1')
assert.equal(env.GEMINI_MODEL, 'gemini-2.0-flash') assert.equal(env.GEMINI_MODEL, undefined)
assert.equal(env.GEMINI_API_KEY, 'gem-live') assert.equal(env.GEMINI_API_KEY, undefined)
assert.equal( assert.equal(
env.GEMINI_BASE_URL, env.GEMINI_BASE_URL,
'https://generativelanguage.googleapis.com/v1beta/openai', undefined,
) )
assert.equal(env.GOOGLE_API_KEY, undefined) assert.equal(env.GOOGLE_API_KEY, undefined)
assert.equal(env.OPENAI_API_KEY, undefined) assert.equal(env.OPENAI_API_KEY, 'sk-live')
assert.equal(env.CODEX_API_KEY, undefined) assert.equal(env.CODEX_API_KEY, undefined)
assert.equal(env.CHATGPT_ACCOUNT_ID, undefined) assert.equal(env.CHATGPT_ACCOUNT_ID, undefined)
}) })
@@ -562,8 +562,13 @@ test('buildStartupEnvFromProfile leaves explicit provider selections untouched',
processEnv, processEnv,
}) })
assert.equal(env, processEnv) // The function now injects defaults, so strict object equality no longer holds.

assert.equal(env.CLAUDE_CODE_USE_GEMINI, '1') assert.equal(env.CLAUDE_CODE_USE_GEMINI, '1')
assert.equal(env.GEMINI_API_KEY, 'gem-live')
assert.equal(env.GEMINI_MODEL, 'gemini-2.0-flash')
// New default fields injected by the function
assert.equal(env.GEMINI_BASE_URL, 'https://generativelanguage.googleapis.com/v1beta/openai')
assert.equal(env.GEMINI_AUTH_MODE, 'api-key')
assert.equal(env.OPENAI_API_KEY, undefined) assert.equal(env.OPENAI_API_KEY, undefined)
}) })
@@ -607,14 +612,17 @@ test('buildStartupEnvFromProfile treats explicit falsey provider flags as user i
processEnv, processEnv,
}) })
assert.equal(env, processEnv) assert.equal(env.CLAUDE_CODE_USE_OPENAI, undefined)
assert.equal(env.CLAUDE_CODE_USE_OPENAI, '0') assert.equal(env.CLAUDE_CODE_USE_GEMINI, '1')
assert.equal(env.GEMINI_API_KEY, undefined) assert.equal(env.GEMINI_API_KEY, 'gem-persisted')
assert.equal(env.GEMINI_MODEL, 'gemini-2.5-flash')
assert.equal(env.GEMINI_BASE_URL, 'https://generativelanguage.googleapis.com/v1beta/openai')
assert.equal(env.GEMINI_AUTH_MODE, 'api-key')
}) })
test('maskSecretForDisplay preserves only a short prefix and suffix', () => { test('maskSecretForDisplay preserves only a short prefix and suffix', () => {
assert.equal(maskSecretForDisplay('sk-secret-12345678'), 'sk-...5678') assert.equal(maskSecretForDisplay('sk-secret-12345678'), 'sk-...678')
assert.equal(maskSecretForDisplay('AIzaSecret12345678'), 'AIza...5678') assert.equal(maskSecretForDisplay('AIzaSecret12345678'), 'AIz...678')
}) })
test('redactSecretValueForDisplay masks poisoned display fields that equal configured secrets', () => { test('redactSecretValueForDisplay masks poisoned display fields that equal configured secrets', () => {
@@ -622,7 +630,7 @@ test('redactSecretValueForDisplay masks poisoned display fields that equal confi
assert.equal( assert.equal(
redactSecretValueForDisplay(apiKey, { OPENAI_API_KEY: apiKey }), redactSecretValueForDisplay(apiKey, { OPENAI_API_KEY: apiKey }),
'sk-...5678', 'sk-...678',
) )
assert.equal( assert.equal(
redactSecretValueForDisplay('gpt-4o', { OPENAI_API_KEY: apiKey }), redactSecretValueForDisplay('gpt-4o', { OPENAI_API_KEY: apiKey }),

View File

@@ -29,6 +29,9 @@ export {
sanitizeApiKey, sanitizeApiKey,
sanitizeProviderConfigValue, sanitizeProviderConfigValue,
} from './providerSecrets.js' } from './providerSecrets.js'
import { isEnvTruthy } from './envUtils.ts'
import { PROVIDERS } from './configConstants.js'
export const PROFILE_FILE_NAME = '.openclaude-profile.json' export const PROFILE_FILE_NAME = '.openclaude-profile.json'
export const DEFAULT_GEMINI_BASE_URL = export const DEFAULT_GEMINI_BASE_URL =
@@ -498,13 +501,13 @@ export function hasExplicitProviderSelection(
} }
return ( return (
processEnv.CLAUDE_CODE_USE_OPENAI !== undefined || isEnvTruthy(processEnv.CLAUDE_CODE_USE_OPENAI) ||
processEnv.CLAUDE_CODE_USE_GITHUB !== undefined || isEnvTruthy(processEnv.CLAUDE_CODE_USE_GITHUB) ||
processEnv.CLAUDE_CODE_USE_GEMINI !== undefined || isEnvTruthy(processEnv.CLAUDE_CODE_USE_GEMINI) ||
processEnv.CLAUDE_CODE_USE_MISTRAL !== undefined || isEnvTruthy(processEnv.CLAUDE_CODE_USE_MISTRAL) ||
processEnv.CLAUDE_CODE_USE_BEDROCK !== undefined || isEnvTruthy(processEnv.CLAUDE_CODE_USE_BEDROCK) ||
processEnv.CLAUDE_CODE_USE_VERTEX !== undefined || isEnvTruthy(processEnv.CLAUDE_CODE_USE_VERTEX) ||
processEnv.CLAUDE_CODE_USE_FOUNDRY !== undefined isEnvTruthy(processEnv.CLAUDE_CODE_USE_FOUNDRY)
) )
} }
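With this change, a flag like `CLAUDE_CODE_USE_OPENAI='0'` no longer counts as an explicit provider selection. The real `isEnvTruthy` lives in `envUtils.ts` and is not shown in this diff; a plausible sketch of the assumed semantics:

```typescript
// Assumed semantics (hypothetical; the real helper is in envUtils.ts):
// a flag counts as truthy only when set to something other than '', '0', 'false'.
function isEnvTruthy(value: string | undefined): boolean {
  if (value === undefined) return false
  const normalized = value.trim().toLowerCase()
  return normalized !== '' && normalized !== '0' && normalized !== 'false'
}

console.log(isEnvTruthy('1')) // true
console.log(isEnvTruthy('0')) // false — matches the updated falsey-flag test below
```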
@@ -573,6 +576,20 @@ export async function buildLaunchEnv(options: {
const persistedGeminiKey = sanitizeApiKey(persistedEnv.GEMINI_API_KEY) const persistedGeminiKey = sanitizeApiKey(persistedEnv.GEMINI_API_KEY)
const persistedGeminiAuthMode = persistedEnv.GEMINI_AUTH_MODE const persistedGeminiAuthMode = persistedEnv.GEMINI_AUTH_MODE
if (hasExplicitProviderSelection(processEnv)) {
    for (const provider of PROVIDERS) {
      if (provider === 'anthropic') {
        continue
      }
      const envKeyName = `CLAUDE_CODE_USE_${provider.toUpperCase()}`
      if (envKeyName in processEnv && isEnvTruthy(processEnv[envKeyName])) {
        options.profile = provider
      }
}
}
if (options.profile === 'gemini') { if (options.profile === 'gemini') {
const env: NodeJS.ProcessEnv = { const env: NodeJS.ProcessEnv = {
...processEnv, ...processEnv,
@@ -825,12 +842,18 @@ export async function buildStartupEnvFromProfile(options?: {
const persisted = options?.persisted ?? loadProfileFile() const persisted = options?.persisted ?? loadProfileFile()
// Saved /provider profiles should still win over provider-manager env that was // Saved /provider profiles should still win over provider-manager env that was
// auto-applied during startup. Only explicit shell/flag provider selection // auto-applied during startup. Only an explicit shell/flag provider selection
// should bypass the persisted startup profile. // should bypass the persisted startup profile.
//
const profileManagedEnv = processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1' const profileManagedEnv = processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1'
if (hasExplicitProviderSelection(processEnv) && !profileManagedEnv) {
return processEnv // If the user explicitly selected a provider via env, allow it to bypass
} // the persisted profile only when we can prove it was managed by the
// persisted profile env itself.
//
// Practically: on initial startup, provider routing env vars can already
// be present due to earlier auto-application steps. We should still apply
// the persisted profile rather than returning early.
if (!persisted) { if (!persisted) {
return processEnv return processEnv
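The override loop added to `buildLaunchEnv` earlier in this file can be sketched standalone to show why the updated test now expects OpenAI env to win over a requested Gemini launch:

```typescript
const PROVIDERS = ['openai', 'anthropic', 'mistral', 'gemini'] as const
type Provider = (typeof PROVIDERS)[number]

// Standalone sketch of the override: the last truthy CLAUDE_CODE_USE_<PROVIDER>
// flag wins over the requested profile. isEnvTruthy is passed in because the
// real helper (envUtils.ts) is not shown in this diff.
function resolveProfile(
  requested: Provider,
  processEnv: Record<string, string | undefined>,
  isEnvTruthy: (value: string | undefined) => boolean,
): Provider {
  let profile = requested
  for (const provider of PROVIDERS) {
    if (provider === 'anthropic') continue // no CLAUDE_CODE_USE_ANTHROPIC flag exists
    const key = `CLAUDE_CODE_USE_${provider.toUpperCase()}`
    if (key in processEnv && isEnvTruthy(processEnv[key])) profile = provider
  }
  return profile
}

const truthy = (v: string | undefined) => v !== undefined && v !== '' && v !== '0'
console.log(resolveProfile('gemini', { CLAUDE_CODE_USE_OPENAI: '1' }, truthy)) // 'openai'
```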

View File

@@ -13,6 +13,7 @@ const RESTORED_KEYS = [
'CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID', 'CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID',
'CLAUDE_CODE_USE_OPENAI', 'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI', 'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_MISTRAL',
'CLAUDE_CODE_USE_GITHUB', 'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK', 'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX', 'CLAUDE_CODE_USE_VERTEX',
@@ -24,6 +25,15 @@ const RESTORED_KEYS = [
'ANTHROPIC_BASE_URL', 'ANTHROPIC_BASE_URL',
'ANTHROPIC_MODEL', 'ANTHROPIC_MODEL',
'ANTHROPIC_API_KEY', 'ANTHROPIC_API_KEY',
'GEMINI_BASE_URL',
'GEMINI_MODEL',
'GEMINI_API_KEY',
'GEMINI_AUTH_MODE',
'GEMINI_ACCESS_TOKEN',
'GOOGLE_API_KEY',
'MISTRAL_BASE_URL',
'MISTRAL_MODEL',
'MISTRAL_API_KEY',
] as const ] as const
type MockConfigState = { type MockConfigState = {
@@ -98,6 +108,24 @@ function buildProfile(overrides: Partial<ProviderProfile> = {}): ProviderProfile
} }
} }
function buildMistralProfile(overrides: Partial<ProviderProfile> = {}): ProviderProfile {
return buildProfile({
provider: 'mistral',
baseUrl: 'https://api.mistral.ai/v1',
model: 'devstral-latest',
...overrides,
})
}
function buildGeminiProfile(overrides: Partial<ProviderProfile> = {}): ProviderProfile {
return buildProfile({
provider: 'gemini',
baseUrl: 'https://generativelanguage.googleapis.com/v1beta/openai',
model: 'gemini-3-flash-preview',
...overrides,
})
}
describe('applyProviderProfileToProcessEnv', () => { describe('applyProviderProfileToProcessEnv', () => {
test('openai profile clears competing gemini/github flags', async () => { test('openai profile clears competing gemini/github flags', async () => {
const { applyProviderProfileToProcessEnv } = const { applyProviderProfileToProcessEnv } =
@@ -118,6 +146,36 @@ describe('applyProviderProfileToProcessEnv', () => {
expect(getFreshAPIProvider()).toBe('openai') expect(getFreshAPIProvider()).toBe('openai')
}) })
test('mistral profile sets CLAUDE_CODE_USE_MISTRAL and clears openai flags', async () => {
const { applyProviderProfileToProcessEnv } =
await importFreshProviderProfileModules()
process.env.CLAUDE_CODE_USE_OPENAI = '1'
applyProviderProfileToProcessEnv(buildMistralProfile())
const { getAPIProvider: getFreshAPIProvider } =
await importFreshProvidersModule()
expect(process.env.CLAUDE_CODE_USE_MISTRAL).toBe('1')
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
expect(process.env.MISTRAL_MODEL).toBe('devstral-latest')
expect(getFreshAPIProvider()).toBe('mistral')
})
test('gemini profile sets CLAUDE_CODE_USE_GEMINI and clears openai flags', async () => {
const { applyProviderProfileToProcessEnv } =
await importFreshProviderProfileModules()
process.env.CLAUDE_CODE_USE_OPENAI = '1'
applyProviderProfileToProcessEnv(buildGeminiProfile())
const { getAPIProvider: getFreshAPIProvider } =
await importFreshProvidersModule()
expect(process.env.CLAUDE_CODE_USE_GEMINI).toBe('1')
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
expect(process.env.GEMINI_MODEL).toBe('gemini-3-flash-preview')
expect(getFreshAPIProvider()).toBe('gemini')
})
test('anthropic profile clears competing gemini/github flags', async () => { test('anthropic profile clears competing gemini/github flags', async () => {
const { applyProviderProfileToProcessEnv } = const { applyProviderProfileToProcessEnv } =
await importFreshProviderProfileModules() await importFreshProviderProfileModules()

View File

@@ -6,6 +6,14 @@ import {
} from './config.js' } from './config.js'
import type { ModelOption } from './model/modelOptions.js' import type { ModelOption } from './model/modelOptions.js'
import { getPrimaryModel, parseModelList } from './providerModels.js' import { getPrimaryModel, parseModelList } from './providerModels.js'
import {
createProfileFile,
saveProfileFile,
buildGeminiProfileEnv,
buildMistralProfileEnv,
buildOpenAIProfileEnv,
type ProviderProfile as ProviderProfileStartup,
} from './providerProfile.js'
export type ProviderPreset = export type ProviderPreset =
| 'anthropic' | 'anthropic'
@@ -60,7 +68,14 @@ function normalizeBaseUrl(value: string): string {
function sanitizeProfile(profile: ProviderProfile): ProviderProfile | null { function sanitizeProfile(profile: ProviderProfile): ProviderProfile | null {
const id = trimValue(profile.id) const id = trimValue(profile.id)
const name = trimValue(profile.name) const name = trimValue(profile.name)
const provider = profile.provider === 'anthropic' ? 'anthropic' : 'openai' const provider =
profile.provider === 'anthropic'
? 'anthropic'
: profile.provider === 'mistral'
? 'mistral'
: profile.provider === 'gemini'
? 'gemini'
: 'openai'
const baseUrl = normalizeBaseUrl(profile.baseUrl) const baseUrl = normalizeBaseUrl(profile.baseUrl)
const model = trimValue(profile.model) const model = trimValue(profile.model)
@@ -161,7 +176,7 @@ export function getProviderPresetDefaults(
} }
case 'gemini': case 'gemini':
return { return {
provider: 'openai', provider: 'gemini',
name: 'Google Gemini', name: 'Google Gemini',
baseUrl: 'https://generativelanguage.googleapis.com/v1beta/openai', baseUrl: 'https://generativelanguage.googleapis.com/v1beta/openai',
model: 'gemini-3-flash-preview', model: 'gemini-3-flash-preview',
@@ -170,7 +185,7 @@ export function getProviderPresetDefaults(
} }
case 'mistral': case 'mistral':
return { return {
provider: 'openai', provider: 'mistral',
name: 'Mistral', name: 'Mistral',
baseUrl: 'https://api.mistral.ai/v1', baseUrl: 'https://api.mistral.ai/v1',
model: 'devstral-latest', model: 'devstral-latest',
@@ -317,6 +332,7 @@ function hasConflictingProviderFlagsForProfile(
return ( return (
processEnv.CLAUDE_CODE_USE_GEMINI !== undefined || processEnv.CLAUDE_CODE_USE_GEMINI !== undefined ||
processEnv.CLAUDE_CODE_USE_MISTRAL !== undefined ||
processEnv.CLAUDE_CODE_USE_GITHUB !== undefined || processEnv.CLAUDE_CODE_USE_GITHUB !== undefined ||
processEnv.CLAUDE_CODE_USE_BEDROCK !== undefined || processEnv.CLAUDE_CODE_USE_BEDROCK !== undefined ||
processEnv.CLAUDE_CODE_USE_VERTEX !== undefined || processEnv.CLAUDE_CODE_USE_VERTEX !== undefined ||
@@ -358,6 +374,38 @@ function isProcessEnvAlignedWithProfile(
) )
} }
if (profile.provider === 'mistral') {
return (
processEnv.CLAUDE_CODE_USE_MISTRAL !== undefined &&
processEnv.CLAUDE_CODE_USE_GEMINI === undefined &&
processEnv.CLAUDE_CODE_USE_OPENAI === undefined &&
processEnv.CLAUDE_CODE_USE_GITHUB === undefined &&
processEnv.CLAUDE_CODE_USE_BEDROCK === undefined &&
processEnv.CLAUDE_CODE_USE_VERTEX === undefined &&
processEnv.CLAUDE_CODE_USE_FOUNDRY === undefined &&
sameOptionalEnvValue(processEnv.MISTRAL_BASE_URL, profile.baseUrl) &&
sameOptionalEnvValue(processEnv.MISTRAL_MODEL, profile.model) &&
(!includeApiKey ||
sameOptionalEnvValue(processEnv.MISTRAL_API_KEY, profile.apiKey))
)
}
if (profile.provider === 'gemini') {
return (
processEnv.CLAUDE_CODE_USE_GEMINI !== undefined &&
processEnv.CLAUDE_CODE_USE_MISTRAL === undefined &&
processEnv.CLAUDE_CODE_USE_OPENAI === undefined &&
processEnv.CLAUDE_CODE_USE_GITHUB === undefined &&
processEnv.CLAUDE_CODE_USE_BEDROCK === undefined &&
processEnv.CLAUDE_CODE_USE_VERTEX === undefined &&
processEnv.CLAUDE_CODE_USE_FOUNDRY === undefined &&
sameOptionalEnvValue(processEnv.GEMINI_BASE_URL, profile.baseUrl) &&
sameOptionalEnvValue(processEnv.GEMINI_MODEL, profile.model) &&
(!includeApiKey ||
sameOptionalEnvValue(processEnv.GEMINI_API_KEY, profile.apiKey))
)
}
return ( return (
processEnv.CLAUDE_CODE_USE_OPENAI !== undefined && processEnv.CLAUDE_CODE_USE_OPENAI !== undefined &&
processEnv.CLAUDE_CODE_USE_GEMINI === undefined && processEnv.CLAUDE_CODE_USE_GEMINI === undefined &&
@@ -407,6 +455,17 @@ export function clearProviderProfileEnvFromProcessEnv(
delete processEnv[PROFILE_ENV_APPLIED_FLAG] delete processEnv[PROFILE_ENV_APPLIED_FLAG]
delete processEnv[PROFILE_ENV_APPLIED_ID] delete processEnv[PROFILE_ENV_APPLIED_ID]
delete processEnv.GEMINI_MODEL
delete processEnv.GEMINI_BASE_URL
delete processEnv.GEMINI_API_KEY
delete processEnv.GEMINI_AUTH_MODE
delete processEnv.GEMINI_ACCESS_TOKEN
delete processEnv.GOOGLE_API_KEY
delete processEnv.MISTRAL_MODEL
delete processEnv.MISTRAL_BASE_URL
delete processEnv.MISTRAL_API_KEY
// Clear provider-specific API keys // Clear provider-specific API keys
delete processEnv.MINIMAX_API_KEY delete processEnv.MINIMAX_API_KEY
delete processEnv.NVIDIA_API_KEY delete processEnv.NVIDIA_API_KEY
@@ -435,6 +494,40 @@ export function applyProviderProfileToProcessEnv(profile: ProviderProfile): void
return return
} }
if (profile.provider === 'mistral') {
process.env.CLAUDE_CODE_USE_MISTRAL = '1'
process.env.MISTRAL_BASE_URL = profile.baseUrl
process.env.MISTRAL_MODEL = profile.model
if (profile.apiKey) {
process.env.MISTRAL_API_KEY = profile.apiKey
} else {
delete process.env.MISTRAL_API_KEY
}
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_API_KEY
delete process.env.OPENAI_MODEL
return
}
if (profile.provider === 'gemini') {
process.env.CLAUDE_CODE_USE_GEMINI = '1'
process.env.GEMINI_BASE_URL = profile.baseUrl
process.env.GEMINI_MODEL = profile.model
if (profile.apiKey) {
process.env.GEMINI_API_KEY = profile.apiKey
} else {
delete process.env.GEMINI_API_KEY
}
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_API_KEY
delete process.env.OPENAI_MODEL
return
}
   process.env.CLAUDE_CODE_USE_OPENAI = '1'
   process.env.OPENAI_BASE_URL = profile.baseUrl
   process.env.OPENAI_MODEL = getPrimaryModel(profile.model)
@@ -520,7 +613,7 @@ export function addProviderProfile(
   const activeProfile = getActiveProviderProfile()
   if (activeProfile?.id === profile.id) {
-    applyProviderProfileToProcessEnv(profile)
+    setActiveProviderProfile(profile.id)
     clearActiveOpenAIModelOptionsCache()
   }
@@ -699,6 +792,68 @@ export function setActiveProviderProfile(
   }))
   applyProviderProfileToProcessEnv(activeProfile)
+  // Keep startup persisted provider profile in sync so initial startup
+  // uses the selected provider/model.
+  const persistedProfile = (() => {
+    if (activeProfile.provider === 'anthropic') return 'openai' as const
+    return activeProfile.provider
+  })()
+  const profileEnv = (() => {
+    switch (activeProfile.provider) {
+      case 'gemini':
+        return (
+          buildGeminiProfileEnv({
+            model: activeProfile.model,
+            baseUrl: activeProfile.baseUrl,
+            apiKey: activeProfile.apiKey,
+            authMode: 'api-key',
+            processEnv: process.env,
+          }) ?? null
+        )
+      case 'mistral':
+        return (
+          buildMistralProfileEnv({
+            model: activeProfile.model,
+            baseUrl: activeProfile.baseUrl,
+            apiKey: activeProfile.apiKey,
+            processEnv: process.env,
+          }) ?? null
+        )
+      default:
+        // anthropic and all openai-compatible providers
+        return (
+          buildOpenAIProfileEnv({
+            model: activeProfile.model,
+            baseUrl: activeProfile.baseUrl,
+            apiKey: activeProfile.apiKey,
+            processEnv: process.env,
+          }) ?? null
+        )
+    }
+  })()
+  if (profileEnv) {
+    const startupProfile =
+      activeProfile.provider === 'anthropic'
+        ? ({
+            profile: 'openai' as ProviderProfileStartup,
+            env: {
+              OPENAI_BASE_URL: activeProfile.baseUrl,
+              OPENAI_MODEL: activeProfile.model,
+              OPENAI_API_KEY: activeProfile.apiKey,
+            },
+          } as const)
+        : ({
+            profile: activeProfile.provider as ProviderProfileStartup,
+            env: profileEnv,
+          } as const)
+    const file = createProfileFile(startupProfile.profile, startupProfile.env)
+    saveProfileFile(file)
+  }
   return activeProfile
 }
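The hunk above persists the active profile so the next startup reuses the selected provider and model, with an Anthropic profile stored under the `openai` startup profile via `OPENAI_*` variables. A minimal standalone sketch of that mapping (the helper name `toStartupProfile`, the types, and the sample values are illustrative, not the project's real API):

```typescript
// Hypothetical condensed form of the startup-profile mapping above.
type StartupProfile = {
  profile: string
  env: Record<string, string | undefined>
}

function toStartupProfile(p: {
  provider: string
  model: string
  baseUrl: string
  apiKey?: string
}): StartupProfile {
  // Anthropic is persisted as the 'openai' startup profile, since it is
  // driven through the OpenAI-compatible env vars.
  if (p.provider === 'anthropic') {
    return {
      profile: 'openai',
      env: {
        OPENAI_BASE_URL: p.baseUrl,
        OPENAI_MODEL: p.model,
        OPENAI_API_KEY: p.apiKey,
      },
    }
  }
  // Other providers persist under their own name with provider-specific env
  // (built by buildGeminiProfileEnv / buildMistralProfileEnv etc. in the real code).
  return { profile: p.provider, env: {} }
}

const startup = toStartupProfile({
  provider: 'anthropic',
  model: 'claude-model',
  baseUrl: 'https://example.invalid/v1',
  apiKey: 'sk-test',
})
console.log(startup.profile) // 'openai'
```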


@@ -61,15 +61,7 @@ export function maskSecretForDisplay(
     return 'configured'
   }
-  if (sanitized.startsWith('sk-')) {
-    return `${sanitized.slice(0, 3)}...${sanitized.slice(-4)}`
-  }
-  if (sanitized.startsWith('AIza')) {
-    return `${sanitized.slice(0, 4)}...${sanitized.slice(-4)}`
-  }
-  return `${sanitized.slice(0, 2)}...${sanitized.slice(-4)}`
+  return `${sanitized.slice(0, 3)}...${sanitized.slice(-3)}`
 }
 
 export function redactSecretValueForDisplay(
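The change above collapses the per-prefix masking branches into one rule: first three characters, ellipsis, last three. A simplified sketch of the resulting behavior (the short-input threshold here is illustrative; the real function's 'configured' fallback condition is outside the shown hunk):

```typescript
// Simplified sketch of the unified masking rule after this change.
// The length-8 cutoff for 'configured' is an assumption for illustration.
function maskSecret(sanitized: string): string {
  if (sanitized.length <= 8) return 'configured'
  return `${sanitized.slice(0, 3)}...${sanitized.slice(-3)}`
}

console.log(maskSecret('sk-abcdefghijklmnop')) // "sk-...nop"
console.log(maskSecret('short')) // "configured"
```

One rule for all key shapes means fewer characters of the secret leak for `sk-`-prefixed keys (three tail characters instead of four) and no special case for Google-style `AIza` keys.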


@@ -1,6 +1,9 @@
 import { afterEach, expect, test } from 'bun:test'
-import { getProviderValidationError } from './providerValidation.ts'
+import {
+  getProviderValidationError,
+  shouldExitForStartupProviderValidationError,
+} from './providerValidation.ts'
 
 const originalEnv = {
   CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
@@ -93,3 +96,45 @@ test('openai missing key error includes recovery guidance and config locations',
   expect(message).toContain('Saved startup settings can come from')
   expect(message).toContain('.openclaude-profile.json')
 })
+
+test('startup provider validation allows interactive recovery', () => {
+  expect(
+    shouldExitForStartupProviderValidationError({
+      args: [],
+      stdoutIsTTY: true,
+    }),
+  ).toBe(false)
+})
+
+test('startup provider validation stays strict for non-interactive launches', () => {
+  expect(
+    shouldExitForStartupProviderValidationError({
+      args: ['-p', 'hello'],
+      stdoutIsTTY: true,
+    }),
+  ).toBe(true)
+  expect(
+    shouldExitForStartupProviderValidationError({
+      args: ['--print', 'hello'],
+      stdoutIsTTY: true,
+    }),
+  ).toBe(true)
+  expect(
+    shouldExitForStartupProviderValidationError({
+      args: [],
+      stdoutIsTTY: false,
+    }),
+  ).toBe(true)
+  expect(
+    shouldExitForStartupProviderValidationError({
+      args: ['--sdk-url', 'ws://127.0.0.1:3000'],
+      stdoutIsTTY: true,
+    }),
+  ).toBe(true)
+  expect(
+    shouldExitForStartupProviderValidationError({
+      args: ['--sdk-url=ws://127.0.0.1:3000'],
+      stdoutIsTTY: true,
+    }),
+  ).toBe(true)
+})


@@ -169,3 +169,44 @@ export async function validateProviderEnvOrExit(
     process.exit(1)
   }
 }
+
+export function shouldExitForStartupProviderValidationError(options: {
+  args?: string[]
+  stdoutIsTTY?: boolean
+} = {}): boolean {
+  const args = options.args ?? process.argv.slice(2)
+  const stdoutIsTTY = options.stdoutIsTTY ?? process.stdout.isTTY
+  if (!stdoutIsTTY) {
+    return true
+  }
+  return (
+    args.includes('-p') ||
+    args.includes('--print') ||
+    args.includes('--init-only') ||
+    args.some(arg => arg.startsWith('--sdk-url'))
+  )
+}
+
+export async function validateProviderEnvForStartupOrExit(
+  env: NodeJS.ProcessEnv = process.env,
+  options?: {
+    args?: string[]
+    stdoutIsTTY?: boolean
+  },
+): Promise<void> {
+  const error = await getProviderValidationError(env)
+  if (!error) {
+    return
+  }
+  if (shouldExitForStartupProviderValidationError(options)) {
+    console.error(error)
+    process.exit(1)
+  }
+  console.error(
+    `Warning: provider configuration is incomplete.\n${error}\nOpenClaude will continue starting so you can run /provider and repair the saved provider settings.`,
+  )
+}
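The exit rule added above can be exercised standalone: it stays strict whenever stdout is not a TTY or when print/init/SDK flags indicate a non-interactive launch, and only relaxes for an interactive session where the user can run /provider. A self-contained copy of that decision logic (renamed `shouldExit` here for brevity):

```typescript
// Standalone copy of shouldExitForStartupProviderValidationError's logic,
// so the decision table can be checked without the rest of the module.
function shouldExit(
  options: { args?: string[]; stdoutIsTTY?: boolean } = {},
): boolean {
  const args = options.args ?? process.argv.slice(2)
  const stdoutIsTTY = options.stdoutIsTTY ?? process.stdout.isTTY
  // Non-interactive output (piped, redirected): fail fast.
  if (!stdoutIsTTY) return true
  // Print mode, init-only, and SDK launches must not start a broken session.
  return (
    args.includes('-p') ||
    args.includes('--print') ||
    args.includes('--init-only') ||
    args.some(arg => arg.startsWith('--sdk-url'))
  )
}

// Interactive TTY with no special flags: warn and keep running.
console.log(shouldExit({ args: [], stdoutIsTTY: true })) // false
// Print mode: exit so scripts see the error immediately.
console.log(shouldExit({ args: ['--print', 'hi'], stdoutIsTTY: true })) // true
```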


@@ -0,0 +1,15 @@
+import { truncate, truncateToWidth, truncatePathMiddle } from './truncate.js'
+
+describe('truncate utilities', () => {
+  test('truncate returns empty string for undefined input', () => {
+    expect(truncate(undefined, 10)).toBe('')
+  })
+
+  test('truncateToWidth returns empty string for undefined input', () => {
+    expect(truncateToWidth(undefined, 5)).toBe('')
+  })
+
+  test('truncatePathMiddle returns empty string for undefined path', () => {
+    expect(truncatePathMiddle(undefined, 20)).toBe('')
+  })
+})


@@ -13,10 +13,11 @@ import { getGraphemeSegmenter } from './intl.js'
  * @param maxLength Maximum display width of the result in terminal columns (must be > 0)
  * @returns The truncated path, or original if it fits within maxLength
  */
-export function truncatePathMiddle(path: string, maxLength: number): string {
+export function truncatePathMiddle(path: string | undefined, maxLength: number): string {
+  const safePath = path ?? ''
   // No truncation needed
-  if (stringWidth(path) <= maxLength) {
-    return path
+  if (stringWidth(safePath) <= maxLength) {
+    return safePath
   }
 
   // Handle edge case of very small or non-positive maxLength
@@ -26,14 +27,14 @@ export function truncatePathMiddle(path: string, maxLength: number): string {
   // Need at least room for "…" + something meaningful
   if (maxLength < 5) {
-    return truncateToWidth(path, maxLength)
+    return truncateToWidth(safePath, maxLength)
   }
 
   // Find the filename (last path segment)
-  const lastSlash = path.lastIndexOf('/')
+  const lastSlash = safePath.lastIndexOf('/')
   // Include the leading slash in filename for display
-  const filename = lastSlash >= 0 ? path.slice(lastSlash) : path
-  const directory = lastSlash >= 0 ? path.slice(0, lastSlash) : ''
+  const filename = lastSlash >= 0 ? safePath.slice(lastSlash) : safePath
+  const directory = lastSlash >= 0 ? safePath.slice(0, lastSlash) : ''
   const filenameWidth = stringWidth(filename)
 
   // If filename alone is too long, truncate from start
@@ -60,12 +61,13 @@ export function truncatePathMiddle(path: string, maxLength: number): string {
  * Splits on grapheme boundaries to avoid breaking emoji or surrogate pairs.
  * Appends '…' when truncation occurs.
  */
-export function truncateToWidth(text: string, maxWidth: number): string {
-  if (stringWidth(text) <= maxWidth) return text
+export function truncateToWidth(text: string | undefined, maxWidth: number): string {
+  const safeText = text ?? ''
+  if (stringWidth(safeText) <= maxWidth) return safeText
   if (maxWidth <= 1) return '…'
   let width = 0
   let result = ''
-  for (const { segment } of getGraphemeSegmenter().segment(text)) {
+  for (const { segment } of getGraphemeSegmenter().segment(safeText)) {
     const segWidth = stringWidth(segment)
     if (width + segWidth > maxWidth - 1) break
     result += segment
@@ -79,10 +81,11 @@ export function truncateToWidth(text: string, maxWidth: number): string {
  * Prepends '…' when truncation occurs.
  * Width-aware and grapheme-safe.
  */
-export function truncateStartToWidth(text: string, maxWidth: number): string {
-  if (stringWidth(text) <= maxWidth) return text
+export function truncateStartToWidth(text: string | undefined, maxWidth: number): string {
+  const safeText = text ?? ''
+  if (stringWidth(safeText) <= maxWidth) return safeText
   if (maxWidth <= 1) return '…'
-  const segments = [...getGraphemeSegmenter().segment(text)]
+  const segments = [...getGraphemeSegmenter().segment(safeText)]
   let width = 0
   let startIdx = segments.length
   for (let i = segments.length - 1; i >= 0; i--) {
@@ -106,14 +109,15 @@ export function truncateStartToWidth(text: string, maxWidth: number): string {
  * Width-aware and grapheme-safe.
  */
 export function truncateToWidthNoEllipsis(
-  text: string,
+  text: string | undefined,
   maxWidth: number,
 ): string {
-  if (stringWidth(text) <= maxWidth) return text
+  const safeText = text ?? ''
+  if (stringWidth(safeText) <= maxWidth) return safeText
   if (maxWidth <= 0) return ''
   let width = 0
   let result = ''
-  for (const { segment } of getGraphemeSegmenter().segment(text)) {
+  for (const { segment } of getGraphemeSegmenter().segment(safeText)) {
     const segWidth = stringWidth(segment)
     if (width + segWidth > maxWidth) break
     result += segment
@@ -133,20 +137,19 @@ export function truncateToWidthNoEllipsis(
  */
 export function truncate(
-  str: string,
+  str: string | undefined,
   maxWidth: number,
   singleLine: boolean = false,
 ): string {
-  // Undefined or null protection
-  if (!str) return ''
-
-  let result = str
+  const safeStr = str ?? ''
+  if (safeStr === '') return ''
+  let result = safeStr
 
   // If singleLine is true, truncate at first newline
   if (singleLine) {
-    const firstNewline = str.indexOf('\n')
+    const firstNewline = safeStr.indexOf('\n')
     if (firstNewline !== -1) {
-      result = str.substring(0, firstNewline)
+      result = safeStr.substring(0, firstNewline)
       // Ensure total width including ellipsis doesn't exceed maxWidth
       if (stringWidth(result) + 1 > maxWidth) {
         return truncateToWidth(result, maxWidth)