Compare commits

..

8 Commits

Author SHA1 Message Date
Juan Camilo
98f38d8bfc test: trim extra blank lines in conversation recovery test
Keep the focused provider-resume test diff clean so the regression branch stays easy to review.

Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>
2026-04-07 15:27:39 +02:00
Juan Camilo
279cd1a7e1 test: move provider-sensitive resume coverage behind module mocks 2026-04-07 15:02:41 +02:00
Juan Camilo
5c13223aa4 test: isolate provider env in conversation recovery tests 2026-04-07 15:02:41 +02:00
Juan Camilo
2c8842f87c test: align resume stripping expectation with orphan-thinking filter 2026-04-07 15:02:41 +02:00
Juan Camilo
858f06d964 fix: strip Anthropic-specific params from 3P provider paths
Three silent failure modes affecting all third-party provider users:

1. Thinking blocks serialized as <thinking> text corrupt multi-turn
   context — strip them instead of converting to raw text tags.

2. Unknown models fall through to 200k context window default, so
   auto-compact never triggers — use conservative 8k for unknown
   3P models with a warning log.

3. Session resume with thinking blocks causes 400 or context corruption
   on 3P providers — strip thinking/redacted_thinking content blocks
   from deserialized messages when resuming against a non-Anthropic
   provider.

Addresses findings 2, 3, and 5 from #248.
2026-04-07 15:02:13 +02:00
ibaaaaal
600c01faf7 fix: restore Grep and Glob reliability on OpenAI paths (#461)
* fix: restore Grep and Glob reliability on OpenAI paths

Preserve Grep and Glob pattern fields during OpenAI/Codex schema sanitization, and fall back to system ripgrep when the packaged binary is missing. This keeps search tool schemas intact and improves Linux usability for npm/source installs.

Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>

* test: clean up ripgrep fallback test helpers

Remove the unused ripgrepCommand import and normalize mocked builtin ripgrep paths so the test behaves consistently across platforms.

Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>

* test: remove duplicate Codex URI schema case

Drop the duplicated WebFetch URI-format test in codexShim.test.ts so test names stay unique and failures remain easier to read.

Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>

* test: stabilize ripgrep fallback coverage

Avoid fs/module mocking in ripgrep fallback tests by extracting the config selection logic into a pure helper. This preserves the fallback coverage while removing the test interaction that caused the narrowed Bun hang repro.

Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>

* test: tighten ripgrep and schema coverage

Align the ripgrep fallback test with the actual auto-fallback branch, clean up strict typing in schema sanitizer tests, and tighten ripgrep error narrowing for type safety.

Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>

---------

Co-authored-by: Claude Opus 4.6 <noreply@openclaude.dev>
2026-04-07 17:26:00 +08:00
makskopchan-tech
b07bafa5bd Security code scanning (#459)
* fix: address code scanning alerts

Parse Gemini hostnames instead of matching raw URL substrings, redact gRPC error logs, and harden the Finder drag-drop test escape helper so the flagged paths are fixed without regressing working behavior.

* Potential fix for pull request finding 'CodeQL / Clear-text logging of sensitive information'

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* fix: restore safe grpc error summaries

A later autofix commit removed the exported gRPC error summarizer while the new regression test still imported it. Restore the safe name/code-only summary so CI stays green without reintroducing clear-text logging.

* fix: keep grpc logging generic

Remove the stale helper/test pair and keep the gRPC startup and stream logs free of error-derived data so the CodeQL clear-text logging alert stays closed while the rest of the security fixes remain intact.

---------

Co-authored-by: OpenClaude Worker 3 <worker-3@openclaude.local>
Co-authored-by: Vasanth T <148849890+Vasanthdev2004@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-04-07 16:03:16 +08:00
changjiaoxigua
85aa8b0985 fix: add File polyfill for Node < 20 to prevent startup deadlock with proxy (#442)
When a proxy is configured, configureGlobalAgents() loads undici to set a
global dispatcher. However, undici v7.24.6 requires Node.js >= 20.18.1 and
references globalThis.File at module evaluation time for webidl type assertions.

Node 18 lacks the File global, causing ReferenceError inside the bundled
__commonJS require chain, which deadlocks due to unresolved circular
dependencies in the module initialization.

Fix by polyfilling globalThis.File early in cli.tsx entrypoint, before any
undici code loads. Try node:buffer.File (available in Node 18.13+), fallback
to minimal Blob-based stub.

Fixes: bun run start hangs indefinitely when HTTP_PROXY/HTTPS_PROXY is set

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-07 16:02:05 +08:00
25 changed files with 545 additions and 389 deletions

1
.gitignore vendored
View File

@@ -10,4 +10,3 @@ GEMINI.md
package-lock.json package-lock.json
/.claude /.claude
coverage/ coverage/
.worktrees/

View File

@@ -8,6 +8,34 @@ import {
validateProviderEnvOrExit, validateProviderEnvOrExit,
} from '../utils/providerValidation.js' } from '../utils/providerValidation.js'
// OpenClaude: polyfill globalThis.File for Node < 20.
// undici v7 references `File` at module evaluation time (webidl type
// assertions). Node 18 lacks the global, causing a ReferenceError inside
// the bundled __commonJS require chain which deadlocks the process when a
// proxy is configured (configureGlobalAgents → require_undici).
// eslint-disable-next-line custom-rules/no-top-level-side-effects
if (typeof globalThis.File === 'undefined') {
try {
// Node 18.13+ exposes File in node:buffer but not as a global.
// eslint-disable-next-line @typescript-eslint/no-require-imports
const { File: NodeFile } = require('node:buffer')
// @ts-expect-error -- polyfilling missing global
globalThis.File = NodeFile
} catch {
// Absolute fallback: stub so `MakeTypeAssertion(File)` doesn't throw.
// @ts-expect-error -- minimal polyfill
globalThis.File = class File extends Blob {
name: string
lastModified: number
constructor(parts: BlobPart[], name: string, opts?: FilePropertyBag) {
super(parts, opts)
this.name = name
this.lastModified = opts?.lastModified ?? Date.now()
}
}
}
}
// OpenClaude: disable experimental API betas by default. // OpenClaude: disable experimental API betas by default.
// Tool search (defer_loading), global cache scope, and context management // Tool search (defer_loading), global cache scope, and context management
// require internal API support not available to external accounts → 500. // require internal API support not available to external accounts → 500.

View File

@@ -238,7 +238,6 @@ import { usePromptsFromClaudeInChrome } from 'src/hooks/usePromptsFromClaudeInCh
import { getTipToShowOnSpinner, recordShownTip } from 'src/services/tips/tipScheduler.js'; import { getTipToShowOnSpinner, recordShownTip } from 'src/services/tips/tipScheduler.js';
import type { Theme } from 'src/utils/theme.js'; import type { Theme } from 'src/utils/theme.js';
import { isPromptTypingSuppressionActive } from './replInputSuppression.js'; import { isPromptTypingSuppressionActive } from './replInputSuppression.js';
import { shouldStartStartupChecks } from './replStartupGates.js';
import { checkAndDisableBypassPermissionsIfNeeded, checkAndDisableAutoModeIfNeeded, useKickOffCheckAndDisableBypassPermissionsIfNeeded, useKickOffCheckAndDisableAutoModeIfNeeded } from 'src/utils/permissions/bypassPermissionsKillswitch.js'; import { checkAndDisableBypassPermissionsIfNeeded, checkAndDisableAutoModeIfNeeded, useKickOffCheckAndDisableBypassPermissionsIfNeeded, useKickOffCheckAndDisableAutoModeIfNeeded } from 'src/utils/permissions/bypassPermissionsKillswitch.js';
import { SandboxManager } from 'src/utils/sandbox/sandbox-adapter.js'; import { SandboxManager } from 'src/utils/sandbox/sandbox-adapter.js';
import { SANDBOX_NETWORK_ACCESS_TOOL_NAME } from 'src/cli/structuredIO.js'; import { SANDBOX_NETWORK_ACCESS_TOOL_NAME } from 'src/cli/structuredIO.js';
@@ -785,6 +784,19 @@ export function REPL({
}); });
const tasksV2 = useTasksV2WithCollapseEffect(); const tasksV2 = useTasksV2WithCollapseEffect();
// Start background plugin installations
// SECURITY: This code is guaranteed to run ONLY after the "trust this folder" dialog
// has been confirmed by the user. The trust dialog is shown in cli.tsx (line ~387)
// before the REPL component is rendered. The dialog blocks execution until the user
// accepts, and only then is the REPL component mounted and this effect runs.
// This ensures that plugin installations from repository and user settings only
// happen after explicit user consent to trust the current working directory.
useEffect(() => {
if (isRemoteSession) return;
void performStartupChecks(setAppState);
}, [setAppState, isRemoteSession]);
// Allow Claude in Chrome MCP to send prompts through MCP notifications // Allow Claude in Chrome MCP to send prompts through MCP notifications
// and sync permission mode changes to the Chrome extension // and sync permission mode changes to the Chrome extension
usePromptsFromClaudeInChrome(isRemoteSession ? EMPTY_MCP_CLIENTS : mcpClients, toolPermissionContext.mode); usePromptsFromClaudeInChrome(isRemoteSession ? EMPTY_MCP_CLIENTS : mcpClients, toolPermissionContext.mode);
@@ -1325,7 +1337,6 @@ export function REPL({
const [inputValue, setInputValueRaw] = useState(() => consumeEarlyInput()); const [inputValue, setInputValueRaw] = useState(() => consumeEarlyInput());
const inputValueRef = useRef(inputValue); const inputValueRef = useRef(inputValue);
inputValueRef.current = inputValue; inputValueRef.current = inputValue;
const startupChecksStartedRef = useRef(false);
const promptTypingSuppressionActive = isPromptTypingSuppressionActive(isPromptInputActive, inputValue); const promptTypingSuppressionActive = isPromptTypingSuppressionActive(isPromptInputActive, inputValue);
const insertTextRef = useRef<{ const insertTextRef = useRef<{
insert: (text: string) => void; insert: (text: string) => void;
@@ -1333,24 +1344,6 @@ export function REPL({
cursorOffset: number; cursorOffset: number;
} | null>(null); } | null>(null);
// Start background plugin installations after the initial input window is idle.
// SECURITY: This still runs only after the "trust this folder" dialog has been
// confirmed because the REPL is not mounted until that dialog completes.
useEffect(() => {
if (
!shouldStartStartupChecks({
isRemoteSession,
promptTypingSuppressionActive,
startupChecksStarted: startupChecksStartedRef.current,
})
) {
return;
}
startupChecksStartedRef.current = true;
void performStartupChecks(setAppState);
}, [isRemoteSession, promptTypingSuppressionActive, setAppState]);
// Wrap setInputValue to co-locate suppression state updates. // Wrap setInputValue to co-locate suppression state updates.
// Both setState calls happen in the same synchronous context so React // Both setState calls happen in the same synchronous context so React
// batches them into a single render, eliminating the extra render that // batches them into a single render, eliminating the extra render that

View File

@@ -1,44 +0,0 @@
import { describe, expect, test } from 'bun:test'
import { shouldStartStartupChecks } from './replStartupGates.js'
describe('shouldStartStartupChecks', () => {
test('returns false for remote sessions', () => {
expect(
shouldStartStartupChecks({
isRemoteSession: true,
promptTypingSuppressionActive: false,
startupChecksStarted: false,
}),
).toBe(false)
})
test('returns false while prompt typing suppression is active', () => {
expect(
shouldStartStartupChecks({
isRemoteSession: false,
promptTypingSuppressionActive: true,
startupChecksStarted: false,
}),
).toBe(false)
})
test('returns true once local startup is idle and checks have not started', () => {
expect(
shouldStartStartupChecks({
isRemoteSession: false,
promptTypingSuppressionActive: false,
startupChecksStarted: false,
}),
).toBe(true)
})
test('returns false after startup checks have already started', () => {
expect(
shouldStartStartupChecks({
isRemoteSession: false,
promptTypingSuppressionActive: false,
startupChecksStarted: true,
}),
).toBe(false)
})
})

View File

@@ -1,11 +0,0 @@
export function shouldStartStartupChecks(options: {
isRemoteSession: boolean
promptTypingSuppressionActive: boolean
startupChecksStarted: boolean
}): boolean {
return (
!options.isRemoteSession &&
!options.promptTypingSuppressionActive &&
!options.startupChecksStarted
)
}

View File

@@ -201,6 +201,117 @@ describe('Codex request translation', () => {
]) ])
}) })
test('preserves Grep tool pattern field in Codex strict schemas', () => {
const tools = convertToolsToResponsesTools([
{
name: 'Grep',
description: 'Search file contents',
input_schema: {
type: 'object',
properties: {
pattern: { type: 'string', description: 'Search pattern' },
path: { type: 'string' },
},
required: ['pattern'],
additionalProperties: false,
},
},
])
expect(tools).toEqual([
{
type: 'function',
name: 'Grep',
description: 'Search file contents',
parameters: {
type: 'object',
properties: {
pattern: { type: 'string', description: 'Search pattern' },
path: { type: 'string' },
},
required: ['pattern', 'path'],
additionalProperties: false,
},
strict: true,
},
])
})
test('preserves Glob tool pattern field in Codex strict schemas', () => {
const tools = convertToolsToResponsesTools([
{
name: 'Glob',
description: 'Find files by pattern',
input_schema: {
type: 'object',
properties: {
pattern: { type: 'string', description: 'Glob pattern' },
path: { type: 'string' },
},
required: ['pattern'],
additionalProperties: false,
},
},
])
expect(tools).toEqual([
{
type: 'function',
name: 'Glob',
description: 'Find files by pattern',
parameters: {
type: 'object',
properties: {
pattern: { type: 'string', description: 'Glob pattern' },
path: { type: 'string' },
},
required: ['pattern', 'path'],
additionalProperties: false,
},
strict: true,
},
])
})
test('strips validator pattern keyword but keeps string field named pattern in Codex schemas', () => {
const tools = convertToolsToResponsesTools([
{
name: 'RegexProbe',
description: 'Probe regex schema handling',
input_schema: {
type: 'object',
properties: {
pattern: {
type: 'string',
pattern: '^[a-z]+$',
},
},
required: ['pattern'],
additionalProperties: false,
},
},
])
expect(tools).toEqual([
{
type: 'function',
name: 'RegexProbe',
description: 'Probe regex schema handling',
parameters: {
type: 'object',
properties: {
pattern: {
type: 'string',
},
},
required: ['pattern'],
additionalProperties: false,
},
strict: true,
},
])
})
test('removes unsupported uri format from strict Responses schemas', () => { test('removes unsupported uri format from strict Responses schemas', () => {
const tools = convertToolsToResponsesTools([ const tools = convertToolsToResponsesTools([
{ {

View File

@@ -261,6 +261,73 @@ test('preserves Gemini tool call extra_content in follow-up requests', async ()
}) })
}) })
test('preserves Grep tool pattern field in OpenAI-compatible schemas', async () => {
let requestBody: Record<string, unknown> | undefined
globalThis.fetch = (async (_input, init) => {
requestBody = JSON.parse(String(init?.body))
return new Response(
JSON.stringify({
id: 'chatcmpl-grep-schema',
model: 'qwen/qwen3.6-plus',
choices: [
{
message: {
role: 'assistant',
content: 'done',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 12,
completion_tokens: 4,
total_tokens: 16,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create({
model: 'qwen/qwen3.6-plus',
system: 'test system',
messages: [{ role: 'user', content: 'Use Grep' }],
tools: [
{
name: 'Grep',
description: 'Search file contents',
input_schema: {
type: 'object',
properties: {
pattern: { type: 'string', description: 'Search pattern' },
path: { type: 'string' },
},
required: ['pattern'],
additionalProperties: false,
},
},
],
max_tokens: 64,
stream: false,
})
const tools = requestBody?.tools as Array<Record<string, unknown>> | undefined
const grepTool = tools?.find(tool => (tool.function as Record<string, unknown>)?.name === 'Grep') as
| { function?: { parameters?: { properties?: Record<string, unknown>; required?: string[] } } }
| undefined
expect(Object.keys(grepTool?.function?.parameters?.properties ?? {})).toContain('pattern')
expect(grepTool?.function?.parameters?.required).toContain('pattern')
})
test('does not infer Gemini mode from OPENAI_BASE_URL path substrings', async () => { test('does not infer Gemini mode from OPENAI_BASE_URL path substrings', async () => {
let capturedAuthorization: string | null = null let capturedAuthorization: string | null = null

View File

@@ -195,10 +195,12 @@ function convertContentBlocks(
// handled separately // handled separately
break break
case 'thinking': case 'thinking':
// Append thinking as text with a marker for models that support reasoning case 'redacted_thinking':
if (block.thinking) { // Strip thinking blocks for OpenAI-compatible providers.
parts.push({ type: 'text', text: `<thinking>${block.thinking}</thinking>` }) // These are Anthropic-specific content types that 3P providers
} // don't understand. Serializing them as <thinking> text corrupts
// multi-turn context: the model sees the tags as part of its
// previous reply and may mimic or misattribute them.
break break
default: default:
if (block.text) { if (block.text) {

View File

@@ -72,16 +72,23 @@ export function getContextWindowForModel(
return 1_000_000 return 1_000_000
} }
// OpenAI-compatible provider — use known context windows for the model // OpenAI-compatible provider — use known context windows for the model.
if ( // Unknown models get a conservative 8k default so auto-compact triggers
// before hitting a hard context_window_exceeded error (issue #248 finding 3).
const isOpenAIProvider =
isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) || isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) || isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
) { if (isOpenAIProvider) {
const openaiWindow = getOpenAIContextWindow(model) const openaiWindow = getOpenAIContextWindow(model)
if (openaiWindow !== undefined) { if (openaiWindow !== undefined) {
return openaiWindow return openaiWindow
} }
console.error(
`[context] Warning: model "${model}" not in context window table — using conservative 8k default. ` +
'Add it to src/utils/model/openaiContextWindows.ts for accurate compaction.',
)
return 8_000
} }
const cap = getModelCapability(model) const cap = getModelCapability(model)

View File

@@ -69,3 +69,93 @@ test('loadConversationForResume rejects oversized transcripts before resume hook
) )
expect(hookSpy).not.toHaveBeenCalled() expect(hookSpy).not.toHaveBeenCalled()
}) })
test('deserializeMessagesWithInterruptDetection strips thinking blocks only for OpenAI-compatible providers', async () => {
const serializedMessages = [
user(id(10), 'hello'),
{
type: 'assistant',
uuid: id(11),
parentUuid: id(10),
timestamp: ts,
cwd: '/tmp',
sessionId,
version: 'test',
message: {
role: 'assistant',
content: [
{ type: 'thinking', thinking: 'secret reasoning' },
{ type: 'text', text: 'visible reply' },
],
},
},
{
type: 'assistant',
uuid: id(12),
parentUuid: id(11),
timestamp: ts,
cwd: '/tmp',
sessionId,
version: 'test',
message: {
role: 'assistant',
content: [{ type: 'thinking', thinking: 'only hidden reasoning' }],
},
},
user(id(13), 'follow up'),
]
mock.module('./model/providers.js', () => ({
getAPIProvider: () => 'openai',
isOpenAICompatibleProvider: (provider: string) =>
provider === 'openai' ||
provider === 'gemini' ||
provider === 'github' ||
provider === 'codex',
}))
const openaiModule = await import(`./conversationRecovery.ts?provider=openai-${Date.now()}`)
const thirdParty = openaiModule.deserializeMessagesWithInterruptDetection(serializedMessages as never[])
const thirdPartyAssistantMessages = thirdParty.messages.filter(
message => message.type === 'assistant',
)
expect(thirdPartyAssistantMessages).toHaveLength(2)
expect(thirdPartyAssistantMessages[0]?.message?.content).toEqual([
{ type: 'text', text: 'visible reply' },
])
expect(
JSON.stringify(thirdPartyAssistantMessages.map(message => message.message?.content)),
).not.toContain('secret reasoning')
expect(
JSON.stringify(thirdPartyAssistantMessages.map(message => message.message?.content)),
).not.toContain('only hidden reasoning')
mock.restore()
mock.module('./model/providers.js', () => ({
getAPIProvider: () => 'bedrock',
isOpenAICompatibleProvider: (provider: string) =>
provider === 'openai' ||
provider === 'gemini' ||
provider === 'github' ||
provider === 'codex',
}))
const bedrockModule = await import(`./conversationRecovery.ts?provider=bedrock-${Date.now()}`)
const anthropicCompatible = bedrockModule.deserializeMessagesWithInterruptDetection(serializedMessages as never[])
const anthropicAssistantMessages = anthropicCompatible.messages.filter(
message => message.type === 'assistant',
)
expect(anthropicAssistantMessages).toHaveLength(2)
expect(anthropicAssistantMessages[0]?.message?.content).toEqual([
{ type: 'thinking', thinking: 'secret reasoning' },
{ type: 'text', text: 'visible reply' },
])
expect(
JSON.stringify(anthropicAssistantMessages.map(message => message.message?.content)),
).toContain('secret reasoning')
expect(
JSON.stringify(anthropicAssistantMessages.map(message => message.message?.content)),
).not.toContain('only hidden reasoning')
})

View File

@@ -13,6 +13,7 @@ const originalSimple = process.env.CLAUDE_CODE_SIMPLE
const sessionId = '00000000-0000-4000-8000-000000001999' const sessionId = '00000000-0000-4000-8000-000000001999'
const ts = '2026-04-02T00:00:00.000Z' const ts = '2026-04-02T00:00:00.000Z'
function id(n: number): string { function id(n: number): string {
return `00000000-0000-4000-8000-${String(n).padStart(12, '0')}` return `00000000-0000-4000-8000-${String(n).padStart(12, '0')}`
} }
@@ -76,4 +77,3 @@ test('loadConversationForResume rejects oversized reconstructed transcripts', as
'Reconstructed transcript is too large to resume safely', 'Reconstructed transcript is too large to resume safely',
) )
}) })

View File

@@ -24,6 +24,7 @@ import {
type FileHistorySnapshot, type FileHistorySnapshot,
} from './fileHistory.js' } from './fileHistory.js'
import { logError } from './log.js' import { logError } from './log.js'
import { getAPIProvider } from './model/providers.js'
import { import {
createAssistantMessage, createAssistantMessage,
createUserMessage, createUserMessage,
@@ -177,6 +178,25 @@ export type DeserializeResult = {
turnInterruptionState: TurnInterruptionState turnInterruptionState: TurnInterruptionState
} }
/**
* Remove thinking/redacted_thinking content blocks from assistant messages.
* Messages that become empty after stripping are removed entirely.
*/
function stripThinkingBlocks(messages: NormalizedMessage[]): NormalizedMessage[] {
return messages.reduce<NormalizedMessage[]>((acc, msg) => {
if (msg.type !== 'assistant' || !Array.isArray(msg.message?.content)) {
acc.push(msg)
return acc
}
const filtered = msg.message.content.filter(
(block: { type?: string }) => block.type !== 'thinking' && block.type !== 'redacted_thinking',
)
if (filtered.length === 0) return acc
acc.push({ ...msg, message: { ...msg.message, content: filtered } })
return acc
}, [])
}
/** /**
* Deserializes messages from a log file into the format expected by the REPL. * Deserializes messages from a log file into the format expected by the REPL.
* Filters unresolved tool uses, orphaned thinking messages, and appends a * Filters unresolved tool uses, orphaned thinking messages, and appends a
@@ -227,10 +247,19 @@ export function deserializeMessagesWithInterruptDetection(
filteredToolUses, filteredToolUses,
) as NormalizedMessage[] ) as NormalizedMessage[]
// Strip thinking/redacted_thinking content blocks from assistant messages
// when resuming against a 3P provider. These Anthropic-specific blocks cause
// 400 errors or context corruption on OpenAI-compatible providers (issue #248 finding 5).
const provider = getAPIProvider()
const isThirdPartyProvider = provider !== 'firstParty' && provider !== 'bedrock' && provider !== 'vertex' && provider !== 'foundry'
const thinkingStripped = isThirdPartyProvider
? stripThinkingBlocks(filteredThinking)
: filteredThinking
// Filter out assistant messages with only whitespace text content. // Filter out assistant messages with only whitespace text content.
// This can happen when model outputs "\n\n" before thinking, user cancels mid-stream. // This can happen when model outputs "\n\n" before thinking, user cancels mid-stream.
const filteredMessages = filterWhitespaceOnlyAssistantMessages( const filteredMessages = filterWhitespaceOnlyAssistantMessages(
filteredThinking, thinkingStripped,
) as NormalizedMessage[] ) as NormalizedMessage[]
const internalState = detectTurnInterruption(filteredMessages) const internalState = detectTurnInterruption(filteredMessages)

View File

@@ -8,7 +8,6 @@ import {
} from './managedEnvConstants.js' } from './managedEnvConstants.js'
import { clearMTLSCache } from './mtls.js' import { clearMTLSCache } from './mtls.js'
import { clearProxyCache, configureGlobalAgents } from './proxy.js' import { clearProxyCache, configureGlobalAgents } from './proxy.js'
import { filterSettingsEnvForExplicitProvider } from './providerEnvSelection.js'
import { applyActiveProviderProfileFromConfig } from './providerProfiles.js' import { applyActiveProviderProfileFromConfig } from './providerProfiles.js'
import { isSettingSourceEnabled } from './settings/constants.js' import { isSettingSourceEnabled } from './settings/constants.js'
import { import {
@@ -88,9 +87,7 @@ function filterSettingsEnv(
env: Record<string, string> | undefined, env: Record<string, string> | undefined,
): Record<string, string> { ): Record<string, string> {
return withoutCcdSpawnEnvKeys( return withoutCcdSpawnEnvKeys(
filterSettingsEnvForExplicitProvider( withoutHostManagedProviderVars(withoutSSHTunnelVars(env)),
withoutHostManagedProviderVars(withoutSSHTunnelVars(env)),
),
) )
} }

View File

@@ -1,116 +0,0 @@
import { afterEach, beforeEach, describe, expect, test } from 'bun:test'
import { filterSettingsEnvForExplicitProvider } from './providerEnvSelection.js'
const originalEnv = { ...process.env }
const RESET_KEYS = [
'CLAUDE_CODE_EXPLICIT_PROVIDER',
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
] as const
beforeEach(() => {
for (const key of RESET_KEYS) {
delete process.env[key]
}
})
afterEach(() => {
for (const key of RESET_KEYS) {
if (originalEnv[key] === undefined) delete process.env[key]
else process.env[key] = originalEnv[key]
}
})
describe('filterSettingsEnvForExplicitProvider', () => {
test('does not treat plain provider flags as an explicit CLI override', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
expect(
filterSettingsEnvForExplicitProvider({
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_MODEL: 'gpt-4o',
OTHER: 'keep-me',
}),
).toEqual({
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_MODEL: 'gpt-4o',
OTHER: 'keep-me',
})
})
test('strips settings-sourced provider flags when CLI provider is explicit', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'openai'
expect(
filterSettingsEnvForExplicitProvider({
CLAUDE_CODE_USE_GITHUB: '1',
CLAUDE_CODE_USE_OPENAI: '1',
OTHER: 'keep-me',
}),
).toEqual({ OTHER: 'keep-me' })
})
test('strips a stale GitHub model when explicit provider is not github', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'openai'
expect(
filterSettingsEnvForExplicitProvider({
OPENAI_MODEL: 'github:copilot',
OTHER: 'keep-me',
}),
).toEqual({ OTHER: 'keep-me' })
})
test('keeps a normal OpenAI model when explicit provider is openai', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'openai'
expect(
filterSettingsEnvForExplicitProvider({
OPENAI_MODEL: 'gpt-4o',
OTHER: 'keep-me',
}),
).toEqual({ OPENAI_MODEL: 'gpt-4o', OTHER: 'keep-me' })
})
test('strips a non-GitHub OpenAI model when explicit provider is github', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'github'
expect(
filterSettingsEnvForExplicitProvider({
OPENAI_MODEL: 'gpt-4o',
OTHER: 'keep-me',
}),
).toEqual({ OTHER: 'keep-me' })
})
test('preserves anthropic startup intent by stripping stale GitHub/OpenAI settings', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'anthropic'
expect(
filterSettingsEnvForExplicitProvider({
CLAUDE_CODE_USE_GITHUB: '1',
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_MODEL: 'github:copilot',
OTHER: 'keep-me',
}),
).toEqual({ OTHER: 'keep-me' })
})
test('preserves explicit ollama startup intent by stripping OpenAI routing settings', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'ollama'
expect(
filterSettingsEnvForExplicitProvider({
OPENAI_BASE_URL: 'https://api.openai.com/v1',
OPENAI_MODEL: 'gpt-4o',
OPENAI_API_KEY: 'sk-test',
OTHER: 'keep-me',
}),
).toEqual({ OTHER: 'keep-me' })
})
})

View File

@@ -1,63 +0,0 @@
export const EXPLICIT_PROVIDER_ENV_VAR = 'CLAUDE_CODE_EXPLICIT_PROVIDER'
const PROVIDER_FLAG_KEYS = [
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
] as const
export function clearProviderSelectionFlags(
env: NodeJS.ProcessEnv = process.env,
): void {
for (const key of PROVIDER_FLAG_KEYS) {
delete env[key]
}
}
function getExplicitProvider(processEnv: NodeJS.ProcessEnv): string | undefined {
return processEnv[EXPLICIT_PROVIDER_ENV_VAR]?.trim() || undefined
}
function isGithubModel(model: string | undefined): boolean {
return (model ?? '').trim().toLowerCase().startsWith('github:')
}
export function filterSettingsEnvForExplicitProvider(
env: Record<string, string> | undefined,
processEnv: NodeJS.ProcessEnv = process.env,
): Record<string, string> {
if (!env) return {}
const explicitProvider = getExplicitProvider(processEnv)
if (!explicitProvider) {
return env
}
const filtered = { ...env }
for (const key of PROVIDER_FLAG_KEYS) {
delete filtered[key]
}
if (explicitProvider === 'ollama') {
delete filtered.OPENAI_BASE_URL
delete filtered.OPENAI_MODEL
delete filtered.OPENAI_API_KEY
return filtered
}
if (explicitProvider === 'github') {
if (!isGithubModel(filtered.OPENAI_MODEL)) {
delete filtered.OPENAI_MODEL
}
return filtered
}
if (isGithubModel(filtered.OPENAI_MODEL)) {
delete filtered.OPENAI_MODEL
}
return filtered
}

View File

@@ -9,13 +9,11 @@ import {
 const originalEnv = { ...process.env }

 const RESET_KEYS = [
-  'CLAUDE_CODE_EXPLICIT_PROVIDER',
   'CLAUDE_CODE_USE_OPENAI',
   'CLAUDE_CODE_USE_GEMINI',
   'CLAUDE_CODE_USE_GITHUB',
   'CLAUDE_CODE_USE_BEDROCK',
   'CLAUDE_CODE_USE_VERTEX',
-  'CLAUDE_CODE_USE_FOUNDRY',
   'OPENAI_BASE_URL',
   'OPENAI_API_KEY',
   'OPENAI_MODEL',
@@ -85,16 +83,6 @@ describe('applyProviderFlag - openai', () => {
     applyProviderFlag('openai', ['--model', 'gpt-4o'])
     expect(process.env.OPENAI_MODEL).toBe('gpt-4o')
   })
-
-  test('clears a previously persisted GitHub flag', () => {
-    process.env.CLAUDE_CODE_USE_GITHUB = '1'
-    const result = applyProviderFlag('openai', [])
-    expect(result.error).toBeUndefined()
-    expect(process.env.CLAUDE_CODE_USE_GITHUB).toBeUndefined()
-    expect(process.env.CLAUDE_CODE_USE_OPENAI).toBe('1')
-  })
 })

 describe('applyProviderFlag - gemini', () => {
@@ -116,16 +104,6 @@ describe('applyProviderFlag - github', () => {
     expect(result.error).toBeUndefined()
     expect(process.env.CLAUDE_CODE_USE_GITHUB).toBe('1')
   })
-
-  test('clears a previously set OpenAI flag', () => {
-    process.env.CLAUDE_CODE_USE_OPENAI = '1'
-    const result = applyProviderFlag('github', [])
-    expect(result.error).toBeUndefined()
-    expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
-    expect(process.env.CLAUDE_CODE_USE_GITHUB).toBe('1')
-  })
 })

 describe('applyProviderFlag - bedrock', () => {
@@ -173,19 +151,6 @@ describe('applyProviderFlag - invalid provider', () => {
   })
 })

-describe('applyProviderFlag - anthropic', () => {
-  test('clears third-party provider flags', () => {
-    process.env.CLAUDE_CODE_USE_GITHUB = '1'
-    process.env.CLAUDE_CODE_USE_OPENAI = '1'
-    const result = applyProviderFlag('anthropic', [])
-    expect(result.error).toBeUndefined()
-    expect(process.env.CLAUDE_CODE_USE_GITHUB).toBeUndefined()
-    expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
-  })
-})
-
 describe('applyProviderFlagFromArgs', () => {
   test('applies ollama provider and model from argv in one step', () => {
     const result = applyProviderFlagFromArgs([

View File

@@ -1,8 +1,3 @@
-import {
-  clearProviderSelectionFlags,
-  EXPLICIT_PROVIDER_ENV_VAR,
-} from './providerEnvSelection.js'
-
 /**
  * --provider CLI flag support.
  *
@@ -82,9 +77,6 @@ export function applyProviderFlag(
     }
   }

-  clearProviderSelectionFlags()
-  process.env[EXPLICIT_PROVIDER_ENV_VAR] = provider
-
   const model = parseModelFlag(args)

   switch (provider as ProviderFlagName) {

View File

@@ -485,26 +485,6 @@ test('buildStartupEnvFromProfile leaves explicit provider selections untouched',
   assert.equal(env.OPENAI_API_KEY, undefined)
 })

-test('buildStartupEnvFromProfile preserves explicit anthropic startup selection', async () => {
-  const processEnv = {
-    CLAUDE_CODE_EXPLICIT_PROVIDER: 'anthropic',
-  }
-  const env = await buildStartupEnvFromProfile({
-    persisted: profile('openai', {
-      CLAUDE_CODE_USE_GITHUB: '1',
-      OPENAI_MODEL: 'github:copilot',
-    }),
-    processEnv,
-  })
-  assert.equal(env, processEnv)
-  assert.equal(env.CLAUDE_CODE_EXPLICIT_PROVIDER, 'anthropic')
-  assert.equal(env.CLAUDE_CODE_USE_OPENAI, undefined)
-  assert.equal(env.CLAUDE_CODE_USE_GITHUB, undefined)
-  assert.equal(env.OPENAI_MODEL, undefined)
-})
-
 test('buildStartupEnvFromProfile leaves profile-managed env untouched', async () => {
   const processEnv = {
     CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED: '1',

View File

@@ -412,10 +412,6 @@ export function hasExplicitProviderSelection(
     return true
   }

-  if (processEnv.CLAUDE_CODE_EXPLICIT_PROVIDER?.trim()) {
-    return true
-  }
-
   return (
     processEnv.CLAUDE_CODE_USE_OPENAI !== undefined ||
     processEnv.CLAUDE_CODE_USE_GITHUB !== undefined ||

View File

@@ -9,7 +9,6 @@ async function importFreshProvidersModule() {
 const originalEnv = { ...process.env }

 const RESTORED_KEYS = [
-  'CLAUDE_CODE_EXPLICIT_PROVIDER',
   'CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED',
   'CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID',
   'CLAUDE_CODE_USE_OPENAI',
@@ -143,29 +142,6 @@ describe('applyProviderProfileToProcessEnv', () => {
 })

 describe('applyActiveProviderProfileFromConfig', () => {
-  test('does not override explicit anthropic startup selection', async () => {
-    const { applyActiveProviderProfileFromConfig } =
-      await importFreshProviderProfileModules()
-    process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'anthropic'
-    const applied = applyActiveProviderProfileFromConfig({
-      providerProfiles: [
-        buildProfile({
-          id: 'saved_github',
-          baseUrl: 'https://api.githubcopilot.com',
-          model: 'github:copilot',
-        }),
-      ],
-      activeProviderProfileId: 'saved_github',
-    } as any)
-    expect(applied).toBeUndefined()
-    expect(process.env.CLAUDE_CODE_EXPLICIT_PROVIDER).toBe('anthropic')
-    expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
-    expect(process.env.CLAUDE_CODE_USE_GITHUB).toBeUndefined()
-    expect(process.env.OPENAI_MODEL).toBeUndefined()
-  })
-
   test('does not override explicit startup provider selection', async () => {
     const { applyActiveProviderProfileFromConfig } =
       await importFreshProviderProfileModules()

View File

@@ -5,7 +5,6 @@ import {
   type ProviderProfile,
 } from './config.js'
 import type { ModelOption } from './model/modelOptions.js'
-import { EXPLICIT_PROVIDER_ENV_VAR } from './providerEnvSelection.js'

 export type ProviderPreset =
   | 'anthropic'
@@ -257,7 +256,6 @@ function hasProviderSelectionFlags(
   processEnv: NodeJS.ProcessEnv = process.env,
 ): boolean {
   return (
-    processEnv[EXPLICIT_PROVIDER_ENV_VAR] !== undefined ||
     processEnv.CLAUDE_CODE_USE_OPENAI !== undefined ||
     processEnv.CLAUDE_CODE_USE_GEMINI !== undefined ||
     processEnv.CLAUDE_CODE_USE_GITHUB !== undefined ||

View File

@@ -1,11 +1,52 @@
 import { expect, test } from 'bun:test'
+import path from 'path'

-import { wrapRipgrepUnavailableError } from './ripgrep.ts'
+import { resolveRipgrepConfig, wrapRipgrepUnavailableError } from './ripgrep.js'
+
+const MOCK_BUILTIN_PATH = path.normalize(
+  process.platform === 'win32'
+    ? `vendor/ripgrep/${process.arch}-win32/rg.exe`
+    : `vendor/ripgrep/${process.arch}-${process.platform}/rg`,
+)
+
+test('ripgrepCommand falls back to system rg when builtin binary is missing', () => {
+  const config = resolveRipgrepConfig({
+    userWantsSystemRipgrep: false,
+    bundledMode: false,
+    builtinCommand: MOCK_BUILTIN_PATH,
+    builtinExists: false,
+    systemExecutablePath: '/usr/bin/rg',
+    processExecPath: '/fake/bun',
+  })
+  expect(config).toMatchObject({
+    mode: 'system',
+    command: 'rg',
+    args: [],
+  })
+})
+
+test('ripgrepCommand keeps builtin mode when bundled binary exists', () => {
+  const config = resolveRipgrepConfig({
+    userWantsSystemRipgrep: false,
+    bundledMode: false,
+    builtinCommand: MOCK_BUILTIN_PATH,
+    builtinExists: true,
+    systemExecutablePath: '/usr/bin/rg',
+    processExecPath: '/fake/bun',
+  })
+  expect(config).toMatchObject({
+    mode: 'builtin',
+    command: MOCK_BUILTIN_PATH,
+    args: [],
+  })
+})

 test('wrapRipgrepUnavailableError explains missing packaged fallback', () => {
   const error = wrapRipgrepUnavailableError(
     { code: 'ENOENT', message: 'spawn rg ENOENT' },
-    { mode: 'builtin', command: 'C:\\fake\\vendor\\ripgrep\\rg.exe' },
+    { mode: 'builtin', command: 'C:\\fake\\vendor\\ripgrep\\rg.exe', args: [] },
     'win32',
   )
@@ -18,7 +59,7 @@ test('wrapRipgrepUnavailableError explains missing packaged fallback', () => {
 test('wrapRipgrepUnavailableError explains missing system ripgrep', () => {
   const error = wrapRipgrepUnavailableError(
     { code: 'ENOENT', message: 'spawn rg ENOENT' },
-    { mode: 'system', command: 'rg' },
+    { mode: 'system', command: 'rg', args: [] },
     'linux',
   )

View File

@@ -1,5 +1,6 @@
 import type { ChildProcess, ExecFileException } from 'child_process'
 import { execFile, spawn } from 'child_process'
+import { existsSync } from 'fs'
 import memoize from 'lodash-es/memoize.js'
 import { homedir } from 'os'
 import * as path from 'path'
@@ -30,40 +31,72 @@ type RipgrepConfig = {
 type RipgrepErrorLike = Pick<NodeJS.ErrnoException, 'code' | 'message'>

-const getRipgrepConfig = memoize((): RipgrepConfig => {
-  const userWantsSystemRipgrep = isEnvDefinedFalsy(
-    process.env.USE_BUILTIN_RIPGREP,
-  )
-
-  // Try system ripgrep if user wants it
-  if (userWantsSystemRipgrep) {
-    const { cmd: systemPath } = findExecutable('rg', [])
-    if (systemPath !== 'rg') {
-      // SECURITY: Use command name 'rg' instead of systemPath to prevent PATH hijacking
-      // If we used systemPath, a malicious ./rg.exe in current directory could be executed
-      // Using just 'rg' lets the OS resolve it safely with NoDefaultCurrentDirectoryInExePath protection
-      return { mode: 'system', command: 'rg', args: [] }
-    }
-  }
+function isErrnoException(error: unknown): error is NodeJS.ErrnoException {
+  return error instanceof Error
+}
+
+type ResolveRipgrepConfigArgs = {
+  userWantsSystemRipgrep: boolean
+  bundledMode: boolean
+  builtinCommand: string
+  builtinExists: boolean
+  systemExecutablePath: string
+  processExecPath?: string
+}
+
+export function resolveRipgrepConfig({
+  userWantsSystemRipgrep,
+  bundledMode,
+  builtinCommand,
+  builtinExists,
+  systemExecutablePath,
+  processExecPath = process.execPath,
+}: ResolveRipgrepConfigArgs): RipgrepConfig {
+  if (userWantsSystemRipgrep && systemExecutablePath !== 'rg') {
+    // SECURITY: Use command name 'rg' instead of systemExecutablePath to prevent PATH hijacking
+    return { mode: 'system', command: 'rg', args: [] }
+  }

-  // In bundled (native) mode, ripgrep is statically compiled into bun-internal
-  // and dispatches based on argv[0]. We spawn ourselves with argv0='rg'.
-  if (isInBundledMode()) {
+  if (bundledMode) {
     return {
       mode: 'embedded',
-      command: process.execPath,
+      command: processExecPath,
       args: ['--no-config'],
       argv0: 'rg',
     }
   }

+  if (builtinExists) {
+    return { mode: 'builtin', command: builtinCommand, args: [] }
+  }
+
+  if (systemExecutablePath !== 'rg') {
+    return { mode: 'system', command: 'rg', args: [] }
+  }
+
+  return { mode: 'builtin', command: builtinCommand, args: [] }
+}
+
+const getRipgrepConfig = memoize((): RipgrepConfig => {
+  const userWantsSystemRipgrep = isEnvDefinedFalsy(
+    process.env.USE_BUILTIN_RIPGREP,
+  )
+  const bundledMode = isInBundledMode()
   const rgRoot = path.resolve(__dirname, 'vendor', 'ripgrep')
-  const command =
+  const builtinCommand =
     process.platform === 'win32'
       ? path.resolve(rgRoot, `${process.arch}-win32`, 'rg.exe')
       : path.resolve(rgRoot, `${process.arch}-${process.platform}`, 'rg')
+  const builtinExists = existsSync(builtinCommand)
+  const { cmd: systemExecutablePath } = findExecutable('rg', [])

-  return { mode: 'builtin', command, args: [] }
+  return resolveRipgrepConfig({
+    userWantsSystemRipgrep,
+    bundledMode,
+    builtinCommand,
+    builtinExists,
+    systemExecutablePath,
+  })
 })

 export function ripgrepCommand(): {
@@ -324,7 +357,9 @@ async function ripGrepFileCount(
         if (settled) return
         settled = true
         reject(
-          err.code === 'ENOENT' ? wrapRipgrepUnavailableError(err) : err,
+          isErrnoException(err) && err.code === 'ENOENT'
+            ? wrapRipgrepUnavailableError(err)
+            : err,
         )
       })
     })
@@ -388,7 +423,9 @@ export async function ripGrepStream(
         if (settled) return
         settled = true
         reject(
-          err.code === 'ENOENT' ? wrapRipgrepUnavailableError(err) : err,
+          isErrnoException(err) && err.code === 'ENOENT'
+            ? wrapRipgrepUnavailableError(err)
+            : err,
         )
       })
     })
@@ -436,7 +473,9 @@ export async function ripGrep(
       const CRITICAL_ERROR_CODES = ['ENOENT', 'EACCES', 'EPERM']
       if (CRITICAL_ERROR_CODES.includes(error.code as string)) {
         reject(
-          error.code === 'ENOENT' ? wrapRipgrepUnavailableError(error) : error,
+          isErrnoException(error) && error.code === 'ENOENT'
+            ? wrapRipgrepUnavailableError(error)
+            : error,
         )
         return
       }
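
The precedence that `resolveRipgrepConfig` implements above can be summarized in a small sketch. This is not the real function; `systemRgOnPath` is a hypothetical stand-in for the `findExecutable('rg', [])` check, and the assumed order is: user override, embedded (bundled) mode, vendored binary, system `rg` as the new fallback, then the vendored path as a last resort so the spawn error can be wrapped:

```typescript
// Sketch of the ripgrep selection precedence under the assumptions above.
type Mode = 'system' | 'embedded' | 'builtin'

function pickRipgrepMode(opts: {
  userWantsSystemRipgrep: boolean
  bundledMode: boolean
  builtinExists: boolean
  systemRgOnPath: boolean // stand-in for findExecutable('rg') succeeding
}): Mode {
  // 1. Explicit user preference for system rg wins when rg is actually on PATH.
  if (opts.userWantsSystemRipgrep && opts.systemRgOnPath) return 'system'
  // 2. Native bundles carry an embedded ripgrep.
  if (opts.bundledMode) return 'embedded'
  // 3. Prefer the vendored binary when it was actually packaged.
  if (opts.builtinExists) return 'builtin'
  // 4. New fallback: npm/source installs missing the vendored rg use system rg.
  if (opts.systemRgOnPath) return 'system'
  // 5. Last resort: spawning will fail and surface the wrapped ENOENT error.
  return 'builtin'
}
```

The key behavioral change is step 4: previously a missing vendored binary fell straight through to a spawn failure, even when a perfectly good system `rg` was available.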

View File

@@ -0,0 +1,68 @@
+import { describe, expect, test } from 'bun:test'
+
+import { sanitizeSchemaForOpenAICompat } from './schemaSanitizer'
+
+describe('sanitizeSchemaForOpenAICompat', () => {
+  test('preserves Grep-like properties.pattern while keeping it required', () => {
+    const schema = {
+      type: 'object',
+      properties: {
+        pattern: {
+          type: 'string',
+          description: 'The regular expression pattern to search for in file contents',
+        },
+        path: { type: 'string' },
+        glob: { type: 'string' },
+      },
+      required: ['pattern'],
+    }
+
+    const sanitized = sanitizeSchemaForOpenAICompat(schema)
+    const properties = sanitized.properties as Record<string, unknown> | undefined
+
+    expect(Object.keys(properties ?? {})).toEqual(['pattern', 'path', 'glob'])
+    expect(properties?.pattern).toEqual({
+      type: 'string',
+      description: 'The regular expression pattern to search for in file contents',
+    })
+    expect(sanitized.required).toEqual(['pattern'])
+  })
+
+  test('preserves Glob-like properties.pattern while keeping it required', () => {
+    const schema = {
+      type: 'object',
+      properties: {
+        pattern: {
+          type: 'string',
+          description: 'The glob pattern to match files against',
+        },
+        path: { type: 'string' },
+      },
+      required: ['pattern'],
+    }
+
+    const sanitized = sanitizeSchemaForOpenAICompat(schema)
+    const properties = sanitized.properties as Record<string, unknown> | undefined
+
+    expect(Object.keys(properties ?? {})).toEqual(['pattern', 'path'])
+    expect(properties?.pattern).toEqual({
+      type: 'string',
+      description: 'The glob pattern to match files against',
+    })
+    expect(sanitized.required).toEqual(['pattern'])
+  })
+
+  test('strips JSON Schema validator pattern from string schemas', () => {
+    const schema = {
+      type: 'string',
+      pattern: '^[a-z]+$',
+      minLength: 1,
+    }
+
+    const sanitized = sanitizeSchemaForOpenAICompat(schema)
+
+    expect(sanitized).toEqual({
+      type: 'string',
+    })
+  })
+})

View File

@@ -33,6 +33,15 @@ function stripSchemaKeywords(schema: unknown, keywords: Set<string>): unknown {
   const result: Record<string, unknown> = {}
   for (const [key, value] of Object.entries(schema)) {
+    if (key === 'properties' && isSchemaRecord(value)) {
+      const sanitizedProps: Record<string, unknown> = {}
+      for (const [propName, propSchema] of Object.entries(value)) {
+        sanitizedProps[propName] = stripSchemaKeywords(propSchema, keywords)
+      }
+      result[key] = sanitizedProps
+      continue
+    }
     if (keywords.has(key)) {
       continue
     }
@@ -215,10 +224,13 @@ export function sanitizeSchemaForOpenAICompat(
   }

-  if (Array.isArray(record.required) && isSchemaRecord(record.properties)) {
+  const properties = isSchemaRecord(record.properties)
+    ? record.properties
+    : undefined
+  if (Array.isArray(record.required) && properties) {
     record.required = record.required.filter(
-      (value): value is string =>
-        typeof value === 'string' && value in record.properties,
+      (value): value is string => typeof value === 'string' && value in properties,
     )
   }
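
The subtlety the sanitizer change addresses is that `pattern` is both a JSON Schema string-validation keyword and a legitimate property *name* for the Grep and Glob tools. The sketch below is a simplified stand-in for the real sanitizer (`STRIPPED_KEYWORDS` is an assumed subset): it treats keys under `properties` as data rather than schema keywords, recursing into each property schema instead of matching their names against the strip list:

```typescript
// Sketch: property names under `properties` are data, not schema keywords.
const STRIPPED_KEYWORDS = new Set(['pattern', 'minLength']) // assumed subset

function strip(schema: unknown): unknown {
  if (typeof schema !== 'object' || schema === null || Array.isArray(schema)) {
    return schema
  }
  const out: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(schema)) {
    if (key === 'properties' && typeof value === 'object' && value !== null) {
      // Keep every property name; sanitize only each property's schema.
      out[key] = Object.fromEntries(
        Object.entries(value).map(([name, prop]) => [name, strip(prop)]),
      )
      continue
    }
    if (STRIPPED_KEYWORDS.has(key)) continue
    out[key] = strip(value)
  }
  return out
}

// A Grep-style property named `pattern` survives...
console.log(strip({ type: 'object', properties: { pattern: { type: 'string' } } }))
// ...while the validator keyword `pattern` on a string schema is removed.
console.log(strip({ type: 'string', pattern: '^[a-z]+$' }))
```

Without the `properties` special case, the first schema would lose its only property, which is how Grep and Glob schemas were being hollowed out on the OpenAI path.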