Compare commits


5 Commits

Author SHA1 Message Date
OpenClaude Worker 3
af5bb8fed8 fix: gate startup checks strictly on first submission, remove grace period (issue #363)
As gnanam1990 pointed out, the 3s grace period still allows the failure
mode: if a user pauses for a few seconds before typing, startup checks
fire and recommendation dialogs steal focus. A grace period is still a
timing mitigation, not a reliable fix.

New approach: startup checks only run after the user has submitted their
first message (submitCount > 0). No grace period, no timeout. This
guarantees the prompt gets first interaction — no dialog can steal focus
before the user has actually used the CLI.

If the user never submits a message, startup checks never run. That's
acceptable because with no user interaction there's no need for plugin
installations or marketplace seeding.
2026-04-08 11:55:23 +05:30
OpenClaude Worker 3
ad76b1174a fix: move startup checks after submitCount declaration to avoid temporal dead zone
Code quality bot flagged that submitCount was used before its declaration.
Moved the entire startup checks block to after the submitCount useState
declaration. Also added nullish coalescing (submitCount ?? 0) per bot
suggestion.
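For readers unfamiliar with the temporal dead zone: a `const`/`let` binding exists from the top of its scope but throws on access until its declaration executes. A minimal, self-contained reproduction (names are illustrative, not from the codebase):

```typescript
function tdzDemo(): string {
  let result: string;
  // Hoisted function declaration: callable before `count` below is initialized.
  function probe(): string {
    return String(count);
  }
  try {
    result = probe(); // `count` is still in its temporal dead zone here
  } catch (e) {
    result = e instanceof ReferenceError ? "ReferenceError" : String(e);
  }
  const count = 0; // moving the declaration above its first use is the fix
  void count;
  return result;
}

console.log(tdzDemo()); // "ReferenceError"
```

Reordering so the `useState` declaration precedes the effect that reads it eliminates exactly this class of runtime `ReferenceError`.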
2026-04-08 11:47:26 +05:30
OpenClaude Worker 3
c457d9db3c fix: gate startup checks on prompt readiness, not just a timeout (issue #363)
The previous approach used a fixed 1500ms timeout, but as gnanam1990
pointed out, if a user pauses for >1.5s before typing the timer can
still fire and recommendation dialogs can steal focus. This is a
timing mitigation, not a reliable fix.

New approach: gate startup checks on actual prompt readiness:
1. After first message submission (submitCount > 0) — always safe
2. After grace period (3s) elapsed AND user is idle — safe because
   no dialog will interrupt an idle user who hasn't started typing
3. While user is actively typing — deferred until they stop

This ensures startup checks never steal focus from a prompt the user
is about to type into, regardless of how long they pause before typing.

Also removes the old STARTUP_CHECK_DELAY_MS constant in favor of
STARTUP_GRACE_PERIOD_MS with clearer semantics.
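The three-way gate above can be distilled into a pure function. This is a sketch with assumed names — the real logic is wired through React state, and only the `STARTUP_GRACE_PERIOD_MS` name comes from the commit; the 3000ms value is taken from the "3s" in the message:

```typescript
type PromptReadiness = {
  submitCount: number;
  elapsedMs: number; // time since REPL mount
  isTyping: boolean;
};

const STARTUP_GRACE_PERIOD_MS = 3000; // "3s" per the commit message

// Hypothetical helper combining the three conditions listed above.
function startupChecksAllowed(s: PromptReadiness): boolean {
  if (s.submitCount > 0) return true;            // 1. always safe after first submit
  if (s.isTyping) return false;                  // 3. defer while actively typing
  return s.elapsedMs >= STARTUP_GRACE_PERIOD_MS; // 2. grace elapsed AND user idle
}

console.log(startupChecksAllowed({ submitCount: 0, elapsedMs: 4000, isTyping: false })); // true
```

Note the ordering: an active typist is never interrupted even after the grace period, which is the property the plain timeout lacked.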
2026-04-08 11:39:21 +05:30
OpenClaude Worker 3
d1f79088a1 fix: move startup checks effect after promptTypingSuppressionActive declaration
Fixes temporal dead zone warning flagged by code-quality bot.
promptTypingSuppressionActive is declared on line ~1340 but the
useEffect was on line ~800, causing a reference-before-declaration.
Also adds missing semicolons for style consistency.
2026-04-08 11:35:48 +05:30
OpenClaude Worker 3
106f85d0bf fix: defer startup plugin checks and suppress recommendation dialogs during startup window (issue #363)
Root cause: performStartupChecks() fires immediately on REPL mount,
triggering plugin loading which populates trackedFiles, which triggers
useLspPluginRecommendation to surface an LSP recommendation dialog.
Since promptTypingSuppressionActive is false before any user input,
getFocusedInputDialog() returns the dialog, unmounting PromptInput
entirely and making the CLI appear frozen.

Fix: Two-pronged approach:
1. Defer performStartupChecks by 1500ms and gate on
   promptTypingSuppressionActive so startup checks don't run while
   the user is typing or has early input buffered
2. Suppress lower-priority startup dialogs (LSP recommendation,
   plugin hint, desktop upsell) until startupChecksStartedRef is
   true, preventing them from stealing focus during the vulnerable
   startup window

This also explains why --bare mode and disabling plugins work:
--bare mode skips plugin loading entirely, and disabling the
autoresearch plugin eliminates the LSP match, so lspRecommendation
stays null and PromptInput renders normally.
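A minimal sketch of the two prongs, using assumed standalone helpers in place of the real React effect and dialog selector (only `STARTUP_CHECK_DELAY_MS` and the 1500ms value are named in the commits):

```typescript
const STARTUP_CHECK_DELAY_MS = 1500;

// Prong 1 (sketch): defer the checks, and skip them if the prompt is busy
// when the timer fires. `isPromptBusy` stands in for
// promptTypingSuppressionActive in the real code.
function scheduleStartupChecks(
  run: () => void,
  isPromptBusy: () => boolean,
): ReturnType<typeof setTimeout> {
  return setTimeout(() => {
    if (!isPromptBusy()) run();
  }, STARTUP_CHECK_DELAY_MS);
}

// Prong 2 (sketch): low-priority startup dialogs stay hidden until startup
// checks have actually begun, so they can't steal focus during the window.
function visibleStartupDialog(
  startupChecksStarted: boolean,
  dialog: { priority: "low" | "high" } | null,
): { priority: "low" | "high" } | null {
  if (!dialog) return null;
  if (dialog.priority === "low" && !startupChecksStarted) return null;
  return dialog;
}
```

Usage: the REPL would call `scheduleStartupChecks` once on mount and route `getFocusedInputDialog()` results through something like `visibleStartupDialog`, so `PromptInput` stays mounted during the vulnerable window.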
2026-04-08 11:24:36 +05:30
43 changed files with 165 additions and 2092 deletions

View File

@@ -29,13 +29,6 @@ jobs:
         with:
           bun-version: 1.3.11
-      - name: Set up Python
-        uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
-        with:
-          python-version: "3.12"
-          cache: "pip"
-          cache-dependency-path: python/requirements.txt
       - name: Install dependencies
         run: bun install --frozen-lockfile
@@ -45,12 +38,6 @@ jobs:
       - name: Full unit test suite
        run: bun test --max-concurrency=1
-      - name: Install Python test dependencies
-        run: python -m pip install -r python/requirements.txt
-      - name: Python unit tests
-        run: python -m pytest -q python/tests
       - name: Suspicious PR intent scan
        run: bun run security:pr-scan -- --base ${{ github.event.pull_request.base.sha || 'origin/main' }}
       - name: Provider tests

View File

@@ -1,3 +0,0 @@
-pytest==7.4.4
-pytest-asyncio==0.23.3
-httpx==0.25.2

View File

@@ -118,14 +118,14 @@ function isLocalBaseUrl(baseUrl: string): boolean {
 }
 const GEMINI_DEFAULT_BASE_URL = 'https://generativelanguage.googleapis.com/v1beta/openai'
-const GITHUB_COPILOT_BASE = 'https://api.githubcopilot.com'
+const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
 function currentBaseUrl(): string {
   if (isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
     return process.env.GEMINI_BASE_URL ?? GEMINI_DEFAULT_BASE_URL
   }
   if (isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
-    return process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE
+    return process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
   }
   return process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1'
 }
@@ -157,7 +157,7 @@ function checkGeminiEnv(): CheckResult[] {
 function checkGithubEnv(): CheckResult[] {
   const results: CheckResult[] = []
-  const baseUrl = process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE
+  const baseUrl = process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
   results.push(pass('Provider mode', 'GitHub Models provider enabled.'))
   const token = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN
@@ -435,7 +435,7 @@ function serializeSafeEnvSummary(): Record<string, string | boolean> {
     process.env.OPENAI_MODEL ??
       '(unset, default: github:copilot → openai/gpt-4.1)',
   OPENAI_BASE_URL:
-    process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE,
+    process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE,
   GITHUB_TOKEN_SET: Boolean(
     process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN,
   ),

View File

@@ -136,7 +136,6 @@ import hooks from './commands/hooks/index.js'
 import files from './commands/files/index.js'
 import branch from './commands/branch/index.js'
 import agents from './commands/agents/index.js'
-import autoFix from './commands/auto-fix.js'
 import plugin from './commands/plugin/index.js'
 import reloadPlugins from './commands/reload-plugins/index.js'
 import rewind from './commands/rewind/index.js'
@@ -264,7 +263,6 @@ const COMMANDS = memoize((): Command[] => [
   addDir,
   advisor,
   agents,
-  autoFix,
   branch,
   btw,
   chrome,

View File

@@ -1,25 +0,0 @@
-import type { Command } from '../types/command.js'
-const command: Command = {
-  name: 'auto-fix',
-  description: 'Configure auto-fix: run lint/test after AI edits',
-  isEnabled: () => true,
-  type: 'prompt',
-  progressMessage: 'Configuring auto-fix...',
-  contentLength: 0,
-  source: 'builtin',
-  async getPromptForCommand() {
-    return [
-      {
-        type: 'text',
-        text:
-          'The user wants to configure auto-fix settings. Auto-fix automatically runs lint and test commands after AI file edits, feeding errors back for self-repair.\n\n' +
-          'Current settings location: `.claude/settings.json` or `.claude/settings.local.json`\n\n' +
-          'Example configuration:\n```json\n{\n  "autoFix": {\n    "enabled": true,\n    "lint": "eslint . --fix",\n    "test": "bun test",\n    "maxRetries": 3,\n    "timeout": 30000\n  }\n}\n```\n\n' +
-          'Ask the user what lint and test commands they use, then help them set up the configuration.',
-      },
-    ]
-  },
-}
-export default command

View File

@@ -4,7 +4,7 @@ const onboardGithub: Command = {
   name: 'onboard-github',
   aliases: ['onboarding-github', 'onboardgithub', 'onboardinggithub'],
   description:
-    'Interactive setup for GitHub Copilot: OAuth device login stored in secure storage',
+    'Interactive setup for GitHub Models: device login or PAT, saved to secure storage',
   type: 'local-jsx',
   load: () => import('./onboard-github.js'),
 }

View File

@@ -2,9 +2,9 @@ import * as React from 'react'
 import { useCallback, useState } from 'react'
 import { Select } from '../../components/CustomSelect/select.js'
 import { Spinner } from '../../components/Spinner.js'
+import TextInput from '../../components/TextInput.js'
 import { Box, Text } from '../../ink.js'
 import {
-  exchangeForCopilotToken,
   openVerificationUri,
   pollAccessToken,
   requestDeviceCode,
@@ -15,7 +15,7 @@ import {
   readGithubModelsToken,
   saveGithubModelsToken,
 } from '../../utils/githubModelsCredentials.js'
-import { getSettingsForSource, updateSettingsForSource } from '../../utils/settings/settings.js'
+import { updateSettingsForSource } from '../../utils/settings/settings.js'
 const DEFAULT_MODEL = 'github:copilot'
 const FORCE_RELOGIN_ARGS = new Set([
@@ -27,25 +27,11 @@ const FORCE_RELOGIN_ARGS = new Set([
   '--reauth',
 ])
-type Step = 'menu' | 'device-busy' | 'error'
-const PROVIDER_SPECIFIC_KEYS = new Set([
-  'CLAUDE_CODE_USE_OPENAI',
-  'CLAUDE_CODE_USE_GEMINI',
-  'CLAUDE_CODE_USE_BEDROCK',
-  'CLAUDE_CODE_USE_VERTEX',
-  'CLAUDE_CODE_USE_FOUNDRY',
-  'OPENAI_BASE_URL',
-  'OPENAI_API_BASE',
-  'OPENAI_API_KEY',
-  'OPENAI_MODEL',
-  'GEMINI_API_KEY',
-  'GOOGLE_API_KEY',
-  'GEMINI_BASE_URL',
-  'GEMINI_MODEL',
-  'GEMINI_ACCESS_TOKEN',
-  'GEMINI_AUTH_MODE',
-])
+type Step =
+  | 'menu'
+  | 'device-busy'
+  | 'pat'
+  | 'error'
 export function shouldForceGithubRelogin(args?: string): boolean {
   const normalized = (args ?? '').trim().toLowerCase()
@@ -55,29 +41,15 @@
   return normalized.split(/\s+/).some(arg => FORCE_RELOGIN_ARGS.has(arg))
 }
-const GITHUB_PAT_PREFIXES = ['ghp_', 'gho_', 'ghs_', 'ghr_', 'github_pat_']
-function isGithubPat(token: string): boolean {
-  return GITHUB_PAT_PREFIXES.some(prefix => token.startsWith(prefix))
-}
 export function hasExistingGithubModelsLoginToken(
   env: NodeJS.ProcessEnv = process.env,
   storedToken?: string,
 ): boolean {
   const envToken = env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim()
   if (envToken) {
-    // PATs are no longer supported - require OAuth re-auth
-    if (isGithubPat(envToken)) {
-      return false
-    }
     return true
   }
   const persisted = (storedToken ?? readGithubModelsToken())?.trim()
-  // PATs are no longer supported - require OAuth re-auth
-  if (persisted && isGithubPat(persisted)) {
-    return false
-  }
   return Boolean(persisted)
 }
@@ -125,21 +97,8 @@
 }
 function mergeUserSettingsEnv(model: string): { ok: boolean; detail?: string } {
-  const currentSettings = getSettingsForSource('userSettings')
-  const currentEnv = currentSettings?.env ?? {}
-  const newEnv: Record<string, string> = {}
-  for (const [key, value] of Object.entries(currentEnv)) {
-    if (!PROVIDER_SPECIFIC_KEYS.has(key)) {
-      newEnv[key] = value
-    }
-  }
-  newEnv.CLAUDE_CODE_USE_GITHUB = '1'
-  newEnv.OPENAI_MODEL = model
   const { error } = updateSettingsForSource('userSettings', {
-    env: newEnv,
+    env: buildGithubOnboardingSettingsEnv(model) as any,
   })
   if (error) {
     return { ok: false, detail: error.message }
@@ -184,14 +143,12 @@ function OnboardGithub(props: {
     user_code: string
     verification_uri: string
   } | null>(null)
+  const [patDraft, setPatDraft] = useState('')
+  const [cursorOffset, setCursorOffset] = useState(0)
   const finalize = useCallback(
-    async (
-      token: string,
-      model: string = DEFAULT_MODEL,
-      oauthToken?: string,
-    ) => {
-      const saved = saveGithubModelsToken(token, oauthToken)
+    async (token: string, model: string = DEFAULT_MODEL) => {
+      const saved = saveGithubModelsToken(token)
       if (!saved.success) {
         setErrorMsg(saved.warning ?? 'Could not save token to secure storage.')
         setStep('error')
@@ -208,18 +165,8 @@
         setStep('error')
         return
       }
-      // Clear stale provider-specific env vars from the current session
-      // so resolveProviderRequest() doesn't pick up a previous provider's
-      // base URL or key after onboarding completes.
-      for (const key of PROVIDER_SPECIFIC_KEYS) {
-        delete process.env[key]
-      }
-      process.env.CLAUDE_CODE_USE_GITHUB = '1'
-      process.env.OPENAI_MODEL = model.trim() || DEFAULT_MODEL
-      hydrateGithubModelsTokenFromSecureStorage()
-      onChangeAPIKey()
       onDone(
-        'GitHub Copilot onboard complete. Copilot token and OAuth token stored in secure storage (Windows/Linux: ~/.claude/.credentials.json, macOS: Keychain fallback to ~/.claude/.credentials.json); user settings updated. Restart if the model does not switch.',
+        'GitHub Models onboard complete. Token stored in secure storage; user settings updated. Restart if the model does not switch.',
         { display: 'user' },
       )
     },
@@ -237,12 +184,11 @@ function OnboardGithub(props: {
         verification_uri: device.verification_uri,
       })
       await openVerificationUri(device.verification_uri)
-      const oauthToken = await pollAccessToken(device.device_code, {
+      const token = await pollAccessToken(device.device_code, {
        initialInterval: device.interval,
        timeoutSeconds: device.expires_in,
       })
-      const copilotToken = await exchangeForCopilotToken(oauthToken)
-      await finalize(copilotToken.token, DEFAULT_MODEL, oauthToken)
+      await finalize(token, DEFAULT_MODEL)
     } catch (e) {
       setErrorMsg(e instanceof Error ? e.message : String(e))
       setStep('error')
@@ -281,7 +227,7 @@ function OnboardGithub(props: {
   if (step === 'device-busy') {
     return (
       <Box flexDirection="column" gap={1}>
-        <Text>GitHub Copilot sign-in</Text>
+        <Text>GitHub device login</Text>
        {deviceHint ? (
          <>
            <Text>
@@ -300,11 +246,43 @@
     )
   }
+  if (step === 'pat') {
+    return (
+      <Box flexDirection="column" gap={1}>
+        <Text>Paste a GitHub personal access token with access to GitHub Models.</Text>
+        <Text dimColor>Input is masked. Enter to submit; Esc to go back.</Text>
+        <TextInput
+          value={patDraft}
+          mask="*"
+          onChange={setPatDraft}
+          onSubmit={async (value: string) => {
+            const t = value.trim()
+            if (!t) {
+              return
+            }
+            await finalize(t, DEFAULT_MODEL)
+          }}
+          onExit={() => {
+            setStep('menu')
+            setPatDraft('')
+          }}
+          columns={80}
+          cursorOffset={cursorOffset}
+          onChangeCursorOffset={setCursorOffset}
+        />
+      </Box>
+    )
+  }
   const menuOptions = [
     {
-      label: 'Sign in with browser',
+      label: 'Sign in with browser (device code)',
       value: 'device' as const,
     },
+    {
+      label: 'Paste personal access token',
+      value: 'pat' as const,
+    },
     {
       label: 'Cancel',
       value: 'cancel' as const,
@@ -313,7 +291,7 @@
   return (
     <Box flexDirection="column" gap={1}>
-      <Text bold>GitHub Copilot setup</Text>
+      <Text bold>GitHub Models setup</Text>
      <Text dimColor>
        Stores your token in the OS credential store (macOS Keychain when available)
        and enables CLAUDE_CODE_USE_GITHUB in your user settings - no export
@@ -326,6 +304,10 @@ function OnboardGithub(props: {
           onDone('GitHub onboard cancelled', { display: 'system' })
           return
         }
+        if (v === 'pat') {
+          setStep('pat')
+          return
+        }
         void runDeviceFlow()
       }}
     />

View File

@@ -1,4 +1,5 @@
 import { useCallback, useState } from 'react'
+import { isDeepStrictEqual } from 'util'
 import { useRegisterOverlay } from '../../context/overlayContext.js'
 import type { InputEvent } from '../../ink/events/input-event.js'
 // eslint-disable-next-line custom-rules/prefer-use-keybindings -- raw space/arrow multiselect input
@@ -8,7 +9,6 @@ import {
   normalizeFullWidthSpace,
 } from '../../utils/stringUtils.js'
 import type { OptionWithDescription } from './select.js'
-import { optionsNavigateEqual } from './use-select-navigation.js'
 import { useSelectNavigation } from './use-select-navigation.js'
 export type UseMultiSelectStateProps<T> = {
@@ -174,7 +174,7 @@ export function useMultiSelectState<T>({
   // and the deleted ui/useMultiSelectState.ts — without this, MCPServerDesktopImportDialog
   // keeps colliding servers checked after getAllMcpConfigs() resolves.
   const [lastOptions, setLastOptions] = useState(options)
-  if (options !== lastOptions && !optionsNavigateEqual(options, lastOptions)) {
+  if (options !== lastOptions && !isDeepStrictEqual(options, lastOptions)) {
     setSelectedValues(defaultValue)
     setLastOptions(options)
   }
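The change above swaps a hand-rolled comparator for Node's built-in `util.isDeepStrictEqual`. A minimal standalone illustration of why some structural comparison is needed at all — React renders produce fresh array references, so reference equality always fails even when nothing changed:

```typescript
import { isDeepStrictEqual } from "util";

// Simplified option shapes; the real options also carry labels and handlers.
const a = [{ value: "x", disabled: false }];
const b = [{ value: "x", disabled: false }];

console.log(a === b);                 // false — new array reference each render
console.log(isDeepStrictEqual(a, b)); // true — structurally equal, no state reset
```

With this guard, selection state is only reset when the options genuinely changed, not merely because the parent re-rendered.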

View File

@@ -6,34 +6,10 @@ import {
   useRef,
   useState,
 } from 'react'
+import { isDeepStrictEqual } from 'util'
 import OptionMap from './option-map.js'
 import type { OptionWithDescription } from './select.js'
-/**
- * Compare two option arrays for structural equality on properties that
- * affect navigation behavior. ReactNode `label` and function `onChange`
- * are intentionally excluded — they are identity-unstable (new reference
- * each render) but don't change navigation semantics.
- */
-export function optionsNavigateEqual<T>(
-  a: OptionWithDescription<T>[],
-  b: OptionWithDescription<T>[],
-): boolean {
-  if (a.length !== b.length) return false
-  for (let i = 0; i < a.length; i++) {
-    const ao = a[i]!
-    const bo = b[i]!
-    if (
-      ao.value !== bo.value ||
-      ao.disabled !== bo.disabled ||
-      ao.type !== bo.type
-    ) {
-      return false
-    }
-  }
-  return true
-}
 type State<T> = {
   /**
   * Map where key is option's value and value is option's index.
@@ -548,7 +524,7 @@ export function useSelectNavigation<T>({
   const [lastOptions, setLastOptions] = useState(options)
-  if (options !== lastOptions && !optionsNavigateEqual(options, lastOptions)) {
+  if (options !== lastOptions && !isDeepStrictEqual(options, lastOptions)) {
     dispatch({
       type: 'reset',
       state: createDefaultState({

View File

@@ -95,8 +95,8 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
   if (useGithub) {
     const model = process.env.OPENAI_MODEL || 'github:copilot'
     const baseUrl =
-      process.env.OPENAI_BASE_URL || 'https://api.githubcopilot.com'
-    return { name: 'GitHub Copilot', model, baseUrl, isLocal: false }
+      process.env.OPENAI_BASE_URL || 'https://models.github.ai/inference'
+    return { name: 'GitHub Models', model, baseUrl, isLocal: false }
   }
   if (useOpenAI) {

View File

@@ -96,16 +96,15 @@ async function main(): Promise<void> {
     }
   }
-  // Enable configs first so we can read settings
   {
     const { enableConfigs } = await import('../utils/config.js')
     enableConfigs()
-  }
-  // Apply settings.env from user settings (includes GitHub provider settings from /onboard-github)
-  {
     const { applySafeConfigEnvironmentVariables } = await import('../utils/managedEnv.js')
     applySafeConfigEnvironmentVariables()
+    const { hydrateGeminiAccessTokenFromSecureStorage } = await import('../utils/geminiCredentials.js')
+    hydrateGeminiAccessTokenFromSecureStorage()
+    const { hydrateGithubModelsTokenFromSecureStorage } = await import('../utils/githubModelsCredentials.js')
+    hydrateGithubModelsTokenFromSecureStorage()
   }
   const startupEnv = await buildStartupEnvFromProfile({
@@ -122,16 +121,6 @@ async function main(): Promise<void> {
     }
   }
-  // Hydrate GitHub credentials after profile is applied so CLAUDE_CODE_USE_GITHUB from profile is available
-  {
-    const {
-      hydrateGithubModelsTokenFromSecureStorage,
-      refreshGithubModelsTokenIfNeeded,
-    } = await import('../utils/githubModelsCredentials.js')
-    await refreshGithubModelsTokenIfNeeded()
-    hydrateGithubModelsTokenFromSecureStorage()
-  }
   await validateProviderEnvOrExit()
   // Print the gradient startup screen before the Ink UI loads

View File

@@ -18,7 +18,6 @@ const originalEnv = {
   GEMINI_API_KEY: process.env.GEMINI_API_KEY,
   GEMINI_MODEL: process.env.GEMINI_MODEL,
   GEMINI_BASE_URL: process.env.GEMINI_BASE_URL,
-  GEMINI_AUTH_MODE: process.env.GEMINI_AUTH_MODE,
   GOOGLE_API_KEY: process.env.GOOGLE_API_KEY,
   OPENAI_API_KEY: process.env.OPENAI_API_KEY,
   OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
@@ -33,7 +32,6 @@ beforeEach(() => {
   process.env.GEMINI_API_KEY = 'gemini-test-key'
   process.env.GEMINI_MODEL = 'gemini-2.0-flash'
   process.env.GEMINI_BASE_URL = 'https://gemini.example/v1beta/openai'
-  process.env.GEMINI_AUTH_MODE = 'api-key'
   delete process.env.GOOGLE_API_KEY
   delete process.env.OPENAI_API_KEY
@@ -49,7 +47,6 @@ afterEach(() => {
   process.env.GEMINI_API_KEY = originalEnv.GEMINI_API_KEY
   process.env.GEMINI_MODEL = originalEnv.GEMINI_MODEL
   process.env.GEMINI_BASE_URL = originalEnv.GEMINI_BASE_URL
-  process.env.GEMINI_AUTH_MODE = originalEnv.GEMINI_AUTH_MODE
   process.env.GOOGLE_API_KEY = originalEnv.GOOGLE_API_KEY
   process.env.OPENAI_API_KEY = originalEnv.OPENAI_API_KEY
   process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL

View File

@@ -17,23 +17,16 @@ const tempDirs: string[] = []
 const originalEnv = {
   OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
   OPENAI_API_BASE: process.env.OPENAI_API_BASE,
-  CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
 }
 afterEach(() => {
-  if (originalEnv.OPENAI_BASE_URL === undefined) delete process.env.OPENAI_BASE_URL
-  else process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
-  if (originalEnv.OPENAI_API_BASE === undefined) delete process.env.OPENAI_API_BASE
-  else process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
-  if (originalEnv.CLAUDE_CODE_USE_GITHUB === undefined) delete process.env.CLAUDE_CODE_USE_GITHUB
-  else process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
   while (tempDirs.length > 0) {
     const dir = tempDirs.pop()
     if (dir) rmSync(dir, { recursive: true, force: true })
   }
+  process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
+  process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
 })
 function createTempAuthJson(payload: Record<string, unknown>): string {
@@ -78,7 +71,6 @@ describe('Codex provider config', () => {
   test('resolves codexplan alias to Codex transport with reasoning', () => {
     delete process.env.OPENAI_BASE_URL
     delete process.env.OPENAI_API_BASE
-    delete process.env.CLAUDE_CODE_USE_GITHUB
     const resolved = resolveProviderRequest({ model: 'codexplan' })
     expect(resolved.transport).toBe('codex_responses')

View File

@@ -1806,70 +1806,12 @@ test('sanitizes malformed MCP tool schemas before sending them to OpenAI', async
| undefined | undefined
expect(parameters?.additionalProperties).toBe(false) expect(parameters?.additionalProperties).toBe(false)
// No required[] in the original schema → none added (optional properties must not be forced required) expect(parameters?.required).toEqual(['priority'])
expect(parameters?.required).toEqual([])
expect(properties?.priority?.type).toBe('integer') expect(properties?.priority?.type).toBe('integer')
expect(properties?.priority?.enum).toEqual([0, 1, 2, 3]) expect(properties?.priority?.enum).toEqual([0, 1, 2, 3])
expect(properties?.priority).not.toHaveProperty('default') expect(properties?.priority).not.toHaveProperty('default')
}) })
test('optional tool properties are not added to required[] — fixes Groq/Azure 400 tool_use_failed', async () => {
// Regression test for: all optional properties being sent as required in strict mode,
// causing providers like Groq to reject valid tool calls where the model omits optional args.
let requestBody: Record<string, unknown> | undefined
globalThis.fetch = (async (_input, init) => {
requestBody = JSON.parse(String(init?.body))
return new Response(
JSON.stringify({
id: 'chatcmpl-4',
model: 'gpt-4o',
choices: [{ message: { role: 'assistant', content: 'ok' }, finish_reason: 'stop' }],
usage: { prompt_tokens: 5, completion_tokens: 2, total_tokens: 7 },
}),
{ headers: { 'Content-Type': 'application/json' } },
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'read a file' }],
tools: [
{
name: 'Read',
description: 'Read a file',
input_schema: {
type: 'object',
properties: {
file_path: { type: 'string', description: 'Absolute path to file' },
offset: { type: 'number', description: 'Line to start from' },
limit: { type: 'number', description: 'Max lines to read' },
pages: { type: 'string', description: 'Page range for PDFs' },
},
required: ['file_path'],
},
},
],
max_tokens: 16,
stream: false,
})
const parameters = (
requestBody?.tools as Array<{ function?: { parameters?: Record<string, unknown> } }>
)?.[0]?.function?.parameters
expect(parameters?.required).toEqual(['file_path'])
const required = parameters?.required as string[] | undefined
expect(required).not.toContain('offset')
expect(required).not.toContain('limit')
expect(required).not.toContain('pages')
expect(parameters?.additionalProperties).toBe(false)
})
// --------------------------------------------------------------------------- // ---------------------------------------------------------------------------
// Issue #202 — consecutive role coalescing (Devstral, Mistral strict templates) // Issue #202 — consecutive role coalescing (Devstral, Mistral strict templates)
// --------------------------------------------------------------------------- // ---------------------------------------------------------------------------
@@ -1907,7 +1849,7 @@ test('coalesces consecutive user messages to avoid alternation errors (issue #20
stream: false, stream: false,
}) })
expect(sentMessages?.length).toBe(2) expect(sentMessages?.length).toBe(2) // system + 1 merged user
expect(sentMessages?.[0]?.role).toBe('system') expect(sentMessages?.[0]?.role).toBe('system')
expect(sentMessages?.[1]?.role).toBe('user') expect(sentMessages?.[1]?.role).toBe('user')
const userContent = sentMessages?.[1]?.content as string const userContent = sentMessages?.[1]?.content as string
@@ -1941,8 +1883,9 @@ test('coalesces consecutive assistant messages preserving tool_calls (issue #202
    stream: false,
  })
+  // system + user + merged assistant + tool
  const assistantMsgs = sentMessages?.filter(m => m.role === 'assistant')
-  expect(assistantMsgs?.length).toBe(1)
+  expect(assistantMsgs?.length).toBe(1) // two assistant turns merged into one
  expect(assistantMsgs?.[0]?.tool_calls?.length).toBeGreaterThan(0)
})
@@ -2032,6 +1975,8 @@ test('non-streaming: empty string content does not fall through to reasoning_con
    stream: false,
  })) as { content: Array<Record<string, unknown>> }
+  // reasoning_content should be a thinking block, and also used as text
+  // since content is empty string (treated as absent)
  expect(result.content).toEqual([
    { type: 'thinking', thinking: 'Chain of thought here.' },
    { type: 'text', text: 'Chain of thought here.' },
@@ -2159,6 +2104,7 @@ test('streaming: thinking block closed before tool call', async () => {
  const types = events.map(e => e.type)
+  // Verify thinking block is started, then closed, then tool call starts
  const thinkingStartIdx = types.indexOf('content_block_start')
  const firstStopIdx = types.indexOf('content_block_stop')
  const toolStartIdx = types.indexOf(
@@ -2170,6 +2116,7 @@ test('streaming: thinking block closed before tool call', async () => {
  expect(firstStopIdx).toBeGreaterThan(thinkingStartIdx)
  expect(toolStartIdx).toBeGreaterThan(firstStopIdx)
+  // Verify thinking block start content
  const thinkingStart = events[thinkingStartIdx] as {
    content_block?: Record<string, unknown>
  }

View File

@@ -15,9 +15,9 @@
 * OPENAI_MODEL=gpt-4o — default model override
 * CODEX_API_KEY / ~/.codex/auth.json — Codex auth for codexplan/codexspark
 *
- * GitHub Copilot API (api.githubcopilot.com), OpenAI-compatible:
+ * GitHub Models (models.github.ai), OpenAI-compatible:
 * CLAUDE_CODE_USE_GITHUB=1 — enable GitHub inference (no need for USE_OPENAI)
- * GITHUB_TOKEN or GH_TOKEN — Copilot API token (mapped to Bearer auth)
+ * GITHUB_TOKEN or GH_TOKEN — PAT with models access (mapped to Bearer auth)
 * OPENAI_MODEL — optional; use github:copilot or openai/gpt-4.1 style IDs
 */
@@ -29,9 +29,7 @@ import { hydrateGithubModelsTokenFromSecureStorage } from '../../utils/githubMod
import {
  codexStreamToAnthropic,
  collectCodexCompletedResponse,
-  convertAnthropicMessagesToResponsesInput,
  convertCodexResponseToAnthropicMessage,
-  convertToolsToResponsesTools,
  performCodexRequest,
  type AnthropicStreamEvent,
  type AnthropicUsage,
@@ -41,7 +39,6 @@ import {
  isLocalProviderUrl,
  resolveCodexApiCredentials,
  resolveProviderRequest,
-  getGithubEndpointType,
} from './providerConfig.js'
import { sanitizeSchemaForOpenAICompat } from '../../utils/schemaSanitizer.js'
import { redactSecretValueForDisplay } from '../../utils/providerProfile.js'
@@ -58,19 +55,13 @@ type SecretValueSource = Partial<{
  GEMINI_ACCESS_TOKEN: string
}>
-const GITHUB_COPILOT_BASE = 'https://api.githubcopilot.com'
+const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
+const GITHUB_API_VERSION = '2022-11-28'
const GITHUB_429_MAX_RETRIES = 3
const GITHUB_429_BASE_DELAY_SEC = 1
const GITHUB_429_MAX_DELAY_SEC = 32
const GEMINI_API_HOST = 'generativelanguage.googleapis.com'
-const COPILOT_HEADERS: Record<string, string> = {
-  'User-Agent': 'GitHubCopilotChat/0.26.7',
-  'Editor-Version': 'vscode/1.99.3',
-  'Editor-Plugin-Version': 'copilot-chat/0.26.7',
-  'Copilot-Integration-Id': 'vscode-chat',
-}
function isGithubModelsMode(): boolean {
  return isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
}
@@ -421,13 +412,11 @@ function normalizeSchemaForOpenAI(
  record.properties = normalizedProps
  if (strict) {
-    // Keep only the properties that were originally marked required in the schema.
-    // Adding every property to required[] (the previous behaviour) caused strict
-    // OpenAI-compatible providers (Groq, Azure, etc.) to reject tool calls because
-    // the model correctly omits optional arguments — but the provider treats them
-    // as missing required fields and returns a 400 / tool_use_failed error.
-    record.required = existingRequired.filter(k => k in normalizedProps)
-    // additionalProperties: false is still required by strict-mode providers.
+    // OpenAI strict mode requires every property to be listed in required[]
+    const allKeys = Object.keys(normalizedProps)
+    record.required = Array.from(new Set([...existingRequired, ...allKeys]))
+    // OpenAI strict mode requires additionalProperties: false on all object
+    // schemas — override unconditionally to ensure nested objects comply.
    record.additionalProperties = false
  } else {
    // For Gemini: keep only existing required keys that are present in properties
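The strict-mode branch in this hunk can be sketched standalone. This is an illustrative reduction (the `Schema` type, `normalizeStrict` name, and sample input are hypothetical, not the project's API), showing what the new right-hand code does to a tool schema:

```typescript
// Standalone sketch of the new strict-mode normalization: every property is
// folded into required[] and additionalProperties is forced to false.
type Schema = {
  properties: Record<string, unknown>
  required?: string[]
  additionalProperties?: boolean
}

function normalizeStrict(schema: Schema): Schema {
  const allKeys = Object.keys(schema.properties)
  const existingRequired = schema.required ?? []
  return {
    ...schema,
    // Set dedupes keys that were already in required[]
    required: Array.from(new Set([...existingRequired, ...allKeys])),
    additionalProperties: false,
  }
}

const out = normalizeStrict({
  properties: { file_path: {}, offset: {}, limit: {} },
  required: ['file_path'],
})
console.log(out.required) // ['file_path', 'offset', 'limit']
console.log(out.additionalProperties) // false
```

Note that the removed left-hand comment documents why this all-keys behaviour was previously avoided for strict OpenAI-compatible providers such as Groq and Azure.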
@@ -955,9 +944,8 @@ class OpenAIShimMessages {
  httpResponse = response
  if (params.stream) {
-    const isResponsesStream = response.url?.includes('/responses')
    return new OpenAIShimStream(
-      (request.transport === 'codex_responses' || isResponsesStream)
+      request.transport === 'codex_responses'
        ? codexStreamToAnthropic(response, request.resolvedModel)
        : openaiStreamToAnthropic(response, request.resolvedModel),
    )
@@ -971,38 +959,8 @@ class OpenAIShimMessages {
    )
  }
-  const isResponsesNonStream = response.url?.includes('/responses')
-  if (isResponsesNonStream || (request.transport === 'chat_completions' && isGithubModelsMode())) {
-    const contentType = response.headers.get('content-type') ?? ''
-    if (contentType.includes('application/json')) {
-      const parsed = await response.json() as Record<string, unknown>
-      if (
-        parsed &&
-        typeof parsed === 'object' &&
-        ('output' in parsed || 'incomplete_details' in parsed)
-      ) {
-        return convertCodexResponseToAnthropicMessage(
-          parsed,
-          request.resolvedModel,
-        )
-      }
-      return self._convertNonStreamingResponse(parsed, request.resolvedModel)
-    }
-  }
-  const contentType = response.headers.get('content-type') ?? ''
-  if (contentType.includes('application/json')) {
  const data = await response.json()
  return self._convertNonStreamingResponse(data, request.resolvedModel)
-  }
-  const textBody = await response.text().catch(() => '')
-  throw APIError.generate(
-    response.status,
-    undefined,
-    `OpenAI API error ${response.status}: unexpected response: ${textBody.slice(0, 500)}`,
-    response.headers as unknown as Headers,
-  )
})()
; (promise as unknown as Record<string, unknown>).withResponse = ; (promise as unknown as Record<string, unknown>).withResponse =
@@ -1024,36 +982,7 @@ class OpenAIShimMessages {
    params: ShimCreateParams,
    options?: { signal?: AbortSignal; headers?: Record<string, string> },
  ): Promise<Response> {
-    const githubEndpointType = getGithubEndpointType(request.baseUrl)
-    const isGithubMode = isGithubModelsMode()
-    const isGithubWithCodexTransport = isGithubMode && request.transport === 'codex_responses'
-    const isGithubCopilotEndpoint = isGithubMode && githubEndpointType === 'copilot'
-    if (isGithubWithCodexTransport) {
-      const apiKey = this.providerOverride?.apiKey ?? process.env.OPENAI_API_KEY ?? ''
-      if (!apiKey) {
-        throw new Error(
-          'GitHub Copilot auth is required. Run /onboard-github to sign in.',
-        )
-      }
-      return performCodexRequest({
-        request,
-        credentials: {
-          apiKey,
-          source: 'env',
-        },
-        params,
-        defaultHeaders: {
-          ...this.defaultHeaders,
-          ...(options?.headers ?? {}),
-          ...COPILOT_HEADERS,
-        },
-        signal: options?.signal,
-      })
-    }
-    if (request.transport === 'codex_responses' && !isGithubMode) {
+    if (request.transport === 'codex_responses') {
      const credentials = resolveCodexApiCredentials()
      if (!credentials.apiKey) {
        const authHint = credentials.authPath
@@ -1127,10 +1056,6 @@ class OpenAIShimMessages {
  }
  const isGithub = isGithubModelsMode()
-  const githubEndpointType = getGithubEndpointType(request.baseUrl)
-  const isGithubCopilot = isGithub && githubEndpointType === 'copilot'
-  const isGithubModels = isGithub && (githubEndpointType === 'models' || githubEndpointType === 'custom')
  if (isGithub && body.max_completion_tokens !== undefined) {
    body.max_tokens = body.max_completion_tokens
    delete body.max_completion_tokens
@@ -1196,17 +1121,15 @@ class OpenAIShimMessages {
    const geminiCredential = await resolveGeminiCredential(process.env)
    if (geminiCredential.kind !== 'none') {
      headers.Authorization = `Bearer ${geminiCredential.credential}`
-      if (geminiCredential.kind !== 'api-key' && 'projectId' in geminiCredential && geminiCredential.projectId) {
+      if (geminiCredential.projectId) {
        headers['x-goog-user-project'] = geminiCredential.projectId
      }
    }
  }
-  if (isGithubCopilot) {
-    Object.assign(headers, COPILOT_HEADERS)
-  } else if (isGithubModels) {
-    headers['Accept'] = 'application/vnd.github+json'
-    headers['X-GitHub-Api-Version'] = '2022-11-28'
+  if (isGithub) {
+    headers.Accept = 'application/vnd.github.v3+json'
+    headers['X-GitHub-Api-Version'] = GITHUB_API_VERSION
  }
  // Build the chat completions URL
@@ -1258,82 +1181,9 @@ class OpenAIShimMessages {
      await sleepMs(delaySec * 1000)
      continue
    }
-    // Read body exactly once here — Response body is a stream that can only
-    // be consumed a single time.
    const errorBody = await response.text().catch(() => 'unknown error')
    const rateHint =
      isGithub && response.status === 429 ? formatRetryAfterHint(response) : ''
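The 429 retry loop above sleeps `delaySec` seconds before continuing. The constants declared earlier in this file (`GITHUB_429_BASE_DELAY_SEC = 1`, `GITHUB_429_MAX_DELAY_SEC = 32`) suggest a capped exponential schedule; the actual formula is outside this hunk, so the following is only an assumed sketch:

```typescript
// Hypothetical capped exponential backoff consistent with the file's
// constants. The real delaySec computation is not shown in this diff.
const BASE_DELAY_SEC = 1
const MAX_DELAY_SEC = 32

function backoffDelaySec(attempt: number): number {
  // attempt 0 -> 1s, 1 -> 2s, 2 -> 4s, ... capped at 32s
  return Math.min(MAX_DELAY_SEC, BASE_DELAY_SEC * 2 ** attempt)
}

const delays = [0, 1, 2, 5, 10].map(backoffDelaySec)
console.log(delays) // [1, 2, 4, 32, 32]
```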
-    // If GitHub Copilot returns error about /chat/completions,
-    // try the /responses endpoint (needed for GPT-5+ models)
-    if (isGithub && response.status === 400) {
-      if (errorBody.includes('/chat/completions') || errorBody.includes('not accessible')) {
-        const responsesUrl = `${request.baseUrl}/responses`
-        const responsesBody: Record<string, unknown> = {
-          model: request.resolvedModel,
-          input: convertAnthropicMessagesToResponsesInput(
-            params.messages as Array<{
-              role?: string
-              message?: { role?: string; content?: unknown }
-              content?: unknown
-            }>,
-          ),
-          stream: params.stream ?? false,
-        }
-        if (!Array.isArray(responsesBody.input) || responsesBody.input.length === 0) {
-          responsesBody.input = [
-            {
-              type: 'message',
-              role: 'user',
-              content: [{ type: 'input_text', text: '' }],
-            },
-          ]
-        }
-        const systemText = convertSystemPrompt(params.system)
-        if (systemText) {
-          responsesBody.instructions = systemText
-        }
-        if (body.max_tokens !== undefined) {
-          responsesBody.max_output_tokens = body.max_tokens
-        }
-        if (params.tools && params.tools.length > 0) {
-          const convertedTools = convertToolsToResponsesTools(
-            params.tools as Array<{
-              name?: string
-              description?: string
-              input_schema?: Record<string, unknown>
-            }>,
-          )
-          if (convertedTools.length > 0) {
-            responsesBody.tools = convertedTools
-          }
-        }
-        const responsesResponse = await fetch(responsesUrl, {
-          method: 'POST',
-          headers,
-          body: JSON.stringify(responsesBody),
-          signal: options?.signal,
-        })
-        if (responsesResponse.ok) {
-          return responsesResponse
-        }
-        const responsesErrorBody = await responsesResponse.text().catch(() => 'unknown error')
-        let responsesErrorResponse: object | undefined
-        try { responsesErrorResponse = JSON.parse(responsesErrorBody) } catch { /* raw text */ }
-        throw APIError.generate(
-          responsesResponse.status,
-          responsesErrorResponse,
-          `OpenAI API error ${responsesResponse.status}: ${responsesErrorBody}`,
-          responsesResponse.headers,
-        )
-      }
-    }
    let errorResponse: object | undefined
    try { errorResponse = JSON.parse(errorBody) } catch { /* raw text */ }
    throw APIError.generate(
@@ -1501,7 +1351,7 @@ export function createOpenAIShimClient(options: {
    process.env.OPENAI_MODEL = process.env.GEMINI_MODEL
  }
} else if (isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
-  process.env.OPENAI_BASE_URL ??= GITHUB_COPILOT_BASE
+  process.env.OPENAI_BASE_URL ??= GITHUB_MODELS_DEFAULT_BASE
  process.env.OPENAI_API_KEY ??=
    process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN ?? ''
}
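The env seeding in this hunk can be sketched on a plain object (standalone illustration; the `env` record stands in for `process.env`, and the token value is a placeholder): with `CLAUDE_CODE_USE_GITHUB` set, the shim points the OpenAI-compatible client at GitHub Models and reuses the GitHub token unless the user already set explicit values.

```typescript
// Sketch of the ??= fallback chain: existing values win, otherwise the
// GitHub Models base URL and the GitHub token are used.
const env: Record<string, string | undefined> = {
  CLAUDE_CODE_USE_GITHUB: '1',
  GITHUB_TOKEN: 'ghp_example', // placeholder, not a real token
}

env.OPENAI_BASE_URL ??= 'https://models.github.ai/inference'
env.OPENAI_API_KEY ??= env.GITHUB_TOKEN ?? env.GH_TOKEN ?? ''

console.log(env.OPENAI_BASE_URL) // https://models.github.ai/inference
console.log(env.OPENAI_API_KEY) // ghp_example
```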

View File

@@ -23,9 +23,6 @@ test.each([
  ['github:gpt-4o', 'gpt-4o'],
  ['gpt-4o', 'gpt-4o'],
  ['github:copilot?reasoning=high', DEFAULT_GITHUB_MODELS_API_MODEL],
-  // normalizeGithubModelsApiModel preserves provider prefix for models.github.ai compatibility
-  ['github:openai/gpt-4.1', 'openai/gpt-4.1'],
-  ['openai/gpt-4.1', 'openai/gpt-4.1'],
] as const)('normalizeGithubModelsApiModel(%s) -> %s', (input, expected) => {
  expect(normalizeGithubModelsApiModel(input)).toBe(expected)
})
@@ -37,20 +34,6 @@ test('resolveProviderRequest applies GitHub normalization when CLAUDE_CODE_USE_G
  expect(r.transport).toBe('chat_completions')
})
-test('resolveProviderRequest routes GitHub GPT-5 codex models to responses transport', () => {
-  process.env.CLAUDE_CODE_USE_GITHUB = '1'
-  const r = resolveProviderRequest({ model: 'gpt-5.3-codex' })
-  expect(r.resolvedModel).toBe('gpt-5.3-codex')
-  expect(r.transport).toBe('codex_responses')
-})
-test('resolveProviderRequest keeps gpt-5-mini on chat_completions for GitHub', () => {
-  process.env.CLAUDE_CODE_USE_GITHUB = '1'
-  const r = resolveProviderRequest({ model: 'gpt-5-mini' })
-  expect(r.resolvedModel).toBe('gpt-5-mini')
-  expect(r.transport).toBe('chat_completions')
-})
test('resolveProviderRequest leaves model unchanged without GitHub flag', () => {
  delete process.env.CLAUDE_CODE_USE_GITHUB
  const r = resolveProviderRequest({ model: 'github:gpt-4o' })

View File

@@ -7,8 +7,8 @@ import { isEnvTruthy } from '../../utils/envUtils.js'
export const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'
export const DEFAULT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex'
-/** Default GitHub Copilot API model when user selects copilot / github:copilot */
-export const DEFAULT_GITHUB_MODELS_API_MODEL = 'gpt-4o'
+/** Default GitHub Models API model when user selects copilot / github:copilot */
+export const DEFAULT_GITHUB_MODELS_API_MODEL = 'openai/gpt-4.1'
const CODEX_ALIAS_MODELS: Record<
  string,
@@ -227,21 +227,6 @@ export function shouldUseCodexTransport(
  return isCodexBaseUrl(explicitBaseUrl) || (!explicitBaseUrl && isCodexAlias(model))
}
-function shouldUseGithubResponsesApi(model: string): boolean {
-  const normalized = model.trim().toLowerCase()
-  // Codex-branded models require /responses.
-  if (normalized.includes('codex')) return true
-  // GPT-5+ models use /responses, except gpt-5-mini.
-  const match = /^gpt-(\d+)/.exec(normalized)
-  if (!match) return false
-  const major = Number(match[1])
-  if (major < 5) return false
-  if (normalized.startsWith('gpt-5-mini')) return false
-  return true
-}
export function isLocalProviderUrl(baseUrl: string | undefined): boolean {
  if (!baseUrl) return false
  try {
@@ -295,61 +280,19 @@ export function isCodexBaseUrl(baseUrl: string | undefined): boolean {
}
/**
- * Normalize user model string for GitHub Copilot API inference.
- * Mirrors how Copilot resolves model IDs internally.
- */
-export function normalizeGithubCopilotModel(requestedModel: string): string {
-  const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
-  const segment =
-    noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
-  if (!segment || segment.toLowerCase() === 'copilot') {
-    return DEFAULT_GITHUB_MODELS_API_MODEL
-  }
-  // Strip provider prefix if present (e.g., "openai/gpt-4o" -> "gpt-4o")
-  const slashIndex = segment.indexOf('/')
-  if (slashIndex !== -1) {
-    return segment.slice(slashIndex + 1)
-  }
-  return segment
-}
-/**
- * Normalize user model string for GitHub Models API inference.
- * Only normalizes the default alias, preserves provider-qualified models.
+ * Normalize user model string for GitHub Models inference (models.github.ai).
+ * Mirrors runtime devsper `github._normalize_model_id`.
 */
export function normalizeGithubModelsApiModel(requestedModel: string): string {
  const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
  const segment =
    noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
-  // Only normalize the default alias for GitHub Models
  if (!segment || segment.toLowerCase() === 'copilot') {
    return DEFAULT_GITHUB_MODELS_API_MODEL
  }
-  // Preserve provider prefix for GitHub Models (e.g., "openai/gpt-4.1" stays as-is)
  return segment
}
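The surviving `normalizeGithubModelsApiModel` behaviour can be demonstrated standalone. This is a condensed copy of the function body from the hunk above (the `normalize` and `DEFAULT_MODEL` names are local to this sketch), showing that only the `copilot` alias is rewritten and provider-qualified IDs pass through:

```typescript
// Condensed copy of normalizeGithubModelsApiModel for illustration.
const DEFAULT_MODEL = 'openai/gpt-4.1'

function normalize(requestedModel: string): string {
  // Drop a ?reasoning=... query suffix, then a github: style prefix.
  const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
  const segment =
    noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
  if (!segment || segment.toLowerCase() === 'copilot') return DEFAULT_MODEL
  return segment
}

console.log(normalize('github:copilot?reasoning=high')) // openai/gpt-4.1
console.log(normalize('github:gpt-4o')) // gpt-4o
console.log(normalize('openai/gpt-4.1')) // openai/gpt-4.1
```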
-export const GITHUB_COPILOT_BASE_URL = 'https://api.githubcopilot.com'
-export const GITHUB_MODELS_BASE_URL = 'https://models.github.ai/inference'
-export function getGithubEndpointType(
-  baseUrl: string | undefined,
-): 'copilot' | 'models' | 'custom' {
-  if (!baseUrl) return 'copilot'
-  try {
-    const hostname = new URL(baseUrl).hostname.toLowerCase()
-    if (hostname === 'api.githubcopilot.com') {
-      return 'copilot'
-    }
-    if (hostname === 'models.github.ai' || hostname.endsWith('.github.ai')) {
-      return 'models'
-    }
-    return 'custom'
-  } catch {
-    return 'copilot'
-  }
-}
export function resolveProviderRequest(options?: {
  model?: string
  baseUrl?: string
@@ -367,49 +310,31 @@ export function resolveProviderRequest(options?: {
    asEnvUrl(options?.baseUrl) ??
    asEnvUrl(process.env.OPENAI_BASE_URL) ??
    asEnvUrl(process.env.OPENAI_API_BASE)
-  const githubEndpointType = isGithubMode
-    ? getGithubEndpointType(rawBaseUrl)
-    : 'custom'
-  const isGithubCopilot = isGithubMode && githubEndpointType === 'copilot'
-  const isGithubModels = isGithubMode && githubEndpointType === 'models'
-  const isGithubCustom = isGithubMode && githubEndpointType === 'custom'
-  const githubResolvedModel = isGithubMode
-    ? normalizeGithubModelsApiModel(requestedModel)
-    : requestedModel
  const transport: ProviderTransport =
-    shouldUseCodexTransport(requestedModel, rawBaseUrl) ||
-    (isGithubCopilot && shouldUseGithubResponsesApi(githubResolvedModel))
+    shouldUseCodexTransport(requestedModel, rawBaseUrl)
      ? 'codex_responses'
      : 'chat_completions'
-  // For GitHub Copilot API, normalize to real model ID (e.g., "github:copilot" -> "gpt-4o")
-  // For GitHub Models/custom endpoints:
-  // - Normalize default alias (github:copilot -> gpt-4o)
-  // - Preserve provider-qualified models (openai/gpt-4.1 stays as-is)
-  const resolvedModel = isGithubCopilot
-    ? normalizeGithubCopilotModel(descriptor.baseModel)
-    : (isGithubModels || isGithubCustom
-      ? normalizeGithubModelsApiModel(descriptor.baseModel)
-      : descriptor.baseModel)
+  const resolvedModel =
+    transport === 'chat_completions' &&
+    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
+      ? normalizeGithubModelsApiModel(requestedModel)
+      : descriptor.baseModel
  const reasoning = options?.reasoningEffortOverride
    ? { effort: options.reasoningEffortOverride }
    : descriptor.reasoning
  return {
    transport,
    requestedModel,
    resolvedModel,
    baseUrl:
      (rawBaseUrl ??
-        (isGithubCopilot && transport === 'codex_responses'
-          ? GITHUB_COPILOT_BASE_URL
-          : (isGithubMode
-            ? GITHUB_COPILOT_BASE_URL
-            : DEFAULT_OPENAI_BASE_URL))
+        (transport === 'codex_responses'
+          ? DEFAULT_CODEX_BASE_URL
+          : DEFAULT_OPENAI_BASE_URL)
      ).replace(/\/+$/, ''),
    reasoning,
  }
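The base-URL fallback at the end of this hunk is simple enough to check in isolation. A standalone sketch (the `resolveBaseUrl` helper is illustrative; the real function also resolves the model and reasoning fields): an explicit base URL always wins, otherwise the transport picks the default, and trailing slashes are trimmed either way.

```typescript
// Sketch of the new baseUrl resolution: explicit URL > transport default,
// with trailing slashes stripped.
function resolveBaseUrl(
  transport: 'codex_responses' | 'chat_completions',
  rawBaseUrl?: string,
): string {
  const DEFAULT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex'
  const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'
  return (
    rawBaseUrl ??
    (transport === 'codex_responses'
      ? DEFAULT_CODEX_BASE_URL
      : DEFAULT_OPENAI_BASE_URL)
  ).replace(/\/+$/, '')
}

console.log(resolveBaseUrl('chat_completions')) // https://api.openai.com/v1
console.log(resolveBaseUrl('codex_responses', 'https://example.test/v1/')) // https://example.test/v1
```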

View File

@@ -1,4 +1,4 @@
-import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'
+import { afterEach, describe, expect, mock, test } from 'bun:test'
import { APIError } from '@anthropic-ai/sdk'
// Helper to build a mock APIError with specific headers
@@ -15,27 +15,15 @@ function makeError(headers: Record<string, string>): APIError {
// Save/restore env vars between tests
const originalEnv = { ...process.env }
-const envKeys = [
+afterEach(() => {
+  for (const key of [
    'CLAUDE_CODE_USE_OPENAI',
    'CLAUDE_CODE_USE_GEMINI',
    'CLAUDE_CODE_USE_GITHUB',
    'CLAUDE_CODE_USE_BEDROCK',
    'CLAUDE_CODE_USE_VERTEX',
    'CLAUDE_CODE_USE_FOUNDRY',
-  'OPENAI_MODEL',
-  'OPENAI_BASE_URL',
-  'OPENAI_API_BASE',
-] as const
-beforeEach(() => {
-  for (const key of envKeys) {
-    delete process.env[key]
-  }
-})
-afterEach(() => {
-  for (const key of envKeys) {
+  ]) {
    if (originalEnv[key] === undefined) delete process.env[key]
    else process.env[key] = originalEnv[key]
  }

View File

@@ -1,106 +0,0 @@
import { describe, expect, test } from 'bun:test'
import { AutoFixConfigSchema, getAutoFixConfig, type AutoFixConfig } from './autoFixConfig.js'
describe('AutoFixConfigSchema', () => {
test('parses valid full config', () => {
const input = {
enabled: true,
lint: 'eslint . --fix',
test: 'bun test',
maxRetries: 3,
timeout: 30000,
}
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.enabled).toBe(true)
expect(result.data.lint).toBe('eslint . --fix')
expect(result.data.test).toBe('bun test')
expect(result.data.maxRetries).toBe(3)
expect(result.data.timeout).toBe(30000)
}
})
test('parses minimal config with defaults', () => {
const input = { enabled: true, lint: 'eslint .' }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.maxRetries).toBe(3)
expect(result.data.timeout).toBe(30000)
expect(result.data.test).toBeUndefined()
}
})
test('rejects config with enabled but no lint or test', () => {
const input = { enabled: true }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(false)
})
test('accepts disabled config without commands', () => {
const input = { enabled: false }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(true)
})
test('rejects negative maxRetries', () => {
const input = { enabled: true, lint: 'eslint .', maxRetries: -1 }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(false)
})
test('rejects maxRetries above 10', () => {
const input = { enabled: true, lint: 'eslint .', maxRetries: 11 }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(false)
})
})
describe('getAutoFixConfig', () => {
test('returns null when settings have no autoFix', () => {
const result = getAutoFixConfig(undefined)
expect(result).toBeNull()
})
test('returns null when autoFix is disabled', () => {
const result = getAutoFixConfig({ enabled: false })
expect(result).toBeNull()
})
test('returns parsed config when valid and enabled', () => {
const result = getAutoFixConfig({ enabled: true, lint: 'eslint .' })
expect(result).not.toBeNull()
expect(result!.enabled).toBe(true)
expect(result!.lint).toBe('eslint .')
})
})
describe('SettingsSchema autoFix integration', () => {
test('SettingsSchema accepts autoFix field', async () => {
const { SettingsSchema } = await import('../../utils/settings/types.js')
const settings = {
autoFix: {
enabled: true,
lint: 'eslint .',
test: 'bun test',
maxRetries: 3,
timeout: 30000,
},
}
const result = SettingsSchema().safeParse(settings)
expect(result.success).toBe(true)
})
test('SettingsSchema rejects invalid autoFix', async () => {
const { SettingsSchema } = await import('../../utils/settings/types.js')
const settings = {
autoFix: {
enabled: true,
// missing lint and test - should fail refine
},
}
const result = SettingsSchema().safeParse(settings)
expect(result.success).toBe(false)
})
})

View File

@@ -1,52 +0,0 @@
import { z } from 'zod/v4'
export const AutoFixConfigSchema = z
.object({
enabled: z.boolean().describe('Whether auto-fix is enabled'),
lint: z
.string()
.optional()
.describe('Lint command to run after file edits (e.g. "eslint . --fix")'),
test: z
.string()
.optional()
.describe('Test command to run after file edits (e.g. "bun test")'),
maxRetries: z
.number()
.int()
.min(0)
.max(10)
.default(3)
.describe('Maximum number of auto-fix retry attempts (default: 3)'),
timeout: z
.number()
.int()
.min(1000)
.max(300000)
.default(30000)
.describe('Timeout in ms for each lint/test command (default: 30000)'),
})
.refine(
data => !data.enabled || data.lint !== undefined || data.test !== undefined,
{
message: 'At least one of "lint" or "test" must be set when enabled',
},
)
export type AutoFixConfig = z.infer<typeof AutoFixConfigSchema>
export function getAutoFixConfig(
rawConfig: unknown,
): AutoFixConfig | null {
if (!rawConfig || typeof rawConfig !== 'object') {
return null
}
const parsed = AutoFixConfigSchema.safeParse(rawConfig)
if (!parsed.success) {
return null
}
if (!parsed.data.enabled) {
return null
}
return parsed.data
}

View File

@@ -1,63 +0,0 @@
import { describe, expect, test } from 'bun:test'
import {
shouldRunAutoFix,
buildAutoFixContext,
} from './autoFixHook.js'
describe('shouldRunAutoFix', () => {
test('returns true for file_edit tool when autoFix enabled', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('file_edit', config)).toBe(true)
})
test('returns true for file_write tool when autoFix enabled', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('file_write', config)).toBe(true)
})
test('returns false for bash tool', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('bash', config)).toBe(false)
})
test('returns false for file_read tool', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('file_read', config)).toBe(false)
})
test('returns false when config is null', () => {
expect(shouldRunAutoFix('file_edit', null)).toBe(false)
})
})
describe('buildAutoFixContext', () => {
test('formats lint errors as AI-readable context', () => {
const context = buildAutoFixContext({
hasErrors: true,
lintOutput: 'src/foo.ts:10:5 error no-unused-vars',
lintExitCode: 1,
errorSummary: 'Lint errors (exit code 1):\nsrc/foo.ts:10:5 error no-unused-vars',
})
expect(context).toContain('AUTO-FIX')
expect(context).toContain('no-unused-vars')
expect(context).toContain('Please fix')
})
test('returns null when no errors', () => {
const context = buildAutoFixContext({
hasErrors: false,
})
expect(context).toBeNull()
})
test('formats test failures as AI-readable context', () => {
const context = buildAutoFixContext({
hasErrors: true,
testOutput: 'FAIL src/foo.test.ts\n expected true, got false',
testExitCode: 1,
errorSummary: 'Test failures (exit code 1):\nFAIL src/foo.test.ts',
})
expect(context).toContain('AUTO-FIX')
expect(context).toContain('FAIL')
})
})

View File

@@ -1,25 +0,0 @@
import type { AutoFixConfig } from './autoFixConfig.js'
import type { AutoFixResult } from './autoFixRunner.js'
const AUTO_FIX_TOOLS = new Set(['file_edit', 'file_write'])
export function shouldRunAutoFix(
toolName: string,
config: AutoFixConfig | null,
): boolean {
if (!config) return false
return AUTO_FIX_TOOLS.has(toolName)
}
export function buildAutoFixContext(result: AutoFixResult): string | null {
if (!result.hasErrors || !result.errorSummary) return null
return (
`<auto_fix_feedback>\n` +
`AUTO-FIX: The file you just edited has errors. Please fix them:\n\n` +
`${result.errorSummary}\n\n` +
`Please fix these errors in the files you just edited. ` +
`Do not ask the user — just apply the fix.\n` +
`</auto_fix_feedback>`
)
}

View File

@@ -1,48 +0,0 @@
import { describe, expect, test } from 'bun:test'
import { getAutoFixConfig } from './autoFixConfig.js'
import { shouldRunAutoFix, buildAutoFixContext } from './autoFixHook.js'
import { runAutoFixCheck } from './autoFixRunner.js'
describe('autoFix end-to-end flow', () => {
test('full flow: config → shouldRun → check → context', async () => {
const config = getAutoFixConfig({
enabled: true,
lint: 'echo "error: unused" && exit 1',
maxRetries: 2,
timeout: 5000,
})
expect(config).not.toBeNull()
expect(shouldRunAutoFix('file_edit', config)).toBe(true)
const result = await runAutoFixCheck({
lint: config!.lint,
test: config!.test,
timeout: config!.timeout,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
const context = buildAutoFixContext(result)
expect(context).not.toBeNull()
expect(context).toContain('AUTO-FIX')
expect(context).toContain('unused')
})
test('full flow: no errors = no context', async () => {
const config = getAutoFixConfig({
enabled: true,
lint: 'echo "all clean"',
timeout: 5000,
})
const result = await runAutoFixCheck({
lint: config!.lint,
timeout: config!.timeout,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
const context = buildAutoFixContext(result)
expect(context).toBeNull()
})
})

View File

@@ -1,103 +0,0 @@
import { describe, expect, test } from 'bun:test'
import {
runAutoFixCheck,
type AutoFixResult,
type AutoFixCheckOptions,
} from './autoFixRunner.js'
describe('runAutoFixCheck', () => {
test('returns success when lint command exits 0', async () => {
const result = await runAutoFixCheck({
lint: 'echo "all clean"',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
expect(result.lintOutput).toContain('all clean')
expect(result.testOutput).toBeUndefined()
})
test('returns errors when lint command exits non-zero', async () => {
const result = await runAutoFixCheck({
lint: 'echo "error: unused var" && exit 1',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.lintOutput).toContain('unused var')
expect(result.lintExitCode).toBe(1)
})
test('returns errors when test command exits non-zero', async () => {
const result = await runAutoFixCheck({
test: 'echo "FAIL test_foo" && exit 1',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.testOutput).toContain('FAIL test_foo')
expect(result.testExitCode).toBe(1)
})
test('runs both lint and test commands', async () => {
const result = await runAutoFixCheck({
lint: 'echo "lint ok"',
test: 'echo "test ok"',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
expect(result.lintOutput).toContain('lint ok')
expect(result.testOutput).toContain('test ok')
})
test('skips test if lint fails', async () => {
const result = await runAutoFixCheck({
lint: 'echo "lint error" && exit 1',
test: 'echo "should not run"',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.lintOutput).toContain('lint error')
expect(result.testOutput).toBeUndefined()
})
test('handles timeout gracefully', async () => {
const result = await runAutoFixCheck({
lint: 'sleep 10',
timeout: 100,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.timedOut).toBe(true)
})
test('returns success with no commands configured', async () => {
const result = await runAutoFixCheck({
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
})
test('formats error summary for AI consumption', async () => {
const result = await runAutoFixCheck({
lint: 'echo "src/foo.ts:10:5 error no-unused-vars" && exit 1',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
const summary = result.errorSummary
expect(summary).toContain('Lint errors')
expect(summary).toContain('no-unused-vars')
})
})

View File

@@ -1,169 +0,0 @@
import { spawn } from 'child_process'
export interface AutoFixCheckOptions {
lint?: string
test?: string
timeout: number
cwd: string
signal?: AbortSignal
}
export interface AutoFixResult {
hasErrors: boolean
lintOutput?: string
lintExitCode?: number
testOutput?: string
testExitCode?: number
timedOut?: boolean
errorSummary?: string
}
async function runCommand(
command: string,
cwd: string,
timeout: number,
signal?: AbortSignal,
): Promise<{ stdout: string; stderr: string; exitCode: number; timedOut: boolean }> {
return new Promise((resolve) => {
if (signal?.aborted) {
resolve({ stdout: '', stderr: 'Aborted', exitCode: 1, timedOut: false })
return
}
let timedOut = false
let stdout = ''
let stderr = ''
const isWindows = process.platform === 'win32'
const proc = spawn(command, [], {
cwd,
env: { ...process.env },
shell: true,
windowsHide: true,
// On Unix, create a process group so we can kill child processes on timeout/abort
detached: !isWindows,
})
const killTree = () => {
try {
if (!isWindows && proc.pid) {
// Kill the entire process group
process.kill(-proc.pid, 'SIGTERM')
} else {
proc.kill('SIGTERM')
}
} catch {
// Process may have already exited
}
}
const onAbort = () => {
killTree()
}
signal?.addEventListener('abort', onAbort, { once: true })
proc.stdout?.on('data', (data: Buffer) => {
stdout += data.toString()
})
proc.stderr?.on('data', (data: Buffer) => {
stderr += data.toString()
})
const timer = setTimeout(() => {
timedOut = true
killTree()
}, timeout)
proc.on('close', (code) => {
clearTimeout(timer)
signal?.removeEventListener('abort', onAbort)
resolve({
stdout: stdout.slice(0, 10000),
stderr: stderr.slice(0, 10000),
exitCode: code ?? 1,
timedOut,
})
})
proc.on('error', () => {
clearTimeout(timer)
signal?.removeEventListener('abort', onAbort)
resolve({
stdout,
stderr: stderr || 'Command failed to start',
exitCode: 1,
timedOut: false,
})
})
})
}
function buildErrorSummary(result: AutoFixResult): string | undefined {
if (!result.hasErrors) return undefined
const parts: string[] = []
if (result.timedOut) {
parts.push('Command timed out.')
}
if (result.lintExitCode !== undefined && result.lintExitCode !== 0) {
parts.push(`Lint errors (exit code ${result.lintExitCode}):\n${result.lintOutput ?? ''}`)
}
if (result.testExitCode !== undefined && result.testExitCode !== 0) {
parts.push(`Test failures (exit code ${result.testExitCode}):\n${result.testOutput ?? ''}`)
}
return parts.join('\n\n')
}
export async function runAutoFixCheck(
options: AutoFixCheckOptions,
): Promise<AutoFixResult> {
const { lint, test, timeout, cwd, signal } = options
if (!lint && !test) {
return { hasErrors: false }
}
if (signal?.aborted) {
return { hasErrors: false }
}
const result: AutoFixResult = { hasErrors: false }
// Run lint first
if (lint) {
const lintResult = await runCommand(lint, cwd, timeout, signal)
result.lintOutput = (lintResult.stdout + '\n' + lintResult.stderr).trim()
result.lintExitCode = lintResult.exitCode
if (lintResult.timedOut) {
result.hasErrors = true
result.timedOut = true
result.errorSummary = buildErrorSummary(result)
return result
}
if (lintResult.exitCode !== 0) {
result.hasErrors = true
result.errorSummary = buildErrorSummary(result)
return result
}
}
// Run tests only if lint passed (or no lint configured)
if (test) {
const testResult = await runCommand(test, cwd, timeout, signal)
result.testOutput = (testResult.stdout + '\n' + testResult.stderr).trim()
result.testExitCode = testResult.exitCode
if (testResult.timedOut) {
result.hasErrors = true
result.timedOut = true
} else if (testResult.exitCode !== 0) {
result.hasErrors = true
}
}
result.errorSummary = buildErrorSummary(result)
return result
}
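The `runCommand` helper above relies on `detached: true` plus a negative-PID `process.kill` to take down the whole process group on timeout. A stripped-down sketch of just that pattern (POSIX only — on Windows you would fall back to `proc.kill()`; all names here are illustrative):

```typescript
import { spawn } from 'child_process'

// Run a shell command with a hard deadline; on timeout, signal the whole
// process group (negative PID) so grandchildren die too, not just the shell.
function runWithTimeout(
  command: string,
  timeoutMs: number,
): Promise<{ exitCode: number | null; timedOut: boolean }> {
  return new Promise(resolve => {
    // detached: true puts the child in its own process group on POSIX
    const proc = spawn(command, [], { shell: true, detached: true })
    let timedOut = false
    const timer = setTimeout(() => {
      timedOut = true
      try {
        if (proc.pid) process.kill(-proc.pid, 'SIGTERM')
      } catch {
        // group may already be gone
      }
    }, timeoutMs)
    proc.on('close', code => {
      clearTimeout(timer)
      resolve({ exitCode: code, timedOut })
    })
  })
}
```

A command like `sleep 5` run with a 200 ms deadline resolves with `timedOut: true` and a null exit code (killed by signal), while a fast command resolves normally.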

View File

@@ -1,4 +1,4 @@
-import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'
+import { afterEach, describe, expect, mock, test } from 'bun:test'
 import {
   DEFAULT_GITHUB_DEVICE_SCOPE,
@@ -7,26 +7,14 @@ import {
   requestDeviceCode,
 } from './deviceFlow.js'
-async function importFreshModule() {
-  mock.restore()
-  return import(`./deviceFlow.ts?ts=${Date.now()}-${Math.random()}`)
-}
 describe('requestDeviceCode', () => {
   const originalFetch = globalThis.fetch
-  beforeEach(() => {
-    mock.restore()
-    globalThis.fetch = originalFetch
-  })
   afterEach(() => {
     globalThis.fetch = originalFetch
   })
   test('parses successful device code response', async () => {
-    const { requestDeviceCode } = await importFreshModule()
     globalThis.fetch = mock(() =>
       Promise.resolve(
         new Response(
@@ -54,9 +42,6 @@ describe('requestDeviceCode', () => {
   })
   test('throws on HTTP error', async () => {
-    const { requestDeviceCode, GitHubDeviceFlowError } =
-      await importFreshModule()
     globalThis.fetch = mock(() =>
       Promise.resolve(new Response('bad', { status: 500 })),
     )
@@ -149,8 +134,6 @@ describe('pollAccessToken', () => {
   })
   test('returns token when GitHub responds with access_token immediately', async () => {
-    const { pollAccessToken } = await importFreshModule()
     let calls = 0
     globalThis.fetch = mock(() => {
       calls++
@@ -170,8 +153,6 @@ describe('pollAccessToken', () => {
   })
   test('throws on access_denied', async () => {
-    const { pollAccessToken } = await importFreshModule()
     globalThis.fetch = mock(() =>
       Promise.resolve(
         new Response(JSON.stringify({ error: 'access_denied' }), {
@@ -187,62 +168,3 @@ describe('pollAccessToken', () => {
     ).rejects.toThrow(/denied/)
   })
 })
-describe('exchangeForCopilotToken', () => {
-  const originalFetch = globalThis.fetch
-  afterEach(() => {
-    globalThis.fetch = originalFetch
-  })
-  test('parses successful Copilot token response', async () => {
-    const { exchangeForCopilotToken } = await importFreshModule()
-    globalThis.fetch = mock(() =>
-      Promise.resolve(
-        new Response(
-          JSON.stringify({
-            token: 'copilot-token-xyz',
-            expires_at: 1700000000,
-            refresh_in: 3600,
-            endpoints: {
-              api: 'https://api.githubcopilot.com',
-            },
-          }),
-          { status: 200 },
-        ),
-      ),
-    )
-    const result = await exchangeForCopilotToken('oauth-token', globalThis.fetch)
-    expect(result.token).toBe('copilot-token-xyz')
-    expect(result.expires_at).toBe(1700000000)
-    expect(result.refresh_in).toBe(3600)
-    expect(result.endpoints.api).toBe('https://api.githubcopilot.com')
-  })
-  test('throws on HTTP error', async () => {
-    const { exchangeForCopilotToken, GitHubDeviceFlowError } =
-      await importFreshModule()
-    globalThis.fetch = mock(() =>
-      Promise.resolve(new Response('unauthorized', { status: 401 })),
-    )
-    await expect(
-      exchangeForCopilotToken('bad-token', globalThis.fetch),
-    ).rejects.toThrow(GitHubDeviceFlowError)
-  })
-  test('throws on malformed response', async () => {
-    const { exchangeForCopilotToken } = await importFreshModule()
-    globalThis.fetch = mock(() =>
-      Promise.resolve(
-        new Response(JSON.stringify({ invalid: 'data' }), { status: 200 }),
-      ),
-    )
-    await expect(
-      exchangeForCopilotToken('oauth-token', globalThis.fetch),
-    ).rejects.toThrow(/Malformed/)
-  })
-})

View File

@@ -1,35 +1,19 @@
 /**
  * GitHub OAuth device flow for CLI login (https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow).
- * Uses GitHub Copilot's official OAuth app for device authentication.
  */
 import { execFileNoThrow } from '../../utils/execFileNoThrow.js'
-export const DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID = 'Iv1.b507a08c87ecfe98'
+export const DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID = 'Ov23liXjWSSui6QIahPl'
 export const GITHUB_DEVICE_CODE_URL = 'https://github.com/login/device/code'
 export const GITHUB_DEVICE_ACCESS_TOKEN_URL =
   'https://github.com/login/oauth/access_token'
-export const COPILOT_TOKEN_URL = 'https://api.github.com/copilot_internal/v2/token'
-/** Only read:user scope — required for Copilot OAuth */
-export const DEFAULT_GITHUB_DEVICE_SCOPE = 'read:user'
-export const COPILOT_HEADERS: Record<string, string> = {
-  'User-Agent': 'GitHubCopilotChat/0.26.7',
-  'Editor-Version': 'vscode/1.99.3',
-  'Editor-Plugin-Version': 'copilot-chat/0.26.7',
-  'Copilot-Integration-Id': 'vscode-chat',
-}
-export type CopilotTokenResponse = {
-  token: string
-  expires_at: number
-  refresh_in: number
-  endpoints: {
-    api: string
-  }
-}
+// OAuth app device flow does not accept the GitHub Models permission token
+// scope (models:read). Use an OAuth-safe default.
+const OAUTH_SAFE_GITHUB_DEVICE_SCOPE = 'read:user'
+export const DEFAULT_GITHUB_DEVICE_SCOPE = OAUTH_SAFE_GITHUB_DEVICE_SCOPE
 export class GitHubDeviceFlowError extends Error {
   constructor(message: string) {
@@ -46,8 +30,6 @@ export type DeviceCodeResult = {
   interval: number
 }
-type FetchLike = (input: RequestInfo | URL, init?: RequestInit) => Promise<Response>
 export function getGithubDeviceFlowClientId(): string {
   return (
     process.env.GITHUB_DEVICE_FLOW_CLIENT_ID?.trim() ||
@@ -62,21 +44,21 @@ function sleep(ms: number): Promise<void> {
 export async function requestDeviceCode(options?: {
   clientId?: string
   scope?: string
-  fetchImpl?: FetchLike
+  fetchImpl?: typeof fetch
 }): Promise<DeviceCodeResult> {
   const clientId = options?.clientId ?? getGithubDeviceFlowClientId()
   if (!clientId) {
     throw new GitHubDeviceFlowError(
-      'No OAuth client ID: set GITHUB_DEVICE_FLOW_CLIENT_ID.',
+      'No OAuth client ID: set GITHUB_DEVICE_FLOW_CLIENT_ID or paste a PAT instead.',
     )
   }
   const fetchFn = options?.fetchImpl ?? fetch
   const requestedScope =
     options?.scope?.trim() || DEFAULT_GITHUB_DEVICE_SCOPE
   const scopesToTry =
-    requestedScope === DEFAULT_GITHUB_DEVICE_SCOPE
+    requestedScope === OAUTH_SAFE_GITHUB_DEVICE_SCOPE
       ? [requestedScope]
-      : [requestedScope, DEFAULT_GITHUB_DEVICE_SCOPE]
+      : [requestedScope, OAUTH_SAFE_GITHUB_DEVICE_SCOPE]
   let lastError = 'Device code request failed.'
@@ -95,7 +77,7 @@ export async function requestDeviceCode(options?: {
       lastError = `Device code request failed: ${res.status} ${text}`
       const isInvalidScope = /invalid_scope/i.test(text)
       const canRetryWithFallback =
-        scope !== DEFAULT_GITHUB_DEVICE_SCOPE && isInvalidScope
+        scope !== OAUTH_SAFE_GITHUB_DEVICE_SCOPE && isInvalidScope
       if (canRetryWithFallback) {
         continue
       }
@@ -132,7 +114,7 @@ export type PollOptions = {
   clientId?: string
   initialInterval?: number
   timeoutSeconds?: number
-  fetchImpl?: FetchLike
+  fetchImpl?: typeof fetch
 }
 export async function pollAccessToken(
@@ -215,49 +197,3 @@ export async function openVerificationUri(uri: string): Promise<void> {
     // User can open the URL manually
   }
 }
-/**
- * Exchange an OAuth access token for a Copilot API token.
- * The OAuth token alone cannot be used with the Copilot API endpoint.
- */
-export async function exchangeForCopilotToken(
-  oauthToken: string,
-  fetchImpl?: FetchLike,
-): Promise<CopilotTokenResponse> {
-  const fetchFn = fetchImpl ?? fetch
-  const res = await fetchFn(COPILOT_TOKEN_URL, {
-    method: 'GET',
-    headers: {
-      Accept: 'application/json',
-      Authorization: `Bearer ${oauthToken}`,
-      ...COPILOT_HEADERS,
-    },
-  })
-  if (!res.ok) {
-    const text = await res.text().catch(() => '')
-    throw new GitHubDeviceFlowError(
-      `Copilot token exchange failed: ${res.status} ${text}`,
-    )
-  }
-  const data = (await res.json()) as Record<string, unknown>
-  const token = data.token
-  const expires_at = data.expires_at
-  const refresh_in = data.refresh_in
-  const endpoints = data.endpoints
-  if (
-    typeof token !== 'string' ||
-    typeof expires_at !== 'number' ||
-    typeof refresh_in !== 'number' ||
-    !endpoints ||
-    typeof endpoints !== 'object' ||
-    typeof (endpoints as Record<string, unknown>).api !== 'string'
-  ) {
-    throw new GitHubDeviceFlowError('Malformed Copilot token response')
-  }
-  return {
-    token,
-    expires_at,
-    refresh_in,
-    endpoints: endpoints as { api: string },
-  }
-}

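The scope-fallback loop in `requestDeviceCode` above boils down to: try the requested scope, then retry exactly once with the OAuth-safe scope when GitHub answers `invalid_scope`. A self-contained sketch of that control flow (the `requestCode` callback stands in for the real HTTP request; names are illustrative):

```typescript
const SAFE_SCOPE = 'read:user'

// Try the requested scope first; on an invalid_scope rejection, fall back
// to the OAuth-safe scope exactly once. Returns the scope that was granted.
async function requestWithScopeFallback(
  requested: string,
  requestCode: (scope: string) => Promise<{ ok: boolean; error?: string }>,
): Promise<string> {
  const scopesToTry =
    requested === SAFE_SCOPE ? [requested] : [requested, SAFE_SCOPE]
  let lastError = 'Device code request failed.'
  for (const scope of scopesToTry) {
    const res = await requestCode(scope)
    if (res.ok) return scope
    lastError = res.error ?? lastError
    // Only retry when a non-safe scope was rejected specifically as invalid
    if (scope !== SAFE_SCOPE && /invalid_scope/i.test(lastError)) continue
    break
  }
  throw new Error(lastError)
}
```

With a requested scope of `models:read` against an OAuth app that rejects it, the function ends up granting `read:user` on the second attempt.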
View File

@@ -1,11 +1,6 @@
 // Mock rate limits for testing [internal-only]
 // The external build keeps this module as a stable no-op surface so imports
 // remain valid without exposing internal-only rate-limit simulation behavior.
-// This allows testing various rate limit scenarios without hitting actual limits
-//
-// WARNING: This is for internal testing/demo purposes only!
-// The mock headers may not exactly match the API specification or real-world behavior.
-// Always validate against actual API responses before relying on this for production features.
 import { setMockBillingAccessOverride } from '../utils/billing.js'
 import type { OverageDisabledReason } from './claudeAiLimits.js'

View File

@@ -645,7 +645,7 @@ const internalOnlyTips: Tip[] =
   {
     id: 'skillify',
     content: async () =>
-      '[internal] Use /skillify to turn repeatable recurring workflows into reusable project skills',
+      '[internal] Turn repeatable workflows into reusable project skills when they keep recurring',
     cooldownSessions: 15,
     isRelevant: async () => true,
   },

View File

@@ -29,13 +29,6 @@ import {
 } from '../../utils/permissions/PermissionResult.js'
 import { checkRuleBasedPermissions } from '../../utils/permissions/permissions.js'
 import { formatError } from '../../utils/toolErrors.js'
-import { getAutoFixConfig } from '../autoFix/autoFixConfig.js'
-import { shouldRunAutoFix, buildAutoFixContext } from '../autoFix/autoFixHook.js'
-import { runAutoFixCheck } from '../autoFix/autoFixRunner.js'
-// Track auto-fix retry count per query chain to enforce maxRetries cap.
-// Key: queryChainId (or 'default'), Value: number of auto-fix attempts used.
-const autoFixRetryCount = new Map<string, number>()
 import { isMcpTool } from '../mcp/utils.js'
 import type { McpServerType, MessageUpdateLazy } from './toolExecution.js'
@@ -192,65 +185,6 @@ export async function* runPostToolUseHooks<Input extends AnyObject, Output>(
       }
     }
   }
-  // Auto-fix: run lint/test if configured for this tool
-  const autoFixSettings = toolUseContext.getAppState().settings
-  const autoFixConfig = getAutoFixConfig(
-    autoFixSettings && typeof autoFixSettings === 'object' && 'autoFix' in autoFixSettings
-      ? (autoFixSettings as Record<string, unknown>).autoFix
-      : undefined,
-  )
-  if (shouldRunAutoFix(tool.name, autoFixConfig) && autoFixConfig) {
-    // Enforce maxRetries cap to prevent unbounded auto-fix loops.
-    // Uses queryChainId to scope the counter to the current conversation turn.
-    const chainKey = (toolUseContext.queryTracking?.chainId as string) ?? 'default'
-    const currentRetries = autoFixRetryCount.get(chainKey) ?? 0
-    if (currentRetries >= autoFixConfig.maxRetries) {
-      // Max retries reached — skip auto-fix and let the user know
-      yield {
-        message: createAttachmentMessage({
-          type: 'hook_additional_context',
-          content: [
-            `<auto_fix_feedback>\nAUTO-FIX: Maximum retry limit (${autoFixConfig.maxRetries}) reached. ` +
-              `Skipping further auto-fix attempts. Please review the errors manually.\n</auto_fix_feedback>`,
-          ],
-          hookName: `AutoFix:${tool.name}`,
-          toolUseID,
-          hookEvent: 'PostToolUse',
-        }),
-      }
-    } else {
-      try {
-        const cwd = toolUseContext.options?.cwd ?? process.cwd()
-        const autoFixResult = await runAutoFixCheck({
-          lint: autoFixConfig.lint,
-          test: autoFixConfig.test,
-          timeout: autoFixConfig.timeout,
-          cwd,
-          signal: toolUseContext.abortController.signal,
-        })
-        const autoFixContext = buildAutoFixContext(autoFixResult)
-        if (autoFixContext) {
-          autoFixRetryCount.set(chainKey, currentRetries + 1)
-          yield {
-            message: createAttachmentMessage({
-              type: 'hook_additional_context',
-              content: [autoFixContext],
-              hookName: `AutoFix:${tool.name}`,
-              toolUseID,
-              hookEvent: 'PostToolUse',
-            }),
-          }
-        } else {
-          // Lint/test passed — reset the retry counter for this chain
-          autoFixRetryCount.delete(chainKey)
-        }
-      } catch (autoFixError) {
-        logError(autoFixError)
-      }
-    }
-  }
 } catch (error) {
   logError(error)
 }

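The removed hook caps auto-fix attempts with a module-level `Map` keyed by query chain, incrementing on each attempt and resetting once checks pass. A minimal sketch of that counter pattern (names are illustrative; the cap is hard-coded for the sketch):

```typescript
// Per-chain retry budget: consume an attempt if any remain, reset on success.
const retryCount = new Map<string, number>()
const MAX_RETRIES = 2

// Returns true if another auto-fix attempt is allowed for this chain,
// recording the attempt as used.
function tryConsumeRetry(chainKey: string): boolean {
  const used = retryCount.get(chainKey) ?? 0
  if (used >= MAX_RETRIES) return false
  retryCount.set(chainKey, used + 1)
  return true
}

// Called when lint/test pass: clear the counter so future edits get a
// fresh budget.
function resetRetries(chainKey: string): void {
  retryCount.delete(chainKey)
}
```

Scoping the key to the conversation turn (rather than a global counter) means one runaway file cannot starve later turns of their auto-fix budget.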
View File

@@ -3,7 +3,6 @@ import { afterEach, beforeEach, expect, mock, test } from 'bun:test'
 type MockStorageData = Record<string, unknown>
 const originalEnv = { ...process.env }
-const originalArgv = [...process.argv]
 let storageState: MockStorageData = {}
 async function importFreshModule() {
@@ -28,14 +27,11 @@ async function importFreshModule() {
 beforeEach(() => {
   process.env = { ...originalEnv }
-  delete process.env.CLAUDE_CODE_SIMPLE
-  process.argv = originalArgv.filter(arg => arg !== '--bare')
   storageState = {}
 })
 afterEach(() => {
   process.env = { ...originalEnv }
-  process.argv = [...originalArgv]
   storageState = {}
   mock.restore()
 })

View File

@@ -1,118 +0,0 @@
import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'
async function importFreshModule() {
mock.restore()
return import(`./githubModelsCredentials.ts?ts=${Date.now()}-${Math.random()}`)
}
describe('refreshGithubModelsTokenIfNeeded', () => {
const orig = {
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
CLAUDE_CODE_SIMPLE: process.env.CLAUDE_CODE_SIMPLE,
GITHUB_TOKEN: process.env.GITHUB_TOKEN,
GH_TOKEN: process.env.GH_TOKEN,
}
beforeEach(() => {
mock.restore()
})
afterEach(() => {
for (const [k, v] of Object.entries(orig)) {
if (v === undefined) {
delete process.env[k as keyof typeof orig]
} else {
process.env[k as keyof typeof orig] = v
}
}
})
test('refreshes expired Copilot token using stored OAuth token', async () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
delete process.env.CLAUDE_CODE_SIMPLE
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const futureExp = Math.floor(Date.now() / 1000) + 3600
let store: Record<string, unknown> = {
githubModels: {
accessToken: 'tid=stale;exp=1;sku=free',
oauthAccessToken: 'ghu_oauth_secret',
},
}
mock.module('./secureStorage/index.js', () => ({
getSecureStorage: () => ({
read: () => store,
update: (next: Record<string, unknown>) => {
store = next
return { success: true }
},
}),
}))
mock.module('../services/github/deviceFlow.js', () => ({
DEFAULT_GITHUB_DEVICE_SCOPE: 'read:user',
exchangeForCopilotToken: async () => ({
token: `tid=fresh;exp=${futureExp};sku=free`,
expires_at: futureExp,
refresh_in: 1500,
endpoints: { api: 'https://api.githubcopilot.com' },
}),
}))
const { refreshGithubModelsTokenIfNeeded } = await importFreshModule()
const refreshed = await refreshGithubModelsTokenIfNeeded()
expect(refreshed).toBe(true)
expect(process.env.GITHUB_TOKEN?.startsWith('tid=fresh;exp=')).toBe(true)
const githubModels = (store.githubModels ?? {}) as {
accessToken?: string
oauthAccessToken?: string
}
expect(githubModels.accessToken?.startsWith('tid=fresh;exp=')).toBe(true)
expect(githubModels.oauthAccessToken).toBe('ghu_oauth_secret')
})
test('does not refresh when current Copilot token is valid', async () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
delete process.env.CLAUDE_CODE_SIMPLE
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const futureExp = Math.floor(Date.now() / 1000) + 3600
const exchangeSpy = mock(async () => ({
token: `tid=unexpected;exp=${futureExp};sku=free`,
expires_at: futureExp,
refresh_in: 1500,
endpoints: { api: 'https://api.githubcopilot.com' },
}))
mock.module('./secureStorage/index.js', () => ({
getSecureStorage: () => ({
read: () => ({
githubModels: {
accessToken: `tid=already-valid;exp=${futureExp};sku=free`,
oauthAccessToken: 'ghu_oauth_secret',
},
}),
update: () => ({ success: true }),
}),
}))
mock.module('../services/github/deviceFlow.js', () => ({
DEFAULT_GITHUB_DEVICE_SCOPE: 'read:user',
exchangeForCopilotToken: exchangeSpy,
}))
const { refreshGithubModelsTokenIfNeeded } = await importFreshModule()
const refreshed = await refreshGithubModelsTokenIfNeeded()
expect(refreshed).toBe(false)
expect(exchangeSpy).not.toHaveBeenCalled()
expect(process.env.GITHUB_TOKEN?.startsWith('tid=already-valid;exp=')).toBe(
true,
)
})
})

View File

@@ -1,6 +1,5 @@
 import { isBareMode, isEnvTruthy } from './envUtils.js'
 import { getSecureStorage } from './secureStorage/index.js'
-import { exchangeForCopilotToken } from '../services/github/deviceFlow.js'
 /** JSON key in the shared OpenClaude secure storage blob. */
 export const GITHUB_MODELS_STORAGE_KEY = 'githubModels' as const
@@ -9,38 +8,6 @@ export const GITHUB_MODELS_HYDRATED_ENV_MARKER =
 export type GithubModelsCredentialBlob = {
   accessToken: string
-  oauthAccessToken?: string
-}
-type GithubTokenStatus = 'valid' | 'expired' | 'invalid_format'
-function checkGithubTokenStatus(token: string): GithubTokenStatus {
-  const expMatch = token.match(/exp=(\d+)/)
-  if (expMatch) {
-    const expSeconds = Number(expMatch[1])
-    if (!Number.isNaN(expSeconds)) {
-      return Date.now() >= expSeconds * 1000 ? 'expired' : 'valid'
-    }
-  }
-  const parts = token.split('.')
-  const looksLikeJwt =
-    parts.length === 3 && parts.every(part => /^[A-Za-z0-9_-]+$/.test(part))
-  if (looksLikeJwt) {
-    try {
-      const normalized = parts[1].replace(/-/g, '+').replace(/_/g, '/')
-      const padded = normalized + '='.repeat((4 - (normalized.length % 4)) % 4)
-      const json = Buffer.from(padded, 'base64').toString('utf8')
-      const parsed = JSON.parse(json)
-      if (parsed && typeof parsed === 'object' && parsed.exp) {
-        return Date.now() >= (parsed.exp as number) * 1000 ? 'expired' : 'valid'
-      }
-    } catch {
-      return 'invalid_format'
-    }
-  }
-  return 'invalid_format'
 }
 export function readGithubModelsToken(): string | undefined {
@@ -99,62 +66,7 @@ export function hydrateGithubModelsTokenFromSecureStorage(): void {
   delete process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER]
 }
-/**
- * Startup auto-refresh for GitHub Models mode.
- *
- * If a stored Copilot token is expired/invalid and an OAuth token is present,
- * exchange the OAuth token for a fresh Copilot token and persist it.
- */
-export async function refreshGithubModelsTokenIfNeeded(): Promise<boolean> {
-  if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
-    return false
-  }
-  if (isBareMode()) {
-    return false
-  }
-  try {
-    const secureStorage = getSecureStorage()
-    const data = secureStorage.read() as
-      | ({ githubModels?: GithubModelsCredentialBlob } & Record<string, unknown>)
-      | null
-    const blob = data?.githubModels
-    const accessToken = blob?.accessToken?.trim() || ''
-    const oauthToken = blob?.oauthAccessToken?.trim() || ''
-    if (!accessToken && !oauthToken) {
-      return false
-    }
-    const status = accessToken ? checkGithubTokenStatus(accessToken) : 'expired'
-    if (status === 'valid') {
-      if (!process.env.GITHUB_TOKEN?.trim() && !process.env.GH_TOKEN?.trim()) {
-        process.env.GITHUB_TOKEN = accessToken
-      }
-      return false
-    }
-    if (!oauthToken) {
-      return false
-    }
-    const refreshed = await exchangeForCopilotToken(oauthToken)
-    const saved = saveGithubModelsToken(refreshed.token, oauthToken)
-    if (!saved.success) {
-      return false
-    }
-    process.env.GITHUB_TOKEN = refreshed.token
-    return true
-  } catch {
-    return false
-  }
-}
-export function saveGithubModelsToken(
-  token: string,
-  oauthToken?: string,
-): {
+export function saveGithubModelsToken(token: string): {
   success: boolean
   warning?: string
 } {
@@ -167,21 +79,9 @@ export function saveGithubModelsToken(
   }
   const secureStorage = getSecureStorage()
   const prev = secureStorage.read() || {}
-  const prevGithubModels = (prev as Record<string, unknown>)[
-    GITHUB_MODELS_STORAGE_KEY
-  ] as GithubModelsCredentialBlob | undefined
-  const oauthTrimmed = oauthToken?.trim()
-  const mergedBlob: GithubModelsCredentialBlob = {
-    accessToken: trimmed,
-  }
-  if (oauthTrimmed) {
-    mergedBlob.oauthAccessToken = oauthTrimmed
-  } else if (prevGithubModels?.oauthAccessToken?.trim()) {
-    mergedBlob.oauthAccessToken = prevGithubModels.oauthAccessToken.trim()
-  }
   const merged = {
     ...(prev as Record<string, unknown>),
-    [GITHUB_MODELS_STORAGE_KEY]: mergedBlob,
+    [GITHUB_MODELS_STORAGE_KEY]: { accessToken: trimmed },
   }
   return secureStorage.update(merged as typeof prev)
 }

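The deleted `checkGithubTokenStatus` falls back to base64url-decoding a JWT payload to read its `exp` claim. A standalone sketch of just that decode (the sample token below is fabricated for illustration):

```typescript
// Extract the `exp` claim (seconds since epoch) from a JWT-shaped token,
// or return undefined if the token does not parse as one.
function readJwtExp(token: string): number | undefined {
  const parts = token.split('.')
  if (parts.length !== 3) return undefined
  try {
    // base64url -> base64: swap the URL-safe alphabet back and re-pad
    const normalized = parts[1].replace(/-/g, '+').replace(/_/g, '/')
    const padded = normalized + '='.repeat((4 - (normalized.length % 4)) % 4)
    const payload = JSON.parse(Buffer.from(padded, 'base64').toString('utf8'))
    return typeof payload.exp === 'number' ? payload.exp : undefined
  } catch {
    return undefined
  }
}

// Build a fake unsigned JWT whose payload carries exp=1700000000.
const payload = Buffer.from(JSON.stringify({ exp: 1700000000 })).toString('base64url')
const fakeJwt = `eyJhbGciOiJub25lIn0.${payload}.sig`
console.log(readJwtExp(fakeJwt)) // → 1700000000
```

Comparing `exp * 1000` against `Date.now()` then classifies the token as expired or still valid, exactly as the removed helper did.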
View File

@@ -35,8 +35,6 @@ export const CLAUDE_3_7_SONNET_CONFIG = {
   foundry: 'claude-3-7-sonnet',
   openai: 'gpt-4o-mini',
   gemini: 'gemini-2.0-flash',
-  github: 'github:copilot',
-  codex: 'gpt-5.4',
 } as const satisfies ModelConfig
 export const CLAUDE_3_5_V2_SONNET_CONFIG = {
@@ -46,8 +44,6 @@ export const CLAUDE_3_5_V2_SONNET_CONFIG = {
   foundry: 'claude-3-5-sonnet',
   openai: 'gpt-4o-mini',
   gemini: 'gemini-2.0-flash',
-  github: 'github:copilot',
-  codex: 'gpt-5.4',
 } as const satisfies ModelConfig
 export const CLAUDE_3_5_HAIKU_CONFIG = {
@@ -57,8 +53,6 @@ export const CLAUDE_3_5_HAIKU_CONFIG = {
   foundry: 'claude-3-5-haiku',
   openai: 'gpt-4o-mini',
   gemini: 'gemini-2.0-flash-lite',
-  github: 'github:copilot',
-  codex: 'gpt-5.4',
 } as const satisfies ModelConfig
 export const CLAUDE_HAIKU_4_5_CONFIG = {
@@ -68,8 +62,6 @@ export const CLAUDE_HAIKU_4_5_CONFIG = {
   foundry: 'claude-haiku-4-5',
   openai: 'gpt-4o-mini',
   gemini: 'gemini-2.0-flash-lite',
-  github: 'github:copilot',
-  codex: 'gpt-5.4',
 } as const satisfies ModelConfig
 export const CLAUDE_SONNET_4_CONFIG = {
@@ -79,8 +71,6 @@ export const CLAUDE_SONNET_4_CONFIG = {
   foundry: 'claude-sonnet-4',
   openai: 'gpt-4o-mini',
   gemini: 'gemini-2.0-flash',
-  github: 'github:copilot',
-  codex: 'gpt-5.4',
 } as const satisfies ModelConfig
 export const CLAUDE_SONNET_4_5_CONFIG = {
@@ -90,8 +80,6 @@ export const CLAUDE_SONNET_4_5_CONFIG = {
   foundry: 'claude-sonnet-4-5',
   openai: 'gpt-4o',
   gemini: 'gemini-2.0-flash',
-  github: 'github:copilot',
-  codex: 'gpt-5.4',
 } as const satisfies ModelConfig
 export const CLAUDE_OPUS_4_CONFIG = {
@@ -101,8 +89,6 @@ export const CLAUDE_OPUS_4_CONFIG = {
   foundry: 'claude-opus-4',
   openai: 'gpt-4o',
   gemini: 'gemini-2.5-pro-preview-03-25',
-  github: 'github:copilot',
-  codex: 'gpt-5.4',
 } as const satisfies ModelConfig
 export const CLAUDE_OPUS_4_1_CONFIG = {
@@ -112,8 +98,6 @@ export const CLAUDE_OPUS_4_1_CONFIG = {
   foundry: 'claude-opus-4-1',
   openai: 'gpt-4o',
   gemini: 'gemini-2.5-pro-preview-03-25',
-  github: 'github:copilot',
-  codex: 'gpt-5.4',
 } as const satisfies ModelConfig
 export const CLAUDE_OPUS_4_5_CONFIG = {
@@ -123,8 +107,6 @@ export const CLAUDE_OPUS_4_5_CONFIG = {
   foundry: 'claude-opus-4-5',
   openai: 'gpt-4o',
   gemini: 'gemini-2.5-pro-preview-03-25',
-  github: 'github:copilot',
-  codex: 'gpt-5.4',
 } as const satisfies ModelConfig
 export const CLAUDE_OPUS_4_6_CONFIG = {
@@ -134,8 +116,6 @@ export const CLAUDE_OPUS_4_6_CONFIG = {
   foundry: 'claude-opus-4-6',
   openai: 'gpt-4o',
   gemini: 'gemini-2.5-pro-preview-03-25',
-  github: 'github:copilot',
-  codex: 'gpt-5.4',
 } as const satisfies ModelConfig
 export const CLAUDE_SONNET_4_6_CONFIG = {
@@ -145,8 +125,6 @@ export const CLAUDE_SONNET_4_6_CONFIG = {
   foundry: 'claude-sonnet-4-6',
   openai: 'gpt-4o',
gemini: 'gemini-2.0-flash', gemini: 'gemini-2.0-flash',
github: 'github:copilot',
codex: 'gpt-5.4',
} as const satisfies ModelConfig } as const satisfies ModelConfig
// @[MODEL LAUNCH]: Register the new config here. // @[MODEL LAUNCH]: Register the new config here.

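Every config in the diff above ends with `as const satisfies ModelConfig`. A minimal sketch of why that pairing is used, assuming a simplified `ModelConfig` shape (the real type is not shown in this diff, and `EXAMPLE_CONFIG` is a hypothetical name):

```typescript
// Hypothetical, simplified shape — the real ModelConfig in this repo has more fields.
type ModelConfig = {
  foundry: string
  openai: string
  gemini: string
}

// `satisfies` type-checks the object against ModelConfig without widening it,
// and `as const` keeps each value as a string literal rather than `string`.
export const EXAMPLE_CONFIG = {
  foundry: 'claude-sonnet-4-6',
  openai: 'gpt-4o',
  gemini: 'gemini-2.0-flash',
} as const satisfies ModelConfig

// The literal type survives: EXAMPLE_CONFIG.openai is typed 'gpt-4o', not string.
const openaiModel: 'gpt-4o' = EXAMPLE_CONFIG.openai
console.log(openaiModel) // prints 'gpt-4o'
```

With plain `: ModelConfig` annotation the literal types would be lost; with bare `as const` a typo in a field name would go unchecked. The combination gives both checking and precise inference.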
View File

@@ -1,351 +0,0 @@
/**
* Hardcoded Copilot model registry from models.dev/api.json
* These are the 19 models available through GitHub Copilot.
*/
export type CopilotModel = {
id: string
name: string
family: string
attachment: boolean
reasoning: boolean
tool_call: boolean
temperature: boolean
knowledge: string
release_date: string
last_updated: string
modalities: {
input: string[]
output: string[]
}
open_weights: boolean
cost: {
input: number
output: number
cache_read?: number
}
limit: {
context: number
input?: number
output: number
}
}
export const COPILOT_MODELS: Record<string, CopilotModel> = {
'gpt-5.4': {
id: 'gpt-5.4',
name: 'GPT-5.4',
family: 'gpt',
attachment: false,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 400000, output: 32768 },
},
'gpt-5.4-mini': {
id: 'gpt-5.4-mini',
name: 'GPT-5.4 mini',
family: 'gpt-mini',
attachment: false,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 400000, output: 32768 },
},
'gpt-5.3-codex': {
id: 'gpt-5.3-codex',
name: 'GPT-5.3-Codex',
family: 'gpt-codex',
attachment: false,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 400000, output: 32768 },
},
'gpt-5.2-codex': {
id: 'gpt-5.2-codex',
name: 'GPT-5.2-Codex',
family: 'gpt-codex',
attachment: false,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 400000, output: 32768 },
},
'gpt-5.2': {
id: 'gpt-5.2',
name: 'GPT-5.2',
family: 'gpt',
attachment: false,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 264000, output: 32768 },
},
'gpt-5.1-codex': {
id: 'gpt-5.1-codex',
name: 'GPT-5.1-Codex',
family: 'gpt-codex',
attachment: false,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 400000, output: 32768 },
},
'gpt-5.1-codex-max': {
id: 'gpt-5.1-codex-max',
name: 'GPT-5.1-Codex-max',
family: 'gpt-codex',
attachment: false,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 400000, output: 32768 },
},
'gpt-5.1-codex-mini': {
id: 'gpt-5.1-codex-mini',
name: 'GPT-5.1-Codex-mini',
family: 'gpt-codex',
attachment: false,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 400000, output: 32768 },
},
'gpt-4o': {
id: 'gpt-4o',
name: 'GPT-4o',
family: 'gpt',
attachment: true,
reasoning: false,
tool_call: true,
temperature: true,
knowledge: '2023-10',
release_date: '2024-05-01',
last_updated: '2024-05-01',
modalities: { input: ['text', 'image'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 128000, output: 16384 },
},
'gpt-4.1': {
id: 'gpt-4.1',
name: 'GPT-4.1',
family: 'gpt',
attachment: false,
reasoning: false,
tool_call: true,
temperature: true,
knowledge: '2024-06',
release_date: '2024-06-01',
last_updated: '2024-06-01',
modalities: { input: ['text'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 128000, output: 32768 },
},
'claude-opus-4.6': {
id: 'claude-opus-4.6',
name: 'Claude Opus 4.6',
family: 'claude-opus',
attachment: true,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text', 'image'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 144000, output: 32768 },
},
'claude-opus-4.5': {
id: 'claude-opus-4.5',
name: 'Claude Opus 4.5',
family: 'claude-opus',
attachment: true,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text', 'image'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 160000, output: 32768 },
},
'claude-sonnet-4.6': {
id: 'claude-sonnet-4.6',
name: 'Claude Sonnet 4.6',
family: 'claude-sonnet',
attachment: true,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text', 'image'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 200000, output: 32768 },
},
'claude-sonnet-4.5': {
id: 'claude-sonnet-4.5',
name: 'Claude Sonnet 4.5',
family: 'claude-sonnet',
attachment: true,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text', 'image'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 144000, output: 32768 },
},
'claude-haiku-4.5': {
id: 'claude-haiku-4.5',
name: 'Claude Haiku 4.5',
family: 'claude-haiku',
attachment: true,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text', 'image'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 144000, output: 32768 },
},
'gemini-3.1-pro-preview': {
id: 'gemini-3.1-pro-preview',
name: 'Gemini 3.1 Pro Preview',
family: 'gemini-pro',
attachment: true,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text', 'image', 'audio'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 128000, output: 32768 },
},
'gemini-3-flash-preview': {
id: 'gemini-3-flash-preview',
name: 'Gemini 3 Flash',
family: 'gemini-flash',
attachment: true,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text', 'image'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 128000, output: 32768 },
},
'gemini-2.5-pro': {
id: 'gemini-2.5-pro',
name: 'Gemini 2.5 Pro',
family: 'gemini-pro',
attachment: true,
reasoning: false,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text', 'image'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 128000, output: 32768 },
},
'grok-code-fast-1': {
id: 'grok-code-fast-1',
name: 'Grok Code Fast 1',
family: 'grok',
attachment: false,
reasoning: true,
tool_call: true,
temperature: true,
knowledge: '2025-05',
release_date: '2025-05-01',
last_updated: '2025-05-01',
modalities: { input: ['text'], output: ['text'] },
open_weights: false,
cost: { input: 0, output: 0 },
limit: { context: 128000, output: 32768 },
},
}
export function getCopilotModelIds(): string[] {
return Object.keys(COPILOT_MODELS)
}
export function getCopilotModel(id: string): CopilotModel | undefined {
return COPILOT_MODELS[id]
}
export function getAllCopilotModels(): CopilotModel[] {
return Object.values(COPILOT_MODELS)
}

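The deleted registry above is a plain `Record` keyed by model id, with thin accessors for lookup and enumeration. A trimmed, self-contained sketch of the same pattern (two entries only, fields reduced; `CopilotModelLite` is a name introduced here for brevity):

```typescript
type CopilotModelLite = {
  id: string
  name: string
  reasoning: boolean
  limit: { context: number; output: number }
}

const COPILOT_MODELS: Record<string, CopilotModelLite> = {
  'gpt-4o': { id: 'gpt-4o', name: 'GPT-4o', reasoning: false, limit: { context: 128000, output: 16384 } },
  'claude-sonnet-4.6': { id: 'claude-sonnet-4.6', name: 'Claude Sonnet 4.6', reasoning: true, limit: { context: 200000, output: 32768 } },
}

function getCopilotModelIds(): string[] {
  return Object.keys(COPILOT_MODELS)
}

function getCopilotModel(id: string): CopilotModelLite | undefined {
  // Unknown ids return undefined rather than throwing.
  return COPILOT_MODELS[id]
}

console.log(getCopilotModelIds())            // ['gpt-4o', 'claude-sonnet-4.6']
console.log(getCopilotModel('gpt-4o')?.name) // 'GPT-4o'
```

Object key order is insertion order for string keys, so `getCopilotModelIds()` enumerates models in the order the registry declares them.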
View File

@@ -43,10 +43,6 @@ export function getSmallFastModel(): ModelName {
if (getAPIProvider() === 'openai') {
return process.env.OPENAI_MODEL || 'gpt-4o-mini'
}
- // For GitHub Copilot provider
- if (getAPIProvider() === 'github') {
- return process.env.OPENAI_MODEL || 'github:copilot'
- }
return getDefaultHaikuModel()
}
@@ -141,10 +137,6 @@ export function getDefaultOpusModel(): ModelName {
if (getAPIProvider() === 'codex') {
return process.env.OPENAI_MODEL || 'gpt-5.4'
}
- // GitHub Copilot provider
- if (getAPIProvider() === 'github') {
- return process.env.OPENAI_MODEL || 'github:copilot'
- }
// 3P providers (Bedrock, Vertex, Foundry) — kept as a separate branch
// even when values match, since 3P availability lags firstParty and
// these will diverge again at the next model launch.
@@ -171,10 +163,6 @@ export function getDefaultSonnetModel(): ModelName {
if (getAPIProvider() === 'codex') {
return process.env.OPENAI_MODEL || 'gpt-5.4'
}
- // GitHub Copilot provider
- if (getAPIProvider() === 'github') {
- return process.env.OPENAI_MODEL || 'github:copilot'
- }
// Default to Sonnet 4.5 for 3P since they may not have 4.6 yet
if (getAPIProvider() !== 'firstParty') {
return getModelStrings().sonnet45
@@ -187,6 +175,10 @@ export function getDefaultHaikuModel(): ModelName {
if (process.env.ANTHROPIC_DEFAULT_HAIKU_MODEL) {
return process.env.ANTHROPIC_DEFAULT_HAIKU_MODEL
}
+ // Gemini provider
+ if (getAPIProvider() === 'gemini') {
+ return process.env.GEMINI_MODEL || 'gemini-2.0-flash-lite'
+ }
// OpenAI provider
if (getAPIProvider() === 'openai') {
return process.env.OPENAI_MODEL || 'gpt-4o-mini'
@@ -195,14 +187,6 @@ export function getDefaultHaikuModel(): ModelName {
if (getAPIProvider() === 'codex') {
return process.env.OPENAI_MODEL || 'gpt-5.4'
}
- // GitHub Copilot provider
- if (getAPIProvider() === 'github') {
- return process.env.OPENAI_MODEL || 'github:copilot'
- }
- // Gemini provider
- if (getAPIProvider() === 'gemini') {
- return process.env.GEMINI_MODEL || 'gemini-2.0-flash-lite'
- }
// Haiku 4.5 is available on all platforms (first-party, Foundry, Bedrock, Vertex)
return getModelStrings().haiku45
@@ -247,11 +231,6 @@ export function getRuntimeMainLoopModel(params: {
* @returns The default model setting to use
*/
export function getDefaultMainLoopModelSetting(): ModelName | ModelAlias {
- // GitHub Copilot provider: check settings.model first, then env, then default
- if (getAPIProvider() === 'github') {
- const settings = getSettings_DEPRECATED() || {}
- return settings.model || process.env.OPENAI_MODEL || 'github:copilot'
- }
// Gemini provider: always use the configured Gemini model
if (getAPIProvider() === 'gemini') {
return process.env.GEMINI_MODEL || 'gemini-2.0-flash'
@@ -260,6 +239,10 @@ export function getDefaultMainLoopModelSetting(): ModelName | ModelAlias {
if (getAPIProvider() === 'openai') {
return process.env.OPENAI_MODEL || 'gpt-4o'
}
+ // GitHub provider: always use the configured GitHub model
+ if (getAPIProvider() === 'github') {
+ return process.env.OPENAI_MODEL || 'github:copilot'
+ }
// Codex provider: always use the configured Codex model (default gpt-5.4)
if (getAPIProvider() === 'codex') {
return process.env.OPENAI_MODEL || 'gpt-5.4'
@@ -443,33 +426,8 @@ export function renderModelSetting(setting: ModelName | ModelAlias): string {
* if the model is not recognized as a public model.
*/
export function getPublicModelDisplayName(model: ModelName): string | null {
- // For OpenAI/Gemini/Codex/GitHub providers, show the actual model name not a Claude alias
- if (getAPIProvider() === 'openai' || getAPIProvider() === 'gemini' || getAPIProvider() === 'codex' || getAPIProvider() === 'github') {
- // Return display names for known GitHub Copilot models
- const copilotModelNames: Record<string, string> = {
- 'gpt-5.4': 'GPT-5.4',
- 'gpt-5.4-mini': 'GPT-5.4 mini',
- 'gpt-5.3-codex': 'GPT-5.3 Codex',
- 'gpt-5.2-codex': 'GPT-5.2 Codex',
- 'gpt-5.2': 'GPT-5.2',
- 'gpt-5.1-codex': 'GPT-5.1 Codex',
- 'gpt-5.1-codex-max': 'GPT-5.1 Codex max',
- 'gpt-5.1-codex-mini': 'GPT-5.1 Codex mini',
- 'gpt-4o': 'GPT-4o',
- 'gpt-4.1': 'GPT-4.1',
- 'claude-opus-4.6': 'Claude Opus 4.6',
- 'claude-opus-4.5': 'Claude Opus 4.5',
- 'claude-sonnet-4.6': 'Claude Sonnet 4.6',
- 'claude-sonnet-4.5': 'Claude Sonnet 4.5',
- 'claude-haiku-4.5': 'Claude Haiku 4.5',
- 'gemini-3.1-pro-preview': 'Gemini 3.1 Pro Preview',
- 'gemini-3-flash-preview': 'Gemini 3 Flash',
- 'gemini-2.5-pro': 'Gemini 2.5 Pro',
- 'grok-code-fast-1': 'Grok Code Fast 1',
- }
- if (copilotModelNames[model]) {
- return copilotModelNames[model]
- }
+ // For OpenAI/Gemini/Codex providers, show the actual model name not a Claude alias
+ if (getAPIProvider() === 'openai' || getAPIProvider() === 'gemini' || getAPIProvider() === 'codex') {
return null
}
switch (model) {
@@ -526,10 +484,6 @@ export function renderModelName(model: ModelName): string {
if (publicName) {
return publicName
}
- // Handle GitHub Copilot special model aliases
- if (model === 'github:copilot') {
- return 'GPT-4o'
- }
if (process.env.USER_TYPE === 'ant') {
const resolved = parseUserSpecifiedModel(model)
const antModel = resolveAntModel(model)

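Every provider branch in the file above follows the same shape: an environment override checked first, then a hard-coded default. A self-contained sketch of that fallback chain, assuming a simplified provider string (the real code dispatches on `getAPIProvider()` and `getModelStrings()`, which are not reproduced here; `resolveModel` is a name introduced for this sketch):

```typescript
// Mirrors the branch ordering in the diff: each provider checks its env
// override first, then falls back to a provider-specific default.
function resolveModel(provider: string, env: Record<string, string | undefined>): string {
  if (provider === 'gemini') return env.GEMINI_MODEL || 'gemini-2.0-flash'
  if (provider === 'openai') return env.OPENAI_MODEL || 'gpt-4o'
  if (provider === 'codex') return env.OPENAI_MODEL || 'gpt-5.4'
  // Stand-in for the first-party default from getModelStrings().
  return 'claude-sonnet-4-6'
}

console.log(resolveModel('openai', {}))                              // 'gpt-4o'
console.log(resolveModel('openai', { OPENAI_MODEL: 'gpt-4o-mini' })) // 'gpt-4o-mini'
console.log(resolveModel('gemini', {}))                              // 'gemini-2.0-flash'
```

Note the use of `||` rather than `??`: an empty-string env var also falls through to the default, which is usually the desired behavior for configuration strings.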
View File

@@ -61,7 +61,7 @@ afterEach(() => {
resetModelStringsForTestingOnly()
})
- test('GitHub provider exposes default + all Copilot models in /model options', async () => {
+ test('GitHub provider exposes only default + GitHub model in /model options', async () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.CLAUDE_CODE_USE_GEMINI
@@ -69,7 +69,7 @@ test('GitHub provider exposes default + all Copilot models in /model options', a
delete process.env.CLAUDE_CODE_USE_VERTEX
delete process.env.CLAUDE_CODE_USE_FOUNDRY
- process.env.OPENAI_MODEL = 'gpt-4o'
+ process.env.OPENAI_MODEL = 'github:copilot'
delete process.env.ANTHROPIC_CUSTOM_MODEL_OPTION
const { getModelOptions } = await importFreshModelOptionsModule()
@@ -78,7 +78,6 @@ test('GitHub provider exposes default + all Copilot models in /model options', a
(option: { value: unknown }) => option.value !== null,
)
- expect(nonDefault.length).toBeGreaterThan(1)
- expect(nonDefault.some((o: { value: unknown }) => o.value === 'gpt-4o')).toBe(true)
- expect(nonDefault.some((o: { value: unknown }) => o.value === 'gpt-5.3-codex')).toBe(true)
+ expect(nonDefault.length).toBe(1)
+ expect(nonDefault[0]?.value).toBe('github:copilot')
})

View File

@@ -35,7 +35,6 @@ import { has1mContext } from '../context.js'
import { getGlobalConfig } from '../config.js'
import { getActiveOpenAIModelOptionsCache } from '../providerProfiles.js'
import { getCachedOllamaModelOptions, isOllamaProvider } from './ollamaModels.js'
- import { getAntModels } from './antModels.js'
// @[MODEL LAUNCH]: Update all the available and default model option strings below.
@@ -352,20 +351,17 @@ function getCodexModelOptions(): ModelOption[] {
// @[MODEL LAUNCH]: Update the model picker lists below to include/reorder options for the new model.
// Each user tier (ant, Max/Team Premium, Pro/Team Standard/Enterprise, PAYG 1P, PAYG 3P) has its own list.
- import { getAllCopilotModels } from './copilotModels.js'
- function getCopilotModelOptions(): ModelOption[] {
- return getAllCopilotModels().map(m => ({
- value: m.id,
- label: m.name,
- description: `${m.family}${m.reasoning ? ' · Reasoning' : ''}${m.tool_call ? ' · Tool call' : ''} · ${Math.round(m.limit.context / 1000)}K context`,
- }))
- }
function getModelOptionsBase(fastMode = false): ModelOption[] {
if (getAPIProvider() === 'github') {
- return [getDefaultOptionForUser(fastMode), ...getCopilotModelOptions()]
+ const githubModel = process.env.OPENAI_MODEL?.trim() || 'github:copilot'
+ return [
+ getDefaultOptionForUser(fastMode),
+ {
+ value: githubModel,
+ label: githubModel,
+ description: 'GitHub Models default',
+ },
+ ]
}
// When using Ollama, show models from the Ollama server instead of Claude models

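The replacement `github` branch above builds a two-entry option list from a single env var instead of fanning out the whole Copilot registry. A self-contained sketch of that logic, with `ModelOption` reduced to the fields used here and `getGithubModelOptions` a name introduced for the sketch (it stands in for the branch inside `getModelOptionsBase`; the default option is a placeholder for `getDefaultOptionForUser`):

```typescript
type ModelOption = { value: string | null; label: string; description?: string }

function getGithubModelOptions(env: Record<string, string | undefined>): ModelOption[] {
  // Falls back to the provider default when OPENAI_MODEL is unset or blank.
  const githubModel = env.OPENAI_MODEL?.trim() || 'github:copilot'
  return [
    { value: null, label: 'Default' }, // stands in for getDefaultOptionForUser()
    { value: githubModel, label: githubModel, description: 'GitHub Models default' },
  ]
}

console.log(getGithubModelOptions({}).map(o => o.value))                         // [null, 'github:copilot']
console.log(getGithubModelOptions({ OPENAI_MODEL: 'gpt-4o' }).map(o => o.value)) // [null, 'gpt-4o']
```

The `?.trim() || default` chain handles all three failure modes in one expression: missing key (optional chaining yields `undefined`), empty string, and whitespace-only values.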
View File

@@ -51,7 +51,6 @@ export const DANGEROUS_BASH_PATTERNS: readonly string[] = [
'xargs',
'sudo',
// Internal-only: internal-only tools plus general tools that ant sandbox
- // data shows are frequently over-allowlisted as broad prefixes.
// dotfile data shows are commonly over-allowlisted as broad prefixes.
// These stay internal-only — external users don't have coo, and the rest are
// an empirical-risk call grounded in ant sandbox data, not a universal

View File

@@ -6,26 +6,7 @@ import {
VALID_PROVIDERS,
} from './providerFlag.js'
- const ENV_KEYS = [
- 'CLAUDE_CODE_USE_OPENAI',
- 'CLAUDE_CODE_USE_GEMINI',
- 'CLAUDE_CODE_USE_GITHUB',
- 'CLAUDE_CODE_USE_BEDROCK',
- 'CLAUDE_CODE_USE_VERTEX',
- 'OPENAI_BASE_URL',
- 'OPENAI_API_KEY',
- 'OPENAI_MODEL',
- 'GEMINI_MODEL',
- ]
- const originalEnv: Record<string, string | undefined> = {}
- beforeEach(() => {
- for (const key of ENV_KEYS) {
- originalEnv[key] = process.env[key]
- delete process.env[key]
- }
- })
+ const originalEnv = { ...process.env }
const RESET_KEYS = [
'CLAUDE_CODE_USE_OPENAI',
@@ -46,12 +27,9 @@ beforeEach(() => {
})
afterEach(() => {
- for (const key of ENV_KEYS) {
- if (originalEnv[key] === undefined) {
- delete process.env[key]
- } else {
- process.env[key] = originalEnv[key]
- }
+ for (const key of RESET_KEYS) {
+ if (originalEnv[key] === undefined) delete process.env[key]
+ else process.env[key] = originalEnv[key]
}
})

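The test refactor above replaces a per-key snapshot built in `beforeEach` with a single spread of `process.env` taken at module load, restored key-by-key afterwards. A self-contained sketch of that restore pattern (`RESET_KEYS` trimmed to two entries; `restoreEnv` is a name introduced here to stand in for the `afterEach` body):

```typescript
const RESET_KEYS = ['CLAUDE_CODE_USE_OPENAI', 'OPENAI_MODEL']

// Snapshot the whole environment once, before any test mutates it.
// Spreading process.env copies it into a plain object.
const originalEnv: Record<string, string | undefined> = { ...process.env }

function restoreEnv(): void {
  for (const key of RESET_KEYS) {
    // Keys that were absent from the snapshot are deleted;
    // keys that were present are restored to their original value.
    if (originalEnv[key] === undefined) delete process.env[key]
    else process.env[key] = originalEnv[key]
  }
}

process.env.OPENAI_MODEL = 'github:copilot' // a test mutates the env…
restoreEnv()                                 // …and afterEach puts it back
```

Snapshotting the whole environment once is simpler than maintaining a separate `ENV_KEYS` list, and removes the failure mode where a key is reset in `afterEach` but was never saved in `beforeEach`.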
View File

@@ -1,5 +1,4 @@
import {
- getGithubEndpointType,
isLocalProviderUrl,
resolveCodexApiCredentials,
resolveProviderRequest,
@@ -16,51 +15,6 @@ function isEnvTruthy(value: string | undefined): boolean {
return normalized !== '' && normalized !== '0' && normalized !== 'false' && normalized !== 'no'
}
- type GithubTokenStatus = 'valid' | 'expired' | 'invalid_format'
- const GITHUB_PAT_PREFIXES = ['ghp_', 'gho_', 'ghs_', 'ghr_', 'github_pat_']
- function checkGithubTokenStatus(
- token: string,
- endpointType: 'copilot' | 'models' | 'custom' = 'copilot',
- ): GithubTokenStatus {
- // PATs work with GitHub Models but not with Copilot API
- if (GITHUB_PAT_PREFIXES.some(prefix => token.startsWith(prefix))) {
- if (endpointType === 'copilot') {
- return 'expired'
- }
- return 'valid'
- }
- const expMatch = token.match(/exp=(\d+)/)
- if (expMatch) {
- const expSeconds = Number(expMatch[1])
- if (!Number.isNaN(expSeconds)) {
- return Date.now() >= expSeconds * 1000 ? 'expired' : 'valid'
- }
- }
- const parts = token.split('.')
- const looksLikeJwt =
- parts.length === 3 && parts.every(part => /^[A-Za-z0-9_-]+$/.test(part))
- if (looksLikeJwt) {
- try {
- const normalized = parts[1].replace(/-/g, '+').replace(/_/g, '/')
- const padded = normalized + '='.repeat((4 - (normalized.length % 4)) % 4)
- const json = Buffer.from(padded, 'base64').toString('utf8')
- const parsed = JSON.parse(json)
- if (parsed && typeof parsed === 'object' && parsed.exp) {
- return Date.now() >= (parsed.exp as number) * 1000 ? 'expired' : 'valid'
- }
- } catch {
- return 'invalid_format'
- }
- }
- // Keep compatibility with opaque token formats that do not expose expiry.
- return 'valid'
- }
export async function getProviderValidationError(
env: NodeJS.ProcessEnv = process.env,
options?: {
@@ -85,19 +39,7 @@ export async function getProviderValidationError(
if (useGithub && !useOpenAI) {
const token = (env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim()) ?? ''
if (!token) {
- return 'GitHub Copilot authentication required.\n' +
- 'Run /onboard-github in the CLI to sign in with your GitHub account.\n' +
- 'This will store your OAuth token securely and enable Copilot models.'
+ return 'GITHUB_TOKEN or GH_TOKEN is required when CLAUDE_CODE_USE_GITHUB=1.'
}
- const endpointType = getGithubEndpointType(env.OPENAI_BASE_URL)
- const status = checkGithubTokenStatus(token, endpointType)
- if (status === 'expired') {
- return 'GitHub Copilot token has expired.\n' +
- 'Run /onboard-github to sign in again and get a fresh token.'
- }
- if (status === 'invalid_format') {
- return 'GitHub Copilot token is invalid or corrupted.\n' +
- 'Run /onboard-github to sign in again with your GitHub account.'
- }
return null
}

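The removed `checkGithubTokenStatus` above classified tokens by shape: known PAT prefixes, an inline `exp=<unix seconds>` hint, or a decodable JWT payload. A trimmed, self-contained sketch of the PAT and `exp=` branches (the JWT branch is omitted for brevity, and `checkTokenStatus` is a name introduced for the sketch):

```typescript
type GithubTokenStatus = 'valid' | 'expired' | 'invalid_format'

const GITHUB_PAT_PREFIXES = ['ghp_', 'gho_', 'ghs_', 'ghr_', 'github_pat_']

function checkTokenStatus(
  token: string,
  endpointType: 'copilot' | 'models' = 'copilot',
): GithubTokenStatus {
  // PATs work with GitHub Models but not with the Copilot API,
  // so they are reported as 'expired' when targeting Copilot.
  if (GITHUB_PAT_PREFIXES.some(prefix => token.startsWith(prefix))) {
    return endpointType === 'copilot' ? 'expired' : 'valid'
  }
  // Some opaque token formats embed their expiry as `exp=<unix seconds>`.
  const expMatch = token.match(/exp=(\d+)/)
  if (expMatch) {
    const expSeconds = Number(expMatch[1])
    if (!Number.isNaN(expSeconds)) {
      return Date.now() >= expSeconds * 1000 ? 'expired' : 'valid'
    }
  }
  // Opaque formats with no visible expiry are assumed valid.
  return 'valid'
}

console.log(checkTokenStatus('ghp_abc123'))           // 'expired' (PAT vs Copilot API)
console.log(checkTokenStatus('ghp_abc123', 'models')) // 'valid'
console.log(checkTokenStatus('tid=1;exp=1'))          // 'expired' (1970 expiry)
```

The design is purely syntactic: it never calls the GitHub API, so a `'valid'` result only means the token has not visibly expired, not that it will be accepted server-side.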
View File

@@ -27,7 +27,6 @@ export {
// Also import for use within this file
import { type HookCommand, HooksSchema } from '../../schemas/hooks.js'
- import { AutoFixConfigSchema } from '../../services/autoFix/autoFixConfig.js'
import { count } from '../array.js'
/**
@@ -436,12 +435,6 @@ export const SettingsSchema = lazySchema(() =>
hooks: HooksSchema()
.optional()
.describe('Custom commands to run before/after tool executions'),
- autoFix: AutoFixConfigSchema
- .optional()
- .describe(
- 'Auto-fix configuration: automatically run lint/test after AI file edits ' +
- 'and feed errors back for self-repair.',
- ),
worktree: z
.object({
symlinkDirectories: z