Improve GitHub Copilot provider: official OAuth onboarding, Copilot API routing, test hardening, and automatic token refresh (#288)
* update GitHub Copilot API with official client id and update model configurations
* test: add unit tests for exchangeForCopilotToken and enhance GitHub model normalization
* remove PAT token feature
* test(api): harden provider tests against env leakage
* Added back trimmed GitHub auth token
* added auto-refresh logic for the auth token along with tests
* fix: remove forked provider validation in cli.tsx and clear stale provider env vars in /onboard-github
* refactor: streamline environment variable handling in mergeUserSettingsEnv
* fix: clear stale provider env vars to ensure correct GH routing
* Remove internal-only tooling from the external build (#352)

  * Remove internal-only tooling without changing external runtime contracts

    This trims the lowest-risk internal-only surfaces first: deleted internal modules are replaced by build-time no-op stubs, the bundled stuck skill is removed, and the insights S3 upload path now stays local-only. The privacy verifier is expanded and the remaining bundled internal Slack/Artifactory strings are neutralized without broad repo-wide renames.

    Constraint: Keep the first PR deletion-heavy and avoid mass rewrites of USER_TYPE, tengu, or claude_code identifiers
    Rejected: One-shot DMCA cleanup branch | too much semantic risk for a first PR
    Confidence: medium
    Scope-risk: moderate
    Reversibility: clean
    Directive: Treat full-repo typecheck as a baseline issue on this upstream snapshot; do not claim this commit introduced the existing non-Phase-A errors without isolating them first
    Tested: bun run build
    Tested: bun run smoke
    Tested: bun run verify:privacy
    Not-tested: Full repo typecheck (currently fails on widespread pre-existing upstream errors outside this change set)

  * Keep minimal source shims so CI can import Phase A cleanup paths

    The first PR removed internal-only source files entirely, but CI provider and context tests import those modules directly from source rather than through the build-time no-telemetry stubs. This restores tiny no-op source shims so tests and local source imports resolve while preserving the same external runtime behavior.

    Constraint: GitHub Actions runs source-level tests in addition to bundled build/privacy checks
    Rejected: Revert the entire deletion pass | unnecessary once the import contract is satisfied by small shims
    Confidence: high
    Scope-risk: narrow
    Reversibility: clean
    Directive: For later cleanup phases, treat build-time stubs and source-test imports as separate compatibility surfaces
    Tested: bun run build
    Tested: bun run smoke
    Tested: bun run verify:privacy
    Tested: bun run test:provider
    Tested: bun run test:provider-recommendation
    Not-tested: Full repo typecheck (still noisy on this upstream snapshot)

  ---------

  Co-authored-by: anandh8x <test@example.com>

* Reduce internal-only labeling noise in source comments (#355)

  This pass rewrites comment-only ANT-ONLY markers to neutral internal-only language across the source tree without changing runtime strings, flags, commands, or protocol identifiers. The goal is to lower obvious internal prose leakage while keeping the diff mechanically safe and easy to review.

  Constraint: Phase B is limited to comments/prose only; runtime strings and user-facing labels remain deferred
  Rejected: Broad search-and-replace across strings and command descriptions | too risky for a prose-only pass
  Confidence: high
  Scope-risk: narrow
  Reversibility: clean
  Directive: Remaining ANT-ONLY hits are mostly runtime/user-facing strings and should be handled separately from comment cleanup
  Tested: bun run build
  Tested: bun run smoke
  Tested: bun run verify:privacy
  Tested: bun run test:provider
  Tested: bun run test:provider-recommendation
  Not-tested: Full repo typecheck (upstream baseline remains noisy)

  Co-authored-by: anandh8x <test@example.com>

* Neutralize internal Anthropic prose in explanatory comments (#357)

  This is a small prose-only follow-up that rewrites clearly internal or explanatory Anthropic comment language to neutral wording in a handful of high-confidence files. It avoids runtime strings, flags, command labels, protocol identifiers, and provider-facing references.

  Constraint: Keep this pass narrowly scoped to comments/documentation only
  Rejected: Broader Anthropic comment sweep across functional API/protocol references | too ambiguous for a safe prose-only PR
  Confidence: high
  Scope-risk: narrow
  Reversibility: clean
  Directive: Leave functional Anthropic references (API behavior, SDKs, URLs, provider labels, protocol docs) for separate reviewed passes
  Tested: bun run build
  Tested: bun run smoke
  Tested: bun run verify:privacy
  Tested: bun run test:provider
  Tested: bun run test:provider-recommendation
  Not-tested: Full repo typecheck (upstream baseline remains noisy)

  Co-authored-by: anandh8x <test@example.com>

* Neutralize remaining internal-only diagnostic labels (#359)

  This pass rewrites a small set of ant-only diagnostic and UI labels to neutral internal wording while leaving command definitions, flags, and runtime logic untouched. It focuses on internal debug output, dead UI branches, and noninteractive headings rather than broader product text.

  Constraint: Label cleanup only; do not change command semantics or ant-only logic gates
  Rejected: Renaming ant-only command descriptions in main.tsx | broader UX surface better handled in a separate reviewed pass
  Confidence: high
  Scope-risk: narrow
  Reversibility: clean
  Directive: Remaining ANT-ONLY hits are mostly command descriptions and intentionally deferred user-facing strings
  Tested: bun run build
  Tested: bun run smoke
  Tested: bun run verify:privacy
  Tested: bun run test:provider
  Tested: bun run test:provider-recommendation
  Not-tested: Full repo typecheck (upstream baseline remains noisy)

  Co-authored-by: anandh8x <test@example.com>

* Finish eliminating remaining ANT-ONLY source labels (#360)

  This extends the label-only cleanup to the remaining internal-only command, debug, and heading strings so the source tree no longer contains ANT-ONLY markers. The pass still avoids logic changes and only renames labels shown in internal or gated surfaces.

  Constraint: Update the existing label-cleanup PR without widening scope into behavior changes
  Rejected: Leave the last ANT-ONLY strings for a later pass | low-cost cleanup while the branch is already focused on labels
  Confidence: high
  Scope-risk: narrow
  Reversibility: clean
  Directive: The next phase should move off label cleanup and onto a separately scoped logic or rebrand slice
  Tested: bun run build
  Tested: bun run smoke
  Tested: bun run verify:privacy
  Tested: bun run test:provider
  Tested: bun run test:provider-recommendation
  Not-tested: Full repo typecheck (upstream baseline remains noisy)

  Co-authored-by: anandh8x <test@example.com>

* Stub internal-only recording and model capability helpers (#377)

  This follow-up Phase C-lite slice replaces purely internal helper modules with stable external no-op surfaces and collapses internal elevated error logging to a no-op. The change removes additional USER_TYPE-gated helper behavior without touching product-facing runtime flows.

  Constraint: Keep this PR limited to isolated helper modules that are already external no-ops in practice
  Rejected: Pulling in broader speculation or logging sink changes | less isolated and easier to debate during review
  Confidence: high
  Scope-risk: narrow
  Reversibility: clean
  Directive: Continue Phase C with similarly isolated helpers before moving into mixed behavior files
  Tested: bun run build
  Tested: bun run smoke
  Tested: bun run verify:privacy
  Tested: bun run test:provider
  Tested: bun run test:provider-recommendation
  Not-tested: Full repo typecheck (upstream baseline remains noisy)

  Co-authored-by: anandh8x <test@example.com>

* Remove internal-only bundled skills and mock helpers (#376)

  * Remove internal-only bundled skills and mock rate-limit behavior

    This takes the next planned Phase C-lite slice by deleting bundled skills that only ever registered for internal users and replacing the internal mock rate-limit helper with a stable no-op external stub. The external build keeps the same behavior while removing a concentrated block of USER_TYPE-gated dead code.

    Constraint: Limit this PR to isolated internal-only helpers and avoid bridge, oauth, or rebrand behavior
    Rejected: Broad USER_TYPE cleanup across mixed runtime surfaces | too risky for the next medium-sized PR
    Confidence: high
    Scope-risk: moderate
    Reversibility: clean
    Directive: The next cleanup pass should continue with similarly isolated USER_TYPE helpers before touching main.tsx or protocol-heavy code
    Tested: bun run build
    Tested: bun run smoke
    Tested: bun run verify:privacy
    Tested: bun run test:provider
    Tested: bun run test:provider-recommendation
    Not-tested: Full repo typecheck (upstream baseline remains noisy)

  * Align internal-only helper removal with remaining user guidance

    This follow-up fixes the mock billing stub to be a true no-op and removes stale user-facing references to /verify and /skillify from the same PR. It also leaves a clearer paper trail for review: the deleted verify skill was explicitly ant-gated before removal, and the remaining mock helper callers still resolve to safe no-op returns in the external build.

    Constraint: Keep the PR focused on consistency fixes and reviewer-requested evidence, not new cleanup scope
    Rejected: Leave stale guidance for a later PR | would make this branch internally inconsistent after skill removal
    Confidence: high
    Scope-risk: narrow
    Reversibility: clean
    Directive: When deleting gated features, always sweep user guidance and coordinator prompts in the same pass
    Tested: bun run build
    Tested: bun run smoke
    Tested: bun run verify:privacy
    Tested: bun run test:provider
    Tested: bun run test:provider-recommendation
    Not-tested: Full repo typecheck (upstream baseline remains noisy; changed-file scan still shows only pre-existing tipRegistry errors outside edited lines)

  * Clarify generic workflow wording after skill removal

    This removes the last generic verification-skill wording that could still be read as pointing at a deleted bundled command. The guidance now talks about project workflows rather than a specific bundled verify skill.

    Constraint: Keep the follow-up limited to reviewer-facing wording cleanup on the same PR
    Rejected: Leave generic wording as-is | still too easy to misread after the explicit /verify references were removed
    Confidence: high
    Scope-risk: narrow
    Reversibility: clean
    Directive: When removing bundled commands, scrub both explicit and generic references in the same branch
    Tested: bun run build
    Tested: bun run smoke
    Not-tested: Additional checks unchanged by wording-only follow-up

  ---------

  Co-authored-by: anandh8x <test@example.com>

* test(api): add GEMINI_AUTH_MODE to environment setup in tests
* test: isolate GitHub/Gemini credential tests with fresh module imports and explicit non-bare env setup to prevent cross-test mock/cache leaks
* fix: update GitHub Copilot base URL and model defaults for improved compatibility
* fix: enhance error handling in OpenAI API response processing
* fix: improve error handling for GitHub Copilot API responses and streamline error body consumption
* fix: enhance response handling in OpenAI API shim for better error reporting and support for streaming responses
* feat: enhance GitHub device flow with fresh module import and token validation improvements
* fix: separate Copilot API routing from GitHub Models, clear stale env vars, honor providerOverride.apiKey
* fix: route GitHub GPT-5/Codex to Copilot API, show all Copilot models in picker, clear stale env vars
* fix GitHub Models API regression
* feat: update GitHub authentication to require OAuth tokens, normalize model handling for Copilot and GitHub Models
* fix: update GitHub token validation to support OAuth tokens and improve endpoint type handling

---------

Co-authored-by: Anandan <anandan.8x@gmail.com>
Co-authored-by: anandh8x <test@example.com>
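The OAuth onboarding described above follows GitHub's documented device flow and then exchanges the OAuth token for a short-lived Copilot API token. A minimal sketch of the polling cadence and the overall sequence, assuming only the two github.com OAuth endpoints (which are GitHub's documented device-flow URLs); the Copilot exchange step and all payload shapes are illustrative assumptions, not this repository's actual implementation:

```typescript
// GitHub's device flow returns an `interval` (seconds) and answers
// `slow_down` when polled too fast, which asks the client to add 5 seconds.
function nextPollIntervalSeconds(currentInterval: number, gotSlowDown: boolean): number {
  return gotSlowDown ? currentInterval + 5 : currentInterval
}

// Illustrative outline of the onboarding sequence (not invoked here).
async function deviceFlowOutline(clientId: string): Promise<void> {
  const device = (await fetch('https://github.com/login/device/code', {
    method: 'POST',
    headers: { Accept: 'application/json', 'Content-Type': 'application/json' },
    body: JSON.stringify({ client_id: clientId }),
  }).then(r => r.json())) as {
    device_code: string
    user_code: string
    verification_uri: string
    interval: number
  }
  // 1. Show device.user_code and open device.verification_uri in a browser.
  // 2. Poll https://github.com/login/oauth/access_token with
  //    grant_type=urn:ietf:params:oauth:grant-type:device_code, spacing polls
  //    by nextPollIntervalSeconds, until an OAuth access token is returned.
  // 3. Exchange the OAuth token for a short-lived Copilot API token and store
  //    both, so the OAuth token can mint fresh Copilot tokens when they expire.
  void device
}
```

Storing the long-lived OAuth token alongside the short-lived Copilot token is what makes the auto-refresh path possible later at startup.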
@@ -4,7 +4,7 @@ const onboardGithub: Command = {
   name: 'onboard-github',
   aliases: ['onboarding-github', 'onboardgithub', 'onboardinggithub'],
   description:
-    'Interactive setup for GitHub Models: device login or PAT, saved to secure storage',
+    'Interactive setup for GitHub Copilot: OAuth device login stored in secure storage',
   type: 'local-jsx',
   load: () => import('./onboard-github.js'),
 }
@@ -2,9 +2,9 @@ import * as React from 'react'
 import { useCallback, useState } from 'react'
 import { Select } from '../../components/CustomSelect/select.js'
 import { Spinner } from '../../components/Spinner.js'
-import TextInput from '../../components/TextInput.js'
 import { Box, Text } from '../../ink.js'
 import {
+  exchangeForCopilotToken,
   openVerificationUri,
   pollAccessToken,
   requestDeviceCode,
@@ -15,7 +15,7 @@ import {
   readGithubModelsToken,
   saveGithubModelsToken,
 } from '../../utils/githubModelsCredentials.js'
-import { updateSettingsForSource } from '../../utils/settings/settings.js'
+import { getSettingsForSource, updateSettingsForSource } from '../../utils/settings/settings.js'
 
 const DEFAULT_MODEL = 'github:copilot'
 const FORCE_RELOGIN_ARGS = new Set([
@@ -27,11 +27,25 @@ const FORCE_RELOGIN_ARGS = new Set([
   '--reauth',
 ])
 
-type Step =
-  | 'menu'
-  | 'device-busy'
-  | 'pat'
-  | 'error'
+type Step = 'menu' | 'device-busy' | 'error'
 
+const PROVIDER_SPECIFIC_KEYS = new Set([
+  'CLAUDE_CODE_USE_OPENAI',
+  'CLAUDE_CODE_USE_GEMINI',
+  'CLAUDE_CODE_USE_BEDROCK',
+  'CLAUDE_CODE_USE_VERTEX',
+  'CLAUDE_CODE_USE_FOUNDRY',
+  'OPENAI_BASE_URL',
+  'OPENAI_API_BASE',
+  'OPENAI_API_KEY',
+  'OPENAI_MODEL',
+  'GEMINI_API_KEY',
+  'GOOGLE_API_KEY',
+  'GEMINI_BASE_URL',
+  'GEMINI_MODEL',
+  'GEMINI_ACCESS_TOKEN',
+  'GEMINI_AUTH_MODE',
+])
+
 export function shouldForceGithubRelogin(args?: string): boolean {
   const normalized = (args ?? '').trim().toLowerCase()
@@ -41,15 +55,29 @@ export function shouldForceGithubRelogin(args?: string): boolean {
   return normalized.split(/\s+/).some(arg => FORCE_RELOGIN_ARGS.has(arg))
 }
 
+const GITHUB_PAT_PREFIXES = ['ghp_', 'gho_', 'ghs_', 'ghr_', 'github_pat_']
+
+function isGithubPat(token: string): boolean {
+  return GITHUB_PAT_PREFIXES.some(prefix => token.startsWith(prefix))
+}
+
 export function hasExistingGithubModelsLoginToken(
   env: NodeJS.ProcessEnv = process.env,
   storedToken?: string,
 ): boolean {
   const envToken = env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim()
   if (envToken) {
+    // PATs are no longer supported - require OAuth re-auth
+    if (isGithubPat(envToken)) {
+      return false
+    }
     return true
   }
   const persisted = (storedToken ?? readGithubModelsToken())?.trim()
+  // PATs are no longer supported - require OAuth re-auth
+  if (persisted && isGithubPat(persisted)) {
+    return false
+  }
   return Boolean(persisted)
 }
@@ -97,8 +125,21 @@ export function applyGithubOnboardingProcessEnv(
 }
 
 function mergeUserSettingsEnv(model: string): { ok: boolean; detail?: string } {
+  const currentSettings = getSettingsForSource('userSettings')
+  const currentEnv = currentSettings?.env ?? {}
+
+  const newEnv: Record<string, string> = {}
+  for (const [key, value] of Object.entries(currentEnv)) {
+    if (!PROVIDER_SPECIFIC_KEYS.has(key)) {
+      newEnv[key] = value
+    }
+  }
+
+  newEnv.CLAUDE_CODE_USE_GITHUB = '1'
+  newEnv.OPENAI_MODEL = model
+
   const { error } = updateSettingsForSource('userSettings', {
-    env: buildGithubOnboardingSettingsEnv(model) as any,
+    env: newEnv,
   })
   if (error) {
     return { ok: false, detail: error.message }
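The streamlined mergeUserSettingsEnv above is essentially a pure transform: drop any provider-specific key left over from a previous onboarding, keep everything else, then set the GitHub flags last. A small sketch of that rule, with an assumed subset of the key list for brevity:

```typescript
// Illustrative subset of the provider-specific keys cleared on re-onboarding.
const PROVIDER_KEYS = new Set(['OPENAI_API_KEY', 'OPENAI_BASE_URL', 'GEMINI_API_KEY'])

function mergeEnvForGithub(current: Record<string, string>, model: string): Record<string, string> {
  const next: Record<string, string> = {}
  for (const [key, value] of Object.entries(current)) {
    if (!PROVIDER_KEYS.has(key)) next[key] = value // keep unrelated settings
  }
  // Set the GitHub flags after filtering so they cannot be clobbered.
  next.CLAUDE_CODE_USE_GITHUB = '1'
  next.OPENAI_MODEL = model
  return next
}
```

Writing the result as a whole object (rather than patching keys one by one) is what guarantees a stale `OPENAI_BASE_URL` from an earlier provider cannot survive into GitHub routing.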
@@ -143,12 +184,14 @@ function OnboardGithub(props: {
     user_code: string
     verification_uri: string
   } | null>(null)
-  const [patDraft, setPatDraft] = useState('')
-  const [cursorOffset, setCursorOffset] = useState(0)
 
   const finalize = useCallback(
-    async (token: string, model: string = DEFAULT_MODEL) => {
-      const saved = saveGithubModelsToken(token)
+    async (
+      token: string,
+      model: string = DEFAULT_MODEL,
+      oauthToken?: string,
+    ) => {
+      const saved = saveGithubModelsToken(token, oauthToken)
       if (!saved.success) {
         setErrorMsg(saved.warning ?? 'Could not save token to secure storage.')
         setStep('error')
@@ -165,8 +208,18 @@ function OnboardGithub(props: {
         setStep('error')
         return
       }
+      // Clear stale provider-specific env vars from the current session
+      // so resolveProviderRequest() doesn't pick up a previous provider's
+      // base URL or key after onboarding completes.
+      for (const key of PROVIDER_SPECIFIC_KEYS) {
+        delete process.env[key]
+      }
+      process.env.CLAUDE_CODE_USE_GITHUB = '1'
+      process.env.OPENAI_MODEL = model.trim() || DEFAULT_MODEL
       hydrateGithubModelsTokenFromSecureStorage()
       onChangeAPIKey()
       onDone(
-        'GitHub Models onboard complete. Token stored in secure storage; user settings updated. Restart if the model does not switch.',
+        'GitHub Copilot onboard complete. Copilot token and OAuth token stored in secure storage (Windows/Linux: ~/.claude/.credentials.json, macOS: Keychain fallback to ~/.claude/.credentials.json); user settings updated. Restart if the model does not switch.',
         { display: 'user' },
       )
     },
@@ -184,11 +237,12 @@ function OnboardGithub(props: {
         verification_uri: device.verification_uri,
       })
       await openVerificationUri(device.verification_uri)
-      const token = await pollAccessToken(device.device_code, {
+      const oauthToken = await pollAccessToken(device.device_code, {
         initialInterval: device.interval,
         timeoutSeconds: device.expires_in,
       })
-      await finalize(token, DEFAULT_MODEL)
+      const copilotToken = await exchangeForCopilotToken(oauthToken)
+      await finalize(copilotToken.token, DEFAULT_MODEL, oauthToken)
     } catch (e) {
       setErrorMsg(e instanceof Error ? e.message : String(e))
       setStep('error')
@@ -227,7 +281,7 @@ function OnboardGithub(props: {
   if (step === 'device-busy') {
     return (
       <Box flexDirection="column" gap={1}>
-        <Text>GitHub device login</Text>
+        <Text>GitHub Copilot sign-in</Text>
         {deviceHint ? (
           <>
             <Text>
@@ -246,43 +300,11 @@ function OnboardGithub(props: {
     )
   }
 
-  if (step === 'pat') {
-    return (
-      <Box flexDirection="column" gap={1}>
-        <Text>Paste a GitHub personal access token with access to GitHub Models.</Text>
-        <Text dimColor>Input is masked. Enter to submit; Esc to go back.</Text>
-        <TextInput
-          value={patDraft}
-          mask="*"
-          onChange={setPatDraft}
-          onSubmit={async (value: string) => {
-            const t = value.trim()
-            if (!t) {
-              return
-            }
-            await finalize(t, DEFAULT_MODEL)
-          }}
-          onExit={() => {
-            setStep('menu')
-            setPatDraft('')
-          }}
-          columns={80}
-          cursorOffset={cursorOffset}
-          onChangeCursorOffset={setCursorOffset}
-        />
-      </Box>
-    )
-  }
-
   const menuOptions = [
     {
-      label: 'Sign in with browser (device code)',
+      label: 'Sign in with browser',
       value: 'device' as const,
     },
     {
-      label: 'Paste personal access token',
-      value: 'pat' as const,
-    },
-    {
       label: 'Cancel',
       value: 'cancel' as const,
@@ -291,7 +313,7 @@ function OnboardGithub(props: {
 
   return (
     <Box flexDirection="column" gap={1}>
-      <Text bold>GitHub Models setup</Text>
+      <Text bold>GitHub Copilot setup</Text>
       <Text dimColor>
         Stores your token in the OS credential store (macOS Keychain when available)
         and enables CLAUDE_CODE_USE_GITHUB in your user settings - no export
@@ -304,10 +326,6 @@ function OnboardGithub(props: {
           onDone('GitHub onboard cancelled', { display: 'system' })
           return
         }
-        if (v === 'pat') {
-          setStep('pat')
-          return
-        }
         void runDeviceFlow()
       }}
     />
@@ -112,7 +112,7 @@ export function HelpV2(t0) {
   }
   tabs.push(t6);
-  if (antOnlyCommands.length > 0) {
+  if (false && antOnlyCommands.length > 0) {
     let t7;
     if ($[26] !== antOnlyCommands || $[27] !== close || $[28] !== columns || $[29] !== maxHeight) {
       t7 = <Tab key="internal-only" title="[internal-only]"><Commands commands={antOnlyCommands} maxHeight={maxHeight} columns={columns} title="Browse internal-only commands:" onCancel={close} /></Tab>;
       $[26] = antOnlyCommands;
@@ -95,8 +95,8 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
   if (useGithub) {
     const model = process.env.OPENAI_MODEL || 'github:copilot'
     const baseUrl =
-      process.env.OPENAI_BASE_URL || 'https://models.github.ai/inference'
-    return { name: 'GitHub Models', model, baseUrl, isLocal: false }
+      process.env.OPENAI_BASE_URL || 'https://api.githubcopilot.com'
+    return { name: 'GitHub Copilot', model, baseUrl, isLocal: false }
   }
 
   if (useOpenAI) {
@@ -96,15 +96,16 @@ async function main(): Promise<void> {
     }
   }
 
   // Enable configs first so we can read settings
   {
     const { enableConfigs } = await import('../utils/config.js')
     enableConfigs()
   }
 
   // Apply settings.env from user settings (includes GitHub provider settings from /onboard-github)
   {
     const { applySafeConfigEnvironmentVariables } = await import('../utils/managedEnv.js')
     applySafeConfigEnvironmentVariables()
     const { hydrateGeminiAccessTokenFromSecureStorage } = await import('../utils/geminiCredentials.js')
     hydrateGeminiAccessTokenFromSecureStorage()
-    const { hydrateGithubModelsTokenFromSecureStorage } = await import('../utils/githubModelsCredentials.js')
-    hydrateGithubModelsTokenFromSecureStorage()
   }
 
   const startupEnv = await buildStartupEnvFromProfile({
@@ -121,6 +122,16 @@ async function main(): Promise<void> {
     }
   }
 
+  // Hydrate GitHub credentials after profile is applied so CLAUDE_CODE_USE_GITHUB from profile is available
+  {
+    const {
+      hydrateGithubModelsTokenFromSecureStorage,
+      refreshGithubModelsTokenIfNeeded,
+    } = await import('../utils/githubModelsCredentials.js')
+    await refreshGithubModelsTokenIfNeeded()
+    hydrateGithubModelsTokenFromSecureStorage()
+  }
+
   await validateProviderEnvOrExit()
 
   // Print the gradient startup screen before the Ink UI loads
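The refresh step added above runs before hydration so a fresh Copilot token is already in secure storage when the env is populated. The core of such a refresh is an expiry check; a plausible sketch, assuming the stored token records an expiry timestamp (the skew constant here is an illustrative choice, not the project's actual value):

```typescript
// Decide whether the short-lived Copilot token should be re-minted from the
// long-lived OAuth token before startup continues.
function shouldRefreshToken(expiresAtSec: number | undefined, nowSec: number, skewSec = 300): boolean {
  if (expiresAtSec === undefined) return true // no expiry recorded: refresh defensively
  // Refresh slightly early so a token never expires mid-request.
  return nowSec >= expiresAtSec - skewSec
}
```

A refresh routine built on this check would exchange the stored OAuth token for a new Copilot token only when the predicate returns true, keeping startup fast on the common path.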
@@ -18,6 +18,7 @@ const originalEnv = {
   GEMINI_API_KEY: process.env.GEMINI_API_KEY,
   GEMINI_MODEL: process.env.GEMINI_MODEL,
   GEMINI_BASE_URL: process.env.GEMINI_BASE_URL,
+  GEMINI_AUTH_MODE: process.env.GEMINI_AUTH_MODE,
   GOOGLE_API_KEY: process.env.GOOGLE_API_KEY,
   OPENAI_API_KEY: process.env.OPENAI_API_KEY,
   OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
@@ -32,6 +33,7 @@ beforeEach(() => {
   process.env.GEMINI_API_KEY = 'gemini-test-key'
   process.env.GEMINI_MODEL = 'gemini-2.0-flash'
   process.env.GEMINI_BASE_URL = 'https://gemini.example/v1beta/openai'
+  process.env.GEMINI_AUTH_MODE = 'api-key'
 
   delete process.env.GOOGLE_API_KEY
   delete process.env.OPENAI_API_KEY
@@ -47,6 +49,7 @@ afterEach(() => {
   process.env.GEMINI_API_KEY = originalEnv.GEMINI_API_KEY
   process.env.GEMINI_MODEL = originalEnv.GEMINI_MODEL
   process.env.GEMINI_BASE_URL = originalEnv.GEMINI_BASE_URL
+  process.env.GEMINI_AUTH_MODE = originalEnv.GEMINI_AUTH_MODE
   process.env.GOOGLE_API_KEY = originalEnv.GOOGLE_API_KEY
   process.env.OPENAI_API_KEY = originalEnv.OPENAI_API_KEY
   process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
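The env-leakage hardening in these test hunks follows one pattern: snapshot each key before mutation, then restore it afterward, deleting keys that were originally unset (assigning `undefined` back into `process.env` would stringify it to `"undefined"`). A generic sketch of that pattern:

```typescript
// Record the current values of a set of env keys (undefined = key unset).
function snapshotKeys(env: Record<string, string | undefined>, keys: string[]): Record<string, string | undefined> {
  const snap: Record<string, string | undefined> = {}
  for (const key of keys) snap[key] = env[key]
  return snap
}

// Put every snapshotted key back, deleting keys that were originally unset.
function restoreKeys(env: Record<string, string | undefined>, snap: Record<string, string | undefined>): void {
  for (const [key, value] of Object.entries(snap)) {
    if (value === undefined) delete env[key]
    else env[key] = value
  }
}
```

The delete-versus-assign distinction is exactly why the hunk below rewrites the old unconditional `process.env.X = originalEnv.X` restores into `if undefined delete / else assign` pairs.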
@@ -17,16 +17,23 @@ const tempDirs: string[] = []
 const originalEnv = {
   OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
   OPENAI_API_BASE: process.env.OPENAI_API_BASE,
+  CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
 }
 
 afterEach(() => {
+  if (originalEnv.OPENAI_BASE_URL === undefined) delete process.env.OPENAI_BASE_URL
+  else process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
+
+  if (originalEnv.OPENAI_API_BASE === undefined) delete process.env.OPENAI_API_BASE
+  else process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
+
+  if (originalEnv.CLAUDE_CODE_USE_GITHUB === undefined) delete process.env.CLAUDE_CODE_USE_GITHUB
+  else process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
+
   while (tempDirs.length > 0) {
     const dir = tempDirs.pop()
     if (dir) rmSync(dir, { recursive: true, force: true })
   }
-
-  process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
-  process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
 })
 
 function createTempAuthJson(payload: Record<string, unknown>): string {
@@ -71,6 +78,7 @@ describe('Codex provider config', () => {
   test('resolves codexplan alias to Codex transport with reasoning', () => {
     delete process.env.OPENAI_BASE_URL
     delete process.env.OPENAI_API_BASE
+    delete process.env.CLAUDE_CODE_USE_GITHUB
 
     const resolved = resolveProviderRequest({ model: 'codexplan' })
     expect(resolved.transport).toBe('codex_responses')
@@ -15,9 +15,9 @@
  * OPENAI_MODEL=gpt-4o — default model override
  * CODEX_API_KEY / ~/.codex/auth.json — Codex auth for codexplan/codexspark
  *
- * GitHub Models (models.github.ai), OpenAI-compatible:
+ * GitHub Copilot API (api.githubcopilot.com), OpenAI-compatible:
  * CLAUDE_CODE_USE_GITHUB=1 — enable GitHub inference (no need for USE_OPENAI)
- * GITHUB_TOKEN or GH_TOKEN — PAT with models access (mapped to Bearer auth)
+ * GITHUB_TOKEN or GH_TOKEN — Copilot API token (mapped to Bearer auth)
  * OPENAI_MODEL — optional; use github:copilot or openai/gpt-4.1 style IDs
  */
@@ -29,7 +29,9 @@ import { hydrateGithubModelsTokenFromSecureStorage } from '../../utils/githubMod
 import {
   codexStreamToAnthropic,
+  collectCodexCompletedResponse,
   convertAnthropicMessagesToResponsesInput,
+  convertCodexResponseToAnthropicMessage,
   convertToolsToResponsesTools,
   performCodexRequest,
   type AnthropicStreamEvent,
   type AnthropicUsage,
@@ -39,6 +41,7 @@ import {
   isLocalProviderUrl,
   resolveCodexApiCredentials,
   resolveProviderRequest,
+  getGithubEndpointType,
 } from './providerConfig.js'
 import { sanitizeSchemaForOpenAICompat } from '../../utils/schemaSanitizer.js'
 import { redactSecretValueForDisplay } from '../../utils/providerProfile.js'
@@ -55,13 +58,19 @@ type SecretValueSource = Partial<{
   GEMINI_ACCESS_TOKEN: string
 }>
 
 const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
-const GITHUB_API_VERSION = '2022-11-28'
+const GITHUB_COPILOT_BASE = 'https://api.githubcopilot.com'
 const GITHUB_429_MAX_RETRIES = 3
 const GITHUB_429_BASE_DELAY_SEC = 1
 const GITHUB_429_MAX_DELAY_SEC = 32
 const GEMINI_API_HOST = 'generativelanguage.googleapis.com'
 
+const COPILOT_HEADERS: Record<string, string> = {
+  'User-Agent': 'GitHubCopilotChat/0.26.7',
+  'Editor-Version': 'vscode/1.99.3',
+  'Editor-Plugin-Version': 'copilot-chat/0.26.7',
+  'Copilot-Integration-Id': 'vscode-chat',
+}
+
 function isGithubModelsMode(): boolean {
   return isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
 }
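The routing split introduced here hinges on classifying the configured base URL as Copilot, GitHub Models, or something custom. A plausible host-based sketch of a classifier like `getGithubEndpointType` (the repository's real implementation may differ in details):

```typescript
// Classify a configured base URL so Copilot and GitHub Models requests can be
// routed and authenticated differently.
function classifyGithubEndpoint(baseUrl: string): 'copilot' | 'models' | 'custom' {
  try {
    const host = new URL(baseUrl).host
    if (host === 'api.githubcopilot.com') return 'copilot'
    if (host === 'models.github.ai') return 'models'
  } catch {
    // unparseable URLs fall through to 'custom'
  }
  return 'custom'
}
```

Keeping the classification in one place is what lets the header logic later attach `COPILOT_HEADERS` only to Copilot traffic and the GitHub API version headers only to Models traffic.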
@@ -944,8 +953,9 @@ class OpenAIShimMessages {
       httpResponse = response
 
       if (params.stream) {
+        const isResponsesStream = response.url?.includes('/responses')
         return new OpenAIShimStream(
-          request.transport === 'codex_responses'
+          (request.transport === 'codex_responses' || isResponsesStream)
             ? codexStreamToAnthropic(response, request.resolvedModel)
             : openaiStreamToAnthropic(response, request.resolvedModel),
         )
@@ -959,8 +969,38 @@
         )
       }
 
-      const data = await response.json()
-      return self._convertNonStreamingResponse(data, request.resolvedModel)
+      const isResponsesNonStream = response.url?.includes('/responses')
+      if (isResponsesNonStream || (request.transport === 'chat_completions' && isGithubModelsMode())) {
+        const contentType = response.headers.get('content-type') ?? ''
+        if (contentType.includes('application/json')) {
+          const parsed = await response.json() as Record<string, unknown>
+          if (
+            parsed &&
+            typeof parsed === 'object' &&
+            ('output' in parsed || 'incomplete_details' in parsed)
+          ) {
+            return convertCodexResponseToAnthropicMessage(
+              parsed,
+              request.resolvedModel,
+            )
+          }
+          return self._convertNonStreamingResponse(parsed, request.resolvedModel)
+        }
+      }
+
+      const contentType = response.headers.get('content-type') ?? ''
+      if (contentType.includes('application/json')) {
+        const data = await response.json()
+        return self._convertNonStreamingResponse(data, request.resolvedModel)
+      }
+
+      const textBody = await response.text().catch(() => '')
+      throw APIError.generate(
+        response.status,
+        undefined,
+        `OpenAI API error ${response.status}: unexpected response: ${textBody.slice(0, 500)}`,
+        response.headers as unknown as Headers,
+      )
     })()
 
     ; (promise as unknown as Record<string, unknown>).withResponse =
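The non-streaming handling above follows a simple dispatch rule: Responses-API payloads are recognized by their `output` or `incomplete_details` fields, other JSON goes through the normal chat-completions conversion, and non-JSON bodies become errors with a truncated body. A minimal sketch of that classification, separated from the network code:

```typescript
type BodyKind = 'responses' | 'chat' | 'unexpected'

// Classify an already-parsed response body so the caller can pick a converter.
function classifyBody(contentType: string, parsed: Record<string, unknown> | null): BodyKind {
  if (!contentType.includes('application/json') || parsed === null) return 'unexpected'
  // Responses-API payloads carry `output` (or `incomplete_details` on cutoff).
  if ('output' in parsed || 'incomplete_details' in parsed) return 'responses'
  return 'chat'
}
```

Factoring the decision out of the fetch path like this also sidesteps the single-consumption constraint on `Response` bodies: the body is read once, then classified.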
@@ -982,7 +1022,36 @@
     params: ShimCreateParams,
     options?: { signal?: AbortSignal; headers?: Record<string, string> },
   ): Promise<Response> {
-    if (request.transport === 'codex_responses') {
+    const githubEndpointType = getGithubEndpointType(request.baseUrl)
+    const isGithubMode = isGithubModelsMode()
+    const isGithubWithCodexTransport = isGithubMode && request.transport === 'codex_responses'
+    const isGithubCopilotEndpoint = isGithubMode && githubEndpointType === 'copilot'
+
+    if (isGithubWithCodexTransport) {
+      const apiKey = this.providerOverride?.apiKey ?? process.env.OPENAI_API_KEY ?? ''
+      if (!apiKey) {
+        throw new Error(
+          'GitHub Copilot auth is required. Run /onboard-github to sign in.',
+        )
+      }
+
+      return performCodexRequest({
+        request,
+        credentials: {
+          apiKey,
+          source: 'env',
+        },
+        params,
+        defaultHeaders: {
+          ...this.defaultHeaders,
+          ...(options?.headers ?? {}),
+          ...COPILOT_HEADERS,
+        },
+        signal: options?.signal,
+      })
+    }
+
+    if (request.transport === 'codex_responses' && !isGithubMode) {
       const credentials = resolveCodexApiCredentials()
       if (!credentials.apiKey) {
        const authHint = credentials.authPath
@@ -1056,6 +1125,10 @@ class OpenAIShimMessages {
    }

    const isGithub = isGithubModelsMode()
    const githubEndpointType = getGithubEndpointType(request.baseUrl)
    const isGithubCopilot = isGithub && githubEndpointType === 'copilot'
    const isGithubModels = isGithub && (githubEndpointType === 'models' || githubEndpointType === 'custom')

    if (isGithub && body.max_completion_tokens !== undefined) {
      body.max_tokens = body.max_completion_tokens
      delete body.max_completion_tokens
@@ -1121,15 +1194,17 @@ class OpenAIShimMessages {
    const geminiCredential = await resolveGeminiCredential(process.env)
    if (geminiCredential.kind !== 'none') {
      headers.Authorization = `Bearer ${geminiCredential.credential}`
      if (geminiCredential.projectId) {
      if (geminiCredential.kind !== 'api-key' && 'projectId' in geminiCredential && geminiCredential.projectId) {
        headers['x-goog-user-project'] = geminiCredential.projectId
      }
    }

    if (isGithub) {
      headers.Accept = 'application/vnd.github.v3+json'
      headers['X-GitHub-Api-Version'] = GITHUB_API_VERSION
      if (isGithubCopilot) {
        Object.assign(headers, COPILOT_HEADERS)
      } else if (isGithubModels) {
        headers['Accept'] = 'application/vnd.github+json'
        headers['X-GitHub-Api-Version'] = '2022-11-28'
      }

    // Build the chat completions URL
@@ -1181,9 +1256,82 @@ class OpenAIShimMessages {
        await sleepMs(delaySec * 1000)
        continue
      }
      // Read body exactly once here — Response body is a stream that can only
      // be consumed a single time.
      const errorBody = await response.text().catch(() => 'unknown error')
      const rateHint =
        isGithub && response.status === 429 ? formatRetryAfterHint(response) : ''

      // If GitHub Copilot returns error about /chat/completions,
      // try the /responses endpoint (needed for GPT-5+ models)
      if (isGithub && response.status === 400) {
        if (errorBody.includes('/chat/completions') || errorBody.includes('not accessible')) {
          const responsesUrl = `${request.baseUrl}/responses`
          const responsesBody: Record<string, unknown> = {
            model: request.resolvedModel,
            input: convertAnthropicMessagesToResponsesInput(
              params.messages as Array<{
                role?: string
                message?: { role?: string; content?: unknown }
                content?: unknown
              }>,
            ),
            stream: params.stream ?? false,
          }

          if (!Array.isArray(responsesBody.input) || responsesBody.input.length === 0) {
            responsesBody.input = [
              {
                type: 'message',
                role: 'user',
                content: [{ type: 'input_text', text: '' }],
              },
            ]
          }

          const systemText = convertSystemPrompt(params.system)
          if (systemText) {
            responsesBody.instructions = systemText
          }

          if (body.max_tokens !== undefined) {
            responsesBody.max_output_tokens = body.max_tokens
          }

          if (params.tools && params.tools.length > 0) {
            const convertedTools = convertToolsToResponsesTools(
              params.tools as Array<{
                name?: string
                description?: string
                input_schema?: Record<string, unknown>
              }>,
            )
            if (convertedTools.length > 0) {
              responsesBody.tools = convertedTools
            }
          }

          const responsesResponse = await fetch(responsesUrl, {
            method: 'POST',
            headers,
            body: JSON.stringify(responsesBody),
            signal: options?.signal,
          })
          if (responsesResponse.ok) {
            return responsesResponse
          }
          const responsesErrorBody = await responsesResponse.text().catch(() => 'unknown error')
          let responsesErrorResponse: object | undefined
          try { responsesErrorResponse = JSON.parse(responsesErrorBody) } catch { /* raw text */ }
          throw APIError.generate(
            responsesResponse.status,
            responsesErrorResponse,
            `OpenAI API error ${responsesResponse.status}: ${responsesErrorBody}`,
            responsesResponse.headers,
          )
        }
      }

      let errorResponse: object | undefined
      try { errorResponse = JSON.parse(errorBody) } catch { /* raw text */ }
      throw APIError.generate(
@@ -1351,7 +1499,7 @@ export function createOpenAIShimClient(options: {
    process.env.OPENAI_MODEL = process.env.GEMINI_MODEL
  }
} else if (isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
  process.env.OPENAI_BASE_URL ??= GITHUB_MODELS_DEFAULT_BASE
  process.env.OPENAI_BASE_URL ??= GITHUB_COPILOT_BASE
  process.env.OPENAI_API_KEY ??=
    process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN ?? ''
}
@@ -23,6 +23,9 @@ test.each([
  ['github:gpt-4o', 'gpt-4o'],
  ['gpt-4o', 'gpt-4o'],
  ['github:copilot?reasoning=high', DEFAULT_GITHUB_MODELS_API_MODEL],
  // normalizeGithubModelsApiModel preserves provider prefix for models.github.ai compatibility
  ['github:openai/gpt-4.1', 'openai/gpt-4.1'],
  ['openai/gpt-4.1', 'openai/gpt-4.1'],
] as const)('normalizeGithubModelsApiModel(%s) -> %s', (input, expected) => {
  expect(normalizeGithubModelsApiModel(input)).toBe(expected)
})
@@ -34,6 +37,20 @@ test('resolveProviderRequest applies GitHub normalization when CLAUDE_CODE_USE_G
  expect(r.transport).toBe('chat_completions')
})

test('resolveProviderRequest routes GitHub GPT-5 codex models to responses transport', () => {
  process.env.CLAUDE_CODE_USE_GITHUB = '1'
  const r = resolveProviderRequest({ model: 'gpt-5.3-codex' })
  expect(r.resolvedModel).toBe('gpt-5.3-codex')
  expect(r.transport).toBe('codex_responses')
})

test('resolveProviderRequest keeps gpt-5-mini on chat_completions for GitHub', () => {
  process.env.CLAUDE_CODE_USE_GITHUB = '1'
  const r = resolveProviderRequest({ model: 'gpt-5-mini' })
  expect(r.resolvedModel).toBe('gpt-5-mini')
  expect(r.transport).toBe('chat_completions')
})

test('resolveProviderRequest leaves model unchanged without GitHub flag', () => {
  delete process.env.CLAUDE_CODE_USE_GITHUB
  const r = resolveProviderRequest({ model: 'github:gpt-4o' })
@@ -7,8 +7,8 @@ import { isEnvTruthy } from '../../utils/envUtils.js'

export const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'
export const DEFAULT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex'
/** Default GitHub Models API model when user selects copilot / github:copilot */
export const DEFAULT_GITHUB_MODELS_API_MODEL = 'openai/gpt-4.1'
/** Default GitHub Copilot API model when user selects copilot / github:copilot */
export const DEFAULT_GITHUB_MODELS_API_MODEL = 'gpt-4o'

const CODEX_ALIAS_MODELS: Record<
  string,
@@ -227,6 +227,21 @@ export function shouldUseCodexTransport(
  return isCodexBaseUrl(explicitBaseUrl) || (!explicitBaseUrl && isCodexAlias(model))
}

function shouldUseGithubResponsesApi(model: string): boolean {
  const normalized = model.trim().toLowerCase()

  // Codex-branded models require /responses.
  if (normalized.includes('codex')) return true

  // GPT-5+ models use /responses, except gpt-5-mini.
  const match = /^gpt-(\d+)/.exec(normalized)
  if (!match) return false
  const major = Number(match[1])
  if (major < 5) return false
  if (normalized.startsWith('gpt-5-mini')) return false
  return true
}
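The routing predicate added above is small enough to exercise in isolation. The following standalone copy (duplicated here purely for illustration, not part of the diff) shows which model IDs end up on the /responses endpoint under these rules:

```typescript
// Standalone copy of the /responses routing predicate from the hunk above.
function shouldUseGithubResponsesApi(model: string): boolean {
  const normalized = model.trim().toLowerCase()
  // Codex-branded models always require /responses.
  if (normalized.includes('codex')) return true
  // GPT-5+ models use /responses, except the gpt-5-mini carve-out.
  const match = /^gpt-(\d+)/.exec(normalized)
  if (!match) return false
  const major = Number(match[1])
  if (major < 5) return false
  if (normalized.startsWith('gpt-5-mini')) return false
  return true
}

console.log(shouldUseGithubResponsesApi('gpt-5.3-codex')) // true: codex-branded
console.log(shouldUseGithubResponsesApi('gpt-5-mini'))    // false: explicit carve-out
console.log(shouldUseGithubResponsesApi('gpt-4o'))        // false: major version < 5
```

This matches the new provider tests: `gpt-5.3-codex` routes to `codex_responses` while `gpt-5-mini` stays on `chat_completions`.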

export function isLocalProviderUrl(baseUrl: string | undefined): boolean {
  if (!baseUrl) return false
  try {
@@ -280,19 +295,61 @@ export function isCodexBaseUrl(baseUrl: string | undefined): boolean {
}

/**
 * Normalize user model string for GitHub Models inference (models.github.ai).
 * Mirrors runtime devsper `github._normalize_model_id`.
 * Normalize user model string for GitHub Copilot API inference.
 * Mirrors how Copilot resolves model IDs internally.
 */
export function normalizeGithubModelsApiModel(requestedModel: string): string {
export function normalizeGithubCopilotModel(requestedModel: string): string {
  const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
  const segment =
    noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
  if (!segment || segment.toLowerCase() === 'copilot') {
    return DEFAULT_GITHUB_MODELS_API_MODEL
  }
  // Strip provider prefix if present (e.g., "openai/gpt-4o" -> "gpt-4o")
  const slashIndex = segment.indexOf('/')
  if (slashIndex !== -1) {
    return segment.slice(slashIndex + 1)
  }
  return segment
}

/**
 * Normalize user model string for GitHub Models API inference.
 * Only normalizes the default alias, preserves provider-qualified models.
 */
export function normalizeGithubModelsApiModel(requestedModel: string): string {
  const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
  const segment =
    noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
  // Only normalize the default alias for GitHub Models
  if (!segment || segment.toLowerCase() === 'copilot') {
    return DEFAULT_GITHUB_MODELS_API_MODEL
  }
  // Preserve provider prefix for GitHub Models (e.g., "openai/gpt-4.1" stays as-is)
  return segment
}

export const GITHUB_COPILOT_BASE_URL = 'https://api.githubcopilot.com'
export const GITHUB_MODELS_BASE_URL = 'https://models.github.ai/inference'

export function getGithubEndpointType(
  baseUrl: string | undefined,
): 'copilot' | 'models' | 'custom' {
  if (!baseUrl) return 'copilot'
  try {
    const hostname = new URL(baseUrl).hostname.toLowerCase()
    if (hostname === 'api.githubcopilot.com') {
      return 'copilot'
    }
    if (hostname === 'models.github.ai' || hostname.endsWith('.github.ai')) {
      return 'models'
    }
    return 'custom'
  } catch {
    return 'copilot'
  }
}
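The endpoint classifier above keys entirely off the hostname and falls back to `'copilot'` for missing or unparseable URLs. A standalone copy (for illustration only) of how the three endpoint types resolve:

```typescript
// Standalone copy of the endpoint classifier from the hunk above.
function getGithubEndpointType(
  baseUrl: string | undefined,
): 'copilot' | 'models' | 'custom' {
  if (!baseUrl) return 'copilot'
  try {
    const hostname = new URL(baseUrl).hostname.toLowerCase()
    if (hostname === 'api.githubcopilot.com') return 'copilot'
    if (hostname === 'models.github.ai' || hostname.endsWith('.github.ai')) return 'models'
    return 'custom'
  } catch {
    // Unparseable base URLs also fall back to the Copilot default.
    return 'copilot'
  }
}

console.log(getGithubEndpointType('https://api.githubcopilot.com'))      // 'copilot'
console.log(getGithubEndpointType('https://models.github.ai/inference')) // 'models'
console.log(getGithubEndpointType('http://localhost:11434/v1'))          // 'custom'
console.log(getGithubEndpointType(undefined))                            // 'copilot'
```

Note the asymmetric defaults: a custom host gets GitHub Models header treatment downstream, while anything that cannot be parsed is treated as Copilot.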

export function resolveProviderRequest(options?: {
  model?: string
  baseUrl?: string
@@ -310,31 +367,49 @@ export function resolveProviderRequest(options?: {
    asEnvUrl(options?.baseUrl) ??
    asEnvUrl(process.env.OPENAI_BASE_URL) ??
    asEnvUrl(process.env.OPENAI_API_BASE)

  const githubEndpointType = isGithubMode
    ? getGithubEndpointType(rawBaseUrl)
    : 'custom'
  const isGithubCopilot = isGithubMode && githubEndpointType === 'copilot'
  const isGithubModels = isGithubMode && githubEndpointType === 'models'
  const isGithubCustom = isGithubMode && githubEndpointType === 'custom'

  const githubResolvedModel = isGithubMode
    ? normalizeGithubModelsApiModel(requestedModel)
    : requestedModel

  const transport: ProviderTransport =
    shouldUseCodexTransport(requestedModel, rawBaseUrl)
    shouldUseCodexTransport(requestedModel, rawBaseUrl) ||
    (isGithubCopilot && shouldUseGithubResponsesApi(githubResolvedModel))
      ? 'codex_responses'
      : 'chat_completions'

  const resolvedModel =
    transport === 'chat_completions' &&
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
      ? normalizeGithubModelsApiModel(requestedModel)
      : descriptor.baseModel
  // For GitHub Copilot API, normalize to real model ID (e.g., "github:copilot" -> "gpt-4o")
  // For GitHub Models/custom endpoints:
  // - Normalize default alias (github:copilot -> gpt-4o)
  // - Preserve provider-qualified models (openai/gpt-4.1 stays as-is)
  const resolvedModel = isGithubCopilot
    ? normalizeGithubCopilotModel(descriptor.baseModel)
    : (isGithubModels || isGithubCustom
        ? normalizeGithubModelsApiModel(descriptor.baseModel)
        : descriptor.baseModel)

  const reasoning = options?.reasoningEffortOverride
    ? { effort: options.reasoningEffortOverride }
    : descriptor.reasoning

  return {
    transport,
    requestedModel,
    resolvedModel,
    baseUrl:
      (rawBaseUrl ??
        (transport === 'codex_responses'
          ? DEFAULT_CODEX_BASE_URL
          : DEFAULT_OPENAI_BASE_URL)
        (isGithubCopilot && transport === 'codex_responses'
          ? GITHUB_COPILOT_BASE_URL
          : (isGithubMode
              ? GITHUB_COPILOT_BASE_URL
              : DEFAULT_OPENAI_BASE_URL))
      ).replace(/\/+$/, ''),
    reasoning,
  }

@@ -1,4 +1,4 @@
import { afterEach, describe, expect, mock, test } from 'bun:test'
import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'
import { APIError } from '@anthropic-ai/sdk'

// Helper to build a mock APIError with specific headers
@@ -15,15 +15,27 @@ function makeError(headers: Record<string, string>): APIError {

// Save/restore env vars between tests
const originalEnv = { ...process.env }

const envKeys = [
  'CLAUDE_CODE_USE_OPENAI',
  'CLAUDE_CODE_USE_GEMINI',
  'CLAUDE_CODE_USE_GITHUB',
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_VERTEX',
  'CLAUDE_CODE_USE_FOUNDRY',
  'OPENAI_MODEL',
  'OPENAI_BASE_URL',
  'OPENAI_API_BASE',
] as const

beforeEach(() => {
  for (const key of envKeys) {
    delete process.env[key]
  }
})

afterEach(() => {
  for (const key of [
    'CLAUDE_CODE_USE_OPENAI',
    'CLAUDE_CODE_USE_GEMINI',
    'CLAUDE_CODE_USE_GITHUB',
    'CLAUDE_CODE_USE_BEDROCK',
    'CLAUDE_CODE_USE_VERTEX',
    'CLAUDE_CODE_USE_FOUNDRY',
  ]) {
  for (const key of envKeys) {
    if (originalEnv[key] === undefined) delete process.env[key]
    else process.env[key] = originalEnv[key]
  }

@@ -1,4 +1,4 @@
import { afterEach, describe, expect, mock, test } from 'bun:test'
import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'

import {
  DEFAULT_GITHUB_DEVICE_SCOPE,
@@ -7,14 +7,26 @@ import {
  requestDeviceCode,
} from './deviceFlow.js'

async function importFreshModule() {
  mock.restore()
  return import(`./deviceFlow.ts?ts=${Date.now()}-${Math.random()}`)
}

describe('requestDeviceCode', () => {
  const originalFetch = globalThis.fetch

  beforeEach(() => {
    mock.restore()
    globalThis.fetch = originalFetch
  })

  afterEach(() => {
    globalThis.fetch = originalFetch
  })

  test('parses successful device code response', async () => {
    const { requestDeviceCode } = await importFreshModule()

    globalThis.fetch = mock(() =>
      Promise.resolve(
        new Response(
@@ -42,6 +54,9 @@ describe('requestDeviceCode', () => {
  })

  test('throws on HTTP error', async () => {
    const { requestDeviceCode, GitHubDeviceFlowError } =
      await importFreshModule()

    globalThis.fetch = mock(() =>
      Promise.resolve(new Response('bad', { status: 500 })),
    )
@@ -134,6 +149,8 @@ describe('pollAccessToken', () => {
  })

  test('returns token when GitHub responds with access_token immediately', async () => {
    const { pollAccessToken } = await importFreshModule()

    let calls = 0
    globalThis.fetch = mock(() => {
      calls++
@@ -153,6 +170,8 @@ describe('pollAccessToken', () => {
  })

  test('throws on access_denied', async () => {
    const { pollAccessToken } = await importFreshModule()

    globalThis.fetch = mock(() =>
      Promise.resolve(
        new Response(JSON.stringify({ error: 'access_denied' }), {
@@ -168,3 +187,62 @@ describe('pollAccessToken', () => {
    ).rejects.toThrow(/denied/)
  })
})

describe('exchangeForCopilotToken', () => {
  const originalFetch = globalThis.fetch

  afterEach(() => {
    globalThis.fetch = originalFetch
  })

  test('parses successful Copilot token response', async () => {
    const { exchangeForCopilotToken } = await importFreshModule()

    globalThis.fetch = mock(() =>
      Promise.resolve(
        new Response(
          JSON.stringify({
            token: 'copilot-token-xyz',
            expires_at: 1700000000,
            refresh_in: 3600,
            endpoints: {
              api: 'https://api.githubcopilot.com',
            },
          }),
          { status: 200 },
        ),
      ),
    )

    const result = await exchangeForCopilotToken('oauth-token', globalThis.fetch)
    expect(result.token).toBe('copilot-token-xyz')
    expect(result.expires_at).toBe(1700000000)
    expect(result.refresh_in).toBe(3600)
    expect(result.endpoints.api).toBe('https://api.githubcopilot.com')
  })

  test('throws on HTTP error', async () => {
    const { exchangeForCopilotToken, GitHubDeviceFlowError } =
      await importFreshModule()

    globalThis.fetch = mock(() =>
      Promise.resolve(new Response('unauthorized', { status: 401 })),
    )
    await expect(
      exchangeForCopilotToken('bad-token', globalThis.fetch),
    ).rejects.toThrow(GitHubDeviceFlowError)
  })

  test('throws on malformed response', async () => {
    const { exchangeForCopilotToken } = await importFreshModule()

    globalThis.fetch = mock(() =>
      Promise.resolve(
        new Response(JSON.stringify({ invalid: 'data' }), { status: 200 }),
      ),
    )
    await expect(
      exchangeForCopilotToken('oauth-token', globalThis.fetch),
    ).rejects.toThrow(/Malformed/)
  })
})

@@ -1,19 +1,35 @@
/**
 * GitHub OAuth device flow for CLI login (https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow).
 * Uses GitHub Copilot's official OAuth app for device authentication.
 */

import { execFileNoThrow } from '../../utils/execFileNoThrow.js'

export const DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID = 'Ov23liXjWSSui6QIahPl'
export const DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID = 'Iv1.b507a08c87ecfe98'

export const GITHUB_DEVICE_CODE_URL = 'https://github.com/login/device/code'
export const GITHUB_DEVICE_ACCESS_TOKEN_URL =
  'https://github.com/login/oauth/access_token'
export const COPILOT_TOKEN_URL = 'https://api.github.com/copilot_internal/v2/token'

// OAuth app device flow does not accept the GitHub Models permission token
// scope (models:read). Use an OAuth-safe default.
const OAUTH_SAFE_GITHUB_DEVICE_SCOPE = 'read:user'
export const DEFAULT_GITHUB_DEVICE_SCOPE = OAUTH_SAFE_GITHUB_DEVICE_SCOPE
/** Only read:user scope — required for Copilot OAuth */
export const DEFAULT_GITHUB_DEVICE_SCOPE = 'read:user'

export const COPILOT_HEADERS: Record<string, string> = {
  'User-Agent': 'GitHubCopilotChat/0.26.7',
  'Editor-Version': 'vscode/1.99.3',
  'Editor-Plugin-Version': 'copilot-chat/0.26.7',
  'Copilot-Integration-Id': 'vscode-chat',
}

export type CopilotTokenResponse = {
  token: string
  expires_at: number
  refresh_in: number
  endpoints: {
    api: string
  }
}

export class GitHubDeviceFlowError extends Error {
  constructor(message: string) {
@@ -30,6 +46,8 @@ export type DeviceCodeResult = {
  interval: number
}

type FetchLike = (input: RequestInfo | URL, init?: RequestInit) => Promise<Response>

export function getGithubDeviceFlowClientId(): string {
  return (
    process.env.GITHUB_DEVICE_FLOW_CLIENT_ID?.trim() ||
@@ -44,21 +62,21 @@ function sleep(ms: number): Promise<void> {
export async function requestDeviceCode(options?: {
  clientId?: string
  scope?: string
  fetchImpl?: typeof fetch
  fetchImpl?: FetchLike
}): Promise<DeviceCodeResult> {
  const clientId = options?.clientId ?? getGithubDeviceFlowClientId()
  if (!clientId) {
    throw new GitHubDeviceFlowError(
      'No OAuth client ID: set GITHUB_DEVICE_FLOW_CLIENT_ID or paste a PAT instead.',
      'No OAuth client ID: set GITHUB_DEVICE_FLOW_CLIENT_ID.',
    )
  }
  const fetchFn = options?.fetchImpl ?? fetch
  const requestedScope =
    options?.scope?.trim() || DEFAULT_GITHUB_DEVICE_SCOPE
  const scopesToTry =
    requestedScope === OAUTH_SAFE_GITHUB_DEVICE_SCOPE
    requestedScope === DEFAULT_GITHUB_DEVICE_SCOPE
      ? [requestedScope]
      : [requestedScope, OAUTH_SAFE_GITHUB_DEVICE_SCOPE]
      : [requestedScope, DEFAULT_GITHUB_DEVICE_SCOPE]

  let lastError = 'Device code request failed.'

@@ -77,7 +95,7 @@ export async function requestDeviceCode(options?: {
      lastError = `Device code request failed: ${res.status} ${text}`
      const isInvalidScope = /invalid_scope/i.test(text)
      const canRetryWithFallback =
        scope !== OAUTH_SAFE_GITHUB_DEVICE_SCOPE && isInvalidScope
        scope !== DEFAULT_GITHUB_DEVICE_SCOPE && isInvalidScope
      if (canRetryWithFallback) {
        continue
      }
@@ -114,7 +132,7 @@ export type PollOptions = {
  clientId?: string
  initialInterval?: number
  timeoutSeconds?: number
  fetchImpl?: typeof fetch
  fetchImpl?: FetchLike
}

export async function pollAccessToken(
@@ -197,3 +215,49 @@ export async function openVerificationUri(uri: string): Promise<void> {
    // User can open the URL manually
  }
}

/**
 * Exchange an OAuth access token for a Copilot API token.
 * The OAuth token alone cannot be used with the Copilot API endpoint.
 */
export async function exchangeForCopilotToken(
  oauthToken: string,
  fetchImpl?: FetchLike,
): Promise<CopilotTokenResponse> {
  const fetchFn = fetchImpl ?? fetch
  const res = await fetchFn(COPILOT_TOKEN_URL, {
    method: 'GET',
    headers: {
      Accept: 'application/json',
      Authorization: `Bearer ${oauthToken}`,
      ...COPILOT_HEADERS,
    },
  })
  if (!res.ok) {
    const text = await res.text().catch(() => '')
    throw new GitHubDeviceFlowError(
      `Copilot token exchange failed: ${res.status} ${text}`,
    )
  }
  const data = (await res.json()) as Record<string, unknown>
  const token = data.token
  const expires_at = data.expires_at
  const refresh_in = data.refresh_in
  const endpoints = data.endpoints
  if (
    typeof token !== 'string' ||
    typeof expires_at !== 'number' ||
    typeof refresh_in !== 'number' ||
    !endpoints ||
    typeof endpoints !== 'object' ||
    typeof (endpoints as Record<string, unknown>).api !== 'string'
  ) {
    throw new GitHubDeviceFlowError('Malformed Copilot token response')
  }
  return {
    token,
    expires_at,
    refresh_in,
    endpoints: endpoints as { api: string },
  }
}

@@ -1,6 +1,11 @@
// Mock rate limits for testing [internal-only]
// The external build keeps this module as a stable no-op surface so imports
// remain valid without exposing internal-only rate-limit simulation behavior.
// This allows testing various rate limit scenarios without hitting actual limits
//
// WARNING: This is for internal testing/demo purposes only!
// The mock headers may not exactly match the API specification or real-world behavior.
// Always validate against actual API responses before relying on this for production features.

import { setMockBillingAccessOverride } from '../utils/billing.js'
import type { OverageDisabledReason } from './claudeAiLimits.js'

@@ -645,7 +645,7 @@ const internalOnlyTips: Tip[] =
  {
    id: 'skillify',
    content: async () =>
      '[internal] Turn repeatable workflows into reusable project skills when they keep recurring',
      '[internal] Use /skillify to turn repeatable recurring workflows into reusable project skills',
    cooldownSessions: 15,
    isRelevant: async () => true,
  },

@@ -3,6 +3,7 @@ import { afterEach, beforeEach, expect, mock, test } from 'bun:test'
type MockStorageData = Record<string, unknown>

const originalEnv = { ...process.env }
const originalArgv = [...process.argv]
let storageState: MockStorageData = {}

async function importFreshModule() {
@@ -27,11 +28,14 @@ async function importFreshModule() {

beforeEach(() => {
  process.env = { ...originalEnv }
  delete process.env.CLAUDE_CODE_SIMPLE
  process.argv = originalArgv.filter(arg => arg !== '--bare')
  storageState = {}
})

afterEach(() => {
  process.env = { ...originalEnv }
  process.argv = [...originalArgv]
  storageState = {}
  mock.restore()
})
src/utils/githubModelsCredentials.refresh.test.ts (new file, 118 lines)
@@ -0,0 +1,118 @@
import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'

async function importFreshModule() {
  mock.restore()
  return import(`./githubModelsCredentials.ts?ts=${Date.now()}-${Math.random()}`)
}

describe('refreshGithubModelsTokenIfNeeded', () => {
  const orig = {
    CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
    CLAUDE_CODE_SIMPLE: process.env.CLAUDE_CODE_SIMPLE,
    GITHUB_TOKEN: process.env.GITHUB_TOKEN,
    GH_TOKEN: process.env.GH_TOKEN,
  }

  beforeEach(() => {
    mock.restore()
  })

  afterEach(() => {
    for (const [k, v] of Object.entries(orig)) {
      if (v === undefined) {
        delete process.env[k as keyof typeof orig]
      } else {
        process.env[k as keyof typeof orig] = v
      }
    }
  })

  test('refreshes expired Copilot token using stored OAuth token', async () => {
    process.env.CLAUDE_CODE_USE_GITHUB = '1'
    delete process.env.CLAUDE_CODE_SIMPLE
    delete process.env.GITHUB_TOKEN
    delete process.env.GH_TOKEN

    const futureExp = Math.floor(Date.now() / 1000) + 3600
    let store: Record<string, unknown> = {
      githubModels: {
        accessToken: 'tid=stale;exp=1;sku=free',
        oauthAccessToken: 'ghu_oauth_secret',
      },
    }

    mock.module('./secureStorage/index.js', () => ({
      getSecureStorage: () => ({
        read: () => store,
        update: (next: Record<string, unknown>) => {
          store = next
          return { success: true }
        },
      }),
    }))

    mock.module('../services/github/deviceFlow.js', () => ({
      DEFAULT_GITHUB_DEVICE_SCOPE: 'read:user',
      exchangeForCopilotToken: async () => ({
        token: `tid=fresh;exp=${futureExp};sku=free`,
        expires_at: futureExp,
        refresh_in: 1500,
        endpoints: { api: 'https://api.githubcopilot.com' },
      }),
    }))

    const { refreshGithubModelsTokenIfNeeded } = await importFreshModule()

    const refreshed = await refreshGithubModelsTokenIfNeeded()
    expect(refreshed).toBe(true)
    expect(process.env.GITHUB_TOKEN?.startsWith('tid=fresh;exp=')).toBe(true)

    const githubModels = (store.githubModels ?? {}) as {
      accessToken?: string
      oauthAccessToken?: string
    }
    expect(githubModels.accessToken?.startsWith('tid=fresh;exp=')).toBe(true)
    expect(githubModels.oauthAccessToken).toBe('ghu_oauth_secret')
  })

  test('does not refresh when current Copilot token is valid', async () => {
    process.env.CLAUDE_CODE_USE_GITHUB = '1'
    delete process.env.CLAUDE_CODE_SIMPLE
    delete process.env.GITHUB_TOKEN
    delete process.env.GH_TOKEN

    const futureExp = Math.floor(Date.now() / 1000) + 3600
    const exchangeSpy = mock(async () => ({
      token: `tid=unexpected;exp=${futureExp};sku=free`,
      expires_at: futureExp,
      refresh_in: 1500,
      endpoints: { api: 'https://api.githubcopilot.com' },
    }))

    mock.module('./secureStorage/index.js', () => ({
      getSecureStorage: () => ({
        read: () => ({
          githubModels: {
            accessToken: `tid=already-valid;exp=${futureExp};sku=free`,
            oauthAccessToken: 'ghu_oauth_secret',
          },
        }),
        update: () => ({ success: true }),
      }),
    }))

    mock.module('../services/github/deviceFlow.js', () => ({
      DEFAULT_GITHUB_DEVICE_SCOPE: 'read:user',
      exchangeForCopilotToken: exchangeSpy,
    }))

    const { refreshGithubModelsTokenIfNeeded } = await importFreshModule()

    const refreshed = await refreshGithubModelsTokenIfNeeded()
    expect(refreshed).toBe(false)
    expect(exchangeSpy).not.toHaveBeenCalled()
    expect(process.env.GITHUB_TOKEN?.startsWith('tid=already-valid;exp=')).toBe(
      true,
    )
  })
})

@@ -1,5 +1,6 @@
import { isBareMode, isEnvTruthy } from './envUtils.js'
import { getSecureStorage } from './secureStorage/index.js'
import { exchangeForCopilotToken } from '../services/github/deviceFlow.js'

/** JSON key in the shared OpenClaude secure storage blob. */
export const GITHUB_MODELS_STORAGE_KEY = 'githubModels' as const
@@ -8,6 +9,38 @@ export const GITHUB_MODELS_HYDRATED_ENV_MARKER =

export type GithubModelsCredentialBlob = {
  accessToken: string
  oauthAccessToken?: string
}

type GithubTokenStatus = 'valid' | 'expired' | 'invalid_format'

function checkGithubTokenStatus(token: string): GithubTokenStatus {
  const expMatch = token.match(/exp=(\d+)/)
  if (expMatch) {
    const expSeconds = Number(expMatch[1])
    if (!Number.isNaN(expSeconds)) {
      return Date.now() >= expSeconds * 1000 ? 'expired' : 'valid'
    }
  }

  const parts = token.split('.')
  const looksLikeJwt =
    parts.length === 3 && parts.every(part => /^[A-Za-z0-9_-]+$/.test(part))
  if (looksLikeJwt) {
    try {
      const normalized = parts[1].replace(/-/g, '+').replace(/_/g, '/')
      const padded = normalized + '='.repeat((4 - (normalized.length % 4)) % 4)
      const json = Buffer.from(padded, 'base64').toString('utf8')
      const parsed = JSON.parse(json)
      if (parsed && typeof parsed === 'object' && parsed.exp) {
        return Date.now() >= (parsed.exp as number) * 1000 ? 'expired' : 'valid'
      }
    } catch {
      return 'invalid_format'
    }
  }

  return 'invalid_format'
}
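The status check above tries the `exp=<unix seconds>` field embedded in Copilot tokens first, then falls back to decoding a JWT payload. A minimal sketch of just the fast path (the JWT branch is omitted; `checkCopilotTokenExp` is a hypothetical name for this trimmed variant, not the function in the diff):

```typescript
type GithubTokenStatus = 'valid' | 'expired' | 'invalid_format'

// Fast path only: Copilot tokens embed `exp=<unix seconds>` directly
// in the token string, e.g. "tid=abc;exp=1700000000;sku=free".
function checkCopilotTokenExp(token: string, nowMs = Date.now()): GithubTokenStatus {
  const expMatch = token.match(/exp=(\d+)/)
  if (!expMatch) return 'invalid_format'
  const expSeconds = Number(expMatch[1])
  if (Number.isNaN(expSeconds)) return 'invalid_format'
  return nowMs >= expSeconds * 1000 ? 'expired' : 'valid'
}

const future = Math.floor(Date.now() / 1000) + 3600
console.log(checkCopilotTokenExp(`tid=abc;exp=${future};sku=free`)) // 'valid'
console.log(checkCopilotTokenExp('tid=stale;exp=1;sku=free'))       // 'expired'
```

This is the property the new refresh tests rely on: `exp=1` marks a token stale and triggers the OAuth-to-Copilot exchange, while a future `exp` short-circuits the refresh.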

export function readGithubModelsToken(): string | undefined {
@@ -66,7 +99,62 @@ export function hydrateGithubModelsTokenFromSecureStorage(): void {
  delete process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER]
}

export function saveGithubModelsToken(token: string): {
/**
 * Startup auto-refresh for GitHub Models mode.
 *
 * If a stored Copilot token is expired/invalid and an OAuth token is present,
 * exchange the OAuth token for a fresh Copilot token and persist it.
 */
export async function refreshGithubModelsTokenIfNeeded(): Promise<boolean> {
  if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
    return false
  }
  if (isBareMode()) {
    return false
  }

  try {
    const secureStorage = getSecureStorage()
    const data = secureStorage.read() as
      | ({ githubModels?: GithubModelsCredentialBlob } & Record<string, unknown>)
      | null
    const blob = data?.githubModels
    const accessToken = blob?.accessToken?.trim() || ''
    const oauthToken = blob?.oauthAccessToken?.trim() || ''

    if (!accessToken && !oauthToken) {
      return false
    }

    const status = accessToken ? checkGithubTokenStatus(accessToken) : 'expired'
    if (status === 'valid') {
      if (!process.env.GITHUB_TOKEN?.trim() && !process.env.GH_TOKEN?.trim()) {
        process.env.GITHUB_TOKEN = accessToken
      }
      return false
    }

    if (!oauthToken) {
      return false
    }

    const refreshed = await exchangeForCopilotToken(oauthToken)
    const saved = saveGithubModelsToken(refreshed.token, oauthToken)
    if (!saved.success) {
      return false
    }

    process.env.GITHUB_TOKEN = refreshed.token
    return true
  } catch {
    return false
  }
}

export function saveGithubModelsToken(
  token: string,
  oauthToken?: string,
): {
  success: boolean
  warning?: string
} {
@@ -79,9 +167,21 @@ export function saveGithubModelsToken(token: string): {
  }
  const secureStorage = getSecureStorage()
|
||||
const prev = secureStorage.read() || {}
|
||||
const prevGithubModels = (prev as Record<string, unknown>)[
|
||||
GITHUB_MODELS_STORAGE_KEY
|
||||
] as GithubModelsCredentialBlob | undefined
|
||||
const oauthTrimmed = oauthToken?.trim()
|
||||
const mergedBlob: GithubModelsCredentialBlob = {
|
||||
accessToken: trimmed,
|
||||
}
|
||||
if (oauthTrimmed) {
|
||||
mergedBlob.oauthAccessToken = oauthTrimmed
|
||||
} else if (prevGithubModels?.oauthAccessToken?.trim()) {
|
||||
mergedBlob.oauthAccessToken = prevGithubModels.oauthAccessToken.trim()
|
||||
}
|
||||
const merged = {
|
||||
...(prev as Record<string, unknown>),
|
||||
[GITHUB_MODELS_STORAGE_KEY]: { accessToken: trimmed },
|
||||
[GITHUB_MODELS_STORAGE_KEY]: mergedBlob,
|
||||
}
|
||||
return secureStorage.update(merged as typeof prev)
|
||||
}
|
||||
|
||||
@@ -35,6 +35,8 @@ export const CLAUDE_3_7_SONNET_CONFIG = {
  foundry: 'claude-3-7-sonnet',
  openai: 'gpt-4o-mini',
  gemini: 'gemini-2.0-flash',
  github: 'github:copilot',
  codex: 'gpt-5.4',
} as const satisfies ModelConfig

export const CLAUDE_3_5_V2_SONNET_CONFIG = {
@@ -44,6 +46,8 @@ export const CLAUDE_3_5_V2_SONNET_CONFIG = {
  foundry: 'claude-3-5-sonnet',
  openai: 'gpt-4o-mini',
  gemini: 'gemini-2.0-flash',
  github: 'github:copilot',
  codex: 'gpt-5.4',
} as const satisfies ModelConfig

export const CLAUDE_3_5_HAIKU_CONFIG = {
@@ -53,6 +57,8 @@ export const CLAUDE_3_5_HAIKU_CONFIG = {
  foundry: 'claude-3-5-haiku',
  openai: 'gpt-4o-mini',
  gemini: 'gemini-2.0-flash-lite',
  github: 'github:copilot',
  codex: 'gpt-5.4',
} as const satisfies ModelConfig

export const CLAUDE_HAIKU_4_5_CONFIG = {
@@ -62,6 +68,8 @@ export const CLAUDE_HAIKU_4_5_CONFIG = {
  foundry: 'claude-haiku-4-5',
  openai: 'gpt-4o-mini',
  gemini: 'gemini-2.0-flash-lite',
  github: 'github:copilot',
  codex: 'gpt-5.4',
} as const satisfies ModelConfig

export const CLAUDE_SONNET_4_CONFIG = {
@@ -71,6 +79,8 @@ export const CLAUDE_SONNET_4_CONFIG = {
  foundry: 'claude-sonnet-4',
  openai: 'gpt-4o-mini',
  gemini: 'gemini-2.0-flash',
  github: 'github:copilot',
  codex: 'gpt-5.4',
} as const satisfies ModelConfig

export const CLAUDE_SONNET_4_5_CONFIG = {
@@ -80,6 +90,8 @@ export const CLAUDE_SONNET_4_5_CONFIG = {
  foundry: 'claude-sonnet-4-5',
  openai: 'gpt-4o',
  gemini: 'gemini-2.0-flash',
  github: 'github:copilot',
  codex: 'gpt-5.4',
} as const satisfies ModelConfig

export const CLAUDE_OPUS_4_CONFIG = {
@@ -89,6 +101,8 @@ export const CLAUDE_OPUS_4_CONFIG = {
  foundry: 'claude-opus-4',
  openai: 'gpt-4o',
  gemini: 'gemini-2.5-pro-preview-03-25',
  github: 'github:copilot',
  codex: 'gpt-5.4',
} as const satisfies ModelConfig

export const CLAUDE_OPUS_4_1_CONFIG = {
@@ -98,6 +112,8 @@ export const CLAUDE_OPUS_4_1_CONFIG = {
  foundry: 'claude-opus-4-1',
  openai: 'gpt-4o',
  gemini: 'gemini-2.5-pro-preview-03-25',
  github: 'github:copilot',
  codex: 'gpt-5.4',
} as const satisfies ModelConfig

export const CLAUDE_OPUS_4_5_CONFIG = {
@@ -107,6 +123,8 @@ export const CLAUDE_OPUS_4_5_CONFIG = {
  foundry: 'claude-opus-4-5',
  openai: 'gpt-4o',
  gemini: 'gemini-2.5-pro-preview-03-25',
  github: 'github:copilot',
  codex: 'gpt-5.4',
} as const satisfies ModelConfig

export const CLAUDE_OPUS_4_6_CONFIG = {
@@ -116,6 +134,8 @@ export const CLAUDE_OPUS_4_6_CONFIG = {
  foundry: 'claude-opus-4-6',
  openai: 'gpt-4o',
  gemini: 'gemini-2.5-pro-preview-03-25',
  github: 'github:copilot',
  codex: 'gpt-5.4',
} as const satisfies ModelConfig

export const CLAUDE_SONNET_4_6_CONFIG = {
@@ -125,6 +145,8 @@ export const CLAUDE_SONNET_4_6_CONFIG = {
  foundry: 'claude-sonnet-4-6',
  openai: 'gpt-4o',
  gemini: 'gemini-2.0-flash',
  github: 'github:copilot',
  codex: 'gpt-5.4',
} as const satisfies ModelConfig

// @[MODEL LAUNCH]: Register the new config here.
src/utils/model/copilotModels.ts (new file, 351 lines)
@@ -0,0 +1,351 @@
/**
 * Hardcoded Copilot model registry from models.dev/api.json
 * These are the 19 models available through GitHub Copilot.
 */

export type CopilotModel = {
  id: string
  name: string
  family: string
  attachment: boolean
  reasoning: boolean
  tool_call: boolean
  temperature: boolean
  knowledge: string
  release_date: string
  last_updated: string
  modalities: {
    input: string[]
    output: string[]
  }
  open_weights: boolean
  cost: {
    input: number
    output: number
    cache_read?: number
  }
  limit: {
    context: number
    input?: number
    output: number
  }
}

export const COPILOT_MODELS: Record<string, CopilotModel> = {
  'gpt-5.4': {
    id: 'gpt-5.4',
    name: 'GPT-5.4',
    family: 'gpt',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.4-mini': {
    id: 'gpt-5.4-mini',
    name: 'GPT-5.4 mini',
    family: 'gpt-mini',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.3-codex': {
    id: 'gpt-5.3-codex',
    name: 'GPT-5.3-Codex',
    family: 'gpt-codex',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.2-codex': {
    id: 'gpt-5.2-codex',
    name: 'GPT-5.2-Codex',
    family: 'gpt-codex',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.2': {
    id: 'gpt-5.2',
    name: 'GPT-5.2',
    family: 'gpt',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 264000, output: 32768 },
  },
  'gpt-5.1-codex': {
    id: 'gpt-5.1-codex',
    name: 'GPT-5.1-Codex',
    family: 'gpt-codex',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.1-codex-max': {
    id: 'gpt-5.1-codex-max',
    name: 'GPT-5.1-Codex-max',
    family: 'gpt-codex',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.1-codex-mini': {
    id: 'gpt-5.1-codex-mini',
    name: 'GPT-5.1-Codex-mini',
    family: 'gpt-codex',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-4o': {
    id: 'gpt-4o',
    name: 'GPT-4o',
    family: 'gpt',
    attachment: true,
    reasoning: false,
    tool_call: true,
    temperature: true,
    knowledge: '2023-10',
    release_date: '2024-05-01',
    last_updated: '2024-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 128000, output: 16384 },
  },
  'gpt-4.1': {
    id: 'gpt-4.1',
    name: 'GPT-4.1',
    family: 'gpt',
    attachment: false,
    reasoning: false,
    tool_call: true,
    temperature: true,
    knowledge: '2024-06',
    release_date: '2024-06-01',
    last_updated: '2024-06-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 128000, output: 32768 },
  },
  'claude-opus-4.6': {
    id: 'claude-opus-4.6',
    name: 'Claude Opus 4.6',
    family: 'claude-opus',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 144000, output: 32768 },
  },
  'claude-opus-4.5': {
    id: 'claude-opus-4.5',
    name: 'Claude Opus 4.5',
    family: 'claude-opus',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 160000, output: 32768 },
  },
  'claude-sonnet-4.6': {
    id: 'claude-sonnet-4.6',
    name: 'Claude Sonnet 4.6',
    family: 'claude-sonnet',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 200000, output: 32768 },
  },
  'claude-sonnet-4.5': {
    id: 'claude-sonnet-4.5',
    name: 'Claude Sonnet 4.5',
    family: 'claude-sonnet',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 144000, output: 32768 },
  },
  'claude-haiku-4.5': {
    id: 'claude-haiku-4.5',
    name: 'Claude Haiku 4.5',
    family: 'claude-haiku',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 144000, output: 32768 },
  },
  'gemini-3.1-pro-preview': {
    id: 'gemini-3.1-pro-preview',
    name: 'Gemini 3.1 Pro Preview',
    family: 'gemini-pro',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image', 'audio'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 128000, output: 32768 },
  },
  'gemini-3-flash-preview': {
    id: 'gemini-3-flash-preview',
    name: 'Gemini 3 Flash',
    family: 'gemini-flash',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 128000, output: 32768 },
  },
  'gemini-2.5-pro': {
    id: 'gemini-2.5-pro',
    name: 'Gemini 2.5 Pro',
    family: 'gemini-pro',
    attachment: true,
    reasoning: false,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 128000, output: 32768 },
  },
  'grok-code-fast-1': {
    id: 'grok-code-fast-1',
    name: 'Grok Code Fast 1',
    family: 'grok',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 128000, output: 32768 },
  },
}

export function getCopilotModelIds(): string[] {
  return Object.keys(COPILOT_MODELS)
}

export function getCopilotModel(id: string): CopilotModel | undefined {
  return COPILOT_MODELS[id]
}

export function getAllCopilotModels(): CopilotModel[] {
  return Object.values(COPILOT_MODELS)
}
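Callers go through the accessors rather than the `COPILOT_MODELS` object itself. A minimal sketch of the lookup behavior, using a trimmed-down two-entry registry (the full registry and the extra `CopilotModel` fields are elided here):

```typescript
// Trimmed-down sketch of the registry shape and its accessors.
type CopilotModelLite = {
  id: string
  name: string
  limit: { context: number; output: number }
}

const MODELS: Record<string, CopilotModelLite> = {
  'gpt-4o': {
    id: 'gpt-4o',
    name: 'GPT-4o',
    limit: { context: 128000, output: 16384 },
  },
  'claude-sonnet-4.6': {
    id: 'claude-sonnet-4.6',
    name: 'Claude Sonnet 4.6',
    limit: { context: 200000, output: 32768 },
  },
}

function getCopilotModelIds(): string[] {
  return Object.keys(MODELS)
}

function getCopilotModel(id: string): CopilotModelLite | undefined {
  // Unknown ids simply yield undefined; callers must handle that case.
  return MODELS[id]
}

console.log(getCopilotModelIds())                 // ['gpt-4o', 'claude-sonnet-4.6']
console.log(getCopilotModel('gpt-4o')?.name)      // GPT-4o
console.log(getCopilotModel('unknown-model'))     // undefined
```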
@@ -43,6 +43,10 @@ export function getSmallFastModel(): ModelName {
  if (getAPIProvider() === 'openai') {
    return process.env.OPENAI_MODEL || 'gpt-4o-mini'
  }
  // For GitHub Copilot provider
  if (getAPIProvider() === 'github') {
    return process.env.OPENAI_MODEL || 'github:copilot'
  }
  return getDefaultHaikuModel()
}

@@ -137,6 +141,10 @@ export function getDefaultOpusModel(): ModelName {
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
  }
  // GitHub Copilot provider
  if (getAPIProvider() === 'github') {
    return process.env.OPENAI_MODEL || 'github:copilot'
  }
  // 3P providers (Bedrock, Vertex, Foundry) — kept as a separate branch
  // even when values match, since 3P availability lags firstParty and
  // these will diverge again at the next model launch.
@@ -163,6 +171,10 @@ export function getDefaultSonnetModel(): ModelName {
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
  }
  // GitHub Copilot provider
  if (getAPIProvider() === 'github') {
    return process.env.OPENAI_MODEL || 'github:copilot'
  }
  // Default to Sonnet 4.5 for 3P since they may not have 4.6 yet
  if (getAPIProvider() !== 'firstParty') {
    return getModelStrings().sonnet45
@@ -175,10 +187,6 @@ export function getDefaultHaikuModel(): ModelName {
  if (process.env.ANTHROPIC_DEFAULT_HAIKU_MODEL) {
    return process.env.ANTHROPIC_DEFAULT_HAIKU_MODEL
  }
  // Gemini provider
  if (getAPIProvider() === 'gemini') {
    return process.env.GEMINI_MODEL || 'gemini-2.0-flash-lite'
  }
  // OpenAI provider
  if (getAPIProvider() === 'openai') {
    return process.env.OPENAI_MODEL || 'gpt-4o-mini'
@@ -187,6 +195,14 @@ export function getDefaultHaikuModel(): ModelName {
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
  }
  // GitHub Copilot provider
  if (getAPIProvider() === 'github') {
    return process.env.OPENAI_MODEL || 'github:copilot'
  }
  // Gemini provider
  if (getAPIProvider() === 'gemini') {
    return process.env.GEMINI_MODEL || 'gemini-2.0-flash-lite'
  }

  // Haiku 4.5 is available on all platforms (first-party, Foundry, Bedrock, Vertex)
  return getModelStrings().haiku45
@@ -231,6 +247,11 @@ export function getRuntimeMainLoopModel(params: {
 * @returns The default model setting to use
 */
export function getDefaultMainLoopModelSetting(): ModelName | ModelAlias {
  // GitHub Copilot provider: check settings.model first, then env, then default
  if (getAPIProvider() === 'github') {
    const settings = getSettings_DEPRECATED() || {}
    return settings.model || process.env.OPENAI_MODEL || 'github:copilot'
  }
  // Gemini provider: always use the configured Gemini model
  if (getAPIProvider() === 'gemini') {
    return process.env.GEMINI_MODEL || 'gemini-2.0-flash'
@@ -239,10 +260,6 @@ export function getDefaultMainLoopModelSetting(): ModelName | ModelAlias {
  if (getAPIProvider() === 'openai') {
    return process.env.OPENAI_MODEL || 'gpt-4o'
  }
  // GitHub provider: always use the configured GitHub model
  if (getAPIProvider() === 'github') {
    return process.env.OPENAI_MODEL || 'github:copilot'
  }
  // Codex provider: always use the configured Codex model (default gpt-5.4)
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
@@ -426,8 +443,33 @@ export function renderModelSetting(setting: ModelName | ModelAlias): string {
 * if the model is not recognized as a public model.
 */
export function getPublicModelDisplayName(model: ModelName): string | null {
  // For OpenAI/Gemini/Codex providers, show the actual model name not a Claude alias
  if (getAPIProvider() === 'openai' || getAPIProvider() === 'gemini' || getAPIProvider() === 'codex') {
  // For OpenAI/Gemini/Codex/GitHub providers, show the actual model name not a Claude alias
  if (getAPIProvider() === 'openai' || getAPIProvider() === 'gemini' || getAPIProvider() === 'codex' || getAPIProvider() === 'github') {
    // Return display names for known GitHub Copilot models
    const copilotModelNames: Record<string, string> = {
      'gpt-5.4': 'GPT-5.4',
      'gpt-5.4-mini': 'GPT-5.4 mini',
      'gpt-5.3-codex': 'GPT-5.3 Codex',
      'gpt-5.2-codex': 'GPT-5.2 Codex',
      'gpt-5.2': 'GPT-5.2',
      'gpt-5.1-codex': 'GPT-5.1 Codex',
      'gpt-5.1-codex-max': 'GPT-5.1 Codex max',
      'gpt-5.1-codex-mini': 'GPT-5.1 Codex mini',
      'gpt-4o': 'GPT-4o',
      'gpt-4.1': 'GPT-4.1',
      'claude-opus-4.6': 'Claude Opus 4.6',
      'claude-opus-4.5': 'Claude Opus 4.5',
      'claude-sonnet-4.6': 'Claude Sonnet 4.6',
      'claude-sonnet-4.5': 'Claude Sonnet 4.5',
      'claude-haiku-4.5': 'Claude Haiku 4.5',
      'gemini-3.1-pro-preview': 'Gemini 3.1 Pro Preview',
      'gemini-3-flash-preview': 'Gemini 3 Flash',
      'gemini-2.5-pro': 'Gemini 2.5 Pro',
      'grok-code-fast-1': 'Grok Code Fast 1',
    }
    if (copilotModelNames[model]) {
      return copilotModelNames[model]
    }
    return null
  }
  switch (model) {
@@ -484,6 +526,10 @@ export function renderModelName(model: ModelName): string {
  if (publicName) {
    return publicName
  }
  // Handle GitHub Copilot special model aliases
  if (model === 'github:copilot') {
    return 'GPT-4o'
  }
  if (process.env.USER_TYPE === 'ant') {
    const resolved = parseUserSpecifiedModel(model)
    const antModel = resolveAntModel(model)
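Each provider branch above follows the same fallback chain: an env override first, then a hardwired default. A minimal sketch of that pattern for the GitHub branch (the function name here is hypothetical; in the real code the branch lives inside the `getDefault*Model` helpers and `OPENAI_MODEL` doubles as the override for the GitHub provider):

```typescript
// Sketch of the env-override-then-default fallback used by the GitHub branches.
function githubDefaultModel(env: Record<string, string | undefined>): string {
  // OPENAI_MODEL, when set, wins; otherwise fall back to the provider default.
  return env.OPENAI_MODEL || 'github:copilot'
}

console.log(githubDefaultModel({}))                         // github:copilot
console.log(githubDefaultModel({ OPENAI_MODEL: 'gpt-4o' })) // gpt-4o
```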
@@ -61,7 +61,7 @@ afterEach(() => {
  resetModelStringsForTestingOnly()
})

test('GitHub provider exposes only default + GitHub model in /model options', async () => {
test('GitHub provider exposes default + all Copilot models in /model options', async () => {
  process.env.CLAUDE_CODE_USE_GITHUB = '1'
  delete process.env.CLAUDE_CODE_USE_OPENAI
  delete process.env.CLAUDE_CODE_USE_GEMINI
@@ -69,7 +69,7 @@ test('GitHub provider exposes only default + GitHub model in /model options', as
  delete process.env.CLAUDE_CODE_USE_VERTEX
  delete process.env.CLAUDE_CODE_USE_FOUNDRY

  process.env.OPENAI_MODEL = 'github:copilot'
  process.env.OPENAI_MODEL = 'gpt-4o'
  delete process.env.ANTHROPIC_CUSTOM_MODEL_OPTION

  const { getModelOptions } = await importFreshModelOptionsModule()
@@ -78,6 +78,7 @@ test('GitHub provider exposes only default + GitHub model in /model options', as
    (option: { value: unknown }) => option.value !== null,
  )

  expect(nonDefault.length).toBe(1)
  expect(nonDefault[0]?.value).toBe('github:copilot')
  expect(nonDefault.length).toBeGreaterThan(1)
  expect(nonDefault.some((o: { value: unknown }) => o.value === 'gpt-4o')).toBe(true)
  expect(nonDefault.some((o: { value: unknown }) => o.value === 'gpt-5.3-codex')).toBe(true)
})
@@ -35,6 +35,7 @@ import { has1mContext } from '../context.js'
import { getGlobalConfig } from '../config.js'
import { getActiveOpenAIModelOptionsCache } from '../providerProfiles.js'
import { getCachedOllamaModelOptions, isOllamaProvider } from './ollamaModels.js'
import { getAntModels } from './antModels.js'

// @[MODEL LAUNCH]: Update all the available and default model option strings below.

@@ -351,17 +352,20 @@ function getCodexModelOptions(): ModelOption[] {

// @[MODEL LAUNCH]: Update the model picker lists below to include/reorder options for the new model.
// Each user tier (ant, Max/Team Premium, Pro/Team Standard/Enterprise, PAYG 1P, PAYG 3P) has its own list.

import { getAllCopilotModels } from './copilotModels.js'

function getCopilotModelOptions(): ModelOption[] {
  return getAllCopilotModels().map(m => ({
    value: m.id,
    label: m.name,
    description: `${m.family}${m.reasoning ? ' · Reasoning' : ''}${m.tool_call ? ' · Tool call' : ''} · ${Math.round(m.limit.context / 1000)}K context`,
  }))
}

function getModelOptionsBase(fastMode = false): ModelOption[] {
  if (getAPIProvider() === 'github') {
    const githubModel = process.env.OPENAI_MODEL?.trim() || 'github:copilot'
    return [
      getDefaultOptionForUser(fastMode),
      {
        value: githubModel,
        label: githubModel,
        description: 'GitHub Models default',
      },
    ]
    return [getDefaultOptionForUser(fastMode), ...getCopilotModelOptions()]
  }

// When using Ollama, show models from the Ollama server instead of Claude models
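The option descriptions built in `getCopilotModelOptions` above combine family, capability flags, and a rounded context size. A standalone sketch of that formatting, using a trimmed-down model shape:

```typescript
// Sketch of the option-description template from getCopilotModelOptions.
type ModelMeta = {
  family: string
  reasoning: boolean
  tool_call: boolean
  limit: { context: number }
}

function describe(m: ModelMeta): string {
  // e.g. "gpt · Reasoning · Tool call · 400K context"
  return `${m.family}${m.reasoning ? ' · Reasoning' : ''}${m.tool_call ? ' · Tool call' : ''} · ${Math.round(m.limit.context / 1000)}K context`
}

console.log(
  describe({ family: 'gpt', reasoning: true, tool_call: true, limit: { context: 400000 } }),
) // gpt · Reasoning · Tool call · 400K context
```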
@@ -51,6 +51,7 @@ export const DANGEROUS_BASH_PATTERNS: readonly string[] = [
  'xargs',
  'sudo',
  // Internal-only: internal-only tools plus general tools that ant sandbox
  // data shows are frequently over-allowlisted as broad prefixes.
  // dotfile data shows are commonly over-allowlisted as broad prefixes.
  // These stay internal-only — external users don't have coo, and the rest are
  // an empirical-risk call grounded in ant sandbox data, not a universal
@@ -6,7 +6,26 @@ import {
  VALID_PROVIDERS,
} from './providerFlag.js'

const originalEnv = { ...process.env }
const ENV_KEYS = [
  'CLAUDE_CODE_USE_OPENAI',
  'CLAUDE_CODE_USE_GEMINI',
  'CLAUDE_CODE_USE_GITHUB',
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_VERTEX',
  'OPENAI_BASE_URL',
  'OPENAI_API_KEY',
  'OPENAI_MODEL',
  'GEMINI_MODEL',
]

const originalEnv: Record<string, string | undefined> = {}

beforeEach(() => {
  for (const key of ENV_KEYS) {
    originalEnv[key] = process.env[key]
    delete process.env[key]
  }
})

const RESET_KEYS = [
  'CLAUDE_CODE_USE_OPENAI',
@@ -27,9 +46,12 @@ beforeEach(() => {
})

afterEach(() => {
  for (const key of RESET_KEYS) {
    if (originalEnv[key] === undefined) delete process.env[key]
    else process.env[key] = originalEnv[key]
  for (const key of ENV_KEYS) {
    if (originalEnv[key] === undefined) {
      delete process.env[key]
    } else {
      process.env[key] = originalEnv[key]
    }
  }
})
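The hardened tests above snapshot only the provider-related keys before each test, clear them, and restore them afterward, so ambient env vars cannot leak into assertions. A minimal sketch of that pattern outside a test runner (function names here are illustrative; the real code does this in `beforeEach`/`afterEach`):

```typescript
// Sketch of the ENV_KEYS snapshot/clear/restore pattern from the test file.
const ENV_KEYS = ['CLAUDE_CODE_USE_GITHUB', 'OPENAI_MODEL']
const saved: Record<string, string | undefined> = {}

function snapshotAndClear(): void {
  for (const key of ENV_KEYS) {
    saved[key] = process.env[key]
    delete process.env[key] // every test starts from a clean slate
  }
}

function restore(): void {
  for (const key of ENV_KEYS) {
    // Distinguish "was unset" from "was empty string" when restoring.
    if (saved[key] === undefined) delete process.env[key]
    else process.env[key] = saved[key]
  }
}

process.env.OPENAI_MODEL = 'leaked-value'
snapshotAndClear()
console.log(process.env.OPENAI_MODEL) // undefined
restore()
console.log(process.env.OPENAI_MODEL) // leaked-value
```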
@@ -1,4 +1,5 @@
import {
  getGithubEndpointType,
  isLocalProviderUrl,
  resolveCodexApiCredentials,
  resolveProviderRequest,
@@ -15,6 +16,51 @@ function isEnvTruthy(value: string | undefined): boolean {
  return normalized !== '' && normalized !== '0' && normalized !== 'false' && normalized !== 'no'
}

type GithubTokenStatus = 'valid' | 'expired' | 'invalid_format'

const GITHUB_PAT_PREFIXES = ['ghp_', 'gho_', 'ghs_', 'ghr_', 'github_pat_']

function checkGithubTokenStatus(
  token: string,
  endpointType: 'copilot' | 'models' | 'custom' = 'copilot',
): GithubTokenStatus {
  // PATs work with GitHub Models but not with Copilot API
  if (GITHUB_PAT_PREFIXES.some(prefix => token.startsWith(prefix))) {
    if (endpointType === 'copilot') {
      return 'expired'
    }
    return 'valid'
  }

  const expMatch = token.match(/exp=(\d+)/)
  if (expMatch) {
    const expSeconds = Number(expMatch[1])
    if (!Number.isNaN(expSeconds)) {
      return Date.now() >= expSeconds * 1000 ? 'expired' : 'valid'
    }
  }

  const parts = token.split('.')
  const looksLikeJwt =
    parts.length === 3 && parts.every(part => /^[A-Za-z0-9_-]+$/.test(part))
  if (looksLikeJwt) {
    try {
      const normalized = parts[1].replace(/-/g, '+').replace(/_/g, '/')
      const padded = normalized + '='.repeat((4 - (normalized.length % 4)) % 4)
      const json = Buffer.from(padded, 'base64').toString('utf8')
      const parsed = JSON.parse(json)
      if (parsed && typeof parsed === 'object' && parsed.exp) {
        return Date.now() >= (parsed.exp as number) * 1000 ? 'expired' : 'valid'
      }
    } catch {
      return 'invalid_format'
    }
  }

  // Keep compatibility with opaque token formats that do not expose expiry.
  return 'valid'
}

export async function getProviderValidationError(
  env: NodeJS.ProcessEnv = process.env,
  options?: {
@@ -39,7 +85,19 @@ export async function getProviderValidationError(
  if (useGithub && !useOpenAI) {
    const token = (env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim()) ?? ''
    if (!token) {
      return 'GITHUB_TOKEN or GH_TOKEN is required when CLAUDE_CODE_USE_GITHUB=1.'
      return 'GitHub Copilot authentication required.\n' +
        'Run /onboard-github in the CLI to sign in with your GitHub account.\n' +
        'This will store your OAuth token securely and enable Copilot models.'
    }
    const endpointType = getGithubEndpointType(env.OPENAI_BASE_URL)
    const status = checkGithubTokenStatus(token, endpointType)
    if (status === 'expired') {
      return 'GitHub Copilot token has expired.\n' +
        'Run /onboard-github to sign in again and get a fresh token.'
    }
    if (status === 'invalid_format') {
      return 'GitHub Copilot token is invalid or corrupted.\n' +
        'Run /onboard-github to sign in again with your GitHub account.'
    }
    return null
  }
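The validation-side status check above adds one rule on top of the expiry checks: classic GitHub PAT prefixes are accepted for the GitHub Models endpoint but rejected (reported as `expired`) for the Copilot API. A standalone sketch of just that prefix rule (the function name is illustrative; in the real code this is the first branch of `checkGithubTokenStatus`):

```typescript
// Sketch of the PAT-prefix branch: PATs work with GitHub Models,
// but the Copilot API requires an exchanged Copilot token.
const GITHUB_PAT_PREFIXES = ['ghp_', 'gho_', 'ghs_', 'ghr_', 'github_pat_']

function patStatus(
  token: string,
  endpointType: 'copilot' | 'models',
): 'valid' | 'expired' | null {
  if (GITHUB_PAT_PREFIXES.some(prefix => token.startsWith(prefix))) {
    // Reporting 'expired' for the Copilot endpoint steers the user
    // toward re-running /onboard-github.
    return endpointType === 'copilot' ? 'expired' : 'valid'
  }
  return null // not a PAT; fall through to the exp=/JWT checks
}

console.log(patStatus('ghp_abc123', 'copilot')) // expired
console.log(patStatus('ghp_abc123', 'models'))  // valid
console.log(patStatus('tid=x;exp=1', 'copilot')) // null
```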