Improve GitHub Copilot provider: official OAuth onboarding, Copilot API routing, test hardening, and auto token refresh logic (#288)
* update GitHub Copilot API with official client ID and update model configurations
* test: add unit tests for exchangeForCopilotToken and enhance GitHub model normalization
* remove PAT token feature
* test(api): harden provider tests against env leakage
* add back trimmed GitHub auth token
* add auto-refresh logic for the auth token, along with tests
* fix: remove forked provider validation in cli.tsx and clear stale provider env vars in /onboard-github
* refactor: streamline environment variable handling in mergeUserSettingsEnv
* fix: clear stale provider env vars to ensure correct GH routing
* Remove internal-only tooling from the external build (#352)

  * Remove internal-only tooling without changing external runtime contracts

    This trims the lowest-risk internal-only surfaces first: deleted internal modules are replaced by build-time no-op stubs, the bundled stuck skill is removed, and the insights S3 upload path now stays local-only. The privacy verifier is expanded and the remaining bundled internal Slack/Artifactory strings are neutralized without broad repo-wide renames.

    Constraint: keep the first PR deletion-heavy and avoid mass rewrites of USER_TYPE, tengu, or claude_code identifiers
    Rejected: one-shot DMCA cleanup branch | too much semantic risk for a first PR
    Confidence: medium | Scope-risk: moderate | Reversibility: clean
    Directive: treat full-repo typecheck as a baseline issue on this upstream snapshot; do not claim this commit introduced the existing non-Phase-A errors without isolating them first
    Tested: bun run build, bun run smoke, bun run verify:privacy
    Not-tested: full repo typecheck (currently fails on widespread pre-existing upstream errors outside this change set)

  * Keep minimal source shims so CI can import Phase A cleanup paths

    The first PR removed internal-only source files entirely, but CI provider and context tests import those modules directly from source rather than through the build-time no-telemetry stubs. This restores tiny no-op source shims so tests and local source imports resolve while preserving the same external runtime behavior.

    Constraint: GitHub Actions runs source-level tests in addition to bundled build/privacy checks
    Rejected: revert the entire deletion pass | unnecessary once the import contract is satisfied by small shims
    Confidence: high | Scope-risk: narrow | Reversibility: clean
    Directive: for later cleanup phases, treat build-time stubs and source-test imports as separate compatibility surfaces
    Tested: bun run build, bun run smoke, bun run verify:privacy, bun run test:provider, bun run test:provider-recommendation
    Not-tested: full repo typecheck (still noisy on this upstream snapshot)

  Co-authored-by: anandh8x <test@example.com>

* Reduce internal-only labeling noise in source comments (#355)

  This pass rewrites comment-only ANT-ONLY markers to neutral internal-only language across the source tree without changing runtime strings, flags, commands, or protocol identifiers. The goal is to lower obvious internal prose leakage while keeping the diff mechanically safe and easy to review.

  Constraint: Phase B is limited to comments/prose only; runtime strings and user-facing labels remain deferred
  Rejected: broad search-and-replace across strings and command descriptions | too risky for a prose-only pass
  Confidence: high | Scope-risk: narrow | Reversibility: clean
  Directive: remaining ANT-ONLY hits are mostly runtime/user-facing strings and should be handled separately from comment cleanup
  Tested: bun run build, bun run smoke, bun run verify:privacy, bun run test:provider, bun run test:provider-recommendation
  Not-tested: full repo typecheck (upstream baseline remains noisy)

  Co-authored-by: anandh8x <test@example.com>

* Neutralize internal Anthropic prose in explanatory comments (#357)

  This is a small prose-only follow-up that rewrites clearly internal or explanatory Anthropic comment language to neutral wording in a handful of high-confidence files. It avoids runtime strings, flags, command labels, protocol identifiers, and provider-facing references.

  Constraint: keep this pass narrowly scoped to comments/documentation only
  Rejected: broader Anthropic comment sweep across functional API/protocol references | too ambiguous for a safe prose-only PR
  Confidence: high | Scope-risk: narrow | Reversibility: clean
  Directive: leave functional Anthropic references (API behavior, SDKs, URLs, provider labels, protocol docs) for separate reviewed passes
  Tested: bun run build, bun run smoke, bun run verify:privacy, bun run test:provider, bun run test:provider-recommendation
  Not-tested: full repo typecheck (upstream baseline remains noisy)

  Co-authored-by: anandh8x <test@example.com>

* Neutralize remaining internal-only diagnostic labels (#359)

  This pass rewrites a small set of ant-only diagnostic and UI labels to neutral internal wording while leaving command definitions, flags, and runtime logic untouched. It focuses on internal debug output, dead UI branches, and noninteractive headings rather than broader product text.

  Constraint: label cleanup only; do not change command semantics or ant-only logic gates
  Rejected: renaming ant-only command descriptions in main.tsx | broader UX surface better handled in a separate reviewed pass
  Confidence: high | Scope-risk: narrow | Reversibility: clean
  Directive: remaining ANT-ONLY hits are mostly command descriptions and intentionally deferred user-facing strings
  Tested: bun run build, bun run smoke, bun run verify:privacy, bun run test:provider, bun run test:provider-recommendation
  Not-tested: full repo typecheck (upstream baseline remains noisy)

  Co-authored-by: anandh8x <test@example.com>

* Finish eliminating remaining ANT-ONLY source labels (#360)

  This extends the label-only cleanup to the remaining internal-only command, debug, and heading strings so the source tree no longer contains ANT-ONLY markers. The pass still avoids logic changes and only renames labels shown in internal or gated surfaces.

  Constraint: update the existing label-cleanup PR without widening scope into behavior changes
  Rejected: leave the last ANT-ONLY strings for a later pass | low-cost cleanup while the branch is already focused on labels
  Confidence: high | Scope-risk: narrow | Reversibility: clean
  Directive: the next phase should move off label cleanup and onto a separately scoped logic or rebrand slice
  Tested: bun run build, bun run smoke, bun run verify:privacy, bun run test:provider, bun run test:provider-recommendation
  Not-tested: full repo typecheck (upstream baseline remains noisy)

  Co-authored-by: anandh8x <test@example.com>

* Stub internal-only recording and model capability helpers (#377)

  This follow-up Phase C-lite slice replaces purely internal helper modules with stable external no-op surfaces and collapses internal elevated error logging to a no-op. The change removes additional USER_TYPE-gated helper behavior without touching product-facing runtime flows.

  Constraint: keep this PR limited to isolated helper modules that are already external no-ops in practice
  Rejected: pulling in broader speculation or logging sink changes | less isolated and easier to debate during review
  Confidence: high | Scope-risk: narrow | Reversibility: clean
  Directive: continue Phase C with similarly isolated helpers before moving into mixed behavior files
  Tested: bun run build, bun run smoke, bun run verify:privacy, bun run test:provider, bun run test:provider-recommendation
  Not-tested: full repo typecheck (upstream baseline remains noisy)

  Co-authored-by: anandh8x <test@example.com>

* Remove internal-only bundled skills and mock helpers (#376)

  * Remove internal-only bundled skills and mock rate-limit behavior

    This takes the next planned Phase C-lite slice by deleting bundled skills that only ever registered for internal users and replacing the internal mock rate-limit helper with a stable no-op external stub. The external build keeps the same behavior while removing a concentrated block of USER_TYPE-gated dead code.

    Constraint: limit this PR to isolated internal-only helpers and avoid bridge, oauth, or rebrand behavior
    Rejected: broad USER_TYPE cleanup across mixed runtime surfaces | too risky for the next medium-sized PR
    Confidence: high | Scope-risk: moderate | Reversibility: clean
    Directive: the next cleanup pass should continue with similarly isolated USER_TYPE helpers before touching main.tsx or protocol-heavy code
    Tested: bun run build, bun run smoke, bun run verify:privacy, bun run test:provider, bun run test:provider-recommendation
    Not-tested: full repo typecheck (upstream baseline remains noisy)

  * Align internal-only helper removal with remaining user guidance

    This follow-up fixes the mock billing stub to be a true no-op and removes stale user-facing references to /verify and /skillify from the same PR. It also leaves a clearer paper trail for review: the deleted verify skill was explicitly ant-gated before removal, and the remaining mock helper callers still resolve to safe no-op returns in the external build.

    Constraint: keep the PR focused on consistency fixes and reviewer-requested evidence, not new cleanup scope
    Rejected: leave stale guidance for a later PR | would make this branch internally inconsistent after skill removal
    Confidence: high | Scope-risk: narrow | Reversibility: clean
    Directive: when deleting gated features, always sweep user guidance and coordinator prompts in the same pass
    Tested: bun run build, bun run smoke, bun run verify:privacy, bun run test:provider, bun run test:provider-recommendation
    Not-tested: full repo typecheck (upstream baseline remains noisy; changed-file scan still shows only pre-existing tipRegistry errors outside edited lines)

  * Clarify generic workflow wording after skill removal

    This removes the last generic verification-skill wording that could still be read as pointing at a deleted bundled command. The guidance now talks about project workflows rather than a specific bundled verify skill.

    Constraint: keep the follow-up limited to reviewer-facing wording cleanup on the same PR
    Rejected: leave generic wording as-is | still too easy to misread after the explicit /verify references were removed
    Confidence: high | Scope-risk: narrow | Reversibility: clean
    Directive: when removing bundled commands, scrub both explicit and generic references in the same branch
    Tested: bun run build, bun run smoke
    Not-tested: additional checks unchanged by wording-only follow-up

  Co-authored-by: anandh8x <test@example.com>

* test(api): add GEMINI_AUTH_MODE to environment setup in tests
* test: isolate GitHub/Gemini credential tests with fresh module imports and explicit non-bare env setup to prevent cross-test mock/cache leaks
* fix: update GitHub Copilot base URL and model defaults for improved compatibility
* fix: enhance error handling in OpenAI API response processing
* fix: improve error handling for GitHub Copilot API responses and streamline error body consumption
* fix: enhance response handling in OpenAI API shim for better error reporting and support for streaming responses
* feat: enhance GitHub device flow with fresh module import and token validation improvements
* fix: separate Copilot API routing from GitHub Models, clear stale env vars, honor providerOverride.apiKey
* fix: route GitHub GPT-5/Codex to Copilot API, show all Copilot models in picker, clear stale env vars
* fix GitHub Models API regression
* feat: update GitHub authentication to require OAuth tokens, normalize model handling for Copilot and GitHub Models
* fix: update GitHub token validation to support OAuth tokens and improve endpoint type handling

Co-authored-by: Anandan <anandan.8x@gmail.com>
Co-authored-by: anandh8x <test@example.com>
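The auto-refresh behavior in this change hinges on Copilot's self-describing token format, where the expiry is embedded inline as `exp=<unix seconds>` (visible in the test fixtures in this diff, e.g. `tid=stale;exp=1;sku=free`). A minimal sketch of the expiry decision, with hypothetical helper names:

```typescript
// Sketch of the expiry check the auto-refresh relies on. Helper names are
// hypothetical; the real logic lives in githubModelsCredentials.ts.
function copilotTokenExpiresAt(token: string): number | undefined {
  // Copilot tokens carry their expiry inline, e.g. "tid=abc;exp=1700000000;sku=free".
  const match = token.match(/exp=(\d+)/)
  if (!match) return undefined
  const expSeconds = Number(match[1])
  return Number.isNaN(expSeconds) ? undefined : expSeconds * 1000
}

function needsRefresh(token: string, nowMs: number = Date.now()): boolean {
  const expiresAt = copilotTokenExpiresAt(token)
  // Unparseable tokens are treated as stale so a refresh is attempted.
  return expiresAt === undefined || nowMs >= expiresAt
}
```

The implementation in this diff additionally falls back to decoding a JWT `exp` claim for tokens without the inline field, and only attempts the exchange when a stored OAuth token is available.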
@@ -3,6 +3,7 @@ import { afterEach, beforeEach, expect, mock, test } from 'bun:test'

type MockStorageData = Record<string, unknown>

const originalEnv = { ...process.env }
const originalArgv = [...process.argv]
let storageState: MockStorageData = {}

async function importFreshModule() {

@@ -27,11 +28,14 @@ async function importFreshModule() {

beforeEach(() => {
  process.env = { ...originalEnv }
  delete process.env.CLAUDE_CODE_SIMPLE
  process.argv = originalArgv.filter(arg => arg !== '--bare')
  storageState = {}
})

afterEach(() => {
  process.env = { ...originalEnv }
  process.argv = [...originalArgv]
  storageState = {}
  mock.restore()
})
118  src/utils/githubModelsCredentials.refresh.test.ts  Normal file
@@ -0,0 +1,118 @@
import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'

async function importFreshModule() {
  mock.restore()
  return import(`./githubModelsCredentials.ts?ts=${Date.now()}-${Math.random()}`)
}

describe('refreshGithubModelsTokenIfNeeded', () => {
  const orig = {
    CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
    CLAUDE_CODE_SIMPLE: process.env.CLAUDE_CODE_SIMPLE,
    GITHUB_TOKEN: process.env.GITHUB_TOKEN,
    GH_TOKEN: process.env.GH_TOKEN,
  }

  beforeEach(() => {
    mock.restore()
  })

  afterEach(() => {
    for (const [k, v] of Object.entries(orig)) {
      if (v === undefined) {
        delete process.env[k as keyof typeof orig]
      } else {
        process.env[k as keyof typeof orig] = v
      }
    }
  })

  test('refreshes expired Copilot token using stored OAuth token', async () => {
    process.env.CLAUDE_CODE_USE_GITHUB = '1'
    delete process.env.CLAUDE_CODE_SIMPLE
    delete process.env.GITHUB_TOKEN
    delete process.env.GH_TOKEN

    const futureExp = Math.floor(Date.now() / 1000) + 3600
    let store: Record<string, unknown> = {
      githubModels: {
        accessToken: 'tid=stale;exp=1;sku=free',
        oauthAccessToken: 'ghu_oauth_secret',
      },
    }

    mock.module('./secureStorage/index.js', () => ({
      getSecureStorage: () => ({
        read: () => store,
        update: (next: Record<string, unknown>) => {
          store = next
          return { success: true }
        },
      }),
    }))

    mock.module('../services/github/deviceFlow.js', () => ({
      DEFAULT_GITHUB_DEVICE_SCOPE: 'read:user',
      exchangeForCopilotToken: async () => ({
        token: `tid=fresh;exp=${futureExp};sku=free`,
        expires_at: futureExp,
        refresh_in: 1500,
        endpoints: { api: 'https://api.githubcopilot.com' },
      }),
    }))

    const { refreshGithubModelsTokenIfNeeded } = await importFreshModule()

    const refreshed = await refreshGithubModelsTokenIfNeeded()
    expect(refreshed).toBe(true)
    expect(process.env.GITHUB_TOKEN?.startsWith('tid=fresh;exp=')).toBe(true)

    const githubModels = (store.githubModels ?? {}) as {
      accessToken?: string
      oauthAccessToken?: string
    }
    expect(githubModels.accessToken?.startsWith('tid=fresh;exp=')).toBe(true)
    expect(githubModels.oauthAccessToken).toBe('ghu_oauth_secret')
  })

  test('does not refresh when current Copilot token is valid', async () => {
    process.env.CLAUDE_CODE_USE_GITHUB = '1'
    delete process.env.CLAUDE_CODE_SIMPLE
    delete process.env.GITHUB_TOKEN
    delete process.env.GH_TOKEN

    const futureExp = Math.floor(Date.now() / 1000) + 3600
    const exchangeSpy = mock(async () => ({
      token: `tid=unexpected;exp=${futureExp};sku=free`,
      expires_at: futureExp,
      refresh_in: 1500,
      endpoints: { api: 'https://api.githubcopilot.com' },
    }))

    mock.module('./secureStorage/index.js', () => ({
      getSecureStorage: () => ({
        read: () => ({
          githubModels: {
            accessToken: `tid=already-valid;exp=${futureExp};sku=free`,
            oauthAccessToken: 'ghu_oauth_secret',
          },
        }),
        update: () => ({ success: true }),
      }),
    }))

    mock.module('../services/github/deviceFlow.js', () => ({
      DEFAULT_GITHUB_DEVICE_SCOPE: 'read:user',
      exchangeForCopilotToken: exchangeSpy,
    }))

    const { refreshGithubModelsTokenIfNeeded } = await importFreshModule()

    const refreshed = await refreshGithubModelsTokenIfNeeded()
    expect(refreshed).toBe(false)
    expect(exchangeSpy).not.toHaveBeenCalled()
    expect(process.env.GITHUB_TOKEN?.startsWith('tid=already-valid;exp=')).toBe(
      true,
    )
  })
})
@@ -1,5 +1,6 @@
 import { isBareMode, isEnvTruthy } from './envUtils.js'
 import { getSecureStorage } from './secureStorage/index.js'
+import { exchangeForCopilotToken } from '../services/github/deviceFlow.js'

 /** JSON key in the shared OpenClaude secure storage blob. */
 export const GITHUB_MODELS_STORAGE_KEY = 'githubModels' as const
@@ -8,6 +9,38 @@ export const GITHUB_MODELS_HYDRATED_ENV_MARKER =

 export type GithubModelsCredentialBlob = {
   accessToken: string
+  oauthAccessToken?: string
 }
+
+type GithubTokenStatus = 'valid' | 'expired' | 'invalid_format'
+
+function checkGithubTokenStatus(token: string): GithubTokenStatus {
+  const expMatch = token.match(/exp=(\d+)/)
+  if (expMatch) {
+    const expSeconds = Number(expMatch[1])
+    if (!Number.isNaN(expSeconds)) {
+      return Date.now() >= expSeconds * 1000 ? 'expired' : 'valid'
+    }
+  }
+
+  const parts = token.split('.')
+  const looksLikeJwt =
+    parts.length === 3 && parts.every(part => /^[A-Za-z0-9_-]+$/.test(part))
+  if (looksLikeJwt) {
+    try {
+      const normalized = parts[1].replace(/-/g, '+').replace(/_/g, '/')
+      const padded = normalized + '='.repeat((4 - (normalized.length % 4)) % 4)
+      const json = Buffer.from(padded, 'base64').toString('utf8')
+      const parsed = JSON.parse(json)
+      if (parsed && typeof parsed === 'object' && parsed.exp) {
+        return Date.now() >= (parsed.exp as number) * 1000 ? 'expired' : 'valid'
+      }
+    } catch {
+      return 'invalid_format'
+    }
+  }
+
+  return 'invalid_format'
+}

 export function readGithubModelsToken(): string | undefined {
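`checkGithubTokenStatus` above falls back to decoding a JWT payload when a token has no inline `exp=` field. That base64url handling can be exercised in isolation; `decodeJwtExp` here is a hypothetical standalone helper mirroring the fallback (it skips the charset check and object validation the real function performs):

```typescript
// Standalone sketch of the base64url payload decode used in the JWT fallback.
// The name decodeJwtExp is hypothetical; it mirrors checkGithubTokenStatus above.
function decodeJwtExp(token: string): number | undefined {
  const parts = token.split('.')
  if (parts.length !== 3) return undefined
  // base64url -> base64: restore '+' and '/', then pad to a multiple of four.
  const normalized = parts[1].replace(/-/g, '+').replace(/_/g, '/')
  const padded = normalized + '='.repeat((4 - (normalized.length % 4)) % 4)
  try {
    const parsed = JSON.parse(Buffer.from(padded, 'base64').toString('utf8'))
    return typeof parsed?.exp === 'number' ? parsed.exp : undefined
  } catch {
    // Undecodable or non-JSON payloads carry no usable expiry.
    return undefined
  }
}
```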
@@ -66,7 +99,62 @@ export function hydrateGithubModelsTokenFromSecureStorage(): void {
   delete process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER]
 }

-export function saveGithubModelsToken(token: string): {
+/**
+ * Startup auto-refresh for GitHub Models mode.
+ *
+ * If a stored Copilot token is expired/invalid and an OAuth token is present,
+ * exchange the OAuth token for a fresh Copilot token and persist it.
+ */
+export async function refreshGithubModelsTokenIfNeeded(): Promise<boolean> {
+  if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
+    return false
+  }
+  if (isBareMode()) {
+    return false
+  }
+
+  try {
+    const secureStorage = getSecureStorage()
+    const data = secureStorage.read() as
+      | ({ githubModels?: GithubModelsCredentialBlob } & Record<string, unknown>)
+      | null
+    const blob = data?.githubModels
+    const accessToken = blob?.accessToken?.trim() || ''
+    const oauthToken = blob?.oauthAccessToken?.trim() || ''
+
+    if (!accessToken && !oauthToken) {
+      return false
+    }
+
+    const status = accessToken ? checkGithubTokenStatus(accessToken) : 'expired'
+    if (status === 'valid') {
+      if (!process.env.GITHUB_TOKEN?.trim() && !process.env.GH_TOKEN?.trim()) {
+        process.env.GITHUB_TOKEN = accessToken
+      }
+      return false
+    }
+
+    if (!oauthToken) {
+      return false
+    }
+
+    const refreshed = await exchangeForCopilotToken(oauthToken)
+    const saved = saveGithubModelsToken(refreshed.token, oauthToken)
+    if (!saved.success) {
+      return false
+    }
+
+    process.env.GITHUB_TOKEN = refreshed.token
+    return true
+  } catch {
+    return false
+  }
+}
+
+export function saveGithubModelsToken(
+  token: string,
+  oauthToken?: string,
+): {
   success: boolean
   warning?: string
 } {
@@ -79,9 +167,21 @@ export function saveGithubModelsToken(token: string): {
   }
   const secureStorage = getSecureStorage()
   const prev = secureStorage.read() || {}
+  const prevGithubModels = (prev as Record<string, unknown>)[
+    GITHUB_MODELS_STORAGE_KEY
+  ] as GithubModelsCredentialBlob | undefined
+  const oauthTrimmed = oauthToken?.trim()
+  const mergedBlob: GithubModelsCredentialBlob = {
+    accessToken: trimmed,
+  }
+  if (oauthTrimmed) {
+    mergedBlob.oauthAccessToken = oauthTrimmed
+  } else if (prevGithubModels?.oauthAccessToken?.trim()) {
+    mergedBlob.oauthAccessToken = prevGithubModels.oauthAccessToken.trim()
+  }
   const merged = {
     ...(prev as Record<string, unknown>),
-    [GITHUB_MODELS_STORAGE_KEY]: { accessToken: trimmed },
+    [GITHUB_MODELS_STORAGE_KEY]: mergedBlob,
   }
   return secureStorage.update(merged as typeof prev)
 }
@@ -35,6 +35,8 @@ export const CLAUDE_3_7_SONNET_CONFIG = {
   foundry: 'claude-3-7-sonnet',
   openai: 'gpt-4o-mini',
   gemini: 'gemini-2.0-flash',
+  github: 'github:copilot',
+  codex: 'gpt-5.4',
 } as const satisfies ModelConfig

 export const CLAUDE_3_5_V2_SONNET_CONFIG = {
@@ -44,6 +46,8 @@ export const CLAUDE_3_5_V2_SONNET_CONFIG = {
   foundry: 'claude-3-5-sonnet',
   openai: 'gpt-4o-mini',
   gemini: 'gemini-2.0-flash',
+  github: 'github:copilot',
+  codex: 'gpt-5.4',
 } as const satisfies ModelConfig

 export const CLAUDE_3_5_HAIKU_CONFIG = {
@@ -53,6 +57,8 @@ export const CLAUDE_3_5_HAIKU_CONFIG = {
   foundry: 'claude-3-5-haiku',
   openai: 'gpt-4o-mini',
   gemini: 'gemini-2.0-flash-lite',
+  github: 'github:copilot',
+  codex: 'gpt-5.4',
 } as const satisfies ModelConfig

 export const CLAUDE_HAIKU_4_5_CONFIG = {
@@ -62,6 +68,8 @@ export const CLAUDE_HAIKU_4_5_CONFIG = {
   foundry: 'claude-haiku-4-5',
   openai: 'gpt-4o-mini',
   gemini: 'gemini-2.0-flash-lite',
+  github: 'github:copilot',
+  codex: 'gpt-5.4',
 } as const satisfies ModelConfig

 export const CLAUDE_SONNET_4_CONFIG = {
@@ -71,6 +79,8 @@ export const CLAUDE_SONNET_4_CONFIG = {
   foundry: 'claude-sonnet-4',
   openai: 'gpt-4o-mini',
   gemini: 'gemini-2.0-flash',
+  github: 'github:copilot',
+  codex: 'gpt-5.4',
 } as const satisfies ModelConfig

 export const CLAUDE_SONNET_4_5_CONFIG = {
@@ -80,6 +90,8 @@ export const CLAUDE_SONNET_4_5_CONFIG = {
   foundry: 'claude-sonnet-4-5',
   openai: 'gpt-4o',
   gemini: 'gemini-2.0-flash',
+  github: 'github:copilot',
+  codex: 'gpt-5.4',
 } as const satisfies ModelConfig

 export const CLAUDE_OPUS_4_CONFIG = {
@@ -89,6 +101,8 @@ export const CLAUDE_OPUS_4_CONFIG = {
   foundry: 'claude-opus-4',
   openai: 'gpt-4o',
   gemini: 'gemini-2.5-pro-preview-03-25',
+  github: 'github:copilot',
+  codex: 'gpt-5.4',
 } as const satisfies ModelConfig

 export const CLAUDE_OPUS_4_1_CONFIG = {
@@ -98,6 +112,8 @@ export const CLAUDE_OPUS_4_1_CONFIG = {
   foundry: 'claude-opus-4-1',
   openai: 'gpt-4o',
   gemini: 'gemini-2.5-pro-preview-03-25',
+  github: 'github:copilot',
+  codex: 'gpt-5.4',
 } as const satisfies ModelConfig

 export const CLAUDE_OPUS_4_5_CONFIG = {
@@ -107,6 +123,8 @@ export const CLAUDE_OPUS_4_5_CONFIG = {
   foundry: 'claude-opus-4-5',
   openai: 'gpt-4o',
   gemini: 'gemini-2.5-pro-preview-03-25',
+  github: 'github:copilot',
+  codex: 'gpt-5.4',
 } as const satisfies ModelConfig

 export const CLAUDE_OPUS_4_6_CONFIG = {
@@ -116,6 +134,8 @@ export const CLAUDE_OPUS_4_6_CONFIG = {
   foundry: 'claude-opus-4-6',
   openai: 'gpt-4o',
   gemini: 'gemini-2.5-pro-preview-03-25',
+  github: 'github:copilot',
+  codex: 'gpt-5.4',
 } as const satisfies ModelConfig

 export const CLAUDE_SONNET_4_6_CONFIG = {
@@ -125,6 +145,8 @@ export const CLAUDE_SONNET_4_6_CONFIG = {
   foundry: 'claude-sonnet-4-6',
   openai: 'gpt-4o',
   gemini: 'gemini-2.0-flash',
+  github: 'github:copilot',
+  codex: 'gpt-5.4',
 } as const satisfies ModelConfig

 // @[MODEL LAUNCH]: Register the new config here.
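Each `*_CONFIG` above maps one Claude model to a per-provider equivalent, so routing a request reduces to a keyed lookup on the active provider. A sketch, with the `ModelConfig` shape assumed from the fields visible in these hunks:

```typescript
// Assumed shape, reconstructed from the provider keys shown in the configs above.
type ModelConfig = {
  foundry: string
  openai: string
  gemini: string
  github: string
  codex: string
}

// Values copied from the CLAUDE_SONNET_4_5_CONFIG hunk in this diff.
const CLAUDE_SONNET_4_5_CONFIG: ModelConfig = {
  foundry: 'claude-sonnet-4-5',
  openai: 'gpt-4o',
  gemini: 'gemini-2.0-flash',
  github: 'github:copilot',
  codex: 'gpt-5.4',
}

// Resolving the model id for the active provider is a plain property lookup.
function resolveModel(config: ModelConfig, provider: keyof ModelConfig): string {
  return config[provider]
}
```

The `github: 'github:copilot'` value is what routes these models through the Copilot API path rather than GitHub Models.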
351  src/utils/model/copilotModels.ts  Normal file
@@ -0,0 +1,351 @@
/**
 * Hardcoded Copilot model registry from models.dev/api.json
 * These are the 19 models available through GitHub Copilot.
 */

export type CopilotModel = {
  id: string
  name: string
  family: string
  attachment: boolean
  reasoning: boolean
  tool_call: boolean
  temperature: boolean
  knowledge: string
  release_date: string
  last_updated: string
  modalities: {
    input: string[]
    output: string[]
  }
  open_weights: boolean
  cost: {
    input: number
    output: number
    cache_read?: number
  }
  limit: {
    context: number
    input?: number
    output: number
  }
}

export const COPILOT_MODELS: Record<string, CopilotModel> = {
  'gpt-5.4': {
    id: 'gpt-5.4',
    name: 'GPT-5.4',
    family: 'gpt',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.4-mini': {
    id: 'gpt-5.4-mini',
    name: 'GPT-5.4 mini',
    family: 'gpt-mini',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.3-codex': {
    id: 'gpt-5.3-codex',
    name: 'GPT-5.3-Codex',
    family: 'gpt-codex',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.2-codex': {
    id: 'gpt-5.2-codex',
    name: 'GPT-5.2-Codex',
    family: 'gpt-codex',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.2': {
    id: 'gpt-5.2',
    name: 'GPT-5.2',
    family: 'gpt',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 264000, output: 32768 },
  },
  'gpt-5.1-codex': {
    id: 'gpt-5.1-codex',
    name: 'GPT-5.1-Codex',
    family: 'gpt-codex',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.1-codex-max': {
    id: 'gpt-5.1-codex-max',
    name: 'GPT-5.1-Codex-max',
    family: 'gpt-codex',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-5.1-codex-mini': {
    id: 'gpt-5.1-codex-mini',
    name: 'GPT-5.1-Codex-mini',
    family: 'gpt-codex',
    attachment: false,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 400000, output: 32768 },
  },
  'gpt-4o': {
    id: 'gpt-4o',
    name: 'GPT-4o',
    family: 'gpt',
    attachment: true,
    reasoning: false,
    tool_call: true,
    temperature: true,
    knowledge: '2023-10',
    release_date: '2024-05-01',
    last_updated: '2024-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 128000, output: 16384 },
  },
  'gpt-4.1': {
    id: 'gpt-4.1',
    name: 'GPT-4.1',
    family: 'gpt',
    attachment: false,
    reasoning: false,
    tool_call: true,
    temperature: true,
    knowledge: '2024-06',
    release_date: '2024-06-01',
    last_updated: '2024-06-01',
    modalities: { input: ['text'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 128000, output: 32768 },
  },
  'claude-opus-4.6': {
    id: 'claude-opus-4.6',
    name: 'Claude Opus 4.6',
    family: 'claude-opus',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 144000, output: 32768 },
  },
  'claude-opus-4.5': {
    id: 'claude-opus-4.5',
    name: 'Claude Opus 4.5',
    family: 'claude-opus',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 160000, output: 32768 },
  },
  'claude-sonnet-4.6': {
    id: 'claude-sonnet-4.6',
    name: 'Claude Sonnet 4.6',
    family: 'claude-sonnet',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 200000, output: 32768 },
  },
  'claude-sonnet-4.5': {
    id: 'claude-sonnet-4.5',
    name: 'Claude Sonnet 4.5',
    family: 'claude-sonnet',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 144000, output: 32768 },
  },
  'claude-haiku-4.5': {
    id: 'claude-haiku-4.5',
    name: 'Claude Haiku 4.5',
    family: 'claude-haiku',
    attachment: true,
    reasoning: true,
    tool_call: true,
    temperature: true,
    knowledge: '2025-05',
    release_date: '2025-05-01',
    last_updated: '2025-05-01',
    modalities: { input: ['text', 'image'], output: ['text'] },
    open_weights: false,
    cost: { input: 0, output: 0 },
    limit: { context: 144000, output: 32768 },
  },
  'gemini-3.1-pro-preview': {
    id: 'gemini-3.1-pro-preview',
    name: 'Gemini 3.1 Pro Preview',
    family: 'gemini-pro',
    attachment: true,
    reasoning: true,
|
||||
tool_call: true,
|
||||
temperature: true,
|
||||
knowledge: '2025-05',
|
||||
release_date: '2025-05-01',
|
||||
last_updated: '2025-05-01',
|
||||
modalities: { input: ['text', 'image', 'audio'], output: ['text'] },
|
||||
open_weights: false,
|
||||
cost: { input: 0, output: 0 },
|
||||
limit: { context: 128000, output: 32768 },
|
||||
},
|
||||
'gemini-3-flash-preview': {
|
||||
id: 'gemini-3-flash-preview',
|
||||
name: 'Gemini 3 Flash',
|
||||
family: 'gemini-flash',
|
||||
attachment: true,
|
||||
reasoning: true,
|
||||
tool_call: true,
|
||||
temperature: true,
|
||||
knowledge: '2025-05',
|
||||
release_date: '2025-05-01',
|
||||
last_updated: '2025-05-01',
|
||||
modalities: { input: ['text', 'image'], output: ['text'] },
|
||||
open_weights: false,
|
||||
cost: { input: 0, output: 0 },
|
||||
limit: { context: 128000, output: 32768 },
|
||||
},
|
||||
'gemini-2.5-pro': {
|
||||
id: 'gemini-2.5-pro',
|
||||
name: 'Gemini 2.5 Pro',
|
||||
family: 'gemini-pro',
|
||||
attachment: true,
|
||||
reasoning: false,
|
||||
tool_call: true,
|
||||
temperature: true,
|
||||
knowledge: '2025-05',
|
||||
release_date: '2025-05-01',
|
||||
last_updated: '2025-05-01',
|
||||
modalities: { input: ['text', 'image'], output: ['text'] },
|
||||
open_weights: false,
|
||||
cost: { input: 0, output: 0 },
|
||||
limit: { context: 128000, output: 32768 },
|
||||
},
|
||||
'grok-code-fast-1': {
|
||||
id: 'grok-code-fast-1',
|
||||
name: 'Grok Code Fast 1',
|
||||
family: 'grok',
|
||||
attachment: false,
|
||||
reasoning: true,
|
||||
tool_call: true,
|
||||
temperature: true,
|
||||
knowledge: '2025-05',
|
||||
release_date: '2025-05-01',
|
||||
last_updated: '2025-05-01',
|
||||
modalities: { input: ['text'], output: ['text'] },
|
||||
open_weights: false,
|
||||
cost: { input: 0, output: 0 },
|
||||
limit: { context: 128000, output: 32768 },
|
||||
},
|
||||
}
|
||||
|
||||
export function getCopilotModelIds(): string[] {
|
||||
return Object.keys(COPILOT_MODELS)
|
||||
}
|
||||
|
||||
export function getCopilotModel(id: string): CopilotModel | undefined {
|
||||
return COPILOT_MODELS[id]
|
||||
}
|
||||
|
||||
export function getAllCopilotModels(): CopilotModel[] {
|
||||
return Object.values(COPILOT_MODELS)
|
||||
}
|
||||
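The accessors above are thin wrappers over the `COPILOT_MODELS` record, so callers can probe a model's capabilities before routing a request, and unknown ids degrade to `undefined` instead of throwing. A minimal sketch of that lookup pattern, using an illustrative two-entry registry rather than the real table:

```typescript
// Stand-in for the registry shape consumed by getCopilotModel().
type CopilotModel = {
  id: string
  name: string
  reasoning: boolean
  limit: { context: number; output: number }
}

const COPILOT_MODELS: Record<string, CopilotModel> = {
  'gpt-4o': {
    id: 'gpt-4o',
    name: 'GPT-4o',
    reasoning: false,
    limit: { context: 128000, output: 16384 },
  },
  'claude-sonnet-4.6': {
    id: 'claude-sonnet-4.6',
    name: 'Claude Sonnet 4.6',
    reasoning: true,
    limit: { context: 200000, output: 32768 },
  },
}

function getCopilotModel(id: string): CopilotModel | undefined {
  return COPILOT_MODELS[id]
}

// Unknown ids return undefined rather than throwing, so callers can fall back.
console.log(getCopilotModel('gpt-4o')?.name) // → GPT-4o
console.log(getCopilotModel('not-a-model')) // → undefined
```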
@@ -43,6 +43,10 @@ export function getSmallFastModel(): ModelName {
  if (getAPIProvider() === 'openai') {
    return process.env.OPENAI_MODEL || 'gpt-4o-mini'
  }
  // For GitHub Copilot provider
  if (getAPIProvider() === 'github') {
    return process.env.OPENAI_MODEL || 'github:copilot'
  }
  return getDefaultHaikuModel()
}

@@ -137,6 +141,10 @@ export function getDefaultOpusModel(): ModelName {
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
  }
  // GitHub Copilot provider
  if (getAPIProvider() === 'github') {
    return process.env.OPENAI_MODEL || 'github:copilot'
  }
  // 3P providers (Bedrock, Vertex, Foundry) — kept as a separate branch
  // even when values match, since 3P availability lags firstParty and
  // these will diverge again at the next model launch.

@@ -163,6 +171,10 @@ export function getDefaultSonnetModel(): ModelName {
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
  }
  // GitHub Copilot provider
  if (getAPIProvider() === 'github') {
    return process.env.OPENAI_MODEL || 'github:copilot'
  }
  // Default to Sonnet 4.5 for 3P since they may not have 4.6 yet
  if (getAPIProvider() !== 'firstParty') {
    return getModelStrings().sonnet45

@@ -175,10 +187,6 @@ export function getDefaultHaikuModel(): ModelName {
  if (process.env.ANTHROPIC_DEFAULT_HAIKU_MODEL) {
    return process.env.ANTHROPIC_DEFAULT_HAIKU_MODEL
  }
  // Gemini provider
  if (getAPIProvider() === 'gemini') {
    return process.env.GEMINI_MODEL || 'gemini-2.0-flash-lite'
  }
  // OpenAI provider
  if (getAPIProvider() === 'openai') {
    return process.env.OPENAI_MODEL || 'gpt-4o-mini'

@@ -187,6 +195,14 @@ export function getDefaultHaikuModel(): ModelName {
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
  }
  // GitHub Copilot provider
  if (getAPIProvider() === 'github') {
    return process.env.OPENAI_MODEL || 'github:copilot'
  }
  // Gemini provider
  if (getAPIProvider() === 'gemini') {
    return process.env.GEMINI_MODEL || 'gemini-2.0-flash-lite'
  }

  // Haiku 4.5 is available on all platforms (first-party, Foundry, Bedrock, Vertex)
  return getModelStrings().haiku45
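Each of the default-model getters in these hunks follows the same resolution order: check the active provider, prefer an explicit environment override, then fall back to a provider-specific hardcoded default. A condensed sketch of that order (`resolveDefaultModel`, the `Provider` union, and the `'haiku-4.5'` stand-in are illustrative names, not the real API):

```typescript
type Provider = 'openai' | 'github' | 'codex' | 'gemini' | 'firstParty'

// Env override wins; otherwise each provider supplies its own default,
// mirroring the branch structure of getSmallFastModel / getDefaultHaikuModel.
function resolveDefaultModel(
  provider: Provider,
  env: Record<string, string | undefined>,
): string {
  switch (provider) {
    case 'openai':
      return env.OPENAI_MODEL || 'gpt-4o-mini'
    case 'github':
      return env.OPENAI_MODEL || 'github:copilot'
    case 'codex':
      return env.OPENAI_MODEL || 'gpt-5.4'
    case 'gemini':
      return env.GEMINI_MODEL || 'gemini-2.0-flash-lite'
    default:
      return 'haiku-4.5' // stand-in for getModelStrings().haiku45
  }
}

console.log(resolveDefaultModel('github', {})) // → github:copilot
console.log(resolveDefaultModel('github', { OPENAI_MODEL: 'gpt-4o' })) // → gpt-4o
```

Note that the GitHub branch reuses `OPENAI_MODEL` as its override variable, which is why the onboarding flow must clear stale provider env vars to keep routing correct.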
@@ -231,6 +247,11 @@ export function getRuntimeMainLoopModel(params: {
 * @returns The default model setting to use
 */
export function getDefaultMainLoopModelSetting(): ModelName | ModelAlias {
  // GitHub Copilot provider: check settings.model first, then env, then default
  if (getAPIProvider() === 'github') {
    const settings = getSettings_DEPRECATED() || {}
    return settings.model || process.env.OPENAI_MODEL || 'github:copilot'
  }
  // Gemini provider: always use the configured Gemini model
  if (getAPIProvider() === 'gemini') {
    return process.env.GEMINI_MODEL || 'gemini-2.0-flash'

@@ -239,10 +260,6 @@ export function getDefaultMainLoopModelSetting(): ModelName | ModelAlias {
  if (getAPIProvider() === 'openai') {
    return process.env.OPENAI_MODEL || 'gpt-4o'
  }
  // GitHub provider: always use the configured GitHub model
  if (getAPIProvider() === 'github') {
    return process.env.OPENAI_MODEL || 'github:copilot'
  }
  // Codex provider: always use the configured Codex model (default gpt-5.4)
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
@@ -426,8 +443,33 @@ export function renderModelSetting(setting: ModelName | ModelAlias): string {
 * if the model is not recognized as a public model.
 */
export function getPublicModelDisplayName(model: ModelName): string | null {
  // For OpenAI/Gemini/Codex providers, show the actual model name not a Claude alias
  if (getAPIProvider() === 'openai' || getAPIProvider() === 'gemini' || getAPIProvider() === 'codex') {
  // For OpenAI/Gemini/Codex/GitHub providers, show the actual model name not a Claude alias
  if (getAPIProvider() === 'openai' || getAPIProvider() === 'gemini' || getAPIProvider() === 'codex' || getAPIProvider() === 'github') {
    // Return display names for known GitHub Copilot models
    const copilotModelNames: Record<string, string> = {
      'gpt-5.4': 'GPT-5.4',
      'gpt-5.4-mini': 'GPT-5.4 mini',
      'gpt-5.3-codex': 'GPT-5.3 Codex',
      'gpt-5.2-codex': 'GPT-5.2 Codex',
      'gpt-5.2': 'GPT-5.2',
      'gpt-5.1-codex': 'GPT-5.1 Codex',
      'gpt-5.1-codex-max': 'GPT-5.1 Codex max',
      'gpt-5.1-codex-mini': 'GPT-5.1 Codex mini',
      'gpt-4o': 'GPT-4o',
      'gpt-4.1': 'GPT-4.1',
      'claude-opus-4.6': 'Claude Opus 4.6',
      'claude-opus-4.5': 'Claude Opus 4.5',
      'claude-sonnet-4.6': 'Claude Sonnet 4.6',
      'claude-sonnet-4.5': 'Claude Sonnet 4.5',
      'claude-haiku-4.5': 'Claude Haiku 4.5',
      'gemini-3.1-pro-preview': 'Gemini 3.1 Pro Preview',
      'gemini-3-flash-preview': 'Gemini 3 Flash',
      'gemini-2.5-pro': 'Gemini 2.5 Pro',
      'grok-code-fast-1': 'Grok Code Fast 1',
    }
    if (copilotModelNames[model]) {
      return copilotModelNames[model]
    }
    return null
  }
  switch (model) {

@@ -484,6 +526,10 @@ export function renderModelName(model: ModelName): string {
  if (publicName) {
    return publicName
  }
  // Handle GitHub Copilot special model aliases
  if (model === 'github:copilot') {
    return 'GPT-4o'
  }
  if (process.env.USER_TYPE === 'ant') {
    const resolved = parseUserSpecifiedModel(model)
    const antModel = resolveAntModel(model)
@@ -61,7 +61,7 @@ afterEach(() => {
  resetModelStringsForTestingOnly()
})

test('GitHub provider exposes only default + GitHub model in /model options', async () => {
test('GitHub provider exposes default + all Copilot models in /model options', async () => {
  process.env.CLAUDE_CODE_USE_GITHUB = '1'
  delete process.env.CLAUDE_CODE_USE_OPENAI
  delete process.env.CLAUDE_CODE_USE_GEMINI

@@ -69,7 +69,7 @@ test('GitHub provider exposes only default + GitHub model in /model options', as
  delete process.env.CLAUDE_CODE_USE_VERTEX
  delete process.env.CLAUDE_CODE_USE_FOUNDRY

  process.env.OPENAI_MODEL = 'github:copilot'
  process.env.OPENAI_MODEL = 'gpt-4o'
  delete process.env.ANTHROPIC_CUSTOM_MODEL_OPTION

  const { getModelOptions } = await importFreshModelOptionsModule()

@@ -78,6 +78,7 @@ test('GitHub provider exposes only default + GitHub model in /model options', as
    (option: { value: unknown }) => option.value !== null,
  )

  expect(nonDefault.length).toBe(1)
  expect(nonDefault[0]?.value).toBe('github:copilot')
  expect(nonDefault.length).toBeGreaterThan(1)
  expect(nonDefault.some((o: { value: unknown }) => o.value === 'gpt-4o')).toBe(true)
  expect(nonDefault.some((o: { value: unknown }) => o.value === 'gpt-5.3-codex')).toBe(true)
})
@@ -35,6 +35,7 @@ import { has1mContext } from '../context.js'
import { getGlobalConfig } from '../config.js'
import { getActiveOpenAIModelOptionsCache } from '../providerProfiles.js'
import { getCachedOllamaModelOptions, isOllamaProvider } from './ollamaModels.js'
import { getAntModels } from './antModels.js'

// @[MODEL LAUNCH]: Update all the available and default model option strings below.

@@ -351,17 +352,20 @@ function getCodexModelOptions(): ModelOption[] {

// @[MODEL LAUNCH]: Update the model picker lists below to include/reorder options for the new model.
// Each user tier (ant, Max/Team Premium, Pro/Team Standard/Enterprise, PAYG 1P, PAYG 3P) has its own list.

import { getAllCopilotModels } from './copilotModels.js'

function getCopilotModelOptions(): ModelOption[] {
  return getAllCopilotModels().map(m => ({
    value: m.id,
    label: m.name,
    description: `${m.family}${m.reasoning ? ' · Reasoning' : ''}${m.tool_call ? ' · Tool call' : ''} · ${Math.round(m.limit.context / 1000)}K context`,
  }))
}

function getModelOptionsBase(fastMode = false): ModelOption[] {
  if (getAPIProvider() === 'github') {
    const githubModel = process.env.OPENAI_MODEL?.trim() || 'github:copilot'
    return [
      getDefaultOptionForUser(fastMode),
      {
        value: githubModel,
        label: githubModel,
        description: 'GitHub Models default',
      },
    ]
    return [getDefaultOptionForUser(fastMode), ...getCopilotModelOptions()]
  }

  // When using Ollama, show models from the Ollama server instead of Claude models
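The `description` template in `getCopilotModelOptions` compacts model metadata into a single picker line: family, capability badges, and a rounded context size. Replaying the same template over one sample entry shows the resulting string (the sample model literal is illustrative):

```typescript
// Subset of the model metadata consumed by the picker description.
type CopilotModel = {
  family: string
  reasoning: boolean
  tool_call: boolean
  limit: { context: number; output: number }
}

const sample: CopilotModel = {
  family: 'claude-sonnet',
  reasoning: true,
  tool_call: true,
  limit: { context: 200000, output: 32768 },
}

// Same template as getCopilotModelOptions(): badges appear only when
// the capability flag is set; context is rounded to whole thousands.
const description =
  `${sample.family}${sample.reasoning ? ' · Reasoning' : ''}` +
  `${sample.tool_call ? ' · Tool call' : ''}` +
  ` · ${Math.round(sample.limit.context / 1000)}K context`

console.log(description) // → claude-sonnet · Reasoning · Tool call · 200K context
```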
@@ -51,6 +51,7 @@ export const DANGEROUS_BASH_PATTERNS: readonly string[] = [
  'xargs',
  'sudo',
  // Internal-only: internal-only tools plus general tools that ant sandbox
  // data shows are frequently over-allowlisted as broad prefixes.
  // dotfile data shows are commonly over-allowlisted as broad prefixes.
  // These stay internal-only — external users don't have coo, and the rest are
  // an empirical-risk call grounded in ant sandbox data, not a universal
@@ -6,7 +6,26 @@ import {
  VALID_PROVIDERS,
} from './providerFlag.js'

const originalEnv = { ...process.env }
const ENV_KEYS = [
  'CLAUDE_CODE_USE_OPENAI',
  'CLAUDE_CODE_USE_GEMINI',
  'CLAUDE_CODE_USE_GITHUB',
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_VERTEX',
  'OPENAI_BASE_URL',
  'OPENAI_API_KEY',
  'OPENAI_MODEL',
  'GEMINI_MODEL',
]

const originalEnv: Record<string, string | undefined> = {}

beforeEach(() => {
  for (const key of ENV_KEYS) {
    originalEnv[key] = process.env[key]
    delete process.env[key]
  }
})

const RESET_KEYS = [
  'CLAUDE_CODE_USE_OPENAI',

@@ -27,9 +46,12 @@ beforeEach(() => {
})

afterEach(() => {
  for (const key of RESET_KEYS) {
    if (originalEnv[key] === undefined) delete process.env[key]
    else process.env[key] = originalEnv[key]
  for (const key of ENV_KEYS) {
    if (originalEnv[key] === undefined) {
      delete process.env[key]
    } else {
      process.env[key] = originalEnv[key]
    }
  }
})
@@ -1,4 +1,5 @@
import {
  getGithubEndpointType,
  isLocalProviderUrl,
  resolveCodexApiCredentials,
  resolveProviderRequest,

@@ -15,6 +16,51 @@ function isEnvTruthy(value: string | undefined): boolean {
  return normalized !== '' && normalized !== '0' && normalized !== 'false' && normalized !== 'no'
}

type GithubTokenStatus = 'valid' | 'expired' | 'invalid_format'

const GITHUB_PAT_PREFIXES = ['ghp_', 'gho_', 'ghs_', 'ghr_', 'github_pat_']

function checkGithubTokenStatus(
  token: string,
  endpointType: 'copilot' | 'models' | 'custom' = 'copilot',
): GithubTokenStatus {
  // PATs work with GitHub Models but not with Copilot API
  if (GITHUB_PAT_PREFIXES.some(prefix => token.startsWith(prefix))) {
    if (endpointType === 'copilot') {
      return 'expired'
    }
    return 'valid'
  }

  const expMatch = token.match(/exp=(\d+)/)
  if (expMatch) {
    const expSeconds = Number(expMatch[1])
    if (!Number.isNaN(expSeconds)) {
      return Date.now() >= expSeconds * 1000 ? 'expired' : 'valid'
    }
  }

  const parts = token.split('.')
  const looksLikeJwt =
    parts.length === 3 && parts.every(part => /^[A-Za-z0-9_-]+$/.test(part))
  if (looksLikeJwt) {
    try {
      const normalized = parts[1].replace(/-/g, '+').replace(/_/g, '/')
      const padded = normalized + '='.repeat((4 - (normalized.length % 4)) % 4)
      const json = Buffer.from(padded, 'base64').toString('utf8')
      const parsed = JSON.parse(json)
      if (parsed && typeof parsed === 'object' && parsed.exp) {
        return Date.now() >= (parsed.exp as number) * 1000 ? 'expired' : 'valid'
      }
    } catch {
      return 'invalid_format'
    }
  }

  // Keep compatibility with opaque token formats that do not expose expiry.
  return 'valid'
}
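`checkGithubTokenStatus` decodes the JWT payload only to read `exp`; it never verifies the signature, since this is a cheap local staleness check rather than authentication. The expiry branch can be exercised with a synthetic token. The helpers below (`makeFakeJwt`, `isJwtExpired`) are illustrative stand-ins that re-implement just the base64url decode-and-compare step:

```typescript
// Build a structurally valid (unsigned) three-part JWT with a given exp claim.
function makeFakeJwt(expSeconds: number): string {
  const b64url = (obj: object) =>
    Buffer.from(JSON.stringify(obj)).toString('base64')
      .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '')
  return `${b64url({ alg: 'none' })}.${b64url({ exp: expSeconds })}.sig`
}

// Mirror of the decode path in checkGithubTokenStatus: base64url → base64,
// re-pad to a multiple of 4, parse JSON, compare exp against the clock.
function isJwtExpired(token: string): boolean {
  const payload = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/')
  const padded = payload + '='.repeat((4 - (payload.length % 4)) % 4)
  const { exp } = JSON.parse(Buffer.from(padded, 'base64').toString('utf8'))
  return Date.now() >= exp * 1000
}

console.log(isJwtExpired(makeFakeJwt(1))) // → true (exp in 1970)
console.log(isJwtExpired(makeFakeJwt(Math.floor(Date.now() / 1000) + 3600))) // → false
```

This is also why the provider ships auto-refresh logic: an `expired` verdict here is recoverable by exchanging the OAuth token again rather than failing the request outright.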
export async function getProviderValidationError(
  env: NodeJS.ProcessEnv = process.env,
  options?: {

@@ -39,7 +85,19 @@ export async function getProviderValidationError(
  if (useGithub && !useOpenAI) {
    const token = (env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim()) ?? ''
    if (!token) {
      return 'GITHUB_TOKEN or GH_TOKEN is required when CLAUDE_CODE_USE_GITHUB=1.'
      return 'GitHub Copilot authentication required.\n' +
        'Run /onboard-github in the CLI to sign in with your GitHub account.\n' +
        'This will store your OAuth token securely and enable Copilot models.'
    }
    const endpointType = getGithubEndpointType(env.OPENAI_BASE_URL)
    const status = checkGithubTokenStatus(token, endpointType)
    if (status === 'expired') {
      return 'GitHub Copilot token has expired.\n' +
        'Run /onboard-github to sign in again and get a fresh token.'
    }
    if (status === 'invalid_format') {
      return 'GitHub Copilot token is invalid or corrupted.\n' +
        'Run /onboard-github to sign in again with your GitHub account.'
    }
    return null
  }