orcs-code/scripts/system-check.ts
Meetpatel006 ad724dc3a4 Improve GitHub Copilot provider: official OAuth onboarding, Copilot API routing, test hardening, and auto token refresh logic (#288)
* update GitHub Copilot API with official client ID and update model configurations

* test: add unit tests for exchangeForCopilotToken and enhance GitHub model normalization

* remove PAT token feature

* test(api): harden provider tests against env leakage

* Added back trimmed GitHub auth token

* added auto-refresh logic for the auth token, along with tests

* fix: remove forked provider validation in cli.tsx and clear stale provider env vars in /onboard-github

* refactor: streamline environment variable handling in mergeUserSettingsEnv

* fix: clear stale provider env vars to ensure correct GH routing

* Remove internal-only tooling from the external build (#352)

* Remove internal-only tooling without changing external runtime contracts

This trims the lowest-risk internal-only surfaces first: deleted internal
modules are replaced by build-time no-op stubs, the bundled stuck skill is
removed, and the insights S3 upload path now stays local-only. The privacy
verifier is expanded and the remaining bundled internal Slack/Artifactory
strings are neutralized without broad repo-wide renames.
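
As a rough sketch of the stub mechanism (the entrypoint, module patterns, and stub path below are hypothetical, not the actual build config), a bundler plugin can redirect deleted internal imports to a shared no-op module at build time:

// hypothetical build script; real module patterns and paths differ
import { resolve } from 'node:path'

await Bun.build({
  entrypoints: ['./src/entrypoints/cli.tsx'],
  outdir: './dist',
  plugins: [
    {
      name: 'no-telemetry-stubs',
      setup(build) {
        // Rewrite imports of removed internal-only modules to one inert stub
        // so the external bundle keeps the same import surface.
        build.onResolve({ filter: /internalTelemetry|insightsUpload/ }, () => ({
          path: resolve('src/stubs/noop.ts'),
        }))
      },
    },
  ],
})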

Constraint: Keep the first PR deletion-heavy and avoid mass rewrites of USER_TYPE, tengu, or claude_code identifiers
Rejected: One-shot DMCA cleanup branch | too much semantic risk for a first PR
Confidence: medium
Scope-risk: moderate
Reversibility: clean
Directive: Treat full-repo typecheck as a baseline issue on this upstream snapshot; do not claim this commit introduced the existing non-Phase-A errors without isolating them first
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Not-tested: Full repo typecheck (currently fails on widespread pre-existing upstream errors outside this change set)

* Keep minimal source shims so CI can import Phase A cleanup paths

The first PR removed internal-only source files entirely, but CI provider
and context tests import those modules directly from source rather than
through the build-time no-telemetry stubs. This restores tiny no-op source
shims so tests and local source imports resolve while preserving the same
external runtime behavior.
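
A source shim in this spirit is just a tiny module that keeps the import contract alive with no-op bodies (the file name and exports below are illustrative, not the actual shims):

// src/services/internalRecording.ts - hypothetical no-op source shim.
// Source-level tests import this path directly; the build-time stub covers
// the bundled output, so both compatibility surfaces stay satisfied.
export type RecordingHandle = { stop: () => void }

export function startInternalRecording(_label: string): RecordingHandle {
  // External builds never record anything; hand back an inert handle.
  return { stop: () => {} }
}

export function isInternalRecordingEnabled(): boolean {
  return false
}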

Constraint: GitHub Actions runs source-level tests in addition to bundled build/privacy checks
Rejected: Revert the entire deletion pass | unnecessary once the import contract is satisfied by small shims
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: For later cleanup phases, treat build-time stubs and source-test imports as separate compatibility surfaces
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (still noisy on this upstream snapshot)

---------

Co-authored-by: anandh8x <test@example.com>

* Reduce internal-only labeling noise in source comments (#355)

This pass rewrites comment-only ANT-ONLY markers to neutral internal-only
language across the source tree without changing runtime strings, flags,
commands, or protocol identifiers. The goal is to lower obvious internal
prose leakage while keeping the diff mechanically safe and easy to review.

Constraint: Phase B is limited to comments/prose only; runtime strings and user-facing labels remain deferred
Rejected: Broad search-and-replace across strings and command descriptions | too risky for a prose-only pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly runtime/user-facing strings and should be handled separately from comment cleanup
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>

* Neutralize internal Anthropic prose in explanatory comments (#357)

This is a small prose-only follow-up that rewrites clearly internal or
explanatory Anthropic comment language to neutral wording in a handful of
high-confidence files. It avoids runtime strings, flags, command labels,
protocol identifiers, and provider-facing references.

Constraint: Keep this pass narrowly scoped to comments/documentation only
Rejected: Broader Anthropic comment sweep across functional API/protocol references | too ambiguous for a safe prose-only PR
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Leave functional Anthropic references (API behavior, SDKs, URLs, provider labels, protocol docs) for separate reviewed passes
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>

* Neutralize remaining internal-only diagnostic labels (#359)

This pass rewrites a small set of ant-only diagnostic and UI labels to
neutral internal wording while leaving command definitions, flags, and
runtime logic untouched. It focuses on internal debug output, dead UI
branches, and noninteractive headings rather than broader product text.

Constraint: Label cleanup only; do not change command semantics or ant-only logic gates
Rejected: Renaming ant-only command descriptions in main.tsx | broader UX surface better handled in a separate reviewed pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly command descriptions and intentionally deferred user-facing strings
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>

* Finish eliminating remaining ANT-ONLY source labels (#360)

This extends the label-only cleanup to the remaining internal-only command,
debug, and heading strings so the source tree no longer contains ANT-ONLY
markers. The pass still avoids logic changes and only renames labels shown
in internal or gated surfaces.

Constraint: Update the existing label-cleanup PR without widening scope into behavior changes
Rejected: Leave the last ANT-ONLY strings for a later pass | low-cost cleanup while the branch is already focused on labels
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: The next phase should move off label cleanup and onto a separately scoped logic or rebrand slice
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>

* Stub internal-only recording and model capability helpers (#377)

This follow-up Phase C-lite slice replaces purely internal helper modules
with stable external no-op surfaces and collapses internal elevated error
logging to a no-op. The change removes additional USER_TYPE-gated helper
behavior without touching product-facing runtime flows.

Constraint: Keep this PR limited to isolated helper modules that are already external no-ops in practice
Rejected: Pulling in broader speculation or logging sink changes | less isolated and easier to debate during review
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Continue Phase C with similarly isolated helpers before moving into mixed behavior files
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>

* Remove internal-only bundled skills and mock helpers (#376)

* Remove internal-only bundled skills and mock rate-limit behavior

This takes the next planned Phase C-lite slice by deleting bundled skills
that only ever registered for internal users and replacing the internal
mock rate-limit helper with a stable no-op external stub. The external
build keeps the same behavior while removing a concentrated block of
USER_TYPE-gated dead code.

Constraint: Limit this PR to isolated internal-only helpers and avoid bridge, oauth, or rebrand behavior
Rejected: Broad USER_TYPE cleanup across mixed runtime surfaces | too risky for the next medium-sized PR
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: The next cleanup pass should continue with similarly isolated USER_TYPE helpers before touching main.tsx or protocol-heavy code
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

* Align internal-only helper removal with remaining user guidance

This follow-up fixes the mock billing stub to be a true no-op and removes
stale user-facing references to /verify and /skillify from the same PR.
It also leaves a clearer paper trail for review: the deleted verify skill
was explicitly ant-gated before removal, and the remaining mock helper
callers still resolve to safe no-op returns in the external build.

Constraint: Keep the PR focused on consistency fixes and reviewer-requested evidence, not new cleanup scope
Rejected: Leave stale guidance for a later PR | would make this branch internally inconsistent after skill removal
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When deleting gated features, always sweep user guidance and coordinator prompts in the same pass
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy; changed-file scan still shows only pre-existing tipRegistry errors outside edited lines)

* Clarify generic workflow wording after skill removal

This removes the last generic verification-skill wording that could still
be read as pointing at a deleted bundled command. The guidance now talks
about project workflows rather than a specific bundled verify skill.

Constraint: Keep the follow-up limited to reviewer-facing wording cleanup on the same PR
Rejected: Leave generic wording as-is | still too easy to misread after the explicit /verify references were removed
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When removing bundled commands, scrub both explicit and generic references in the same branch
Tested: bun run build
Tested: bun run smoke
Not-tested: Additional checks unchanged by wording-only follow-up

---------

Co-authored-by: anandh8x <test@example.com>

* test(api): add GEMINI_AUTH_MODE to environment setup in tests

* test: isolate GitHub/Gemini credential tests with fresh module imports and explicit non-bare env setup to prevent cross-test mock/cache leaks (a sketch of the fresh-import pattern follows this commit list)

* fix: update GitHub Copilot base URL and model defaults for improved compatibility

* fix: enhance error handling in OpenAI API response processing

* fix: improve error handling for GitHub Copilot API responses and streamline error body consumption

* fix: enhance response handling in OpenAI API shim for better error reporting and support for streaming responses

* feat: enhance GitHub device flow with fresh module import and token validation improvements

* fix: separate Copilot API routing from GitHub Models, clear stale env vars, honor providerOverride.apiKey

* fix: route GitHub GPT-5/Codex to Copilot API, show all Copilot models in picker, clear stale env vars

* fix GitHub Models API regression

* feat: update GitHub authentication to require OAuth tokens, normalize model handling for Copilot and GitHub Models

* fix: update GitHub token validation to support OAuth tokens and improve endpoint type handling
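
For the fresh-module-import isolation mentioned in the test bullets above, one common ESM pattern (a sketch under assumptions, not the repo's actual helper) is to bust the module cache with a unique query string so each test evaluates the module against its own env setup:

// hypothetical test helper: import a module fresh so cached env reads and
// mocks from earlier tests cannot leak into this one
async function importFresh<T>(specifier: string): Promise<T> {
  // A distinct URL (here via a unique query string) yields a new ESM module
  // record instead of the cached one.
  return (await import(`${specifier}?fresh=${Date.now()}-${Math.random()}`)) as T
}

// usage sketch:
// process.env.CLAUDE_CODE_USE_GITHUB = '1'
// const providerConfig = await importFresh<
//   typeof import('../src/services/api/providerConfig.js')
// >('../src/services/api/providerConfig.js')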

---------

Co-authored-by: Anandan <anandan.8x@gmail.com>
Co-authored-by: anandh8x <test@example.com>
2026-04-08 16:03:31 +08:00

548 lines
16 KiB
TypeScript

// @ts-nocheck
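// system-check: runtime doctor for the CLI. Verifies the Node/Bun runtime,
// build artifacts, provider environment variables, and provider reachability,
// printing PASS/FAIL lines and optionally emitting a JSON report (--json to
// stdout, --out <file> to disk). Exits non-zero when any check fails.
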
import { existsSync, mkdirSync, writeFileSync } from 'node:fs'
import { dirname, join, resolve } from 'node:path'
import { spawnSync } from 'node:child_process'
import {
  resolveCodexApiCredentials,
  resolveProviderRequest,
  isLocalProviderUrl as isProviderLocalUrl,
} from '../src/services/api/providerConfig.js'

type CheckResult = {
  ok: boolean
  label: string
  detail?: string
}

type CliOptions = {
  json: boolean
  outFile: string | null
}

function pass(label: string, detail?: string): CheckResult {
  return { ok: true, label, detail }
}

function fail(label: string, detail?: string): CheckResult {
  return { ok: false, label, detail }
}

function isTruthy(value: string | undefined): boolean {
  if (!value) return false
  const normalized = value.trim().toLowerCase()
  return normalized !== '' && normalized !== '0' && normalized !== 'false' && normalized !== 'no'
}

function parseOptions(argv: string[]): CliOptions {
  const options: CliOptions = {
    json: false,
    outFile: null,
  }
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i]
    if (arg === '--json') {
      options.json = true
      continue
    }
    if (arg === '--out') {
      const next = argv[i + 1]
      if (next && !next.startsWith('--')) {
        options.outFile = next
        i++
      }
    }
  }
  return options
}
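
// Builds the human-readable detail string for a failed reachability probe;
// appends a model-entitlement hint when a Codex `responses` request fails
// with a 400 that mentions the ChatGPT account.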
export function formatReachabilityFailureDetail(
  endpoint: string,
  status: number,
  responseBody: string,
  request: {
    transport: string
    requestedModel: string
    resolvedModel: string
  },
): string {
  const compactBody = responseBody.trim().replace(/\s+/g, ' ').slice(0, 240)
  const base = `Unexpected status ${status} from ${endpoint}.`
  const bodySuffix = compactBody ? ` Body: ${compactBody}` : ''
  if (request.transport !== 'codex_responses' || status !== 400) {
    return `${base}${bodySuffix}`
  }
  if (!/not supported.*chatgpt account/i.test(responseBody)) {
    return `${base}${bodySuffix}`
  }
  return `${base}${bodySuffix} Hint: model alias "${request.requestedModel}" resolved to "${request.resolvedModel}", which this ChatGPT account does not currently allow. Try "codexplan" or another entitled Codex model.`
}

function checkNodeVersion(): CheckResult {
  const raw = process.versions.node
  const major = Number(raw.split('.')[0] ?? '0')
  if (Number.isNaN(major)) {
    return fail('Node.js version', `Could not parse version: ${raw}`)
  }
  if (major < 20) {
    return fail('Node.js version', `Detected ${raw}. Require >= 20.`)
  }
  return pass('Node.js version', raw)
}

function checkBunRuntime(): CheckResult {
  const bunVersion = (globalThis as { Bun?: { version?: string } }).Bun?.version
  if (!bunVersion) {
    return pass('Bun runtime', 'Not running inside Bun (this is acceptable for Node startup).')
  }
  return pass('Bun runtime', bunVersion)
}

function checkBuildArtifacts(): CheckResult {
  const distCli = resolve(process.cwd(), 'dist', 'cli.mjs')
  if (!existsSync(distCli)) {
    return fail('Build artifacts', `Missing ${distCli}. Run: bun run build`)
  }
  return pass('Build artifacts', distCli)
}

function isLocalBaseUrl(baseUrl: string): boolean {
  return isProviderLocalUrl(baseUrl)
}
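
// Defaults for the OpenAI-compatible provider modes. currentBaseUrl mirrors
// the env-flag precedence used throughout this script: Gemini first, then
// GitHub, then the generic OpenAI endpoint.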
const GEMINI_DEFAULT_BASE_URL = 'https://generativelanguage.googleapis.com/v1beta/openai'
const GITHUB_COPILOT_BASE = 'https://api.githubcopilot.com'

function currentBaseUrl(): string {
  if (isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
    return process.env.GEMINI_BASE_URL ?? GEMINI_DEFAULT_BASE_URL
  }
  if (isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
    return process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE
  }
  return process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1'
}

function checkGeminiEnv(): CheckResult[] {
  const results: CheckResult[] = []
  const model = process.env.GEMINI_MODEL
  const key = process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY
  const baseUrl = process.env.GEMINI_BASE_URL ?? GEMINI_DEFAULT_BASE_URL
  results.push(pass('Provider mode', 'Google Gemini provider enabled.'))
  if (!model) {
    results.push(pass('GEMINI_MODEL', 'Not set. Default gemini-2.0-flash will be used.'))
  } else {
    results.push(pass('GEMINI_MODEL', model))
  }
  results.push(pass('GEMINI_BASE_URL', baseUrl))
  if (!key) {
    results.push(fail('GEMINI_API_KEY', 'Missing. Set GEMINI_API_KEY or GOOGLE_API_KEY.'))
  } else {
    results.push(pass('GEMINI_API_KEY', 'Configured.'))
  }
  return results
}

function checkGithubEnv(): CheckResult[] {
  const results: CheckResult[] = []
  const baseUrl = process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE
  results.push(pass('Provider mode', 'GitHub Models provider enabled.'))
  const token = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN
  if (!token?.trim()) {
    results.push(fail('GITHUB_TOKEN', 'Missing. Set GITHUB_TOKEN or GH_TOKEN.'))
  } else {
    results.push(pass('GITHUB_TOKEN', 'Configured.'))
  }
  if (!process.env.OPENAI_MODEL) {
    results.push(
      pass(
        'OPENAI_MODEL',
        'Not set. Default github:copilot → openai/gpt-4.1 at runtime.',
      ),
    )
  } else {
    results.push(pass('OPENAI_MODEL', process.env.OPENAI_MODEL))
  }
  results.push(pass('OPENAI_BASE_URL', baseUrl))
  return results
}
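
// Dispatches to provider-specific env checks based on the CLAUDE_CODE_USE_*
// flags. Gemini takes precedence; GitHub applies only when OpenAI mode is
// off; otherwise the OpenAI-compatible checks run, including Codex
// `responses` credential validation.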
function checkOpenAIEnv(): CheckResult[] {
  const results: CheckResult[] = []
  const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
  const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
  const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
  if (useGemini) {
    return checkGeminiEnv()
  }
  if (useGithub && !useOpenAI) {
    return checkGithubEnv()
  }
  if (!useOpenAI) {
    results.push(pass('Provider mode', 'Anthropic login flow enabled (CLAUDE_CODE_USE_OPENAI is off).'))
    return results
  }
  const request = resolveProviderRequest({
    model: process.env.OPENAI_MODEL,
    baseUrl: process.env.OPENAI_BASE_URL,
  })
  results.push(
    pass(
      'Provider mode',
      request.transport === 'codex_responses'
        ? 'Codex responses backend enabled.'
        : 'OpenAI-compatible provider enabled.',
    ),
  )
  if (!process.env.OPENAI_MODEL) {
    results.push(pass('OPENAI_MODEL', 'Not set. Runtime fallback model will be used.'))
  } else {
    results.push(pass('OPENAI_MODEL', process.env.OPENAI_MODEL))
  }
  results.push(pass('OPENAI_BASE_URL', request.baseUrl))
  if (request.transport === 'codex_responses') {
    const credentials = resolveCodexApiCredentials(process.env)
    if (!credentials.apiKey) {
      const authHint = credentials.authPath
        ? `Missing CODEX_API_KEY and no usable auth.json at ${credentials.authPath}.`
        : 'Missing CODEX_API_KEY and auth.json fallback.'
      results.push(fail('CODEX auth', authHint))
    } else if (!credentials.accountId) {
      results.push(fail('CHATGPT_ACCOUNT_ID', 'Missing chatgpt_account_id in Codex auth.'))
    } else {
      const detail = credentials.source === 'env'
        ? 'Using CODEX_API_KEY.'
        : `Using ${credentials.authPath}.`
      results.push(pass('CODEX auth', detail))
    }
    return results
  }
  const key = process.env.OPENAI_API_KEY
  const githubToken = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN
  if (key === 'SUA_CHAVE') {
    results.push(fail('OPENAI_API_KEY', 'Placeholder value detected: SUA_CHAVE.'))
  } else if (
    !key &&
    !isLocalBaseUrl(request.baseUrl) &&
    !(useGithub && githubToken?.trim())
  ) {
    results.push(fail('OPENAI_API_KEY', 'Missing key for non-local provider URL.'))
  } else if (!key && useGithub && githubToken?.trim()) {
    results.push(
      pass('OPENAI_API_KEY', 'Not set; GITHUB_TOKEN/GH_TOKEN will be used for GitHub Models.'),
    )
  } else if (!key) {
    results.push(pass('OPENAI_API_KEY', 'Not set (allowed for local providers like Atomic Chat/Ollama/LM Studio).'))
  } else {
    results.push(pass('OPENAI_API_KEY', 'Configured.'))
  }
  return results
}
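
// Probes the resolved provider endpoint with a 4-second timeout: GET /models
// for OpenAI-compatible backends, or a minimal streaming POST /responses for
// the Codex transport. 200, 401, and 403 all count as reachable, since an
// auth error still proves the endpoint is up.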
async function checkBaseUrlReachability(): Promise<CheckResult> {
  const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
  const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
  const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
  if (!useGemini && !useOpenAI && !useGithub) {
    return pass('Provider reachability', 'Skipped (OpenAI-compatible mode disabled).')
  }
  if (useGithub) {
    return pass(
      'Provider reachability',
      'Skipped for GitHub Models (inference endpoint differs from OpenAI /models probe).',
    )
  }
  const resolvedBaseUrl = useGemini
    ? (process.env.GEMINI_BASE_URL ?? GEMINI_DEFAULT_BASE_URL)
    : undefined
  const request = resolveProviderRequest({
    model: process.env.OPENAI_MODEL,
    baseUrl: resolvedBaseUrl ?? process.env.OPENAI_BASE_URL,
  })
  const endpoint = request.transport === 'codex_responses'
    ? `${request.baseUrl}/responses`
    : `${request.baseUrl}/models`
  const controller = new AbortController()
  const timeout = setTimeout(() => controller.abort(), 4000)
  try {
    const headers: Record<string, string> = {}
    let method = 'GET'
    let body: string | undefined
    if (request.transport === 'codex_responses') {
      const credentials = resolveCodexApiCredentials(process.env)
      if (credentials.apiKey) {
        headers.Authorization = `Bearer ${credentials.apiKey}`
      }
      if (credentials.accountId) {
        headers['chatgpt-account-id'] = credentials.accountId
      }
      headers['Content-Type'] = 'application/json'
      headers.originator = 'openclaude'
      method = 'POST'
      body = JSON.stringify({
        model: request.resolvedModel,
        instructions: 'Runtime doctor probe.',
        input: [
          {
            type: 'message',
            role: 'user',
            content: [{ type: 'input_text', text: 'ping' }],
          },
        ],
        store: false,
        stream: true,
      })
    } else if (useGemini && (process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY)) {
      headers.Authorization = `Bearer ${process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY}`
    } else if (process.env.OPENAI_API_KEY) {
      headers.Authorization = `Bearer ${process.env.OPENAI_API_KEY}`
    }
    const response = await fetch(endpoint, {
      method,
      headers,
      body,
      signal: controller.signal,
    })
    if (response.status === 200 || response.status === 401 || response.status === 403) {
      return pass('Provider reachability', `Reached ${endpoint} (status ${response.status}).`)
    }
    const responseBody = await response.text().catch(() => '')
    const detail = formatReachabilityFailureDetail(endpoint, response.status, responseBody, request)
    return fail('Provider reachability', detail)
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error)
    return fail('Provider reachability', `Failed to reach ${endpoint}: ${message}`)
  } finally {
    clearTimeout(timeout)
  }
}
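
// Heuristic: a local provider on port 1337 is treated as Atomic Chat so the
// Ollama-specific processor check can skip it.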
function isAtomicChatUrl(baseUrl: string): boolean {
  try {
    const parsed = new URL(baseUrl)
    return parsed.port === '1337' && isLocalBaseUrl(baseUrl)
  } catch {
    return false
  }
}

function checkOllamaProcessorMode(): CheckResult {
  if (
    !isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
    isTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
    isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
  ) {
    return pass('Ollama processor mode', 'Skipped (OpenAI-compatible mode disabled).')
  }
  const baseUrl = currentBaseUrl()
  if (!isLocalBaseUrl(baseUrl)) {
    return pass('Ollama processor mode', 'Skipped (provider URL is not local).')
  }
  if (isAtomicChatUrl(baseUrl)) {
    return pass('Ollama processor mode', 'Skipped (Atomic Chat local provider detected, not Ollama).')
  }
  const result = spawnSync('ollama', ['ps'], {
    cwd: process.cwd(),
    encoding: 'utf8',
    shell: true,
  })
  if (result.status !== 0) {
    const detail = (result.stderr || result.stdout || 'Unable to run ollama ps').trim()
    return pass('Ollama processor mode', `Native CLI check failed (${detail}). Assuming valid Docker/remote backend since HTTP ping passed.`)
  }
  const output = (result.stdout || '').trim()
  if (!output) {
    return fail('Ollama processor mode', 'ollama ps returned empty output.')
  }
  const lines = output.split(/\r?\n/).map(line => line.trim()).filter(Boolean)
  const modelLine = lines.find(line => line.includes(':') && !line.startsWith('NAME'))
  if (!modelLine) {
    return pass('Ollama processor mode', 'No loaded model found (run a prompt first).')
  }
  if (modelLine.includes('CPU')) {
    return pass('Ollama processor mode', 'Detected CPU mode. This is valid but can be slow for larger models.')
  }
  return pass('Ollama processor mode', `Detected non-CPU mode: ${modelLine}`)
}
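
// Redaction-safe snapshot of provider configuration for the JSON report:
// secrets appear only as *_SET booleans, never as values.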
function serializeSafeEnvSummary(): Record<string, string | boolean> {
  if (isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
    return {
      CLAUDE_CODE_USE_GEMINI: true,
      GEMINI_MODEL: process.env.GEMINI_MODEL ?? '(unset, default: gemini-2.0-flash)',
      GEMINI_BASE_URL: process.env.GEMINI_BASE_URL ?? GEMINI_DEFAULT_BASE_URL,
      GEMINI_API_KEY_SET: Boolean(process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY),
    }
  }
  if (
    isTruthy(process.env.CLAUDE_CODE_USE_GITHUB) &&
    !isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
  ) {
    return {
      CLAUDE_CODE_USE_GITHUB: true,
      OPENAI_MODEL:
        process.env.OPENAI_MODEL ?? '(unset, default: github:copilot → openai/gpt-4.1)',
      OPENAI_BASE_URL: process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE,
      GITHUB_TOKEN_SET: Boolean(process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN),
    }
  }
  const request = resolveProviderRequest({
    model: process.env.OPENAI_MODEL,
    baseUrl: process.env.OPENAI_BASE_URL,
  })
  return {
    CLAUDE_CODE_USE_OPENAI: isTruthy(process.env.CLAUDE_CODE_USE_OPENAI),
    OPENAI_MODEL: process.env.OPENAI_MODEL ?? '(unset)',
    OPENAI_BASE_URL: request.baseUrl,
    OPENAI_API_KEY_SET: Boolean(process.env.OPENAI_API_KEY),
    CODEX_API_KEY_SET: Boolean(resolveCodexApiCredentials(process.env).apiKey),
  }
}

function printResults(results: CheckResult[]): void {
  for (const result of results) {
    const icon = result.ok ? 'PASS' : 'FAIL'
    const suffix = result.detail ? ` - ${result.detail}` : ''
    console.log(`[${icon}] ${result.label}${suffix}`)
  }
}

function writeJsonReport(options: CliOptions, results: CheckResult[]): void {
  const envSummary = serializeSafeEnvSummary()
  const payload = {
    timestamp: new Date().toISOString(),
    cwd: process.cwd(),
    summary: {
      total: results.length,
      passed: results.filter(result => result.ok).length,
      failed: results.filter(result => !result.ok).length,
    },
    env: envSummary,
    results,
  }
  if (options.json) {
    console.log(
      JSON.stringify(
        {
          timestamp: payload.timestamp,
          cwd: payload.cwd,
          summary: payload.summary,
          env: '[redacted in console JSON output; use --out <file> for the full report]',
          results: payload.results,
        },
        null,
        2,
      ),
    )
  }
  if (options.outFile) {
    const outputPath = resolve(process.cwd(), options.outFile)
    mkdirSync(dirname(outputPath), { recursive: true })
    writeFileSync(outputPath, JSON.stringify(payload, null, 2), 'utf8')
    if (!options.json) {
      console.log(`Report written to ${outputPath}`)
    }
  }
}
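
// Lazily imports config/env hydration modules, runs every check in order,
// and sets a non-zero exit code when any check fails.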
async function main(): Promise<void> {
  const options = parseOptions(process.argv.slice(2))
  const results: CheckResult[] = []
  const { enableConfigs } = await import('../src/utils/config.js')
  enableConfigs()
  const { applySafeConfigEnvironmentVariables } = await import('../src/utils/managedEnv.js')
  applySafeConfigEnvironmentVariables()
  const { hydrateGithubModelsTokenFromSecureStorage } = await import('../src/utils/githubModelsCredentials.js')
  hydrateGithubModelsTokenFromSecureStorage()
  results.push(checkNodeVersion())
  results.push(checkBunRuntime())
  results.push(checkBuildArtifacts())
  results.push(...checkOpenAIEnv())
  results.push(await checkBaseUrlReachability())
  results.push(checkOllamaProcessorMode())
  if (!options.json) {
    printResults(results)
  }
  writeJsonReport(options, results)
  const hasFailure = results.some(result => !result.ok)
  if (hasFailure) {
    process.exitCode = 1
    return
  }
  if (!options.json) {
    console.log('\nRuntime checks completed successfully.')
  }
}

if (import.meta.main) {
  await main()
}

export {}