* feat: add xAI as official provider
- Add xAI preset to ProviderManager (alphabetical order)
- Add xAI provider detection via XAI_API_KEY
- Add xAI startup screen heuristic (x.ai base URL or grok model)
- Add xAI status display properties
- Add grok-4 and grok-3 context windows
- Add xAI model fallbacks across all tiers
- Fix JSDoc priority order in providerAutoDetect
Co-Authored-By: Claude Opus 4.6 <noreply@openclaude.dev>
* fix(xai): persist relaunch classification for xAI profiles
Addresses reviewer feedback on feat/xai-official-provider:
- isProcessEnvAlignedWithProfile now validates XAI_API_KEY for x.ai
base URLs, mirroring the Bankr pattern. Without this, relaunch
skips re-applying the profile, XAI_API_KEY stays unset, and
getAPIProvider() falls back to 'openai'.
- buildOpenAICompatibleStartupEnv now sets XAI_API_KEY when syncing
active xAI profile to the legacy fallback file.
- Adds 'xai' to VALID_PROVIDERS and --provider xai CLI flag support.
- Adds xAI detection to providerDiscovery label heuristics.
- Adds 'xai' to legacy ProviderProfile type/isProviderProfile guard.
- Adds targeted tests for relaunch alignment, flag application, and
discovery labeling.
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@openclaude.dev>
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
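As a rough sketch of the relaunch-alignment check described in the
fix(xai) commit above (the isProcessEnvAlignedWithProfile name matches
the commit, but the profile shape and host handling here are assumptions):

    // Assumed profile shape; only the fields needed for the check.
    interface ProviderProfile {
      provider: string;
      baseUrl?: string;
    }

    function isProcessEnvAlignedWithProfile(
      profile: ProviderProfile,
      env: NodeJS.ProcessEnv = process.env,
    ): boolean {
      const host = hostOf(profile.baseUrl ?? '');
      // x.ai base URLs only count as aligned when XAI_API_KEY is set;
      // otherwise relaunch must re-apply the profile so getAPIProvider()
      // does not fall back to 'openai'.
      if (host === 'x.ai' || host.endsWith('.x.ai')) {
        return Boolean(env.XAI_API_KEY);
      }
      // Mirror of the Bankr pattern mentioned in the commit.
      if (host.endsWith('llm.bankr.bot')) {
        return Boolean(env.BNKR_API_KEY);
      }
      // Generic OpenAI-compatible profiles just need the usual key.
      return Boolean(env.OPENAI_API_KEY);
    }

    function hostOf(url: string): string {
      try {
        return new URL(url).hostname;
      } catch {
        return '';
      }
    }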
The startup screen was only reading model from env vars and settings,
ignoring the --model CLI flag since it's parsed by Commander.js after
the banner prints. Now eagerly parses --model from argv before rendering
so the displayed model matches what the session will actually use.
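A minimal sketch of that eager argv scan (helper name is hypothetical;
the real flag handling still goes through Commander.js afterwards):

    // Read --model before Commander.js parses anything, so the banner
    // shows the same model the session will use. Supports both
    // "--model foo" and "--model=foo".
    function peekModelFlag(argv: string[] = process.argv.slice(2)): string | undefined {
      for (let i = 0; i < argv.length; i++) {
        const arg = argv[i];
        if (arg === '--model') return argv[i + 1];
        if (arg.startsWith('--model=')) return arg.slice('--model='.length);
      }
      return undefined;
    }

    // Banner-side usage: the flag wins over env/settings, mirroring the session.
    const bannerModel = peekModelFlag() ?? process.env.OPENAI_MODEL ?? 'default-model';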
* feat(zai): add Z.AI GLM Coding Plan provider preset
Add dedicated Z.AI provider support for the GLM Coding Plan, enabling
use of GLM-5.1, GLM-5-Turbo, GLM-4.7, and GLM-4.5-Air models through
the OpenAI-compatible shim with proper thinking mode (reasoning_content),
max_tokens handling, and context window sizing.
* fix(zai): unify GLM max output token limits across casing variants
glm-5/glm-4.7 had a conservative 16K max-output limit while GLM-5/GLM-4.7
had 131K. Use consistent Z.AI coding plan limits for all GLM variants.
* fix(zai): restore DashScope GLM limits, enable GLM thinking support
- Restore lowercase glm-5/glm-4.7 to 16_384 max output (DashScope limits)
while keeping Z.AI coding plan high limits on uppercase GLM-* keys only
- Add GLM model support to modelSupportsThinking() so reasoning_content
is enabled when using GLM-5.x/GLM-4.7 models on Z.AI
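A minimal sketch of the casing-sensitive limit split described in the
first bullet above (table shape and key set are assumptions; 16_384 is
from this commit, and the earlier "131K" figure is read as 131_072):

    // DashScope keeps conservative limits on the lowercase keys; the
    // uppercase GLM-* keys carry the higher Z.AI coding plan limits.
    const MAX_OUTPUT_TOKENS: Record<string, number> = {
      // DashScope aliases (lowercase)
      'glm-5': 16_384,
      'glm-4.7': 16_384,
      // Z.AI coding plan models (uppercase)
      'GLM-5': 131_072,
      'GLM-4.7': 131_072,
    };

    function maxOutputTokensFor(model: string, fallback = 8_192): number {
      // Casing is significant here on purpose: do not lowercase the key.
      return MAX_OUTPUT_TOKENS[model] ?? fallback;
    }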
* fix(zai): tighten GLM regexes, fix misleading context window comment
- Use precise regex in thinking.ts: exact GLM model matches only,
no false positives on glm-50/glm-4, includes glm-4.5-air
- Use uppercase-only match in StartupScreen rawModel fallback so
DashScope lowercase glm-* models aren't mislabeled as Z.AI
- Clarify context window comment: lowercase glm-5.1/glm-5-turbo/
glm-4.5-air are Z.AI-specific aliases, not DashScope
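A sketch of the kind of anchored matcher the thinking.ts change describes
(the exact pattern is an assumption; the intent is exact GLM matches with
no false positives on glm-50 or glm-4, and glm-4.5-air included):

    // Anchored version segments keep glm-50 and glm-4 from matching.
    const GLM_THINKING_MODEL = /^glm-(5(\.\d+)?(-turbo)?|4\.7|4\.5-air)$/i;

    function modelSupportsGlmThinking(model: string): boolean {
      return GLM_THINKING_MODEL.test(model.trim());
    }

    // Quick checks in the spirit of the commit:
    // true:  GLM-5, glm-5.1, GLM-5-Turbo, GLM-4.7, glm-4.5-air
    // false: glm-50, glm-4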
* fix(zai): scope GLM detection to Z.AI
* improve readability of max_completion_tokens check
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
* feat(provider): add Bankr LLM Gateway support
Add Bankr as an OpenAI-compatible provider preset with dedicated env vars:
- BNKR_API_KEY, BANKR_BASE_URL, BANKR_MODEL
- Uses X-API-Key header instead of Authorization Bearer
- Base URL: https://llm.bankr.bot/v1
- Default model: claude-opus-4.6
Changes:
- Add 'bankr' to VALID_PROVIDERS and provider flag handling
- Add buildBankrProfileEnv() with env key registration
- Add Bankr detection in startup screen and provider discovery
- Map Bankr env vars to OpenAI-compatible vars in shim
- Add Bankr preset to ProviderManager (alphabetical order)
- Update PRESET_ORDER test to include Bankr
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
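A minimal sketch of a Bankr preset entry (env var names, base URL,
default model, and the X-API-Key header come from the commit; the
ProviderPreset shape is an assumption):

    interface ProviderPreset {
      id: string;
      label: string;
      baseUrl: string;
      defaultModel: string;
      apiKeyEnv: string;
      buildHeaders: (apiKey: string) => Record<string, string>;
    }

    const bankrPreset: ProviderPreset = {
      id: 'bankr',
      label: 'Bankr LLM Gateway',
      baseUrl: process.env.BANKR_BASE_URL ?? 'https://llm.bankr.bot/v1',
      defaultModel: process.env.BANKR_MODEL ?? 'claude-opus-4.6',
      apiKeyEnv: 'BNKR_API_KEY',
      // Bankr authenticates with X-API-Key instead of Authorization: Bearer.
      buildHeaders: (apiKey) => ({ 'X-API-Key': apiKey }),
    };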
* fixup(provider): address Bankr PR review feedback
1. Map BNKR_API_KEY → OPENAI_API_KEY in providerFlag.ts so
--provider bankr works with BNKR_API_KEY in non-interactive startup.
2. Remove unconditional BANKR_MODEL read from model.ts; it maps to
OPENAI_MODEL via providerFlag.ts and openaiShim.ts, preventing
cross-provider leakage.
3. Use X-API-Key for Bankr model discovery in openaiModelDiscovery.ts
and providerDiscovery.ts, matching chat request auth.
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
---------
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
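A sketch of the --provider bankr env mapping from the fixup above
(function name and exact precedence are assumptions):

    // Copy Bankr-specific vars onto the generic OpenAI-compatible vars
    // so non-interactive startup works with --provider bankr.
    function applyBankrProviderFlag(env: NodeJS.ProcessEnv = process.env): void {
      if (env.BNKR_API_KEY && !env.OPENAI_API_KEY) {
        env.OPENAI_API_KEY = env.BNKR_API_KEY;
      }
      env.OPENAI_BASE_URL = env.BANKR_BASE_URL ?? 'https://llm.bankr.bot/v1';
      if (env.BANKR_MODEL) {
        // BANKR_MODEL flows through the shim as OPENAI_MODEL only when the
        // Bankr provider is selected, avoiding cross-provider leakage.
        env.OPENAI_MODEL = env.BANKR_MODEL;
      }
    }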
The banner provider branch tested model-name substrings (`/deepseek/`, `/kimi/`,
`/mistral/`, `/llama/`) before aggregator base-URL substrings (`/openrouter/`,
`/together/`, `/groq/`, `/azure/`). When running OpenRouter/Together/Groq with
vendor-prefixed model IDs (e.g. `deepseek/deepseek-chat`, `moonshotai/kimi-k2`,
`deepseek-r1-distill-llama-70b`), the banner mislabelled the provider.
Reorder: explicit env flags (NVIDIA_NIM, MINIMAX_API_KEY) and codex transport
win first; base-URL host checks run before rawModel fallback; rawModel only
fires when the base URL is generic/custom. Add unit tests covering the
aggregator × vendor-prefixed-model matrix plus direct-vendor regressions.
Closes #855
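A condensed sketch of the reordered precedence (helper names, the host
list, and the rawModel patterns are illustrative, not the project's exact
branches):

    function detectBannerProvider(env: NodeJS.ProcessEnv, rawModel: string): string {
      // 1. Explicit env flags / transports win first.
      if (env.NVIDIA_NIM) return 'NVIDIA NIM';
      if (env.MINIMAX_API_KEY) return 'MiniMax';

      // 2. Base-URL host checks (aggregators before vendor-name guessing).
      const host = hostOf(env.OPENAI_BASE_URL ?? '');
      if (host.includes('openrouter')) return 'OpenRouter';
      if (host.includes('together')) return 'Together';
      if (host.includes('groq')) return 'Groq';

      // 3. rawModel fallback only fires for generic/custom base URLs, so
      //    "deepseek/deepseek-chat" on OpenRouter is not mislabelled DeepSeek.
      if (/deepseek/i.test(rawModel)) return 'DeepSeek';
      if (/kimi/i.test(rawModel)) return 'Moonshot (Kimi)';
      return 'OpenAI';
    }

    function hostOf(url: string): string {
      try { return new URL(url).hostname; } catch { return ''; }
    }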
* feat(provider): first-class Moonshot (Kimi) direct-API support
Moonshot's direct API (api.moonshot.ai/v1) is OpenAI-compatible and works
today via the generic OpenAI shim, including the reasoning_content channel
that Kimi returns alongside the user-visible content. But the UX was rough:
unknown context window triggered the conservative 128k fallback + a warning,
and the provider displayed as "Local OpenAI-compatible".
Makes Moonshot a recognized provider:
- src/utils/model/openaiContextWindows.ts: add the Kimi K2 family and
moonshot-v1-* variants to both the context-window and max-output tables.
Values from Moonshot's model card — K2.6 and K2-thinking are 256K,
K2/K2-instruct are 128K, moonshot-v1 sizes are embedded in the model id.
- src/utils/providerDiscovery.ts: recognize the api.moonshot.ai hostname
and label it "Moonshot (Kimi)" in the startup banner and provider UI.
Users can now launch with:
CLAUDE_CODE_USE_OPENAI=1 \
OPENAI_BASE_URL=https://api.moonshot.ai/v1 \
OPENAI_API_KEY=sk-... \
OPENAI_MODEL=kimi-k2.6 \
openclaude
and get accurate compaction + correct labeling + correct max_tokens out
of the box.
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
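A sketch of the context-window additions (the 256K/128K figures come from
the commit; reading them as 262_144/131_072 tokens and the exact model-id
keys are assumptions):

    const MOONSHOT_CONTEXT_WINDOWS: Record<string, number> = {
      'kimi-k2.6': 262_144,
      'kimi-k2-thinking': 262_144,
      'kimi-k2': 131_072,
      'kimi-k2-instruct': 131_072,
    };

    // moonshot-v1 sizes are embedded in the model id (e.g. moonshot-v1-128k).
    function moonshotV1ContextWindow(model: string): number | undefined {
      const m = /^moonshot-v1-(\d+)k$/.exec(model);
      return m ? Number(m[1]) * 1024 : undefined;
    }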
* fix(openai-shim): Moonshot API compatibility — max_tokens + strip store
Moonshot's direct API (api.moonshot.ai and api.moonshot.cn) uses the
classic OpenAI `max_tokens` parameter, not the newer `max_completion_tokens`
that the shim defaults to. It also hasn't published support for `store`
and may reject it on strict-parse — same class of error as Gemini's
"Unknown name 'store': Cannot find field" 400.
- Adds isMoonshotBaseUrl() that recognizes both .ai and .cn hosts.
- Converts max_completion_tokens → max_tokens for Moonshot requests
(alongside GitHub / Mistral / local providers).
- Strips body.store for Moonshot requests (alongside Mistral / Gemini).
Two shim tests cover both the .ai and .cn hostnames.
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
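A sketch of the Moonshot request adjustments (isMoonshotBaseUrl matches
the commit's helper name; the body-mutation shape is an assumption):

    function isMoonshotBaseUrl(baseUrl: string): boolean {
      try {
        const host = new URL(baseUrl).hostname;
        return host.endsWith('api.moonshot.ai') || host.endsWith('api.moonshot.cn');
      } catch {
        return false;
      }
    }

    function adaptBodyForMoonshot(body: Record<string, unknown>, baseUrl: string): void {
      if (!isMoonshotBaseUrl(baseUrl)) return;
      // Moonshot expects the classic max_tokens parameter.
      if ('max_completion_tokens' in body) {
        body.max_tokens = body.max_completion_tokens;
        delete body.max_completion_tokens;
      }
      // Strip `store`, which Moonshot may reject on strict parsing.
      delete body.store;
    }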
* fix: null-safe access on getCachedMCConfig() in external builds
External builds stub src/services/compact/cachedMicrocompact.ts so
getCachedMCConfig() returns null, but two call sites still dereferenced
config.supportedModels directly. The ?. operator was in the wrong place
(config.supportedModels? instead of config?.supportedModels), so the null
config threw "Cannot read properties of null (reading 'supportedModels')"
on every request.
Reproduces with any external-build provider (notably Kimi/Moonshot just
enabled in the sibling commits, but equally DeepSeek, Mistral, Groq,
Ollama, etc.):
❯ hey
⏺ Cannot read properties of null (reading 'supportedModels')
- prompts.ts: early-return from getFunctionResultClearingSection() when
config is null, before touching .supportedModels.
- claude.ts: guard the debug-log jsonStringify with ?. so the log line
never throws.
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
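A compact sketch of the optional-chaining fix (types and the model string
are illustrative):

    interface MCConfig { supportedModels?: string[] }

    // External builds stub this out and return null.
    function getCachedMCConfig(): MCConfig | null {
      return null;
    }

    const config = getCachedMCConfig();

    // Broken: `config.supportedModels?.includes(x)` still dereferences a null config.
    // Fixed: guard the config object itself with `?.` (or an early return).
    const supportsModel = config?.supportedModels?.includes('some-model') ?? false;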
* fix(startup): show "Moonshot (Kimi)" on the startup banner
The startup-screen provider detector had regex branches for OpenRouter,
DeepSeek, Groq, Together, Azure, etc., but nothing for Moonshot. Remote
Moonshot sessions fell through to the generic "OpenAI" label —
getLocalOpenAICompatibleProviderLabel() only runs for local URLs, and
api.moonshot.ai / api.moonshot.cn are not local.
Adds a Moonshot branch matching /moonshot/ in the base URL OR /kimi/ in
the model id. Launching with:
OPENAI_BASE_URL=https://api.moonshot.ai/v1 OPENAI_MODEL=kimi-k2.6
now displays the Provider row as "Moonshot (Kimi)" instead of "OpenAI".
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
* refactor(provider): sort preset picker alphabetically; Custom at end
The /provider preset picker was in ad-hoc order (Anthropic, Ollama,
OpenAI, then a jumble of third-party / local / codex / Alibaba / custom /
nvidia / minimax). Hard to scan when you know the provider name you want.
Sorts the list alphabetically by label A→Z. Pins "Custom" to the end —
it's the catch-all / escape hatch so it's scanned last, not shuffled into
the alphabetical run where a user looking for a named provider might
grab it by mistake. First-run-only "Skip for now" stays at the very
bottom, after Custom.
Test churn:
- ProviderManager.test.tsx: four tests hardcoded press counts (1 or 3 'j'
presses) that broke when targets moved. Replaces them with a
navigateToPreset(stdin, label) helper driven from a declared
PRESET_ORDER array, so future list edits only update the array.
- ConsoleOAuthFlow.test.tsx: the 13-row test frame only renders the first
~13 providers. "Ollama", "OpenAI", "LM Studio" sentinels moved below
the fold; swap them for alphabetically-early providers still visible
in-frame ("Azure OpenAI", "DeepSeek", "Google Gemini"). Test intent
(picker opened with providers listed) is preserved.
Co-Authored-By: OpenClaude <openclaude@gitlawb.com>
---------
Co-authored-by: OpenClaude <openclaude@gitlawb.com>
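A sketch of the ordering rule from the refactor above (preset shape is an
assumption; labels sort A→Z and Custom is appended last):

    interface PresetOption { id: string; label: string }

    function orderPresets(presets: PresetOption[]): PresetOption[] {
      const custom = presets.filter((p) => p.label === 'Custom');
      const named = presets
        .filter((p) => p.label !== 'Custom')
        .sort((a, b) => a.label.localeCompare(b.label));
      // "Skip for now" (first-run only) is rendered after Custom in the real picker.
      return [...named, ...custom];
    }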
Previously, the startup intro screen always displayed
'https://api.anthropic.com' as the endpoint for the Anthropic provider,
even when a custom endpoint was configured via ANTHROPIC_BASE_URL.
This fix reads ANTHROPIC_BASE_URL from environment and displays the
actual configured endpoint, providing accurate information to users
about where their API requests will be sent (proxy gateways, staging,
custom Anthropic-compatible APIs).
Also adds isLocal detection for local endpoints to show appropriate
visual indicator in the startup banner.
Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
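A sketch of the endpoint resolution and isLocal check (helper name and the
exact local-host heuristics are assumptions):

    function anthropicEndpointForBanner(env: NodeJS.ProcessEnv = process.env): {
      endpoint: string;
      isLocal: boolean;
    } {
      // Prefer the configured base URL; fall back to the public endpoint.
      const endpoint = env.ANTHROPIC_BASE_URL?.trim() || 'https://api.anthropic.com';
      let isLocal = false;
      try {
        const host = new URL(endpoint).hostname;
        isLocal = host === 'localhost' || host === '127.0.0.1' || host === '::1' ||
          host.endsWith('.local');
      } catch {
        // keep isLocal = false for unparseable URLs
      }
      return { endpoint, isLocal };
    }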
* feat: add NVIDIA NIM and MiniMax provider support
- Add nvidia-nim and minimax to --provider CLI flag
- Add model discovery for NVIDIA NIM (160+ models) and MiniMax
- Update /model picker to show provider-specific models
- Fix provider detection in startup banner
- Update .env.example with new provider options
Supported providers:
- NVIDIA NIM: https://integrate.api.nvidia.com/v1
- MiniMax: https://api.minimax.io/v1
* fix: resolve conflict in StartupScreen (keep NVIDIA/MiniMax + add Codex detection)
* fix: resolve providerProfile conflict (add imports from main, keep NVIDIA/MiniMax)
* fix: revert providerSecrets to match main (NVIDIA/MiniMax handled elsewhere)
* fix: add context window entries for NVIDIA NIM and new MiniMax models
* fix: use GLM-5 as NVIDIA NIM default and MiniMax-M2.5 for consistency
* fix: address remaining review items - add GLM/Kimi context entries, max output tokens, fix .env.example, revert to Nemotron default
* fix: filter NVIDIA NIM picker to chat/instruct models only, set provider-specific API keys from saved profiles
* chore: add more NVIDIA NIM context window entries for popular models
* fix: address remaining non-blocking items - fix base model, clear provider API keys on profile switch
* feat: enhance codex provider resolution with shortcut aliases and improved base URL handling
* fix: enhance codex alias resolution to include shell model
* feat: enhance Codex provider resolution to support new aliases and base URL handling
* fix: update base URL resolution logic for Codex models in GitHub mode
* fix: update provider transport logic to enforce Codex responses and adjust base URL handling
* fix: update provider request resolution to respect custom base URLs and adjust transport logic
* fix: restore OPENAI_MODEL environment variable handling in tests and provider config
* update GitHub Copilot API with official client ID and update model configurations
* test: add unit tests for exchangeForCopilotToken and enhance GitHub model normalization
* remove PAT token feature
* test(api): harden provider tests against env leakage
* Added back trimmed GitHub auth token
* added auto-refresh logic for the auth token along with a test
* fix: remove forked provider validation in cli.tsx and clear stale provider env vars in /onboard-github
* refactor: streamline environment variable handling in mergeUserSettingsEnv
* fix: clear stale provider env vars to ensure correct GH routing
* Remove internal-only tooling from the external build (#352)
* Remove internal-only tooling without changing external runtime contracts
This trims the lowest-risk internal-only surfaces first: deleted internal
modules are replaced by build-time no-op stubs, the bundled stuck skill is
removed, and the insights S3 upload path now stays local-only. The privacy
verifier is expanded and the remaining bundled internal Slack/Artifactory
strings are neutralized without broad repo-wide renames.
Constraint: Keep the first PR deletion-heavy and avoid mass rewrites of USER_TYPE, tengu, or claude_code identifiers
Rejected: One-shot DMCA cleanup branch | too much semantic risk for a first PR
Confidence: medium
Scope-risk: moderate
Reversibility: clean
Directive: Treat full-repo typecheck as a baseline issue on this upstream snapshot; do not claim this commit introduced the existing non-Phase-A errors without isolating them first
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Not-tested: Full repo typecheck (currently fails on widespread pre-existing upstream errors outside this change set)
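As a rough illustration of the build-time no-op stubs mentioned above
(exported names here are invented, not the project's real internal module
surface):

    // The external bundle aliases a deleted internal module to a stub like
    // this, so call sites keep compiling and running without the internal
    // behavior.
    export function recordInternalEvent(_event: string, _data?: unknown): void {
      // intentionally empty in external builds
    }

    export async function uploadInsights(_payload: unknown): Promise<void> {
      // external builds keep insights local-only; nothing to upload
    }

    export const isInternalUser = false;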
* Keep minimal source shims so CI can import Phase A cleanup paths
The first PR removed internal-only source files entirely, but CI provider
and context tests import those modules directly from source rather than
through the build-time no-telemetry stubs. This restores tiny no-op source
shims so tests and local source imports resolve while preserving the same
external runtime behavior.
Constraint: GitHub Actions runs source-level tests in addition to bundled build/privacy checks
Rejected: Revert the entire deletion pass | unnecessary once the import contract is satisfied by small shims
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: For later cleanup phases, treat build-time stubs and source-test imports as separate compatibility surfaces
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (still noisy on this upstream snapshot)
---------
Co-authored-by: anandh8x <test@example.com>
* Reduce internal-only labeling noise in source comments (#355)
This pass rewrites comment-only ANT-ONLY markers to neutral internal-only
language across the source tree without changing runtime strings, flags,
commands, or protocol identifiers. The goal is to lower obvious internal
prose leakage while keeping the diff mechanically safe and easy to review.
Constraint: Phase B is limited to comments/prose only; runtime strings and user-facing labels remain deferred
Rejected: Broad search-and-replace across strings and command descriptions | too risky for a prose-only pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly runtime/user-facing strings and should be handled separately from comment cleanup
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Neutralize internal Anthropic prose in explanatory comments (#357)
This is a small prose-only follow-up that rewrites clearly internal or
explanatory Anthropic comment language to neutral wording in a handful of
high-confidence files. It avoids runtime strings, flags, command labels,
protocol identifiers, and provider-facing references.
Constraint: Keep this pass narrowly scoped to comments/documentation only
Rejected: Broader Anthropic comment sweep across functional API/protocol references | too ambiguous for a safe prose-only PR
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Leave functional Anthropic references (API behavior, SDKs, URLs, provider labels, protocol docs) for separate reviewed passes
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Neutralize remaining internal-only diagnostic labels (#359)
This pass rewrites a small set of ant-only diagnostic and UI labels to
neutral internal wording while leaving command definitions, flags, and
runtime logic untouched. It focuses on internal debug output, dead UI
branches, and noninteractive headings rather than broader product text.
Constraint: Label cleanup only; do not change command semantics or ant-only logic gates
Rejected: Renaming ant-only command descriptions in main.tsx | broader UX surface better handled in a separate reviewed pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly command descriptions and intentionally deferred user-facing strings
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Finish eliminating remaining ANT-ONLY source labels (#360)
This extends the label-only cleanup to the remaining internal-only command,
debug, and heading strings so the source tree no longer contains ANT-ONLY
markers. The pass still avoids logic changes and only renames labels shown
in internal or gated surfaces.
Constraint: Update the existing label-cleanup PR without widening scope into behavior changes
Rejected: Leave the last ANT-ONLY strings for a later pass | low-cost cleanup while the branch is already focused on labels
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: The next phase should move off label cleanup and onto a separately scoped logic or rebrand slice
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Stub internal-only recording and model capability helpers (#377)
This follow-up Phase C-lite slice replaces purely internal helper modules
with stable external no-op surfaces and collapses internal elevated error
logging to a no-op. The change removes additional USER_TYPE-gated helper
behavior without touching product-facing runtime flows.
Constraint: Keep this PR limited to isolated helper modules that are already external no-ops in practice
Rejected: Pulling in broader speculation or logging sink changes | less isolated and easier to debate during review
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Continue Phase C with similarly isolated helpers before moving into mixed behavior files
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
Co-authored-by: anandh8x <test@example.com>
* Remove internal-only bundled skills and mock helpers (#376)
* Remove internal-only bundled skills and mock rate-limit behavior
This takes the next planned Phase C-lite slice by deleting bundled skills
that only ever registered for internal users and replacing the internal
mock rate-limit helper with a stable no-op external stub. The external
build keeps the same behavior while removing a concentrated block of
USER_TYPE-gated dead code.
Constraint: Limit this PR to isolated internal-only helpers and avoid bridge, oauth, or rebrand behavior
Rejected: Broad USER_TYPE cleanup across mixed runtime surfaces | too risky for the next medium-sized PR
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: The next cleanup pass should continue with similarly isolated USER_TYPE helpers before touching main.tsx or protocol-heavy code
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)
* Align internal-only helper removal with remaining user guidance
This follow-up fixes the mock billing stub to be a true no-op and removes
stale user-facing references to /verify and /skillify from the same PR.
It also leaves a clearer paper trail for review: the deleted verify skill
was explicitly ant-gated before removal, and the remaining mock helper
callers still resolve to safe no-op returns in the external build.
Constraint: Keep the PR focused on consistency fixes and reviewer-requested evidence, not new cleanup scope
Rejected: Leave stale guidance for a later PR | would make this branch internally inconsistent after skill removal
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When deleting gated features, always sweep user guidance and coordinator prompts in the same pass
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy; changed-file scan still shows only pre-existing tipRegistry errors outside edited lines)
* Clarify generic workflow wording after skill removal
This removes the last generic verification-skill wording that could still
be read as pointing at a deleted bundled command. The guidance now talks
about project workflows rather than a specific bundled verify skill.
Constraint: Keep the follow-up limited to reviewer-facing wording cleanup on the same PR
Rejected: Leave generic wording as-is | still too easy to misread after the explicit /verify references were removed
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When removing bundled commands, scrub both explicit and generic references in the same branch
Tested: bun run build
Tested: bun run smoke
Not-tested: Additional checks unchanged by wording-only follow-up
---------
Co-authored-by: anandh8x <test@example.com>
* test(api): add GEMINI_AUTH_MODE to environment setup in tests
* test: isolate GitHub/Gemini credential tests with fresh module imports and explicit non-bare env setup to prevent cross-test mock/cache leaks
* fix: update GitHub Copilot base URL and model defaults for improved compatibility
* fix: enhance error handling in OpenAI API response processing
* fix: improve error handling for GitHub Copilot API responses and streamline error body consumption
* fix: enhance response handling in OpenAI API shim for better error reporting and support for streaming responses
* feat: enhance GitHub device flow with fresh module import and token validation improvements
* fix: separate Copilot API routing from GitHub Models, clear stale env vars, honor providerOverride.apiKey
* fix: route GitHub GPT-5/Codex to Copilot API, show all Copilot models in picker, clear stale env vars
* fix GitHub Models API regression
* feat: update GitHub authentication to require OAuth tokens, normalize model handling for Copilot and GitHub Models
* fix: update GitHub token validation to support OAuth tokens and improve endpoint type handling
---------
Co-authored-by: Anandan <anandan.8x@gmail.com>
Co-authored-by: anandh8x <test@example.com>
* Add local OpenAI-compatible model discovery to /model
* Guard local OpenAI model discovery from Codex routing
* Preserve remote OpenAI Codex alias behavior
- Introduced environment variable CLAUDE_CODE_USE_GITHUB to enable GitHub Models.
- Added checks for GITHUB_TOKEN or GH_TOKEN for authentication.
- Updated base URL handling to include GitHub Models default.
- Enhanced provider detection and error handling for GitHub Models.
- Updated relevant functions and components to accommodate the new provider.
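A sketch of the gating the bullets above describe (the '1' value check,
the GITHUB_MODELS_BASE_URL override, and the placeholder default URL are
assumptions, not values from the commit):

    function resolveGitHubModels(env: NodeJS.ProcessEnv = process.env): {
      enabled: boolean;
      token?: string;
      baseUrl: string;
    } {
      const enabled = env.CLAUDE_CODE_USE_GITHUB === '1';
      const token = env.GITHUB_TOKEN ?? env.GH_TOKEN;
      if (enabled && !token) {
        throw new Error('GitHub Models requires GITHUB_TOKEN or GH_TOKEN');
      }
      return {
        enabled,
        token,
        baseUrl: env.GITHUB_MODELS_BASE_URL ?? 'https://<github-models-endpoint>/v1',
      };
    }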
Apply the existing ACCENT colour (rgb 240 148 100) to the version
string so it stands out against the dim label, matching the warm
orange used throughout the startup screen for stars and status text.
Requested in #95.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
StartupScreen.ts was reading the version via globalThis['MACRO_DISPLAY_VERSION']
which is never populated — the Bun bundler inlines it as MACRO.DISPLAY_VERSION
(dot notation), not as a globalThis key.
Result: startup screen always showed the hardcoded fallback 'v0.1.4' regardless
of the installed version.
Fix: use MACRO.DISPLAY_VERSION ?? MACRO.VERSION directly, consistent with
cli.tsx, main.tsx, and logoV2Utils.ts.
Fixes #95
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
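The fixed lookup, roughly (the declare only stands in for the Bun
build-time define):

    declare const MACRO: { DISPLAY_VERSION?: string; VERSION?: string };

    // Broken: (globalThis as any)['MACRO_DISPLAY_VERSION'] is never populated,
    // so the hardcoded fallback always won.
    // Fixed: use the inlined macro, consistent with cli.tsx and main.tsx.
    const version = MACRO.DISPLAY_VERSION ?? MACRO.VERSION ?? 'v0.1.4';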
Adds a new startup screen with filled-block text logo and sunset
gradient, printed to stdout before the Ink UI loads. Removes the
old OPEN box logo from the chat UI since the new screen replaces it.
Changes:
- src/components/StartupScreen.ts (NEW) — gradient OPEN CLAUDE logo
with provider info box (Provider, Model, Endpoint). Auto-detects
active provider from env vars (OpenAI, Gemini, DeepSeek, Ollama,
Groq, Mistral, Azure, LM Studio, Anthropic). Skipped in CI and
non-TTY environments.
- src/entrypoints/cli.tsx — calls printStartupScreen() at startup
before Ink renders
- src/components/Messages.tsx — removes <LogoV2 /> from LogoHeader
so the old OPEN box logo no longer appears in the chat UI
Addresses #55.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
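A sketch of the skip condition mentioned above (function name is an
assumption):

    // The pre-Ink banner only prints on an interactive TTY and never in CI.
    function shouldPrintStartupScreen(env: NodeJS.ProcessEnv = process.env): boolean {
      if (env.CI) return false;                 // skip in CI
      if (!process.stdout.isTTY) return false;  // skip when piped/non-interactive
      return true;
    }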