The startup screen read the model only from env vars and settings,
ignoring the --model CLI flag, because Commander.js parses it after
the banner prints. The banner now eagerly parses --model from argv
before rendering, so the displayed model matches what the session
will actually use.
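A minimal sketch of that eager pre-parse, assuming a plain argv scan (the function name and exact flag handling are illustrative, not the project's code):

```typescript
// Hypothetical helper: pull --model out of argv before Commander.js runs,
// so the startup banner can display the model the session will use.
// Supports both "--model <value>" and "--model=<value>" forms.
function parseModelFlag(argv: string[]): string | undefined {
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i];
    if (arg === "--model") {
      const next = argv[i + 1];
      // The value is the next token, unless it looks like another flag.
      if (next !== undefined && !next.startsWith("-")) return next;
    } else if (arg.startsWith("--model=")) {
      return arg.slice("--model=".length);
    }
  }
  return undefined; // caller falls back to env vars / settings
}
```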
* feat(zai): add Z.AI GLM Coding Plan provider preset
Add dedicated Z.AI provider support for the GLM Coding Plan, enabling
use of GLM-5.1, GLM-5-Turbo, GLM-4.7, and GLM-4.5-Air models through
the OpenAI-compatible shim with proper thinking mode (reasoning_content),
max_tokens handling, and context window sizing.
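The preset might look roughly like this. The interface shape, field names, and the context-window numbers are assumptions for illustration; only the 131K max-output figure comes from the commits below:

```typescript
// Illustrative preset shape only -- not the project's real schema.
interface GlmModelConfig {
  maxOutputTokens: number;   // Z.AI coding-plan limit per the commits below
  contextWindow: number;     // ASSUMED values; real sizing not shown here
  supportsThinking: boolean; // model emits reasoning_content via the shim
}

const ZAI_GLM_MODELS: Record<string, GlmModelConfig> = {
  "GLM-5.1":     { maxOutputTokens: 131_072, contextWindow: 200_000, supportsThinking: true },
  "GLM-5-Turbo": { maxOutputTokens: 131_072, contextWindow: 200_000, supportsThinking: true },
  "GLM-4.7":     { maxOutputTokens: 131_072, contextWindow: 200_000, supportsThinking: true },
  "GLM-4.5-Air": { maxOutputTokens: 131_072, contextWindow: 128_000, supportsThinking: true },
};
```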
* fix(zai): unify GLM max output token limits across casing variants
glm-5/glm-4.7 had a conservative 16K max output while GLM-5/GLM-4.7
had 131K. Use consistent Z.AI coding-plan limits for all GLM variants.
* fix(zai): restore DashScope GLM limits, enable GLM thinking support
- Restore lowercase glm-5/glm-4.7 to 16_384 max output (DashScope limits)
while keeping Z.AI coding plan high limits on uppercase GLM-* keys only
- Add GLM model support to modelSupportsThinking() so reasoning_content
is enabled when using GLM-5.x/GLM-4.7 models on Z.AI
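The casing split above can be sketched as a case-sensitive lookup table (names and the table itself are assumptions; only the 16_384 and 131K limits come from the commits):

```typescript
// Sketch of the casing split: lowercase glm-* keys keep DashScope's
// conservative limit, uppercase GLM-* keys keep the Z.AI coding-plan limit.
const MAX_OUTPUT_TOKENS: Record<string, number> = {
  // DashScope-served lowercase variants
  "glm-5": 16_384,
  "glm-4.7": 16_384,
  // Z.AI coding-plan uppercase variants
  "GLM-5": 131_072,
  "GLM-4.7": 131_072,
};

// Exact, case-sensitive key lookup -- "glm-4.7" and "GLM-4.7" resolve
// to different limits on purpose.
function maxOutputTokensFor(model: string): number | undefined {
  return MAX_OUTPUT_TOKENS[model];
}
```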
* fix(zai): tighten GLM regexes, fix misleading context window comment
- Use precise regex in thinking.ts: exact GLM model matches only,
no false positives on glm-50/glm-4, includes glm-4.5-air
- Use uppercase-only match in StartupScreen rawModel fallback so
DashScope lowercase glm-* models aren't mislabeled as Z.AI
- Clarify context window comment: lowercase glm-5.1/glm-5-turbo/
glm-4.5-air are Z.AI-specific aliases, not DashScope
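A hedged reconstruction of the tightened patterns (the real regexes in thinking.ts and StartupScreen may differ in detail):

```typescript
// Anchored on both ends so near-misses like "glm-50" and "glm-4" cannot
// match; covers glm-5, glm-5.x, glm-5-turbo, glm-4.7, and glm-4.5-air.
const GLM_MODEL_RE = /^glm-(5(\.\d+)?(-turbo)?|4\.7|4\.5-air)$/i;

// StartupScreen rawModel fallback: deliberately case-sensitive, so only
// uppercase GLM-* ids are labelled Z.AI and DashScope's lowercase glm-*
// ids are left alone.
const isZaiGlm = (rawModel: string): boolean => /^GLM-/.test(rawModel);
```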
* fix(zai): scope GLM detection to Z.AI
* improve readability of max_completion_tokens check
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
---------
The banner provider branch tested model-name substrings (`/deepseek/`, `/kimi/`,
`/mistral/`, `/llama/`) before aggregator base-URL substrings (`/openrouter/`,
`/together/`, `/groq/`, `/azure/`). When running OpenRouter/Together/Groq with
vendor-prefixed or vendor-named model IDs (e.g. `deepseek/deepseek-chat`,
`moonshotai/kimi-k2`, `deepseek-r1-distill-llama-70b`), the banner
mislabelled the provider.
Reorder: explicit env flags (NVIDIA_NIM, MINIMAX_API_KEY) and codex transport
win first; base-URL host checks run before rawModel fallback; rawModel only
fires when the base URL is generic/custom. Add unit tests covering the
aggregator × vendor-prefixed-model matrix plus direct-vendor regressions.
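The reordered detection can be sketched as follows. Helper and return-value names are assumptions, the codex-transport check is omitted, and the real matching is likely richer than bare substring tests:

```typescript
// Sketch of the precedence fix: explicit env flags win first, base-URL
// host checks run second, and the rawModel substring fallback only fires
// when the base URL is generic/custom.
function detectProvider(opts: {
  env: Record<string, string | undefined>;
  baseUrl: string;
  rawModel: string;
}): string {
  const { env, baseUrl, rawModel } = opts;
  // 1. Explicit environment flags take absolute priority.
  if (env.NVIDIA_NIM) return "nvidia-nim";
  if (env.MINIMAX_API_KEY) return "minimax";
  // 2. Aggregator base-URL host checks run before any model-name guessing,
  //    so "deepseek/deepseek-chat" on OpenRouter is labelled OpenRouter.
  if (baseUrl.includes("openrouter")) return "openrouter";
  if (baseUrl.includes("together")) return "together";
  if (baseUrl.includes("groq")) return "groq";
  if (baseUrl.includes("azure")) return "azure";
  // 3. rawModel fallback: only reached for generic/custom base URLs.
  if (rawModel.includes("deepseek")) return "deepseek";
  if (rawModel.includes("kimi")) return "kimi";
  if (rawModel.includes("mistral")) return "mistral";
  if (rawModel.includes("llama")) return "llama";
  return "custom";
}
```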
Closes #855