feat(zai): add Z.AI GLM Coding Plan provider preset (#896)

* feat(zai): add Z.AI GLM Coding Plan provider preset

Add dedicated Z.AI provider support for the GLM Coding Plan, enabling
use of GLM-5.1, GLM-5-Turbo, GLM-4.7, and GLM-4.5-Air models through
the OpenAI-compatible shim with proper thinking mode (reasoning_content),
max_tokens handling, and context window sizing.
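
The reasoning_content plumbing described above can be sketched roughly as follows; the interface and function names here are hypothetical illustrations, not the actual shim code (only the `reasoning_content`/`content` field names come from the commit text):

```typescript
// Hypothetical sketch: GLM models on Z.AI stream "thinking" tokens in a
// separate reasoning_content field next to regular content in each delta.
interface GlmStreamDelta {
  content?: string;           // normal assistant text
  reasoning_content?: string; // GLM thinking-mode tokens
}

// Split one streamed delta into thinking text and answer text
// (function and field grouping assumed for illustration).
function splitGlmDelta(delta: GlmStreamDelta): { thinking: string; text: string } {
  return {
    thinking: delta.reasoning_content ?? '',
    text: delta.content ?? '',
  };
}
```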

* fix(zai): unify GLM max output token limits across casing variants

glm-5/glm-4.7 had a conservative 16K max output limit, while GLM-5/GLM-4.7
had 131K. Use consistent Z.AI coding plan limits for all GLM variants.

* fix(zai): restore DashScope GLM limits, enable GLM thinking support

- Restore lowercase glm-5/glm-4.7 to 16_384 max output (DashScope limits)
  while keeping Z.AI coding plan high limits on uppercase GLM-* keys only
- Add GLM model support to modelSupportsThinking() so reasoning_content
  is enabled when using GLM-5.x/GLM-4.7 models on Z.AI
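
A minimal sketch of the resulting split, assuming a plain per-model map (the map name is hypothetical, and rendering "131K" as 131_072 is an assumption; the commit text gives only the rounded figure):

```typescript
// Hypothetical sketch: lowercase keys keep DashScope's conservative limit,
// uppercase keys carry the Z.AI coding plan limit (values partly assumed).
const GLM_MAX_OUTPUT_TOKENS: Record<string, number> = {
  'glm-5': 16_384,    // DashScope limit (lowercase variants)
  'glm-4.7': 16_384,
  'GLM-5': 131_072,   // Z.AI coding plan limit (uppercase variants)
  'GLM-4.7': 131_072,
};
```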

* fix(zai): tighten GLM regexes, fix misleading context window comment

- Use precise regex in thinking.ts: exact GLM model matches only,
  no false positives on glm-50/glm-4, includes glm-4.5-air
- Use uppercase-only match in StartupScreen rawModel fallback so
  DashScope lowercase glm-* models aren't mislabeled as Z.AI
- Clarify context window comment: lowercase glm-5.1/glm-5-turbo/
  glm-4.5-air are Z.AI-specific aliases, not DashScope
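
An exact-match regex of this shape would behave as described; this is a sketch of the idea, not the repository's actual pattern, and the helper name is hypothetical:

```typescript
// Hypothetical exact-match regex for the GLM model IDs named in this PR
// (glm-5, glm-5.1, glm-5-turbo, glm-4.7, glm-4.5-air). Anchored at both
// ends so near-misses like glm-50 or glm-4 cannot match.
const ZAI_GLM_MODEL_RE = /^glm-(?:5(?:\.1)?(?:-turbo)?|4\.7|4\.5-air)$/i;

function isExactZaiGlmModelId(model: string): boolean {
  return ZAI_GLM_MODEL_RE.test(model.trim());
}
```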

* fix(zai): scope GLM detection to Z.AI
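
A host-based check like the following would scope detection this way; the function name and the exact hostname pattern are assumptions, not the repository's `isZaiBaseUrl` implementation:

```typescript
// Hypothetical sketch: treat only z.ai (and its subdomains, e.g. api.z.ai)
// as Z.AI endpoints, so GLM models on other hosts are not claimed by Z.AI.
function looksLikeZaiBaseUrl(baseUrl: string): boolean {
  let host: string;
  try {
    host = new URL(baseUrl).hostname;
  } catch {
    return false; // not a parseable URL
  }
  return host === 'z.ai' || host.endsWith('.z.ai');
}
```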

* improve readability of max_completion_tokens check

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
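
The readability fix might look something like the sketch below; the parameter type, helper name, and precedence rule are all assumptions for illustration, since the commit names only the `max_completion_tokens` check itself:

```typescript
// Hypothetical sketch of a more readable output-token-cap check: name the
// condition instead of inlining it, preferring the newer OpenAI-style
// max_completion_tokens parameter over legacy max_tokens.
type ChatParams = { max_tokens?: number; max_completion_tokens?: number };

function effectiveMaxOutputTokens(params: ChatParams): number | undefined {
  const hasCompletionCap = params.max_completion_tokens !== undefined;
  return hasCompletionCap ? params.max_completion_tokens : params.max_tokens;
}
```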

---------

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Authored by: chioarub
Date: 2026-04-26 03:18:59 +03:00
Committed by: GitHub
Parent: 29f7579377
Commit: a0d657ee18
16 changed files with 342 additions and 6 deletions


@@ -9,6 +9,7 @@ import { isLocalProviderUrl, resolveProviderRequest } from '../services/api/prov
 import { getLocalOpenAICompatibleProviderLabel } from '../utils/providerDiscovery.js'
 import { getSettings_DEPRECATED } from '../utils/settings/settings.js'
 import { parseUserSpecifiedModel } from '../utils/model/model.js'
+import { containsExactZaiGlmModelId, isZaiBaseUrl } from '../utils/zaiProvider.js'
 declare const MACRO: { VERSION: string; DISPLAY_VERSION?: string }
@@ -137,6 +138,7 @@ export function detectProvider(): { name: string; model: string; baseUrl: string
 else if (/api\.kimi\.com/i.test(baseUrl)) name = 'Moonshot AI - Kimi Code'
 else if (/moonshot/i.test(baseUrl)) name = 'Moonshot AI - API'
 else if (/deepseek/i.test(baseUrl)) name = 'DeepSeek'
+else if (isZaiBaseUrl(baseUrl)) name = 'Z.AI - GLM'
 else if (/mistral/i.test(baseUrl)) name = 'Mistral'
 // rawModel fallback — fires only when base URL is generic/custom.
 else if (/nvidia/i.test(rawModel)) name = 'NVIDIA NIM'
@@ -146,6 +148,7 @@ export function detectProvider(): { name: string; model: string; baseUrl: string
 else if (/\bkimi-k/i.test(rawModel) || /moonshot/i.test(rawModel))
   name = 'Moonshot AI - API'
 else if (/deepseek/i.test(rawModel)) name = 'DeepSeek'
+else if (containsExactZaiGlmModelId(rawModel)) name = 'Z.AI - GLM'
 else if (/mistral/i.test(rawModel)) name = 'Mistral'
 else if (/llama/i.test(rawModel)) name = 'Meta Llama'
 else if (/bankr/i.test(baseUrl)) name = 'Bankr'