feat(zai): add Z.AI GLM Coding Plan provider preset (#896)
* feat(zai): add Z.AI GLM Coding Plan provider preset

  Add dedicated Z.AI provider support for the GLM Coding Plan, enabling use of GLM-5.1, GLM-5-Turbo, GLM-4.7, and GLM-4.5-Air models through the OpenAI-compatible shim with proper thinking mode (reasoning_content), max_tokens handling, and context window sizing.

* fix(zai): unify GLM max output token limits across casing variants

  glm-5/glm-4.7 had a conservative 16K max output while GLM-5/GLM-4.7 had 131K. Use consistent Z.AI coding plan limits for all GLM variants.

* fix(zai): restore DashScope GLM limits, enable GLM thinking support

  - Restore lowercase glm-5/glm-4.7 to 16_384 max output (DashScope limits) while keeping Z.AI coding plan high limits on uppercase GLM-* keys only
  - Add GLM model support to modelSupportsThinking() so reasoning_content is enabled when using GLM-5.x/GLM-4.7 models on Z.AI

* fix(zai): tighten GLM regexes, fix misleading context window comment

  - Use a precise regex in thinking.ts: exact GLM model matches only, no false positives on glm-50/glm-4; includes glm-4.5-air
  - Use an uppercase-only match in the StartupScreen rawModel fallback so DashScope lowercase glm-* models aren't mislabeled as Z.AI
  - Clarify the context window comment: lowercase glm-5.1/glm-5-turbo/glm-4.5-air are Z.AI-specific aliases, not DashScope

* fix(zai): scope GLM detection to Z.AI

* improve readability of max_completion_tokens check

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
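The "tighten GLM regexes" fix above can be sketched as an exact-match pattern. This is a minimal illustration of the stated intent (exact GLM model IDs only, no false positives on glm-50 or bare glm-4, glm-4.5-air included); the concrete pattern used in thinking.ts is an assumption, only the function name modelSupportsThinking() comes from the commit message.

```typescript
// Sketch of a tightened GLM match: anchored, exact model IDs only.
// The alternation below is illustrative, not the repository's actual regex.
const GLM_THINKING_RE = /^glm-(?:5(?:\.\d+)?|5-turbo|4\.7|4\.5-air)$/i;

function modelSupportsThinking(model: string): boolean {
  return GLM_THINKING_RE.test(model);
}

console.log(modelSupportsThinking('GLM-5.1'));     // true
console.log(modelSupportsThinking('glm-4.5-air')); // true
console.log(modelSupportsThinking('glm-50'));      // false — anchored, no prefix match
console.log(modelSupportsThinking('glm-4'));       // false — bare glm-4 excluded
```

Anchoring with `^…$` is what prevents the glm-50/glm-4 false positives the commit message calls out; a bare `includes('glm-5')` check would match both.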
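The casing split between DashScope and Z.AI limits described above can be pictured as a keyed map. This is a hypothetical sketch, not the repository's data structure: the key names mirror the commit message, and 131_072 is an assumption for the "131K" figure it mentions.

```typescript
// Illustrative map (assumed shape): lowercase keys keep DashScope's
// conservative cap, uppercase keys get the Z.AI coding-plan cap.
const MAX_OUTPUT_TOKENS: Record<string, number> = {
  // DashScope-served lowercase aliases
  'glm-5': 16_384,
  'glm-4.7': 16_384,
  // Z.AI coding-plan uppercase model IDs ("131K" read as 131_072 — assumption)
  'GLM-5': 131_072,
  'GLM-4.7': 131_072,
};

function maxOutputTokens(model: string): number {
  // Conservative default for unknown models (assumption)
  return MAX_OUTPUT_TOKENS[model] ?? 16_384;
}
```

Keeping the two casings as distinct keys is what lets the same lookup serve both providers without a runtime provider check.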
@@ -116,6 +116,11 @@ describe('detectProvider — direct vendor endpoints', () => {
     expect(detectProvider().name).toBe('Mistral')
   })
 
+  test('api.z.ai labels as Z.AI GLM', () => {
+    setupOpenAIMode('https://api.z.ai/api/coding/paas/v4', 'GLM-5.1')
+    expect(detectProvider().name).toBe('Z.AI - GLM')
+  })
+
   test('default OpenAI URL + gpt-4o labels as OpenAI', () => {
     setupOpenAIMode('https://api.openai.com/v1', 'gpt-4o')
     expect(detectProvider().name).toBe('OpenAI')
@@ -149,6 +154,21 @@ describe('detectProvider — rawModel fallback when URL is generic', () => {
     setupOpenAIMode('https://my-proxy.internal/v1', 'mistral-large-latest')
     expect(detectProvider().name).toBe('Mistral')
   })
+
+  test('custom proxy + exact uppercase GLM ID falls back to Z.AI GLM', () => {
+    setupOpenAIMode('https://my-proxy.internal/v1', 'GLM-5.1')
+    expect(detectProvider().name).toBe('Z.AI - GLM')
+  })
+
+  test('custom proxy + lowercase glm ID stays generic OpenAI', () => {
+    setupOpenAIMode('https://my-proxy.internal/v1', 'glm-5.1')
+    expect(detectProvider().name).toBe('OpenAI')
+  })
+
+  test('DashScope lowercase glm ID is not mislabeled as Z.AI', () => {
+    setupOpenAIMode('https://dashscope.aliyuncs.com/compatible-mode/v1', 'glm-5.1')
+    expect(detectProvider().name).toBe('OpenAI')
+  })
 })
 
 // --- Explicit env flags win over URL heuristics ---
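The detection order these tests exercise can be sketched in a few lines: a Z.AI base URL wins outright, and on a generic URL only an exact uppercase GLM model ID falls through to the Z.AI label. This is a minimal standalone sketch consistent with the test expectations above, not the actual detectProvider() implementation (which also recognizes Mistral, OpenAI, and other vendors); the function name here is hypothetical.

```typescript
// Hypothetical reduction of the URL-first, rawModel-fallback heuristic.
function detectProviderName(baseURL: string, rawModel: string): string {
  // 1. Direct vendor endpoint wins.
  if (baseURL.includes('api.z.ai')) return 'Z.AI - GLM';
  // 2. rawModel fallback is uppercase-only (case-sensitive regex), so
  //    DashScope's lowercase glm-* IDs are never mislabeled as Z.AI.
  if (/^GLM-/.test(rawModel)) return 'Z.AI - GLM';
  // 3. Otherwise stay generic.
  return 'OpenAI';
}

console.log(detectProviderName('https://my-proxy.internal/v1', 'GLM-5.1')); // "Z.AI - GLM"
console.log(detectProviderName('https://my-proxy.internal/v1', 'glm-5.1')); // "OpenAI"
```

Making the fallback case-sensitive is the whole fix from "scope GLM detection to Z.AI": the URL check handles Z.AI's own endpoint, and casing disambiguates everywhere else.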