Feat/multi model provider support (#692)

* test: add tests for provider model env updates and multi-model profiles

Add comprehensive tests covering:
- OPENAI_MODEL/ANTHROPIC_MODEL env updates on provider activation
- Cross-provider type switches (openai ↔ anthropic) clearing stale env
- Multi-model profile activation using only the first model for env vars
- Model options cache population from comma-separated model lists
- getProfileModelOptions generating correct ModelOption arrays

* feat: multi-model provider support and model auto-switch

Support comma-separated model names in provider profiles (e.g.
"glm-4.7, glm-4.7-flash"). The first model is used as default on
activation; all models appear in the /model picker for easy switching.

When switching active providers, the session model now automatically
updates to the new provider's first model. The multi-model list is
preserved across switches and /model selections.
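The comma-separated handling described above can be sketched as follows, mirroring the `parseModelList` / `getPrimaryModel` utilities this commit adds (file location assumed):

```typescript
// Split a comma-separated model field, trim each entry, and drop empties;
// the first surviving entry is treated as the default on activation.
function parseModelList(modelField: string): string[] {
  return modelField
    .split(',')
    .map(part => part.trim())
    .filter(part => part.length > 0)
}

// First (primary) model, falling back to the raw string if parsing yields nothing.
function getPrimaryModel(modelField: string): string {
  const models = parseModelList(modelField)
  return models.length > 0 ? models[0] : modelField
}

console.log(parseModelList('glm-4.7, glm-4.7-flash')) // → ['glm-4.7', 'glm-4.7-flash']
console.log(getPrimaryModel('glm-4.7, glm-4.7-flash')) // → 'glm-4.7'
```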

Changes:
- Add parseModelList, getPrimaryModel, hasMultipleModels utilities
  with full test coverage (19 tests)
- Use getPrimaryModel when applying profiles to process.env so only
  the primary model is set in OPENAI_MODEL/ANTHROPIC_MODEL
- Update ProviderManager UI to hint at multi-model syntax and show
  model count in provider list summaries
- Populate model options cache from multi-model profiles on activation
  so all models appear in /model picker regardless of base URL type
- Guard persistActiveProviderProfileModel against overwriting
  comma-separated lists: models already in the profile are session
  selections, not profile edits
- Set AppState.mainLoopModel to the actual model string on provider
  switch so Anthropic profiles use the configured model instead of
  falling back to the built-in default
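The persist guard in the list above boils down to a membership check against the profile's parsed model list. A minimal sketch (`shouldPersistModel` is a hypothetical helper name; the real check lives inline in `persistActiveProviderProfileModel`):

```typescript
function parseModelList(modelField: string): string[] {
  return modelField.split(',').map(p => p.trim()).filter(p => p.length > 0)
}

// A model already present in the profile's comma-separated list is a
// session-level selection, not a profile edit, so the field is left intact.
function shouldPersistModel(profileModelField: string, nextModel: string): boolean {
  return !parseModelList(profileModelField).includes(nextModel)
}

console.log(shouldPersistModel('glm-4.7, glm-4.7-flash', 'glm-4.7-flash')) // → false
console.log(shouldPersistModel('glm-4.7, glm-4.7-flash', 'gpt-4o'))        // → true
```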

* fix: only show profile models when provider profile env is applied

Guard the profile model picker options behind a
PROFILE_ENV_APPLIED check. getActiveProviderProfile() has a
?? profiles[0] fallback that returns the first profile even when
no profile is explicitly active, causing users with inactive
profiles to lose all standard model options (Opus, Haiku, etc.)
from the /model picker.
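The guard described above can be sketched with the env passed in as a parameter (the codebase reads `process.env` directly; `collectProfileModels` is a hypothetical name for illustration):

```typescript
type Profile = { name: string; model: string }

// Only consult the active profile when its env was explicitly applied;
// otherwise getActiveProviderProfile()'s `?? profiles[0]` fallback would
// surface an inactive profile and hide the standard model options.
function collectProfileModels(
  env: Record<string, string | undefined>,
  getActiveProfile: () => Profile | null,
): string[] {
  if (env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED !== '1') return []
  const profile = getActiveProfile()
  if (!profile) return []
  return profile.model.split(',').map(m => m.trim()).filter(m => m.length > 0)
}

const fallbackProfile: Profile = { name: 'Inactive', model: 'glm-4.7, glm-4.7-flash' }
console.log(collectProfileModels({}, () => fallbackProfile)) // → [] (flag unset)
console.log(
  collectProfileModels(
    { CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED: '1' },
    () => fallbackProfile,
  ),
) // → ['glm-4.7', 'glm-4.7-flash']
```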

* fix: show all model names for profiles with 3 or fewer models

Instead of always using the summary format for multi-model profiles, display
all model names when there are 3 or fewer. Only use the "+ N more"
format for profiles with 4+ models.
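The display rule above mirrors the summary logic in `profileSummary()` (`modelDisplay` is extracted here as a standalone function purely for illustration):

```typescript
// List every model when there are 3 or fewer; otherwise show the first two
// plus a "+ N more" suffix.
function modelDisplay(models: string[]): string {
  return models.length <= 3
    ? models.join(', ')
    : `${models[0]}, ${models[1]} + ${models.length - 2} more`
}

console.log(modelDisplay(['glm-4.7', 'glm-4.7-flash', 'glm-4.7-plus'])) // → 'glm-4.7, glm-4.7-flash, glm-4.7-plus'
console.log(modelDisplay(['a', 'b', 'c', 'd'])) // → 'a, b + 2 more'
```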

* fix: preserve standard model options in picker alongside profile models

The previous implementation used an early return that replaced all
standard picker options (Opus, Haiku, Sonnet for Anthropic; Codex/GPT
models for OpenAI) with only the profile's custom models.

Changes:
- Collect profile models into a shared array instead of early returning
- Append profile models to firstParty path (Opus + Haiku + Sonnet + custom)
- Append profile models to PAYG 3P path (Codex + Sonnet + Opus + Haiku + custom)
- Guard collection behind PROFILE_ENV_APPLIED to avoid ?? profiles[0] fallback

Fixes review feedback: standard models are no longer hidden when a
provider profile with custom models is active. Users see both the
standard options and their profile's models.

---------

Co-authored-by: Ali Alakbarli <ali.alakbarli@users.noreply.github.com>
Author: emsanakhchivan
Date: 2026-04-16 01:01:55 +04:00
Committed by: GitHub
Parent: 51191d6132
Commit: b66633ea4d
6 changed files with 505 additions and 9 deletions


@@ -9,6 +9,7 @@ import {
readCodexCredentialsAsync,
} from '../utils/codexCredentials.js'
import { isBareMode, isEnvTruthy } from '../utils/envUtils.js'
import { getPrimaryModel, hasMultipleModels, parseModelList } from '../utils/providerModels.js'
import {
applySavedProfileToCurrentSession,
buildCodexOAuthProfileEnv,
@@ -50,6 +51,7 @@ import {
import { Pane } from './design-system/Pane.js'
import TextInput from './TextInput.js'
import { useCodexOAuthFlow } from './useCodexOAuthFlow.js'
import { useSetAppState } from '../state/AppState.js'
export type ProviderManagerResult = {
action: 'saved' | 'cancelled'
@@ -108,8 +110,8 @@ const FORM_STEPS: Array<{
{
key: 'model',
label: 'Default model',
placeholder: 'e.g. llama3.1:8b',
helpText: 'Model name to use when this provider is active.',
placeholder: 'e.g. llama3.1:8b or glm-4.7, glm-4.7-flash',
helpText: 'Model name(s) to use. Separate multiple with commas; first is default.',
},
{
key: 'apiKey',
@@ -153,7 +155,12 @@ function profileSummary(profile: ProviderProfile, isActive: boolean): string {
const keyInfo = profile.apiKey ? 'key set' : 'no key'
const providerKind =
profile.provider === 'anthropic' ? 'anthropic' : 'openai-compatible'
return `${providerKind} · ${profile.baseUrl} · ${profile.model} · ${keyInfo}${activeSuffix}`
const models = parseModelList(profile.model)
const modelDisplay =
models.length <= 3
? models.join(', ')
: `${models[0]}, ${models[1]} + ${models.length - 2} more`
return `${providerKind} · ${profile.baseUrl} · ${modelDisplay} · ${keyInfo}${activeSuffix}`
}
function getGithubCredentialSourceFromEnv(
@@ -320,6 +327,7 @@ function CodexOAuthSetup({
}
export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
const setAppState = useSetAppState()
const initialGithubCredentialSource = getGithubCredentialSourceFromEnv()
const initialIsGithubActive = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const initialHasGithubCredential = initialGithubCredentialSource !== 'none'
@@ -573,6 +581,10 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
}
refreshProfiles()
setAppState(prev => ({
...prev,
mainLoopModel: GITHUB_PROVIDER_DEFAULT_MODEL,
}))
setStatusMessage(`Active provider: ${GITHUB_PROVIDER_LABEL}`)
setScreen('menu')
return
@@ -585,6 +597,16 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
return
}
// Update the session model to the new provider's first model.
// persistActiveProviderProfileModel (called by onChangeAppState) will
// not overwrite the multi-model list because it checks if the model
// is already in the profile's comma-separated model list.
const newModel = getPrimaryModel(active.model)
setAppState(prev => ({
...prev,
mainLoopModel: newModel,
}))
providerLabel = active.name
const settingsOverrideError =
clearStartupProviderOverrideFromUserSettings()


@@ -33,7 +33,11 @@ import {
} from './model.js'
import { has1mContext } from '../context.js'
import { getGlobalConfig } from '../config.js'
import { getActiveOpenAIModelOptionsCache } from '../providerProfiles.js'
import {
getActiveOpenAIModelOptionsCache,
getActiveProviderProfile,
getProfileModelOptions,
} from '../providerProfiles.js'
import { getCachedOllamaModelOptions, isOllamaProvider } from './ollamaModels.js'
import { getCachedNvidiaNimModelOptions, isNvidiaNimProvider } from './nvidiaNimModels.js'
import { getCachedMiniMaxModelOptions, isMiniMaxProvider } from './minimaxModels.js'
@@ -476,6 +480,20 @@ function getModelOptionsBase(fastMode = false): ModelOption[] {
]
}
// When a provider profile's env is applied, collect its models so they
// can be appended to the standard picker options below.
// We check PROFILE_ENV_APPLIED to avoid the ?? profiles[0] fallback in
// getActiveProviderProfile which would affect users with inactive profiles.
const profileEnvApplied = process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1'
const profileModelOptions: ModelOption[] = []
if (profileEnvApplied) {
const activeProfile = getActiveProviderProfile()
if (activeProfile) {
const models = getProfileModelOptions(activeProfile)
profileModelOptions.push(...models)
}
}
// PAYG 1P API: Default (Sonnet) + Sonnet 1M + Opus 4.6 + Opus 1M + Haiku
if (getAPIProvider() === 'firstParty') {
const payg1POptions = [getDefaultOptionForUser(fastMode)]
@@ -491,6 +509,7 @@ function getModelOptionsBase(fastMode = false): ModelOption[] {
}
}
payg1POptions.push(getHaiku45Option())
payg1POptions.push(...profileModelOptions)
return payg1POptions
}
@@ -530,6 +549,7 @@ function getModelOptionsBase(fastMode = false): ModelOption[] {
} else {
payg3pOptions.push(getHaikuOption())
}
payg3pOptions.push(...profileModelOptions)
return payg3pOptions
}


@@ -0,0 +1,108 @@
import { describe, expect, test } from 'bun:test'
import {
getPrimaryModel,
hasMultipleModels,
parseModelList,
} from './providerModels.ts'
// ── parseModelList ────────────────────────────────────────────────────────────
describe('parseModelList', () => {
test('splits comma-separated models', () => {
expect(parseModelList('glm-4.7, glm-4.7-flash')).toEqual([
'glm-4.7',
'glm-4.7-flash',
])
})
test('returns single model in an array', () => {
expect(parseModelList('llama3.1:8b')).toEqual(['llama3.1:8b'])
})
test('trims whitespace around each model', () => {
expect(parseModelList(' gpt-4o , gpt-4o-mini , o3-mini ')).toEqual([
'gpt-4o',
'gpt-4o-mini',
'o3-mini',
])
})
test('filters out empty entries from trailing commas', () => {
expect(parseModelList('gpt-4o,,gpt-4o-mini,')).toEqual([
'gpt-4o',
'gpt-4o-mini',
])
})
test('returns empty array for empty string', () => {
expect(parseModelList('')).toEqual([])
})
test('returns empty array for whitespace-only string', () => {
expect(parseModelList(' ')).toEqual([])
})
test('returns empty array for comma-only string', () => {
expect(parseModelList(',,,')).toEqual([])
})
test('handles models with colons', () => {
expect(parseModelList('qwen2.5-coder:7b, llama3.1:8b')).toEqual([
'qwen2.5-coder:7b',
'llama3.1:8b',
])
})
})
// ── getPrimaryModel ───────────────────────────────────────────────────────────
describe('getPrimaryModel', () => {
test('returns first model from comma-separated list', () => {
expect(getPrimaryModel('glm-4.7, glm-4.7-flash')).toBe('glm-4.7')
})
test('returns the only model when single model is provided', () => {
expect(getPrimaryModel('llama3.1:8b')).toBe('llama3.1:8b')
})
test('returns the original string when input is empty', () => {
expect(getPrimaryModel('')).toBe('')
})
test('returns first model after trimming', () => {
expect(getPrimaryModel(' gpt-4o , gpt-4o-mini')).toBe('gpt-4o')
})
test('returns first model when others are empty from trailing commas', () => {
expect(getPrimaryModel('claude-sonnet-4-6,,')).toBe('claude-sonnet-4-6')
})
})
// ── hasMultipleModels ─────────────────────────────────────────────────────────
describe('hasMultipleModels', () => {
test('returns true when multiple models are present', () => {
expect(hasMultipleModels('glm-4.7, glm-4.7-flash')).toBe(true)
})
test('returns false for a single model', () => {
expect(hasMultipleModels('llama3.1:8b')).toBe(false)
})
test('returns false for empty string', () => {
expect(hasMultipleModels('')).toBe(false)
})
test('returns false for whitespace-only string', () => {
expect(hasMultipleModels(' ')).toBe(false)
})
test('returns false when extra commas produce no extra models', () => {
expect(hasMultipleModels('gpt-4o,,')).toBe(false)
})
test('returns true for three models', () => {
expect(hasMultipleModels('a, b, c')).toBe(true)
})
})


@@ -0,0 +1,33 @@
/**
* Utility functions for parsing comma-separated model names in provider profiles.
*
* Example: "glm-4.7, glm-4.7-flash" -> ["glm-4.7", "glm-4.7-flash"]
* Single model: "llama3.1:8b" -> ["llama3.1:8b"]
*/
/**
* Splits a comma-separated model field into an array of trimmed model names,
* filtering out any empty entries.
*/
export function parseModelList(modelField: string): string[] {
return modelField
.split(',')
.map((part) => part.trim())
.filter((part) => part.length > 0)
}
/**
* Returns the first (primary) model from a comma-separated model field.
* Falls back to the original string if parsing yields no results.
*/
export function getPrimaryModel(modelField: string): string {
const models = parseModelList(modelField)
return models.length > 0 ? models[0] : modelField
}
/**
* Returns true if the model field contains more than one model.
*/
export function hasMultipleModels(modelField: string): boolean {
return parseModelList(modelField).length > 1
}


@@ -139,6 +139,39 @@ describe('applyProviderProfileToProcessEnv', () => {
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
expect(getFreshAPIProvider()).toBe('firstParty')
})
test('openai profile with multi-model string sets only first model in OPENAI_MODEL', async () => {
const { applyProviderProfileToProcessEnv } =
await importFreshProviderProfileModules()
applyProviderProfileToProcessEnv(
buildProfile({
provider: 'openai',
baseUrl: 'https://api.openai.com/v1',
model: 'glm-4.7, glm-4.7-flash, glm-4.7-plus',
}),
)
expect(process.env.OPENAI_MODEL).toBe('glm-4.7')
expect(String(process.env.CLAUDE_CODE_USE_OPENAI)).toBe('1')
expect(process.env.OPENAI_BASE_URL).toBe('https://api.openai.com/v1')
})
test('anthropic profile with multi-model string sets only first model in ANTHROPIC_MODEL', async () => {
const { applyProviderProfileToProcessEnv } =
await importFreshProviderProfileModules()
applyProviderProfileToProcessEnv(
buildProfile({
provider: 'anthropic',
baseUrl: 'https://api.anthropic.com',
model: 'claude-sonnet-4-6, claude-opus-4-6',
}),
)
expect(process.env.ANTHROPIC_MODEL).toBe('claude-sonnet-4-6')
expect(process.env.ANTHROPIC_BASE_URL).toBe('https://api.anthropic.com')
})
})
describe('applyActiveProviderProfileFromConfig', () => {
@@ -361,6 +394,169 @@ describe('getProviderPresetDefaults', () => {
})
})
describe('setActiveProviderProfile', () => {
test('sets OPENAI_MODEL env var when switching to an openai-type provider', async () => {
const { setActiveProviderProfile } =
await importFreshProviderProfileModules()
const openaiProfile = buildProfile({
id: 'openai_prof',
name: 'OpenAI Provider',
provider: 'openai',
baseUrl: 'https://api.openai.com/v1',
model: 'gpt-4o',
})
saveMockGlobalConfig(current => ({
...current,
providerProfiles: [openaiProfile],
}))
const result = setActiveProviderProfile('openai_prof')
expect(result?.id).toBe('openai_prof')
expect(String(process.env.CLAUDE_CODE_USE_OPENAI)).toBe('1')
expect(process.env.OPENAI_MODEL).toBe('gpt-4o')
expect(process.env.OPENAI_BASE_URL).toBe('https://api.openai.com/v1')
expect(process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID).toBe(
'openai_prof',
)
})
test('sets ANTHROPIC_MODEL env var when switching to an anthropic-type provider', async () => {
const { setActiveProviderProfile } =
await importFreshProviderProfileModules()
const anthropicProfile = buildProfile({
id: 'anthro_prof',
name: 'Anthropic Provider',
provider: 'anthropic',
baseUrl: 'https://api.anthropic.com',
model: 'claude-sonnet-4-6',
})
saveMockGlobalConfig(current => ({
...current,
providerProfiles: [anthropicProfile],
}))
const result = setActiveProviderProfile('anthro_prof')
expect(result?.id).toBe('anthro_prof')
expect(process.env.ANTHROPIC_MODEL).toBe('claude-sonnet-4-6')
expect(process.env.ANTHROPIC_BASE_URL).toBe('https://api.anthropic.com')
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
expect(process.env.OPENAI_MODEL).toBeUndefined()
expect(process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID).toBe(
'anthro_prof',
)
})
test('clears openai model env and sets anthropic model env when switching from openai to anthropic provider', async () => {
const { setActiveProviderProfile } =
await importFreshProviderProfileModules()
const openaiProfile = buildProfile({
id: 'openai_prof',
name: 'OpenAI Provider',
provider: 'openai',
baseUrl: 'https://api.openai.com/v1',
model: 'gpt-4o',
apiKey: 'sk-openai-key',
})
const anthropicProfile = buildProfile({
id: 'anthro_prof',
name: 'Anthropic Provider',
provider: 'anthropic',
baseUrl: 'https://api.anthropic.com',
model: 'claude-sonnet-4-6',
apiKey: 'sk-ant-key',
})
saveMockGlobalConfig(current => ({
...current,
providerProfiles: [openaiProfile, anthropicProfile],
}))
// First activate the openai profile
setActiveProviderProfile('openai_prof')
expect(process.env.OPENAI_MODEL).toBe('gpt-4o')
expect(String(process.env.CLAUDE_CODE_USE_OPENAI)).toBe('1')
// Now switch to the anthropic profile
const result = setActiveProviderProfile('anthro_prof')
expect(result?.id).toBe('anthro_prof')
expect(process.env.ANTHROPIC_MODEL).toBe('claude-sonnet-4-6')
expect(process.env.ANTHROPIC_BASE_URL).toBe('https://api.anthropic.com')
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
expect(process.env.OPENAI_MODEL).toBeUndefined()
expect(process.env.OPENAI_BASE_URL).toBeUndefined()
expect(process.env.OPENAI_API_KEY).toBeUndefined()
expect(process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID).toBe(
'anthro_prof',
)
})
test('clears anthropic model env and sets openai model env when switching from anthropic to openai provider', async () => {
const { setActiveProviderProfile } =
await importFreshProviderProfileModules()
const anthropicProfile = buildProfile({
id: 'anthro_prof',
name: 'Anthropic Provider',
provider: 'anthropic',
baseUrl: 'https://api.anthropic.com',
model: 'claude-sonnet-4-6',
apiKey: 'sk-ant-key',
})
const openaiProfile = buildProfile({
id: 'openai_prof',
name: 'OpenAI Provider',
provider: 'openai',
baseUrl: 'https://api.openai.com/v1',
model: 'gpt-4o',
apiKey: 'sk-openai-key',
})
saveMockGlobalConfig(current => ({
...current,
providerProfiles: [anthropicProfile, openaiProfile],
}))
// First activate the anthropic profile
setActiveProviderProfile('anthro_prof')
expect(process.env.ANTHROPIC_MODEL).toBe('claude-sonnet-4-6')
expect(process.env.ANTHROPIC_BASE_URL).toBe('https://api.anthropic.com')
// Now switch to the openai profile
const result = setActiveProviderProfile('openai_prof')
expect(result?.id).toBe('openai_prof')
expect(String(process.env.CLAUDE_CODE_USE_OPENAI)).toBe('1')
expect(process.env.OPENAI_MODEL).toBe('gpt-4o')
expect(process.env.OPENAI_BASE_URL).toBe('https://api.openai.com/v1')
// ANTHROPIC_MODEL is set to the profile model for all provider types
expect(process.env.ANTHROPIC_MODEL).toBe('gpt-4o')
expect(process.env.ANTHROPIC_BASE_URL).toBeUndefined()
expect(process.env.ANTHROPIC_API_KEY).toBeUndefined()
expect(process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID).toBe(
'openai_prof',
)
})
test('returns null for non-existent profile id', async () => {
const { setActiveProviderProfile } =
await importFreshProviderProfileModules()
const openaiProfile = buildProfile({ id: 'existing_prof' })
saveMockGlobalConfig(current => ({
...current,
providerProfiles: [openaiProfile],
}))
const result = setActiveProviderProfile('nonexistent_prof')
expect(result).toBeNull()
})
})
describe('deleteProviderProfile', () => {
test('deleting final profile clears provider env when active profile applied it', async () => {
const {
@@ -429,3 +625,82 @@ describe('deleteProviderProfile', () => {
expect(process.env.OPENAI_MODEL).toBe('qwen2.5:3b')
})
})
describe('getProfileModelOptions', () => {
test('generates options for multi-model profile', async () => {
const { getProfileModelOptions } =
await importFreshProviderProfileModules()
const options = getProfileModelOptions(
buildProfile({
name: 'Test Provider',
model: 'glm-4.7, glm-4.7-flash, glm-4.7-plus',
}),
)
expect(options).toEqual([
{ value: 'glm-4.7', label: 'glm-4.7', description: 'Provider: Test Provider' },
{ value: 'glm-4.7-flash', label: 'glm-4.7-flash', description: 'Provider: Test Provider' },
{ value: 'glm-4.7-plus', label: 'glm-4.7-plus', description: 'Provider: Test Provider' },
])
})
test('returns single option for single-model profile', async () => {
const { getProfileModelOptions } =
await importFreshProviderProfileModules()
const options = getProfileModelOptions(
buildProfile({
name: 'Single Model',
model: 'llama3.1:8b',
}),
)
expect(options).toEqual([
{ value: 'llama3.1:8b', label: 'llama3.1:8b', description: 'Provider: Single Model' },
])
})
test('returns empty array for empty model field', async () => {
const { getProfileModelOptions } =
await importFreshProviderProfileModules()
const options = getProfileModelOptions(
buildProfile({
name: 'Empty',
model: '',
}),
)
expect(options).toEqual([])
})
})
describe('setActiveProviderProfile model cache', () => {
test('populates model cache with all models from multi-model profile on activation', async () => {
const {
setActiveProviderProfile,
getActiveOpenAIModelOptionsCache,
} = await importFreshProviderProfileModules()
mockConfigState = {
...createMockConfigState(),
providerProfiles: [
buildProfile({
id: 'multi_provider',
name: 'Multi Provider',
model: 'glm-4.7, glm-4.7-flash, glm-4.7-plus',
baseUrl: 'https://api.example.com/v1',
}),
],
}
setActiveProviderProfile('multi_provider')
const cache = getActiveOpenAIModelOptionsCache()
const cacheValues = cache.map(opt => opt.value)
expect(cacheValues).toContain('glm-4.7')
expect(cacheValues).toContain('glm-4.7-flash')
expect(cacheValues).toContain('glm-4.7-plus')
})
})


@@ -5,6 +5,7 @@ import {
type ProviderProfile,
} from './config.js'
import type { ModelOption } from './model/modelOptions.js'
import { getPrimaryModel, parseModelList } from './providerModels.js'
export type ProviderPreset =
| 'anthropic'
@@ -331,7 +332,7 @@ function isProcessEnvAlignedWithProfile(
return (
!hasProviderSelectionFlags(processEnv) &&
sameOptionalEnvValue(processEnv.ANTHROPIC_BASE_URL, profile.baseUrl) &&
sameOptionalEnvValue(processEnv.ANTHROPIC_MODEL, profile.model) &&
sameOptionalEnvValue(processEnv.ANTHROPIC_MODEL, getPrimaryModel(profile.model)) &&
(!includeApiKey ||
sameOptionalEnvValue(processEnv.ANTHROPIC_API_KEY, profile.apiKey))
)
@@ -346,7 +347,7 @@ function isProcessEnvAlignedWithProfile(
processEnv.CLAUDE_CODE_USE_VERTEX === undefined &&
processEnv.CLAUDE_CODE_USE_FOUNDRY === undefined &&
sameOptionalEnvValue(processEnv.OPENAI_BASE_URL, profile.baseUrl) &&
sameOptionalEnvValue(processEnv.OPENAI_MODEL, profile.model) &&
sameOptionalEnvValue(processEnv.OPENAI_MODEL, getPrimaryModel(profile.model)) &&
(!includeApiKey ||
sameOptionalEnvValue(processEnv.OPENAI_API_KEY, profile.apiKey))
)
@@ -397,7 +398,7 @@ export function applyProviderProfileToProcessEnv(profile: ProviderProfile): void
process.env[PROFILE_ENV_APPLIED_FLAG] = '1'
process.env[PROFILE_ENV_APPLIED_ID] = profile.id
process.env.ANTHROPIC_MODEL = profile.model
process.env.ANTHROPIC_MODEL = getPrimaryModel(profile.model)
if (profile.provider === 'anthropic') {
process.env.ANTHROPIC_BASE_URL = profile.baseUrl
@@ -416,7 +417,7 @@ export function applyProviderProfileToProcessEnv(profile: ProviderProfile): void
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL = profile.baseUrl
process.env.OPENAI_MODEL = profile.model
process.env.OPENAI_MODEL = getPrimaryModel(profile.model)
if (profile.apiKey) {
process.env.OPENAI_API_KEY = profile.apiKey
@@ -581,6 +582,16 @@ export function persistActiveProviderProfileModel(
return null
}
// If the model is already part of the profile's model list, don't
// overwrite the field. This preserves comma-separated model lists like
// "glm-4.5, glm-4.7". Switching between models in the list is a
// session-level choice handled by mainLoopModelOverride, not a profile
// edit — the profile's model list should only change via explicit edit.
const existingModels = parseModelList(activeProfile.model)
if (existingModels.includes(nextModel)) {
return activeProfile
}
saveGlobalConfig(current => {
const currentProfiles = getProviderProfiles(current)
const profileIndex = currentProfiles.findIndex(
@@ -623,6 +634,23 @@ export function persistActiveProviderProfileModel(
return resolvedProfile
}
/**
* Generate model options from a provider profile's model field.
* Each comma-separated model becomes a separate option in the picker.
*/
export function getProfileModelOptions(profile: ProviderProfile): ModelOption[] {
const models = parseModelList(profile.model)
if (models.length === 0) {
return []
}
return models.map(model => ({
value: model,
label: model,
description: `Provider: ${profile.name}`,
}))
}
export function setActiveProviderProfile(
profileId: string,
): ProviderProfile | null {
@@ -634,10 +662,20 @@ export function setActiveProviderProfile(
return null
}
const profileModelOptions = getProfileModelOptions(activeProfile)
saveGlobalConfig(config => ({
...config,
activeProviderProfileId: profileId,
openaiAdditionalModelOptionsCache: getModelCacheByProfile(profileId, config),
openaiAdditionalModelOptionsCache: profileModelOptions.length > 0
? profileModelOptions
: getModelCacheByProfile(profileId, config),
openaiAdditionalModelOptionsCacheByProfile: {
...(config.openaiAdditionalModelOptionsCacheByProfile ?? {}),
[profileId]: profileModelOptions.length > 0
? profileModelOptions
: (config.openaiAdditionalModelOptionsCacheByProfile?.[profileId] ?? []),
},
}))
applyProviderProfileToProcessEnv(activeProfile)