feat: fix open-source build and add Ollama model picker (#302)

* feat: fix open-source build and add Ollama model picker

- Fix build failures by stubbing 62+ missing Anthropic-internal modules
  with a catch-all plugin in scripts/build.ts
- Add runtime shim exports (isReplBridgeActive, getReplBridgeHandle) in
  bootstrap/state.ts for feature-gated code references
- Add /model picker support for Ollama: fetches available models from
  Ollama server at startup and displays them in the model selection menu
- Add Ollama model validation against cached server model list
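The startup fetch described in the bullets above can be sketched roughly as below. The module layout, the `{label, value}` option shape, and the helper names are assumptions drawn from the commit description; `GET /api/tags` is Ollama's actual model-listing endpoint.

```typescript
// Rough sketch; names are assumptions from the commit description.
// Ollama's GET /api/tags endpoint returns { models: [{ name: string, ... }] }.
type ModelOption = { label: string; value: string }

let cachedOptions: ModelOption[] = []

// Pure mapping step, split out so it is testable without a running server.
function toModelOptions(models: { name: string }[]): ModelOption[] {
  return models.map(m => ({ label: m.name, value: m.name }))
}

function getCachedOllamaModelOptions(): ModelOption[] {
  return cachedOptions
}

async function prefetchOllamaModels(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/api/tags`)
  if (!res.ok) throw new Error(`Ollama server returned HTTP ${res.status}`)
  const data = (await res.json()) as { models: { name: string }[] }
  cachedOptions = toModelOptions(data.models)
}
```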

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: address PR review feedback for Ollama integration

- Move Ollama validation before enterprise allowlist check in validateModel
- Truncate model list in error messages to first 5 entries
- Fix isOllamaProvider() to detect OLLAMA_BASE_URL-only configurations
- Reuse getOllamaApiBaseUrl() from providerDiscovery instead of duplicating
- Reset fetchPromise on failure to allow retry in prefetchOllamaModels
- Include Default option in Ollama model picker, prevent Claude model fallthrough
- Add file existence check for src/tasks/ stubs in build script
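The fetchPromise reset mentioned above is essentially memoization with failure invalidation. A minimal sketch follows; apart from `fetchPromise`, the names are assumptions:

```typescript
// Minimal sketch of the retry fix: memoize the in-flight fetch, but clear
// the memo when it rejects, so a later call retries instead of replaying
// the cached rejection forever.
let fetchPromise: Promise<string[]> | null = null

function prefetchOnce(doFetch: () => Promise<string[]>): Promise<string[]> {
  if (!fetchPromise) {
    fetchPromise = doFetch().catch(err => {
      fetchPromise = null // reset on failure so the next call retries
      throw err
    })
  }
  return fetchPromise
}

export {} // module scope, so callers may use top-level await
```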

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use pre-scanned exact-match resolvers to avoid Bun bundler corruption

Bun's onResolve plugin hook corrupts the module graph even when the
callback returns null for non-matching imports. This caused lodash-es's
memoize and zod's util namespace to be incorrectly tree-shaken, producing
runtime ReferenceErrors.

Replace all pattern-based onResolve hooks with a pre-build scan that
identifies missing modules upfront, then registers exact-match resolvers
only for confirmed missing imports. This avoids touching any valid module
resolution paths.
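The replacement strategy can be sketched as a plugin factory like the one below. The plugin name and the pre-build scan that produces `missing` are assumptions; `onResolve`/`onLoad` with filter regexes and namespaces are Bun's actual plugin hooks.

```typescript
// Sketch, assuming a pre-build scan has already produced `missing`: the
// exact list of unresolvable specifiers. Each one gets its own anchored,
// escaped exact-match filter, so no valid import is ever intercepted.
function escapeRegex(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}

function stubMissingModulesPlugin(missing: string[]) {
  return {
    name: 'stub-missing-modules',
    setup(build: any) {
      for (const spec of missing) {
        const filter = new RegExp(`^${escapeRegex(spec)}$`)
        build.onResolve({ filter }, () => ({ path: spec, namespace: 'stub' }))
      }
      // Every stubbed specifier loads as an empty module.
      build.onLoad({ filter: /.*/, namespace: 'stub' }, () => ({
        contents: 'export default {}',
        loader: 'js',
      }))
    },
  }
}
```

Anchoring and escaping each specifier is what makes this safe: a filter like `^@corp/feature\.gate$` can only ever match the one confirmed-missing import.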

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: move Ollama model prefetch outside startup throttle gate

prefetchOllamaModels() was inside the skipStartupPrefetches condition,
so it was skipped on subsequent launches once the bgRefresh throttle
timestamp was set. The Ollama model fetch targets a local (or
user-configured remote) server and is fast and cheap, so it should
always run at startup.
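A toy sketch of the gating change; aside from skipStartupPrefetches and the bgRefresh throttle named above, the identifiers are assumptions. The Ollama prefetch moves ahead of the throttle check, which continues to gate only the expensive prefetches:

```typescript
// Toy sketch: return which prefetches ran, so the ordering is testable.
function runStartupPrefetches(skipStartupPrefetches: boolean): string[] {
  const ran: string[] = []
  // Ollama model fetch is cheap and local: always run it, even when the
  // bgRefresh throttle timestamp says to skip the other prefetches.
  ran.push('ollamaModels')
  if (!skipStartupPrefetches) {
    ran.push('backgroundRefresh') // expensive prefetches stay throttled
  }
  return ran
}
```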

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
commit 280c9732f5 (parent 08be5181ab)
Author: RUO
Date: 2026-04-04 18:22:18 +09:00
Committed by: GitHub
6 changed files with 257 additions and 0 deletions


@@ -10,6 +10,7 @@ import {
   AuthenticationError,
 } from '@anthropic-ai/sdk'
 import { getModelStrings } from './modelStrings.js'
+import { getCachedOllamaModelOptions, isOllamaProvider } from './ollamaModels.js'
 
 // Cache valid models to avoid repeated API calls
 const validModelCache = new Map<string, boolean>()
@@ -27,6 +28,25 @@ export async function validateModel(
     return { valid: false, error: 'Model name cannot be empty' }
   }
 
+  // For Ollama provider, validate against cached model list instead of API call
+  // (skip enterprise allowlist since Ollama models are user-managed)
+  if (getAPIProvider() === 'openai' && isOllamaProvider()) {
+    const ollamaModels = getCachedOllamaModelOptions()
+    const found = ollamaModels.some(m => m.value === normalizedModel)
+    if (found) {
+      validModelCache.set(normalizedModel, true)
+      return { valid: true }
+    }
+    if (ollamaModels.length > 0) {
+      const MAX_SHOWN = 5
+      const names = ollamaModels.map(m => m.value)
+      const shown = names.slice(0, MAX_SHOWN).join(', ')
+      const suffix = names.length > MAX_SHOWN ? ` and ${names.length - MAX_SHOWN} more` : ''
+      return { valid: false, error: `Model '${normalizedModel}' not found on Ollama server. Available: ${shown}${suffix}` }
+    }
+    // If cache is empty, fall through to API validation
+  }
+
   // Check against availableModels allowlist before any API call
   if (!isModelAllowed(normalizedModel)) {
     return {