feat: add intelligent provider profile recommendation

Author: Vasanthdev2004
Date: 2026-04-01 11:10:51 +05:30
Parent: 2d7aa9c841
Commit: 174eb8ad3b
9 changed files with 945 additions and 40 deletions


@@ -206,12 +206,21 @@ Use profile launchers to avoid repeated environment setup:
# one-time profile bootstrap (auto-detect ollama, otherwise openai)
bun run profile:init
# preview the best provider/model for your goal
bun run profile:recommend -- --goal coding --benchmark
# auto-apply the best available profile for your goal
bun run profile:auto -- --goal latency
# openai bootstrap with explicit key
bun run profile:init -- --provider openai --api-key sk-...
# ollama bootstrap with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b
# ollama bootstrap with intelligent model auto-selection
bun run profile:init -- --provider ollama --goal coding
# launch using persisted profile (.openclaude-profile.json)
bun run dev:profile
@@ -222,6 +231,9 @@ bun run dev:openai
bun run dev:ollama
```
`profile:recommend` ranks installed Ollama models against a goal (`latency`, `balanced`, or `coding`), and `profile:auto` can persist the top recommendation directly.
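The goal values above are the only ones the launcher recognizes. A minimal sketch of how a launcher script might validate the `--goal` flag before ranking (the `goal_valid` helper is an assumption for illustration, not part of this commit):

```shell
# Hypothetical helper: accept only the three goals profile:recommend supports.
goal_valid() {
  case "$1" in
    latency|balanced|coding) return 0 ;;
    *) return 1 ;;
  esac
}

# Example: reject an unknown goal early instead of passing it through.
goal_valid coding && echo "ok: coding"
goal_valid speed || echo "unknown goal: speed"
```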
If no profile exists yet, `dev:profile` now uses the same goal-aware defaults when picking the initial model.
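For reference, a sketch of what the persisted `.openclaude-profile.json` might look like — the field names here are assumptions for illustration; check the file written by `profile:auto` for the actual schema:

```shell
# Hypothetical example profile (field names are assumptions, not the confirmed schema).
cat > /tmp/openclaude-profile.example.json <<'EOF'
{
  "provider": "ollama",
  "model": "llama3.1:8b",
  "goal": "coding"
}
EOF
cat /tmp/openclaude-profile.example.json
```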
`dev:openai` and `dev:ollama` run `doctor:runtime` first and only launch the app if checks pass.
For `dev:ollama`, make sure Ollama is running locally before launch.
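One quick way to confirm the daemon is up is to hit Ollama's local HTTP API (port 11434 by default) before launching — a minimal check, assuming the default host and port:

```shell
# Return 0 if an Ollama daemon answers on the given host/port (defaults: localhost:11434).
ollama_up() {
  curl -sf "http://${1:-localhost}:${2:-11434}/api/tags" > /dev/null
}

if ollama_up; then
  echo "ollama: up"
else
  echo "ollama: down — start it with 'ollama serve' before 'bun run dev:ollama'"
fi
```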