Add DeepSeek V4 flash/pro support and DeepSeek thinking compatibility (#877)
* Add DeepSeek V4 support and thinking compatibility
* Fix DeepSeek profile persistence regression
* Align multi-model handling with openai-multi-model
@@ -68,9 +68,11 @@ openclaude
 export CLAUDE_CODE_USE_OPENAI=1
 export OPENAI_API_KEY=sk-...
 export OPENAI_BASE_URL=https://api.deepseek.com/v1
-export OPENAI_MODEL=deepseek-chat
+export OPENAI_MODEL=deepseek-v4-flash
 ```
 
+Use `deepseek-v4-pro` when you want the stronger model. `deepseek-chat` and `deepseek-reasoner` remain available as DeepSeek's legacy API aliases.
+
 ### Google Gemini via OpenRouter
 
 ```bash
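The exports in the hunk above can be wrapped in a small helper for switching between the V4 models; a minimal sketch, assuming the variable names from this change (the `use_deepseek` function name is hypothetical, and `OPENAI_API_KEY` must still be set separately):

```shell
# Hypothetical helper: export the provider variables for a chosen DeepSeek model.
use_deepseek() {
  export CLAUDE_CODE_USE_OPENAI=1
  export OPENAI_BASE_URL=https://api.deepseek.com/v1
  export OPENAI_MODEL="${1:-deepseek-v4-flash}"  # default to the fast model
}

use_deepseek deepseek-v4-pro  # switch to the stronger model for harder tasks
echo "$OPENAI_MODEL"
```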
@@ -169,7 +171,7 @@ export OPENAI_MODEL=gpt-4o
 |----------|----------|-------------|
 | `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
 | `OPENAI_API_KEY` | Yes* | Your API key (`*` not needed for local models like Ollama or Atomic Chat) |
-| `OPENAI_MODEL` | Yes | Model name such as `gpt-4o`, `deepseek-chat`, or `llama3.3:70b` |
+| `OPENAI_MODEL` | Yes | Model name such as `gpt-4o`, `deepseek-v4-flash`, or `llama3.3:70b` |
 | `OPENAI_BASE_URL` | No | API endpoint, defaulting to `https://api.openai.com/v1` |
 | `CODEX_API_KEY` | Codex only | Codex or ChatGPT access token override |
 | `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
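Per the table above, `OPENAI_API_KEY` is not required for local models; a minimal local-Ollama sketch (the localhost port is Ollama's documented default, assumed here, not part of this change):

```shell
# Minimal local setup per the variable table: no OPENAI_API_KEY needed for Ollama.
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1  # Ollama's OpenAI-compatible endpoint (assumed default port)
export OPENAI_MODEL=llama3.3:70b                  # model name from the table
```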