Add DeepSeek V4 flash/pro support and DeepSeek thinking compatibility (#877)
* Add DeepSeek V4 support and thinking compatibility
* Fix DeepSeek profile persistence regression
* Align multi-model handling with openai-multi-model
This commit is contained in:
@@ -145,6 +145,11 @@ ANTHROPIC_API_KEY=sk-ant-your-key-here
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_API_KEY=sk-your-key-here
# OPENAI_MODEL=gpt-4o
# For DeepSeek, set:
# OPENAI_BASE_URL=https://api.deepseek.com/v1
# OPENAI_MODEL=deepseek-v4-flash
# Optional: OPENAI_MODEL=deepseek-v4-pro
# Legacy aliases also work: deepseek-chat and deepseek-reasoner

# Use a custom OpenAI-compatible endpoint (optional — defaults to api.openai.com)
# OPENAI_BASE_URL=https://api.openai.com/v1
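The env vars above follow the usual OpenAI-compatible fallback scheme: use `OPENAI_BASE_URL` and `OPENAI_MODEL` if set, otherwise fall back to the stock OpenAI endpoint and model. A minimal sketch of how a client might resolve them (the helper name and the exact defaults are illustrative assumptions, not code from this commit):

```python
import os

def resolve_openai_config(env=None):
    # Hypothetical helper mirroring the env vars documented above.
    # Falls back to api.openai.com and gpt-4o when the vars are unset.
    env = os.environ if env is None else env
    return {
        "base_url": env.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
        "model": env.get("OPENAI_MODEL", "gpt-4o"),
    }

# With the DeepSeek settings from the example config:
cfg = resolve_openai_config({
    "OPENAI_BASE_URL": "https://api.deepseek.com/v1",
    "OPENAI_MODEL": "deepseek-v4-flash",
})
```

Because DeepSeek exposes an OpenAI-compatible API, the same two variables are all that changes between providers; no separate client code path is needed.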