Add DeepSeek V4 flash/pro support and DeepSeek thinking compatibility (#877)

* Add DeepSeek V4 support and thinking compatibility

* Fix DeepSeek profile persistence regression

* Align multi-model handling with openai-multi-model
Authored by JATMN on 2026-04-24 11:29:46 -07:00, committed by GitHub
parent c4cb98a4f0
commit ff2a380723
15 changed files with 356 additions and 31 deletions


@@ -145,6 +145,11 @@ ANTHROPIC_API_KEY=sk-ant-your-key-here
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_API_KEY=sk-your-key-here
# OPENAI_MODEL=gpt-4o
# For DeepSeek, set:
# OPENAI_BASE_URL=https://api.deepseek.com/v1
# OPENAI_MODEL=deepseek-v4-flash
# Optional: OPENAI_MODEL=deepseek-v4-pro
# Legacy aliases also work: deepseek-chat and deepseek-reasoner
# Use a custom OpenAI-compatible endpoint (optional; defaults to api.openai.com)
# OPENAI_BASE_URL=https://api.openai.com/v1
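The diff above documents the switch as commented-out `.env` entries. As a quick sketch, the same settings can also be exported directly in a shell session before launching the tool; the API key below is a placeholder, and the model names are the ones listed in the diff:

```shell
# Route the OpenAI-compatible client at DeepSeek instead of api.openai.com.
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=https://api.deepseek.com/v1
# Pick one model: deepseek-v4-flash, deepseek-v4-pro,
# or the legacy aliases deepseek-chat / deepseek-reasoner.
export OPENAI_MODEL=deepseek-v4-flash
# Placeholder key; substitute your real DeepSeek API key.
export OPENAI_API_KEY=sk-your-key-here
```

Exports take precedence over the `.env` file for the current session only, which makes this convenient for trying a model before committing it to the config.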