fix: make OpenAI fallback context window configurable + support external model lookup (#861)
* fix: make OpenAI fallback context window configurable and support external lookup table

  Unknown OpenAI-compatible models fell back to a hardcoded 128k constant, causing auto-compact to fire prematurely on models with larger windows (issue #635 follow-up). Three escape hatches are added without touching the built-in table:

  - CLAUDE_CODE_OPENAI_FALLBACK_CONTEXT_WINDOW (number): overrides the 128k default for all unknown models.
  - CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS (JSON object): per-model overrides that take precedence over the built-in OPENAI_CONTEXT_WINDOWS table; supports the same provider-qualified and prefix-matching lookup as the built-in path.
  - CLAUDE_CODE_OPENAI_MAX_OUTPUT_TOKENS (JSON object): the same pattern for output token limits.

  This lets operators deploy new or private models without patching openaiContextWindows.ts on every model release.

* docs: add new OpenAI context window env vars to .env.example

  Document CLAUDE_CODE_OPENAI_FALLBACK_CONTEXT_WINDOW, CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS, and CLAUDE_CODE_OPENAI_MAX_OUTPUT_TOKENS with usage examples. Addresses reviewer feedback on PR #861.

---------

Co-authored-by: opencode <dev@example.com>
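The lookup order described above (per-model env override, then the built-in table with provider-qualified and prefix matching, then the configurable fallback) can be sketched roughly as follows. This is a minimal illustration, not the actual code: `resolveContextWindow`, `prefixMatch`, and `BUILT_IN_WINDOWS` are hypothetical names, and the real openaiContextWindows.ts may structure its matching differently.

```typescript
// Illustrative sketch of the resolution order:
// env per-model override -> built-in table -> configurable fallback.
const BUILT_IN_WINDOWS: Record<string, number> = {
  "gpt-4o": 128000,
  "gpt-4o-mini": 128000,
};

const DEFAULT_FALLBACK = 128000;

// Strip a provider qualifier ("openai/gpt-4o" -> "gpt-4o") and match
// dated variants by prefix ("gpt-4o-2024-08-06" resolves via "gpt-4o").
function prefixMatch(
  model: string,
  table: Record<string, number>
): number | undefined {
  const bare = model.includes("/") ? model.split("/").pop()! : model;
  for (const key of Object.keys(table)) {
    if (bare === key || bare.startsWith(key + "-")) return table[key];
  }
  return undefined;
}

function resolveContextWindow(model: string): number {
  const overrides: Record<string, number> = JSON.parse(
    process.env.CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS ?? "{}"
  );
  const fallbackEnv = process.env.CLAUDE_CODE_OPENAI_FALLBACK_CONTEXT_WINDOW;
  const fallback = fallbackEnv ? Number(fallbackEnv) : DEFAULT_FALLBACK;
  return (
    overrides[model] ??
    prefixMatch(model, overrides) ??
    BUILT_IN_WINDOWS[model] ??
    prefixMatch(model, BUILT_IN_WINDOWS) ??
    fallback
  );
}
```

With this ordering, an operator-supplied JSON map always wins over the shipped table, and the fallback variable only matters when no entry matches at all.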
.env.example: 17 additions
@@ -149,6 +149,23 @@ ANTHROPIC_API_KEY=sk-ant-your-key-here
# Use a custom OpenAI-compatible endpoint (optional; defaults to api.openai.com)
# OPENAI_BASE_URL=https://api.openai.com/v1

# Fallback context window size (tokens) when the model is not found in the
# built-in table (default: 128000). Increase this for models with larger
# context windows (e.g. 200000 for Claude-sized contexts).
# CLAUDE_CODE_OPENAI_FALLBACK_CONTEXT_WINDOW=128000

# Per-model context window overrides as a JSON object.
# Takes precedence over the built-in table, so you can register new or
# custom models without patching source.
# Example: CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS={"my-corp/llm-v3":262144,"gpt-4o-mini":128000}
# CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS=

# Per-model maximum output token overrides as a JSON object.
# Use this alongside CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS when your model
# supports a different output limit than what the built-in table specifies.
# Example: CLAUDE_CODE_OPENAI_MAX_OUTPUT_TOKENS={"my-corp/llm-v3":8192}
# CLAUDE_CODE_OPENAI_MAX_OUTPUT_TOKENS=


# -----------------------------------------------------------------------------
# Option 3: Google Gemini
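Since the new variables hold JSON, a malformed value set by an operator could otherwise break model resolution at startup. A defensive parse along the following lines would keep the built-in behavior when the value is invalid; `readJsonEnv` is a hypothetical helper sketched for illustration, not part of the actual codebase.

```typescript
// Hypothetical helper: parse a JSON-object env var into a name -> number
// map, falling back to an empty map (i.e. no overrides) when the value
// is missing, malformed, or not an object.
function readJsonEnv(name: string): Record<string, number> {
  const raw = process.env[name];
  if (!raw) return {};
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed !== "object" || parsed === null || Array.isArray(parsed)) {
      return {};
    }
    // Keep only positive numeric entries so a stray string or negative
    // value cannot poison later context-window arithmetic.
    return Object.fromEntries(
      Object.entries(parsed).filter(
        ([, v]) => typeof v === "number" && v > 0
      )
    ) as Record<string, number>;
  } catch {
    return {};
  }
}
```

Whether invalid input should be silently ignored or logged as a warning is a design choice; silent fallback matches the "escape hatch" intent but can hide typos in deployment configs.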