feat(api): compress old tool_result content for small-context providers (#801)
* feat(api): compress old tool_result content for small-context providers

  Adds a shim-layer pass that tiers tool_result content by age on providers with small effective context windows (Copilot gpt-4o 128k, Mistral, Ollama). Recent turns remain full; mid-tier results are truncated to 2k chars; older results are replaced with a stub that preserves the tool name and arguments so the model can re-invoke the tool if needed. Tier sizes auto-tune via getEffectiveContextWindowSize, the same calculation used by auto-compact. Reuses COMPACTABLE_TOOLS and TOOL_RESULT_CLEARED_MESSAGE to complement (not duplicate) microCompact. Configurable via /config toolHistoryCompressionEnabled.

  Addresses active-session context accumulation on Copilot, where microCompact's time-based trigger never fires; this surfaces as "tools appearing in a loop" and prompt_too_long errors after ~15 turns.

* fix: config tool history
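The tiering described above can be sketched as follows. This is a minimal illustration, not the actual compressToolHistory implementation: the function name suffix, the tier sizes, the message/block shapes, and the stub text are all assumptions (the real pass derives tier sizes from getEffectiveContextWindowSize and uses TOOL_RESULT_CLEARED_MESSAGE); only the 2k-char mid-tier cap comes from the commit message.

```typescript
// Hypothetical sketch of the age-tiering pass (names and tier sizes assumed).
// Walk messages newest-to-oldest, counting tool_result blocks, and degrade
// older results in two steps: truncate, then replace with a stub.

type ToolResultBlock = {
  type: 'tool_result'
  tool_use_id: string
  content: string
}

type Message = { role: string; content: string | ToolResultBlock[] }

const RECENT_FULL = 3      // assumed: last N tool results kept intact
const MID_TIER = 6         // assumed: next M results truncated
const MID_MAX_CHARS = 2000 // from the commit message: mid-tier 2k-char cap
const CLEARED_STUB = '[tool result cleared - re-invoke the tool if needed]'

function compressToolHistorySketch(messages: Message[]): Message[] {
  let seen = 0
  // Reverse so the tier counter runs from newest to oldest.
  const out = [...messages].reverse().map((msg) => {
    if (!Array.isArray(msg.content)) return msg
    const content = msg.content.map((block) => {
      if (block.type !== 'tool_result') return block
      seen += 1
      if (seen <= RECENT_FULL) return block // recent turns stay full
      if (seen <= RECENT_FULL + MID_TIER) {
        // mid-tier: cap long results
        return { ...block, content: block.content.slice(0, MID_MAX_CHARS) }
      }
      // oldest tier: stub out the payload; the tool name and arguments live
      // in the paired tool_use block, so the model can still re-invoke.
      return { ...block, content: CLEARED_STUB }
    })
    return { ...msg, content }
  })
  return out.reverse()
}
```

The key property is that compression is monotone in age: a result never gets *more* detailed as it ages, so the pass is safe to re-run on every request without tracking prior state.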
@@ -46,6 +46,7 @@ import {
 	type AnthropicUsage,
 	type ShimCreateParams,
 } from './codexShim.js'
+import { compressToolHistory } from './compressToolHistory.js'
 import { fetchWithProxyRetry } from './fetchWithProxyRetry.js'
 import {
 	getLocalProviderRetryBaseUrls,
@@ -1299,14 +1300,15 @@ class OpenAIShimMessages {
 		params: ShimCreateParams,
 		options?: { signal?: AbortSignal; headers?: Record<string, string> },
 	): Promise<Response> {
-		const openaiMessages = convertMessages(
+		const compressedMessages = compressToolHistory(
+			params.messages as Array<{
+				role: string
+				message?: { role?: string; content?: unknown }
+				content?: unknown
+			}>,
+			params.system,
+			request.resolvedModel,
+		)
+		const openaiMessages = convertMessages(compressedMessages, params.system)

 		const body: Record<string, unknown> = {
 			model: request.resolvedModel,