gnanam1990 cb86f73c06 fix: prevent duplicate responses in OpenAI streaming
When certain OpenAI-compatible APIs (LM Studio, some proxies) send
multiple stream chunks with finish_reason set, the finish block ran
multiple times — emitting content_block_stop and message_delta for
each one. Each content_block_stop caused claude.ts to create and yield
a new assistant message, making every response appear twice in the UI.

Fix: add hasProcessedFinishReason flag (same pattern as the existing
hasEmittedFinalUsage flag) so the finish block only executes once per
response regardless of how many chunks contain finish_reason.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 18:14:41 +05:30
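The guard-flag pattern the commit describes can be sketched as follows. This is a hypothetical illustration, not the actual orcs-code implementation: the names `StreamTranslator`, `handleChunk`, and `events` are invented for the example; only `hasProcessedFinishReason`, `content_block_stop`, and `message_delta` come from the commit message.

```typescript
// Sketch of the once-only finish guard described in the commit message.
// Shape of an OpenAI-compatible stream chunk, reduced to the fields used here.
type StreamChunk = { content?: string; finish_reason?: string | null };

class StreamTranslator {
  // Set the first time a chunk carries finish_reason, so the closing
  // events are emitted exactly once even if later chunks repeat it
  // (as LM Studio and some proxies reportedly do).
  private hasProcessedFinishReason = false;
  readonly events: string[] = [];

  handleChunk(chunk: StreamChunk): void {
    if (chunk.content) {
      this.events.push(`content_block_delta:${chunk.content}`);
    }
    if (chunk.finish_reason && !this.hasProcessedFinishReason) {
      this.hasProcessedFinishReason = true;
      this.events.push("content_block_stop");
      this.events.push("message_delta");
    }
  }
}
```

With this guard, feeding the translator two trailing chunks that both set `finish_reason` still yields a single `content_block_stop` and a single `message_delta`, so the downstream consumer builds only one assistant message.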