DeepSeek V4 Pro (and similar reasoning models reached via OpenRouter) rejects multi-turn requests in thinking mode with:

    400 The `reasoning_content` in the thinking mode must be passed back to the API.

omp's existing Kimi placeholder injection (`requiresReasoningContentForToolCalls`) covered this requirement only for `thinkingFormat == "openai"`. OpenRouter sets `thinkingFormat == "openrouter"`, so the gate never fired, even though the underlying providers behind OpenRouter (DeepSeek, Kimi, etc.) all enforce the same invariant.

This patch:

1. Extends `requiresReasoningContentForToolCalls` detection: any reasoning-capable model fronted by OpenRouter now sets the flag.
2. Extends the placeholder gate in `convertMessages` to accept `thinkingFormat == "openrouter"` alongside `"openai"`.

Cross-provider continuations are the dominant trigger: a conversation warmed up by Anthropic Claude (whose reasoning is redacted/encrypted on the wire) and then switched to DeepSeek V4 Pro via OpenRouter. omp cannot synthesize plaintext `reasoning_content` from Anthropic's encrypted blocks, so the placeholder satisfies DeepSeek's validator without fabricating a reasoning trace. Real captured reasoning, when present, short-circuits the placeholder via `hasReasoningField` and survives intact.

Side benefit: this also closes a latent gap where Kimi-via-OpenRouter (`thinkingFormat == "openrouter"`) had the compat flag set but the placeholder gate silently rejected it.

Applies cleanly on top of patch 0001.
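The flag detection and placeholder gate described above can be sketched roughly as follows. This is a minimal illustration, not omp's actual code: the `Msg` struct and function signatures are assumptions, while the identifiers `requiresReasoningContentForToolCalls`, `convertMessages`, `thinkingFormat`, and `hasReasoningField` come from the patch description.

```go
package main

import "fmt"

// Msg is a simplified stand-in for omp's internal message type
// (the real struct shape is an assumption).
type Msg struct {
	Role             string
	ReasoningContent string // captured reasoning, if any
}

// requiresReasoningContentForToolCalls mirrors the extended detection:
// besides the existing "openai" thinking format, any reasoning-capable
// model fronted by OpenRouter now sets the flag.
func requiresReasoningContentForToolCalls(thinkingFormat string, reasoningCapable bool) bool {
	switch thinkingFormat {
	case "openai", "openrouter": // "openrouter" is the new case
		return reasoningCapable
	default:
		return false
	}
}

// convertMessages injects a placeholder reasoning_content on assistant
// turns that lack one, so validators like DeepSeek's accept the replay
// without omp fabricating a reasoning trace.
func convertMessages(msgs []Msg, thinkingFormat string, reasoningCapable bool) []Msg {
	if !requiresReasoningContentForToolCalls(thinkingFormat, reasoningCapable) {
		return msgs
	}
	out := make([]Msg, len(msgs))
	copy(out, msgs)
	for i := range out {
		// Real captured reasoning short-circuits the placeholder.
		hasReasoningField := out[i].ReasoningContent != ""
		if out[i].Role == "assistant" && !hasReasoningField {
			out[i].ReasoningContent = " " // minimal placeholder
		}
	}
	return out
}

func main() {
	msgs := []Msg{{Role: "assistant"}}
	fixed := convertMessages(msgs, "openrouter", true)
	fmt.Printf("%q\n", fixed[0].ReasoningContent)
}
```

Under this sketch, an Anthropic-warmed conversation replayed through OpenRouter gets the placeholder on reasoning-less assistant turns, while turns carrying real `reasoning_content` pass through untouched.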