Compare commits


18 Commits

Author SHA1 Message Date
4f98023203 update
All checks were successful
Build and Deploy / mreow (push) Successful in 4m2s
Build and Deploy / yarn (push) Successful in 1m4s
Build and Deploy / muffin (push) Successful in 1m11s
2026-04-27 11:40:09 -04:00
bbdc478e84 omp: update patches
All checks were successful
Build and Deploy / mreow (push) Successful in 13m8s
Build and Deploy / yarn (push) Successful in 1m11s
Build and Deploy / muffin (push) Successful in 7m15s
2026-04-27 01:36:08 -04:00
675fc7f805 update
Some checks failed
Build and Deploy / mreow (push) Failing after 5m10s
Build and Deploy / yarn (push) Failing after 1m1s
Build and Deploy / muffin (push) Has been cancelled
2026-04-27 01:27:13 -04:00
141754ca39 ghostty: fix???
All checks were successful
Build and Deploy / mreow (push) Successful in 1m20s
Build and Deploy / yarn (push) Successful in 54s
Build and Deploy / muffin (push) Successful in 1m14s
2026-04-26 01:11:09 -04:00
4b173ef164 jellyfin-qbittorrent-monitor: fix hairpin handling 2026-04-26 01:03:11 -04:00
3201b5726e update
Some checks failed
Build and Deploy / mreow (push) Successful in 1m44s
Build and Deploy / yarn (push) Successful in 1m3s
Build and Deploy / muffin (push) Failing after 27s
2026-04-26 00:12:30 -04:00
3c7bdc0c42 ghostty: colors
Some checks failed
Build and Deploy / mreow (push) Successful in 1m9s
Build and Deploy / yarn (push) Successful in 1m4s
Build and Deploy / muffin (push) Failing after 30s
2026-04-25 22:36:29 -04:00
2ebb7fc90d ghostty: open in home 2026-04-25 22:34:42 -04:00
72320e2332 ghostty: speedup start 2026-04-25 22:31:21 -04:00
b5a94520fe README.md: i don't use KDE anymore 2026-04-25 22:24:36 -04:00
9ee3547d5d ghostty 2026-04-25 22:21:27 -04:00
ce288ccdb0 update
Some checks failed
Build and Deploy / mreow (push) Successful in 8m39s
Build and Deploy / yarn (push) Successful in 1m6s
Build and Deploy / muffin (push) Failing after 34s
2026-04-25 20:22:48 -04:00
da87f82a66 noctalia: disable startup animation 2026-04-25 20:21:44 -04:00
90f2c27c2c DISABLE KMSCON
Some checks failed
Build and Deploy / mreow (push) Successful in 7m39s
Build and Deploy / yarn (push) Successful in 1m5s
Build and Deploy / muffin (push) Failing after 36s
THIS is what caused issues with greetd, nothing kernel related
2026-04-25 19:20:24 -04:00
450b77140b pi: apply omp patches via prePatch (bun2nix.hook overrides patchPhase)
`bun2nix.hook` (used by upstream omp's package.nix) sets

  patchPhase = bunPatchPhase

at the end of its setup-hook unless `dontUseBunPatch` is already set.
`bunPatchPhase` only runs `patchShebangs` plus a HOME mktemp; it never
iterates over `$patches`. The standard nixpkgs `patches` attribute
therefore went into the derivation env but was silently ignored at
build time, leaving the deployed omp binary unpatched.

Switch to applying the two patches via `prePatch` (which `bunPatchPhase`
does call). Verified with strings(1) over the rebuilt binary that both
patch hunks land:

  /wrong_api_format|...|invalid tool parameters/  (patch 0001)
  stubsReasoningContent ... thinkingFormat == "openrouter"  (patch 0002)
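
The workaround boils down to moving patch application out of `patchPhase` (which `bun2nix.hook` clobbers) into `prePatch` (which `bunPatchPhase` still calls). A minimal sketch, where `somePkg` and the patch directory are placeholders, not this repo's actual names:

```nix
# Sketch: bun2nix's setup hook assigns `patchPhase = bunPatchPhase`,
# which never iterates over $patches, so the standard `patches`
# attribute is silently ignored at build time. `bunPatchPhase` does
# run `prePatch`, so apply the patches there instead.
somePkg.overrideAttrs (old: {
  prePatch = (old.prePatch or "") + ''
    for p in ${./patches}/*.patch; do
      patch -p1 < "$p"
    done
  '';
})
```

Per the guard mentioned above, setting `dontUseBunPatch` before the setup hook runs should also stop it from overriding `patchPhase`; this commit opts for `prePatch` instead.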
2026-04-25 19:20:08 -04:00
318373c09c pi: patch omp to require reasoning_content for OpenRouter reasoning models
DeepSeek V4 Pro (and similar reasoning models reached via OpenRouter) reject
multi-turn requests in thinking mode with:

  400 The `reasoning_content` in the thinking mode must be passed back
  to the API.

omp's existing kimi placeholder injection (`requiresReasoningContentForToolCalls`)
covered this requirement only for `thinkingFormat == "openai"`. OpenRouter
sets `thinkingFormat == "openrouter"`, so the gate never fired even though
the underlying providers behind OpenRouter (DeepSeek, Kimi, etc.) all enforce
the same invariant.

This patch:

1. Extends `requiresReasoningContentForToolCalls` detection: any
   reasoning-capable model fronted by OpenRouter now sets the flag.
2. Extends the placeholder gate in `convertMessages` to accept
   `thinkingFormat == "openrouter"` alongside `"openai"`.

Cross-provider continuations are the dominant trigger: a conversation
warmed up by Anthropic Claude (whose reasoning is redacted/encrypted on
the wire) followed by a switch to DeepSeek V4 Pro via OpenRouter. omp
cannot synthesize plaintext `reasoning_content` from Anthropic's
encrypted blocks, so the placeholder satisfies DeepSeek's validator
without fabricating a reasoning trace. Real captured reasoning, when
present, short-circuits the placeholder via `hasReasoningField` and
survives intact.

Side benefit: also closes a latent gap where Kimi-via-OpenRouter
(`thinkingFormat == "openrouter"`) had the compat flag set but the
placeholder gate silently rejected it.

Applies cleanly on top of patch 0001.
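
The two changes can be sketched as a standalone model of the gating logic. The function shapes below are simplified stand-ins (omp's real `detectCompat`/`convertMessages` take full model objects); `"."` is the placeholder value the patch's own tests assert on:

```typescript
// Simplified sketch of the compat flag + placeholder gate; not omp's actual API.
type ThinkingFormat = "openai" | "openrouter" | "zai" | "qwen";

interface Compat {
  thinkingFormat: ThinkingFormat;
  requiresReasoningContentForToolCalls: boolean;
}

// (1) Detection: any reasoning-capable model fronted by OpenRouter sets the flag,
// in addition to the existing Kimi rule.
function detectCompatSketch(opts: {
  isKimiModel: boolean;
  isOpenRouter: boolean;
  reasoning: boolean;
  thinkingFormat: ThinkingFormat;
}): Compat {
  return {
    thinkingFormat: opts.thinkingFormat,
    requiresReasoningContentForToolCalls:
      opts.isKimiModel || (opts.isOpenRouter && opts.reasoning),
  };
}

// (2) Placeholder gate: accept thinkingFormat "openrouter" alongside "openai",
// and only inject when no real reasoning was captured for the turn.
function reasoningContentFor(
  compat: Compat,
  hasToolCalls: boolean,
  hasReasoningField: boolean,
  captured?: string,
): string | undefined {
  if (hasReasoningField) return captured; // real captured reasoning survives intact
  const gateOpen =
    compat.thinkingFormat === "openai" || compat.thinkingFormat === "openrouter";
  if (hasToolCalls && compat.requiresReasoningContentForToolCalls && gateOpen) {
    return "."; // placeholder satisfies the upstream thinking-mode validator
  }
  return undefined;
}
```

With the flag and gate both honoring `"openrouter"`, a Claude-warmed conversation continued on DeepSeek V4 Pro gets the placeholder on prior tool-call turns instead of a 400.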
2026-04-25 19:20:05 -04:00
d55743a9e7 revert: roll back flake.lock pre-update (niri 8ed0da4 black-screens on amdgpu) 2026-04-25 16:21:28 -04:00
8ab4924948 omp: add patch that fixes deepseek 2026-04-25 15:38:39 -04:00
11 changed files with 1051 additions and 201 deletions


@@ -12,11 +12,11 @@ Browser: Firefox 🦊 (actually [Zen Browser](https://github.com/zen-browser/des
 Text Editor: [Doom Emacs](https://github.com/doomemacs/doomemacs)
-Terminal: [alacritty](https://github.com/alacritty/alacritty)
+Terminal: [ghostty](https://ghostty.org/)
 Shell: [fish](https://fishshell.com/) with the [pure](https://github.com/pure-fish/pure) prompt
-WM: [niri](https://github.com/YaLTeR/niri) (KDE on my desktop)
+WM: [niri](https://github.com/YaLTeR/niri)
 ### Background
 - Got my background from [here](https://old.reddit.com/r/celestegame/comments/11dtgwg/all_most_of_the_backgrounds_in_celeste_edited/) and used the command `magick input.png -filter Point -resize 2880x1920! output.png` to upscale it bilinearly

flake.lock generated

@@ -140,11 +140,11 @@
     },
     "crane": {
       "locked": {
-        "lastModified": 1776635034,
-        "narHash": "sha256-OEOJrT3ZfwbChzODfIH4GzlNTtOFuZFWPtW7jIeR8xU=",
+        "lastModified": 1777242778,
+        "narHash": "sha256-VWTeqWeb8Sel/QiWyaPvCa9luAbcGawR+Rw09FJoHz0=",
         "owner": "ipetkov",
         "repo": "crane",
-        "rev": "dc7496d8ea6e526b1254b55d09b966e94673750f",
+        "rev": "ad8b31ad0ba8448bd958d7a5d50d811dc5d271c0",
         "type": "github"
       },
       "original": {
@@ -222,11 +222,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1777138175,
-        "narHash": "sha256-UrexPU1xQ/qB0qCjuTeljQOCDmjeCNuipZMBv3FyoJM=",
+        "lastModified": 1777293736,
+        "narHash": "sha256-/60J4/D2wY0afSPbjMBrfIQ1nYvxT6Aacu1RlOxtuY4=",
         "owner": "nix-community",
         "repo": "emacs-overlay",
-        "rev": "d7d0c87d15148472eef847dfe298095ef4298dc1",
+        "rev": "dd3b17c608252cc107e3df25496132d04c9c0233",
         "type": "github"
       },
       "original": {
@@ -266,11 +266,11 @@
       },
       "locked": {
         "dir": "pkgs/firefox-addons",
-        "lastModified": 1777089773,
-        "narHash": "sha256-ZIlNuebeWTncyl7mcV9VbceSLAaZki+UeXLPQG959xI=",
+        "lastModified": 1777295287,
+        "narHash": "sha256-BkdAlwRrxqFf3PRbfFXr9j2JS+dzsNMme6edRBW4H60=",
         "owner": "rycee",
         "repo": "nur-expressions",
-        "rev": "402ba229617a12d918c2a887a4c83a9a24f9a36c",
+        "rev": "c4ce0a56da1b9a816ef4e5129be136c94ea8b564",
         "type": "gitlab"
       },
       "original": {
@@ -484,11 +484,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1777138498,
-        "narHash": "sha256-mZdL0akv+PiA9h4DXNVGCqUeV5NiODy5lzRWoDsYhtI=",
+        "lastModified": 1777295000,
+        "narHash": "sha256-xzWerLYQG2W+VGJfaZ+8/Puswbok1o8Tix6/6hIW1rY=",
         "owner": "nix-community",
         "repo": "home-manager",
-        "rev": "026e21038902970e54226133e718e8c197fac799",
+        "rev": "b408d49b845167add697937761a89c41c996ac7a",
         "type": "github"
       },
       "original": {
@@ -610,11 +610,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1776797459,
-        "narHash": "sha256-utv296Xwk0PwjONe9dsyKx+9Z5xAB70aAsMI//aakpg=",
+        "lastModified": 1777299656,
+        "narHash": "sha256-c0r3xXp2+xFJwkryS+nhyQwoACbFzSt4C1TVs3QMh8E=",
         "owner": "nix-community",
         "repo": "lanzaboote",
-        "rev": "4eda91dd5abd2157a2c7bfb33142fc64da668b0a",
+        "rev": "079c608988c2747db3902c9de033572cd50e8656",
         "type": "github"
       },
       "original": {
@@ -657,11 +657,11 @@
         "treefmt-nix": "treefmt-nix"
       },
       "locked": {
-        "lastModified": 1777143457,
-        "narHash": "sha256-mGvWYLxSaJwHv2ndcaHj1FrLnRFKqcBEo/lcm+Sz7aQ=",
+        "lastModified": 1777266861,
+        "narHash": "sha256-cdSr2nIz4I+ysG1gAZxbKQo+f79vCCKfQCdiRYnyPec=",
         "owner": "numtide",
         "repo": "llm-agents.nix",
-        "rev": "4aaa2a28b09897b1858eb8db4cb3cf509e95cd14",
+        "rev": "c8f7c7882804510f2b807021cac0a69c1aeb4829",
        "type": "github"
       },
       "original": {
@@ -704,11 +704,11 @@
         "xwayland-satellite-unstable": "xwayland-satellite-unstable"
       },
       "locked": {
-        "lastModified": 1777130270,
-        "narHash": "sha256-AgOIR3O+hLkTe/spgYjp0knc37iy/A5DqGRY+8DP3LE=",
+        "lastModified": 1777240421,
+        "narHash": "sha256-ooPmu+8tqOGh4kozPW4rJC7Y7WM/FHtEY3OK1PoNW7g=",
         "owner": "sodiboo",
         "repo": "niri-flake",
-        "rev": "e43ef13f23c2c7ae5b10e842745cb345faff4f40",
+        "rev": "2bb22af2985e5f3cfd051b3d977ebfbf81126280",
         "type": "github"
       },
       "original": {
@@ -737,11 +737,11 @@
     "niri-unstable": {
       "flake": false,
       "locked": {
-        "lastModified": 1777115961,
-        "narHash": "sha256-ehSMsSpE+0k8r+2Vseu8kangsYxToZv3vinynsDp9zs=",
+        "lastModified": 1777237919,
+        "narHash": "sha256-bZHBzo4EuW/xLzXnnMKsIMdZYqgY2O0mIMdplwDHB8Y=",
         "owner": "YaLTeR",
         "repo": "niri",
-        "rev": "8ed0da44d974c32c6877d2f4630c314da0717ecb",
+        "rev": "a85b922919815c32a3ae34e0838830fe522d6a1c",
         "type": "github"
       },
       "original": {
@@ -761,11 +761,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1777140538,
-        "narHash": "sha256-2y5SwHxTOwEdr8WZv1IGBVoJM47YcomfoxFnZj9TgN0=",
+        "lastModified": 1777227006,
+        "narHash": "sha256-A7GcOXjfo2xmZ3ERgN0j6GcqaVzqIf5zpYQcdfDaMr0=",
         "owner": "xddxdd",
         "repo": "nix-cachyos-kernel",
-        "rev": "ce6083d35e50516dd6eb6156d0cbda67baed9117",
+        "rev": "0f7e2bea4088227a80502557f6c0e3b74949d6b5",
         "type": "github"
       },
       "original": {
@@ -787,11 +787,11 @@
         "systems": "systems_6"
       },
       "locked": {
-        "lastModified": 1776938345,
-        "narHash": "sha256-3/BFiytDNoIXMUQHcJLoxa7JK0Q1/49M0ffOR9pbzvw=",
+        "lastModified": 1777289939,
+        "narHash": "sha256-wZnl3HB88oTME6oL7zGLrhxQmMNeH+QUCpXIfrci1pE=",
         "owner": "marienz",
         "repo": "nix-doom-emacs-unstraightened",
-        "rev": "eb25c754986165e509ad2ab8c6b6729f4a861f0c",
+        "rev": "29e1722cb93ede486b79369e81a9c15d7d7b7a48",
         "type": "github"
       },
       "original": {
@@ -802,11 +802,11 @@
     },
     "nix-flatpak": {
       "locked": {
-        "lastModified": 1776625032,
-        "narHash": "sha256-edvwHiFhgOiwywt6/Iwe+sSn6ybhU3WZGnIoiGcKjfQ=",
+        "lastModified": 1777229239,
+        "narHash": "sha256-OwSaWqlBdKn8QIa7BrPtJmlrr46U7AuwMc/toDKuMZw=",
         "owner": "gmodena",
         "repo": "nix-flatpak",
-        "rev": "479e19f1decb390aa5b75cae13ddf87d763c74cc",
+        "rev": "3f1d78b63b6af353c0685b8a7411c04d980426e4",
         "type": "github"
       },
       "original": {
@@ -937,11 +937,11 @@
     },
     "nixpkgs-stable": {
       "locked": {
-        "lastModified": 1776734388,
-        "narHash": "sha256-vl3dkhlE5gzsItuHoEMVe+DlonsK+0836LIRDnm6MXQ=",
+        "lastModified": 1777077449,
+        "narHash": "sha256-AIiMJiqvGrN4HyLEbKAoCSRRYn0rnlW5VbKNIMIYqm4=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "10e7ad5bbcb421fe07e3a4ad53a634b0cd57ffac",
+        "rev": "a4bf06618f0b5ee50f14ed8f0da77d34ecc19160",
         "type": "github"
       },
       "original": {
@@ -991,11 +991,11 @@
         "noctalia-qs": "noctalia-qs"
       },
       "locked": {
-        "lastModified": 1777079905,
-        "narHash": "sha256-TvYEXwkZnRFQRuFyyqTNSfPnU2tMdhtiBOXSk2AWLJA=",
+        "lastModified": 1777253304,
+        "narHash": "sha256-XqSHEKEW5pSAx9MoMo8mKPgkjoy4FEhZ4x0a6hGYrSI=",
         "owner": "noctalia-dev",
         "repo": "noctalia-shell",
-        "rev": "a50c92167c8d438000270f7eca36f6eea74f388e",
+        "rev": "6773c4750a12c9e9af9c4ce2365e083f1d0d0ad8",
         "type": "github"
       },
       "original": {
@@ -1014,11 +1014,11 @@
         "treefmt-nix": "treefmt-nix_2"
       },
       "locked": {
-        "lastModified": 1776585574,
-        "narHash": "sha256-j35EWhKoGhKrfcXcAOpoRVgXEPQt41Eukji/h59cnjk=",
+        "lastModified": 1777167795,
+        "narHash": "sha256-VHdtmxVX7oF2+FxYQQPARQmtaHw23FoTBiTaH6ucOEg=",
         "owner": "noctalia-dev",
         "repo": "noctalia-qs",
-        "rev": "75d180c28a9ab4470e980f3d6f706ad6c5213add",
+        "rev": "697db4c14e27d841956ff76887fc312443e6fb17",
         "type": "github"
       },
       "original": {
@@ -1037,11 +1037,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1775585728,
-        "narHash": "sha256-8Psjt+TWvE4thRKktJsXfR6PA/fWWsZ04DVaY6PUhr4=",
+        "lastModified": 1776796298,
+        "narHash": "sha256-PcRvlWayisPSjd0UcRQbhG8Oqw78AcPE6x872cPRHN8=",
         "owner": "cachix",
         "repo": "pre-commit-hooks.nix",
-        "rev": "580633fa3fe5fc0379905986543fd7495481913d",
+        "rev": "3cfd774b0a530725a077e17354fbdb87ea1c4aad",
         "type": "github"
       },
       "original": {
@@ -1133,11 +1133,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1777086717,
-        "narHash": "sha256-vEl3cGHRxEFdVNuP9PbrhAWnmU98aPOLGy9/1JXzSuM=",
+        "lastModified": 1777259803,
+        "narHash": "sha256-fIb/EoVu/1U0qVrE6qZCJ2WCfprRpywNIAVzKEACIQc=",
         "owner": "oxalica",
         "repo": "rust-overlay",
-        "rev": "3be56bd430bfd65d3c468a50626c3a601c7dee03",
+        "rev": "a6cb2224d975e16b5e67de688c6ad306f7203425",
         "type": "github"
       },
       "original": {
@@ -1190,11 +1190,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1777000965,
-        "narHash": "sha256-xcrhVgfI13s1WH4hg5MLL83zAp6/htfF8Pjw4RPiKM8=",
+        "lastModified": 1777275019,
+        "narHash": "sha256-bTnyyCZ89TpvSHMEcBqS5PKqbc/lPc0Km8KdbMVKdsw=",
         "owner": "nix-community",
         "repo": "srvos",
-        "rev": "7ae6f096b2ffbd25d17da8a4d0fe299a164c4eac",
+        "rev": "ab8cddb4a783231e99ff868f90512ed744a39a02",
         "type": "github"
       },
       "original": {
@@ -1356,11 +1356,11 @@
     "trackerlist": {
       "flake": false,
       "locked": {
-        "lastModified": 1777068584,
-        "narHash": "sha256-UZr6mQfauhIUo8n3SDYnBWeq11xs5lTAoc9onh2MHBc=",
+        "lastModified": 1777241384,
+        "narHash": "sha256-mzqjBOMvL8951W4qt5VA31rQB+TiOYDRyMXTQ7ScSUY=",
         "owner": "ngosang",
         "repo": "trackerslist",
-        "rev": "747c048c604c8d12b9d20cfccea4800a32382a66",
+        "rev": "50a204edfeb4f5f904a28e20b650966241203edb",
         "type": "github"
       },
       "original": {
@@ -1524,11 +1524,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1777138694,
-        "narHash": "sha256-yjAFuyqQyOtQ5entLYmSRf/1L0kuSDWQndS2QNBLQlc=",
+        "lastModified": 1777269342,
+        "narHash": "sha256-8Wok2HzykE2yc9V3vtMXuBNuV8Yh4+JMdzIET9PghfM=",
         "owner": "0xc000022070",
         "repo": "zen-browser-flake",
-        "rev": "5ceb2bfc5671bfca6b1b363669309d6871043d66",
+        "rev": "3c01a7253335cb590da182eec76862b981b00ad9",
         "type": "github"
       },
       "original": {


@@ -8,8 +8,7 @@
 {
   imports = [
     ./no-gui.nix
-    # ../progs/ghostty.nix
-    ../progs/alacritty.nix
+    ../progs/ghostty.nix
     ../progs/emacs.nix
     # ../progs/trezor.nix # - broken
     ../progs/flatpak.nix


@@ -1,131 +0,0 @@
{ pkgs, ... }:
{
home.sessionVariables = {
TERMINAL = "alacritty";
};
programs.alacritty = {
enable = true;
package = pkgs.alacritty;
settings = {
# some programs can't handle alacritty
env.TERM = "xterm-256color";
window = {
# using a window manager, no decorations needed
decorations = "none";
# semi-transparent
opacity = 0.90;
# padding between the content of the terminal and the edge
padding = {
x = 10;
y = 10;
};
dimensions = {
columns = 80;
lines = 40;
};
};
scrolling = {
history = 1000;
multiplier = 3;
};
font =
let
baseFont = {
family = "JetBrains Mono Nerd Font";
style = "Regular";
};
in
{
size = 12;
normal = baseFont;
bold = baseFont // {
style = "Bold";
};
italic = baseFont // {
style = "Italic";
};
offset.y = 0;
glyph_offset.y = 0;
};
# color scheme
colors =
let
normal = {
black = "0x1b1e28";
red = "0xd0679d";
green = "0x5de4c7";
yellow = "0xfffac2";
blue = "#435c89";
magenta = "0xfcc5e9";
cyan = "0xadd7ff";
white = "0xffffff";
};
bright = {
black = "0xa6accd";
red = normal.red;
green = normal.green;
yellow = normal.yellow;
blue = normal.cyan;
magenta = "0xfae4fc";
cyan = "0x89ddff";
white = normal.white;
};
in
{
inherit normal bright;
primary = {
background = "0x131621";
foreground = bright.black;
};
cursor = {
text = "CellBackground";
cursor = "CellForeground";
};
search =
let
foreground = normal.black;
background = normal.cyan;
in
{
matches = {
inherit foreground background;
};
focused_match = {
inherit foreground background;
};
};
selection = {
text = "CellForeground";
background = "0x303340";
};
vi_mode_cursor = {
text = "CellBackground";
cursor = "CellForeground";
};
};
cursor = {
style = "Underline";
vi_mode_style = "Underline";
};
};
};
}


@@ -1,12 +1,71 @@
-{ pkgs, ... }:
+{ ... }:
 {
   # https://mynixos.com/home-manager/option/programs.ghostty
   programs.ghostty = {
     enable = true;
     enableFishIntegration = true;
+    # custom palette ported verbatim from the previous alacritty config
+    # (poimandres-ish). lives in ~/.config/ghostty/themes/poimandres and is
+    # selected by `theme = "poimandres"` below.
+    themes.poimandres = {
+      palette = [
+        "0=#1b1e28"
+        "1=#d0679d"
+        "2=#5de4c7"
+        "3=#fffac2"
+        "4=#435c89"
+        "5=#fcc5e9"
+        "6=#add7ff"
+        "7=#ffffff"
+        "8=#a6accd"
+        "9=#d0679d"
+        "10=#5de4c7"
+        "11=#fffac2"
+        "12=#add7ff"
+        "13=#fae4fc"
+        "14=#89ddff"
+        "15=#ffffff"
+      ];
+      background = "131621";
+      foreground = "a6accd";
+      cursor-color = "a6accd";
+      cursor-text = "131621";
+      selection-background = "303340";
+      selection-foreground = "a6accd";
+    };
     settings = {
-      theme = "Adventure";
-      background-opacity = 0.7;
+      theme = "poimandres";
+      # font
+      font-family = "JetBrainsMono Nerd Font";
+      font-size = 12;
+      # window
+      window-decoration = false;
+      window-padding-x = 10;
+      window-padding-y = 10;
+      window-width = 80;
+      window-height = 40;
+      # semi-transparent background
+      background-opacity = 0.90;
+      # cursor
+      cursor-style = "underline";
+      # always open new windows at $HOME instead of inheriting whatever cwd the
+      # currently-focused ghostty window has. with gtk-single-instance, the
+      # focused-window inherit rule otherwise sticks the daemon's first cwd to
+      # every subsequent niri Mod+T launch.
+      window-inherit-working-directory = false;
+      working-directory = "home";
+      # keep one daemon alive so subsequent launches (e.g. niri Mod+T) are
+      # instant instead of paying GTK + wgpu init each time. relies on the
+      # dbus-activated systemd user service that the HM module wires up.
+      gtk-single-instance = true;
     };
   };


@@ -32,6 +32,7 @@
     };
     wallpaper = {
       enabled = true;
+      skipStartupTransition = true;
     };
   };
 };


@@ -37,8 +37,13 @@ let
 in
 {
   home.packages = [
+    # `bun2nix.hook` sets `patchPhase = bunPatchPhase`, which only runs `patchShebangs` and
+    # silently ignores the standard `patches` attribute. Apply patches via `prePatch` instead
+    # so they actually take effect. Tracking: nothing upstream yet.
     (inputs.llm-agents.packages.${pkgs.stdenv.hostPlatform.system}.omp.overrideAttrs (old: {
-      patches = (old.patches or [ ]) ++ [ ];
+      prePatch = (old.prePatch or "") + ''
+        patch -p1 < ${../../patches/omp/0001-fix-reasoning_content.patch}
+      '';
     }))
   ];


@@ -58,8 +58,6 @@
   ];
 };
-services.kmscon.enable = true;
-
 environment.systemPackages = with pkgs; [
   doas-sudo-shim
 ];


@@ -0,0 +1,804 @@
From e145b627cffb6907e6bde348f1318f48acba3801 Mon Sep 17 00:00:00 2001
From: sonhyrd <son.hong.do@hyrd.ai>
Date: Mon, 27 Apr 2026 00:00:18 +0700
Subject: [PATCH 1/5] fix(ai/providers): cover opencode-go reasoning tool-call
history
---
.../providers/openai-completions-compat.ts | 12 +++--
.../ai/src/providers/openai-completions.ts | 4 +-
.../ai/test/openai-completions-compat.test.ts | 51 +++++++++++++++----
3 files changed, 49 insertions(+), 18 deletions(-)
diff --git a/packages/ai/src/providers/openai-completions-compat.ts b/packages/ai/src/providers/openai-completions-compat.ts
index 69f4811c8..c777f312b 100644
--- a/packages/ai/src/providers/openai-completions-compat.ts
+++ b/packages/ai/src/providers/openai-completions-compat.ts
@@ -107,12 +107,14 @@ export function detectOpenAICompat(model: Model<"openai-completions">, resolvedB
reasoningContentField: "reasoning_content",
// Backends that 400 follow-up requests when prior assistant tool-call turns lack `reasoning_content`:
// - Kimi: documented invariant on its native API and via OpenCode-Go.
- // - Any reasoning-capable model reached through OpenRouter: DeepSeek V4 Pro and similar enforce
- // this server-side whenever the request is in thinking mode. We can't translate Anthropic's
- // redacted/encrypted reasoning into DeepSeek's plaintext form, so cross-provider continuations
- // rely on a placeholder — see `convertMessages` for the placeholder injection.
+ // - Reasoning-capable models reached through OpenRouter or OpenCode-Go: DeepSeek V4 Pro and
+ // similar enforce this server-side whenever the request is in thinking mode.
+ // We can't translate Anthropic's redacted/encrypted reasoning into DeepSeek's plaintext form, so
+ // cross-provider continuations rely on a placeholder — see `convertMessages` for injection rules.
requiresReasoningContentForToolCalls:
- isKimiModel || ((provider === "openrouter" || baseUrl.includes("openrouter.ai")) && Boolean(model.reasoning)),
+ isKimiModel ||
+ ((provider === "openrouter" || baseUrl.includes("openrouter.ai") || provider === "opencode-go" ||
+ baseUrl.includes("opencode.ai/zen/go")) && Boolean(model.reasoning)),
requiresAssistantContentForToolCalls: isKimiModel,
openRouterRouting: undefined,
vercelGatewayRouting: undefined,
diff --git a/packages/ai/src/providers/openai-completions.ts b/packages/ai/src/providers/openai-completions.ts
index 3785af106..70f2e3b63 100644
--- a/packages/ai/src/providers/openai-completions.ts
+++ b/packages/ai/src/providers/openai-completions.ts
@@ -1213,8 +1213,8 @@ export function convertMessages(
// Inject a `reasoning_content` placeholder on assistant tool-call turns when the backend
// rejects history without it. The compat flag captures the rule:
// - Kimi (native or via OpenCode-Go): chat completion endpoint demands the field.
- // - Reasoning models reached through OpenRouter (e.g. DeepSeek V4 Pro): the underlying
- // provider's thinking-mode validator demands it on every prior assistant turn. omp
+ // - Reasoning models reached through OpenRouter or OpenCode-Go (e.g. DeepSeek V4 Pro):
+ // the upstream thinking-mode validator demands it on every prior assistant turn. omp
// cannot synthesize real reasoning when the conversation was warmed up by another
// provider whose reasoning is redacted/encrypted (Anthropic) or simply absent, so we
// emit a placeholder. Real captured reasoning, when present, is preserved earlier via
diff --git a/packages/ai/test/openai-completions-compat.test.ts b/packages/ai/test/openai-completions-compat.test.ts
index 6fc3ca9af..6d60ba5e4 100644
--- a/packages/ai/test/openai-completions-compat.test.ts
+++ b/packages/ai/test/openai-completions-compat.test.ts
@@ -283,23 +283,59 @@ describe("openai-completions compatibility", () => {
});
describe("kimi model detection via detectCompat", () => {
- function kimiOpenCodeModel(id: string): Model<"openai-completions"> {
+ function openCodeGoModel(id: string, reasoning = true): Model<"openai-completions"> {
return {
...getBundledModel("openai", "gpt-4o-mini"),
api: "openai-completions",
provider: "opencode-go",
baseUrl: "https://opencode.ai/zen/go/v1",
id,
- reasoning: true,
+ reasoning,
};
}
+ function kimiOpenCodeModel(id: string): Model<"openai-completions"> {
+ return openCodeGoModel(id, true);
+ }
+
it("requires reasoning_content for tool calls on kimi-k2.5 (opencode-go)", () => {
const compat = detectCompat(kimiOpenCodeModel("kimi-k2.5"));
expect(compat.requiresReasoningContentForToolCalls).toBe(true);
expect(compat.requiresAssistantContentForToolCalls).toBe(true);
});
+ it("requires reasoning_content for tool calls on reasoning DeepSeek models via opencode-go", () => {
+ const compat = detectCompat(openCodeGoModel("deepseek-v4-pro", true));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(true);
+ expect(compat.requiresAssistantContentForToolCalls).toBe(false);
+ });
+
+ it("injects reasoning_content placeholder for reasoning DeepSeek tool-call turns via opencode-go", () => {
+ const model = openCodeGoModel("deepseek-v4-pro", true);
+ const compat = detectCompat(model);
+ const toolCallMessage: AssistantMessage = {
+ role: "assistant",
+ content: [{ type: "toolCall", id: "call_ds_go", name: "web_search", arguments: { query: "hi" } }],
+ api: model.api,
+ provider: model.provider,
+ model: model.id,
+ usage: {
+ input: 0,
+ output: 0,
+ cacheRead: 0,
+ cacheWrite: 0,
+ totalTokens: 0,
+ cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
+ },
+ stopReason: "toolUse",
+ timestamp: Date.now(),
+ };
+ const messages = convertMessages(model, { messages: [toolCallMessage] }, compat);
+ const assistant = messages.find(m => m.role === "assistant");
+ expect(assistant).toBeDefined();
+ expect(Reflect.get(assistant as object, "reasoning_content")).toBe(".");
+ });
+
it("injects reasoning_content placeholder when assistant with tool calls has no reasoning field", () => {
const model = kimiOpenCodeModel("kimi-k2.5");
const compat = detectCompat(model);
@@ -338,15 +374,8 @@ describe("kimi model detection via detectCompat", () => {
expect((reasoningContent as string).length).toBeGreaterThan(0);
});
- it("does not inject reasoning_content when model is not kimi", () => {
- const model: Model<"openai-completions"> = {
- ...getBundledModel("openai", "gpt-4o-mini"),
- api: "openai-completions",
- provider: "opencode-go",
- baseUrl: "https://opencode.ai/zen/go/v1",
- id: "some-other-model",
- };
- const compat = detectCompat(model);
+ it("does not require reasoning_content when opencode-go model is not reasoning-capable", () => {
+ const compat = detectCompat(openCodeGoModel("some-other-model", false));
expect(compat.requiresReasoningContentForToolCalls).toBe(false);
});
From 70eda0132d7ff48314cbf2dc9560339f0a765d9e Mon Sep 17 00:00:00 2001
From: sonhyrd <son.hong.do@hyrd.ai>
Date: Mon, 27 Apr 2026 00:08:04 +0700
Subject: [PATCH 2/5] fix(ai/providers): generalize opencode reasoning_content
gating
---
.../providers/openai-completions-compat.ts | 14 +-
.../ai/src/providers/openai-completions.ts | 4 +-
.../ai/test/openai-completions-compat.test.ts | 160 ++++++++----------
3 files changed, 82 insertions(+), 96 deletions(-)
diff --git a/packages/ai/src/providers/openai-completions-compat.ts b/packages/ai/src/providers/openai-completions-compat.ts
index c777f312b..b4825a31c 100644
--- a/packages/ai/src/providers/openai-completions-compat.ts
+++ b/packages/ai/src/providers/openai-completions-compat.ts
@@ -54,6 +54,8 @@ export function detectOpenAICompat(model: Model<"openai-completions">, resolvedB
const isKimiModel = model.id.includes("moonshotai/kimi") || /^kimi[-.]/i.test(model.id);
const isAlibaba = provider === "alibaba-coding-plan" || baseUrl.includes("dashscope");
const isQwen = model.id.toLowerCase().includes("qwen");
+ const isOpenRouter = provider === "openrouter" || baseUrl.includes("openrouter.ai");
+ const isOpenCode = provider === "opencode-zen" || provider === "opencode-go" || baseUrl.includes("opencode.ai/zen");
const isNonStandard =
isCerebras ||
@@ -99,22 +101,20 @@ export function detectOpenAICompat(model: Model<"openai-completions">, resolvedB
requiresMistralToolIds: isMistral,
thinkingFormat: isZai
? "zai"
- : provider === "openrouter" || baseUrl.includes("openrouter.ai")
+ : isOpenRouter
? "openrouter"
: isAlibaba || isQwen
? "qwen"
: "openai",
reasoningContentField: "reasoning_content",
// Backends that 400 follow-up requests when prior assistant tool-call turns lack `reasoning_content`:
- // - Kimi: documented invariant on its native API and via OpenCode-Go.
- // - Reasoning-capable models reached through OpenRouter or OpenCode-Go: DeepSeek V4 Pro and
- // similar enforce this server-side whenever the request is in thinking mode.
+ // - Kimi: documented invariant on its native API and via OpenCode.
+ // - Reasoning-capable models reached through OpenRouter or OpenCode (Zen/Go): DeepSeek V4 Pro,
+ // Kimi, and similar models can enforce this server-side whenever the request is in thinking mode.
// We can't translate Anthropic's redacted/encrypted reasoning into DeepSeek's plaintext form, so
// cross-provider continuations rely on a placeholder — see `convertMessages` for injection rules.
requiresReasoningContentForToolCalls:
- isKimiModel ||
- ((provider === "openrouter" || baseUrl.includes("openrouter.ai") || provider === "opencode-go" ||
- baseUrl.includes("opencode.ai/zen/go")) && Boolean(model.reasoning)),
+ isKimiModel || ((isOpenRouter || isOpenCode) && Boolean(model.reasoning)),
requiresAssistantContentForToolCalls: isKimiModel,
openRouterRouting: undefined,
vercelGatewayRouting: undefined,
diff --git a/packages/ai/src/providers/openai-completions.ts b/packages/ai/src/providers/openai-completions.ts
index 70f2e3b63..e25aeffb3 100644
--- a/packages/ai/src/providers/openai-completions.ts
+++ b/packages/ai/src/providers/openai-completions.ts
@@ -1212,8 +1212,8 @@ export function convertMessages(
(assistantMsg as any).reasoning_text !== undefined;
// Inject a `reasoning_content` placeholder on assistant tool-call turns when the backend
// rejects history without it. The compat flag captures the rule:
- // - Kimi (native or via OpenCode-Go): chat completion endpoint demands the field.
- // - Reasoning models reached through OpenRouter or OpenCode-Go (e.g. DeepSeek V4 Pro):
+ // - Kimi (native or via OpenCode Zen/Go): chat completion endpoint demands the field.
+ // - Reasoning models reached through OpenRouter or OpenCode Zen/Go (e.g. DeepSeek V4 Pro):
// the upstream thinking-mode validator demands it on every prior assistant turn. omp
// cannot synthesize real reasoning when the conversation was warmed up by another
// provider whose reasoning is redacted/encrypted (Anthropic) or simply absent, so we
diff --git a/packages/ai/test/openai-completions-compat.test.ts b/packages/ai/test/openai-completions-compat.test.ts
index 6d60ba5e4..c743dd246 100644
--- a/packages/ai/test/openai-completions-compat.test.ts
+++ b/packages/ai/test/openai-completions-compat.test.ts
@@ -282,105 +282,91 @@ describe("openai-completions compatibility", () => {
});
});
-describe("kimi model detection via detectCompat", () => {
- function openCodeGoModel(id: string, reasoning = true): Model<"openai-completions"> {
+describe("opencode reasoning-content compatibility via detectCompat", () => {
+ type OpenCodeProvider = "opencode-go" | "opencode-zen";
+
+ function openCodeModel(provider: OpenCodeProvider, id: string, reasoning = true): Model<"openai-completions"> {
+ const baseUrl = provider === "opencode-go" ? "https://opencode.ai/zen/go/v1" : "https://opencode.ai/zen/v1";
return {
...getBundledModel("openai", "gpt-4o-mini"),
api: "openai-completions",
- provider: "opencode-go",
- baseUrl: "https://opencode.ai/zen/go/v1",
+ provider,
+ baseUrl,
id,
reasoning,
};
}
- function kimiOpenCodeModel(id: string): Model<"openai-completions"> {
- return openCodeGoModel(id, true);
- }
-
- it("requires reasoning_content for tool calls on kimi-k2.5 (opencode-go)", () => {
- const compat = detectCompat(kimiOpenCodeModel("kimi-k2.5"));
- expect(compat.requiresReasoningContentForToolCalls).toBe(true);
- expect(compat.requiresAssistantContentForToolCalls).toBe(true);
- });
-
- it("requires reasoning_content for tool calls on reasoning DeepSeek models via opencode-go", () => {
- const compat = detectCompat(openCodeGoModel("deepseek-v4-pro", true));
- expect(compat.requiresReasoningContentForToolCalls).toBe(true);
- expect(compat.requiresAssistantContentForToolCalls).toBe(false);
- });
-
- it("injects reasoning_content placeholder for reasoning DeepSeek tool-call turns via opencode-go", () => {
- const model = openCodeGoModel("deepseek-v4-pro", true);
- const compat = detectCompat(model);
- const toolCallMessage: AssistantMessage = {
- role: "assistant",
- content: [{ type: "toolCall", id: "call_ds_go", name: "web_search", arguments: { query: "hi" } }],
- api: model.api,
- provider: model.provider,
- model: model.id,
- usage: {
- input: 0,
- output: 0,
- cacheRead: 0,
- cacheWrite: 0,
- totalTokens: 0,
- cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
- },
- stopReason: "toolUse",
- timestamp: Date.now(),
+ it.each(["opencode-go", "opencode-zen"] as const)(
+ "requires reasoning_content for tool calls on kimi-k2.5 via %s",
+ provider => {
+ const compat = detectCompat(openCodeModel(provider, "kimi-k2.5", true));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(true);
+ expect(compat.requiresAssistantContentForToolCalls).toBe(true);
+ },
+ );
+
+ it.each(["opencode-go", "opencode-zen"] as const)(
+ "requires reasoning_content for tool calls on reasoning DeepSeek models via %s",
+ provider => {
+ const compat = detectCompat(openCodeModel(provider, "deepseek-v4-pro", true));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(true);
+ expect(compat.requiresAssistantContentForToolCalls).toBe(false);
+ },
+ );
+
+ it("requires reasoning_content when custom openai provider targets opencode zen baseUrl", () => {
+ const model: Model<"openai-completions"> = {
+ ...getBundledModel("openai", "gpt-4o-mini"),
+ api: "openai-completions",
+ provider: "openai",
+ baseUrl: "https://opencode.ai/zen/v1",
+ id: "deepseek-v4-pro",
+ reasoning: true,
};
- const messages = convertMessages(model, { messages: [toolCallMessage] }, compat);
- const assistant = messages.find(m => m.role === "assistant");
- expect(assistant).toBeDefined();
- expect(Reflect.get(assistant as object, "reasoning_content")).toBe(".");
- });
-
- it("injects reasoning_content placeholder when assistant with tool calls has no reasoning field", () => {
- const model = kimiOpenCodeModel("kimi-k2.5");
const compat = detectCompat(model);
- const toolCallMessage: AssistantMessage = {
- role: "assistant",
- content: [
- // Thinking returned as plain text (as kimi-k2.5 on opencode-go does)
- { type: "text", text: "Let me research this." },
- {
- type: "toolCall",
- id: "call_abc123",
- name: "web_search",
- arguments: { query: "beads gastownhall" },
- },
- ],
- api: model.api,
- provider: model.provider,
- model: model.id,
- usage: {
- input: 0,
- output: 0,
- cacheRead: 0,
- cacheWrite: 0,
- totalTokens: 0,
- cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
- },
- stopReason: "toolUse",
- timestamp: Date.now(),
- };
- const messages = convertMessages(model, { messages: [toolCallMessage] }, compat);
- const assistant = messages.find(m => m.role === "assistant");
- expect(assistant).toBeDefined();
- const reasoningContent = Reflect.get(assistant as object, "reasoning_content");
- expect(reasoningContent).toBeDefined();
- expect(typeof reasoningContent).toBe("string");
- expect((reasoningContent as string).length).toBeGreaterThan(0);
- });
-
- it("does not require reasoning_content when opencode-go model is not reasoning-capable", () => {
- const compat = detectCompat(openCodeGoModel("some-other-model", false));
- expect(compat.requiresReasoningContentForToolCalls).toBe(false);
+ expect(compat.requiresReasoningContentForToolCalls).toBe(true);
});
- it.each(["kimi-k2.5", "kimi-k1.5", "kimi-k2-5"])("matches kimi model id: %s", id => {
- const compat = detectCompat(kimiOpenCodeModel(id));
+ it.each(["opencode-go", "opencode-zen"] as const)(
+ "injects reasoning_content placeholder for reasoning DeepSeek tool-call turns via %s",
+ provider => {
+ const model = openCodeModel(provider, "deepseek-v4-pro", true);
+ const compat = detectCompat(model);
+ const toolCallMessage: AssistantMessage = {
+ role: "assistant",
+ content: [{ type: "toolCall", id: `call_ds_${provider}`, name: "web_search", arguments: { query: "hi" } }],
+ api: model.api,
+ provider: model.provider,
+ model: model.id,
+ usage: {
+ input: 0,
+ output: 0,
+ cacheRead: 0,
+ cacheWrite: 0,
+ totalTokens: 0,
+ cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
+ },
+ stopReason: "toolUse",
+ timestamp: Date.now(),
+ };
+ const messages = convertMessages(model, { messages: [toolCallMessage] }, compat);
+ const assistant = messages.find(m => m.role === "assistant");
+ expect(assistant).toBeDefined();
+ expect(Reflect.get(assistant as object, "reasoning_content")).toBe(".");
+ },
+ );
+
+ it.each(["opencode-go", "opencode-zen"] as const)(
+ "does not require reasoning_content when %s model is not reasoning-capable",
+ provider => {
+ const compat = detectCompat(openCodeModel(provider, "some-other-model", false));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(false);
+ },
+ );
+
+ it.each(["kimi-k2.5", "kimi-k1.5", "kimi-k2-5"])("matches kimi model id pattern via opencode-zen: %s", id => {
+ const compat = detectCompat(openCodeModel("opencode-zen", id, true));
expect(compat.requiresReasoningContentForToolCalls).toBe(true);
});
From 76c1fe9ee083836ecca43900fefc458c8cf4c4fb Mon Sep 17 00:00:00 2001
From: sonhyrd <son.hong.do@hyrd.ai>
Date: Mon, 27 Apr 2026 00:14:27 +0700
Subject: [PATCH 3/5] test(ai): restore non-kimi coverage while adding
opencode-zen cases
---
.../ai/test/openai-completions-compat.test.ts | 215 +++++++++++++-----
1 file changed, 154 insertions(+), 61 deletions(-)
diff --git a/packages/ai/test/openai-completions-compat.test.ts b/packages/ai/test/openai-completions-compat.test.ts
index c743dd246..8b8cef393 100644
--- a/packages/ai/test/openai-completions-compat.test.ts
+++ b/packages/ai/test/openai-completions-compat.test.ts
@@ -282,38 +282,56 @@ describe("openai-completions compatibility", () => {
});
});
-describe("opencode reasoning-content compatibility via detectCompat", () => {
- type OpenCodeProvider = "opencode-go" | "opencode-zen";
+describe("kimi model detection via detectCompat", () => {
+ function openCodeGoModel(id: string, reasoning = true): Model<"openai-completions"> {
+ return {
+ ...getBundledModel("openai", "gpt-4o-mini"),
+ api: "openai-completions",
+ provider: "opencode-go",
+ baseUrl: "https://opencode.ai/zen/go/v1",
+ id,
+ reasoning,
+ };
+ }
- function openCodeModel(provider: OpenCodeProvider, id: string, reasoning = true): Model<"openai-completions"> {
- const baseUrl = provider === "opencode-go" ? "https://opencode.ai/zen/go/v1" : "https://opencode.ai/zen/v1";
+ function openCodeZenModel(id: string, reasoning = true): Model<"openai-completions"> {
return {
...getBundledModel("openai", "gpt-4o-mini"),
api: "openai-completions",
- provider,
- baseUrl,
+ provider: "opencode-zen",
+ baseUrl: "https://opencode.ai/zen/v1",
id,
reasoning,
};
}
- it.each(["opencode-go", "opencode-zen"] as const)(
- "requires reasoning_content for tool calls on kimi-k2.5 via %s",
- provider => {
- const compat = detectCompat(openCodeModel(provider, "kimi-k2.5", true));
- expect(compat.requiresReasoningContentForToolCalls).toBe(true);
- expect(compat.requiresAssistantContentForToolCalls).toBe(true);
- },
- );
-
- it.each(["opencode-go", "opencode-zen"] as const)(
- "requires reasoning_content for tool calls on reasoning DeepSeek models via %s",
- provider => {
- const compat = detectCompat(openCodeModel(provider, "deepseek-v4-pro", true));
- expect(compat.requiresReasoningContentForToolCalls).toBe(true);
- expect(compat.requiresAssistantContentForToolCalls).toBe(false);
- },
- );
+ function kimiOpenCodeModel(id: string): Model<"openai-completions"> {
+ return openCodeGoModel(id, true);
+ }
+
+ it("requires reasoning_content for tool calls on kimi-k2.5 (opencode-go)", () => {
+ const compat = detectCompat(kimiOpenCodeModel("kimi-k2.5"));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(true);
+ expect(compat.requiresAssistantContentForToolCalls).toBe(true);
+ });
+
+ it("requires reasoning_content for tool calls on kimi-k2.5 (opencode-zen)", () => {
+ const compat = detectCompat(openCodeZenModel("kimi-k2.5", true));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(true);
+ expect(compat.requiresAssistantContentForToolCalls).toBe(true);
+ });
+
+ it("requires reasoning_content for tool calls on reasoning DeepSeek models via opencode-go", () => {
+ const compat = detectCompat(openCodeGoModel("deepseek-v4-pro", true));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(true);
+ expect(compat.requiresAssistantContentForToolCalls).toBe(false);
+ });
+
+ it("requires reasoning_content for tool calls on reasoning DeepSeek models via opencode-zen", () => {
+ const compat = detectCompat(openCodeZenModel("deepseek-v4-pro", true));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(true);
+ expect(compat.requiresAssistantContentForToolCalls).toBe(false);
+ });
it("requires reasoning_content when custom openai provider targets opencode zen baseUrl", () => {
const model: Model<"openai-completions"> = {
@@ -328,45 +346,120 @@ describe("opencode reasoning-content compatibility via detectCompat", () => {
expect(compat.requiresReasoningContentForToolCalls).toBe(true);
});
- it.each(["opencode-go", "opencode-zen"] as const)(
- "injects reasoning_content placeholder for reasoning DeepSeek tool-call turns via %s",
- provider => {
- const model = openCodeModel(provider, "deepseek-v4-pro", true);
- const compat = detectCompat(model);
- const toolCallMessage: AssistantMessage = {
- role: "assistant",
- content: [{ type: "toolCall", id: `call_ds_${provider}`, name: "web_search", arguments: { query: "hi" } }],
- api: model.api,
- provider: model.provider,
- model: model.id,
- usage: {
- input: 0,
- output: 0,
- cacheRead: 0,
- cacheWrite: 0,
- totalTokens: 0,
- cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
+ it("injects reasoning_content placeholder for reasoning DeepSeek tool-call turns via opencode-go", () => {
+ const model = openCodeGoModel("deepseek-v4-pro", true);
+ const compat = detectCompat(model);
+ const toolCallMessage: AssistantMessage = {
+ role: "assistant",
+ content: [{ type: "toolCall", id: "call_ds_go", name: "web_search", arguments: { query: "hi" } }],
+ api: model.api,
+ provider: model.provider,
+ model: model.id,
+ usage: {
+ input: 0,
+ output: 0,
+ cacheRead: 0,
+ cacheWrite: 0,
+ totalTokens: 0,
+ cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
+ },
+ stopReason: "toolUse",
+ timestamp: Date.now(),
+ };
+ const messages = convertMessages(model, { messages: [toolCallMessage] }, compat);
+ const assistant = messages.find(m => m.role === "assistant");
+ expect(assistant).toBeDefined();
+ expect(Reflect.get(assistant as object, "reasoning_content")).toBe(".");
+ });
+
+ it("injects reasoning_content placeholder for reasoning DeepSeek tool-call turns via opencode-zen", () => {
+ const model = openCodeZenModel("deepseek-v4-pro", true);
+ const compat = detectCompat(model);
+ const toolCallMessage: AssistantMessage = {
+ role: "assistant",
+ content: [{ type: "toolCall", id: "call_ds_zen", name: "web_search", arguments: { query: "hi" } }],
+ api: model.api,
+ provider: model.provider,
+ model: model.id,
+ usage: {
+ input: 0,
+ output: 0,
+ cacheRead: 0,
+ cacheWrite: 0,
+ totalTokens: 0,
+ cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
+ },
+ stopReason: "toolUse",
+ timestamp: Date.now(),
+ };
+ const messages = convertMessages(model, { messages: [toolCallMessage] }, compat);
+ const assistant = messages.find(m => m.role === "assistant");
+ expect(assistant).toBeDefined();
+ expect(Reflect.get(assistant as object, "reasoning_content")).toBe(".");
+ });
+
+ it("injects reasoning_content placeholder when assistant with tool calls has no reasoning field", () => {
+ const model = kimiOpenCodeModel("kimi-k2.5");
+ const compat = detectCompat(model);
+ const toolCallMessage: AssistantMessage = {
+ role: "assistant",
+ content: [
+ // Thinking returned as plain text (as kimi-k2.5 on opencode-go does)
+ { type: "text", text: "Let me research this." },
+ {
+ type: "toolCall",
+ id: "call_abc123",
+ name: "web_search",
+ arguments: { query: "beads gastownhall" },
},
- stopReason: "toolUse",
- timestamp: Date.now(),
- };
- const messages = convertMessages(model, { messages: [toolCallMessage] }, compat);
- const assistant = messages.find(m => m.role === "assistant");
- expect(assistant).toBeDefined();
- expect(Reflect.get(assistant as object, "reasoning_content")).toBe(".");
- },
- );
-
- it.each(["opencode-go", "opencode-zen"] as const)(
- "does not require reasoning_content when %s model is not reasoning-capable",
- provider => {
- const compat = detectCompat(openCodeModel(provider, "some-other-model", false));
- expect(compat.requiresReasoningContentForToolCalls).toBe(false);
- },
- );
-
- it.each(["kimi-k2.5", "kimi-k1.5", "kimi-k2-5"])("matches kimi model id pattern via opencode-zen: %s", id => {
- const compat = detectCompat(openCodeModel("opencode-zen", id, true));
+ ],
+ api: model.api,
+ provider: model.provider,
+ model: model.id,
+ usage: {
+ input: 0,
+ output: 0,
+ cacheRead: 0,
+ cacheWrite: 0,
+ totalTokens: 0,
+ cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
+ },
+ stopReason: "toolUse",
+ timestamp: Date.now(),
+ };
+ const messages = convertMessages(model, { messages: [toolCallMessage] }, compat);
+ const assistant = messages.find(m => m.role === "assistant");
+ expect(assistant).toBeDefined();
+ const reasoningContent = Reflect.get(assistant as object, "reasoning_content");
+ expect(reasoningContent).toBeDefined();
+ expect(typeof reasoningContent).toBe("string");
+ expect((reasoningContent as string).length).toBeGreaterThan(0);
+ });
+
+ it("does not inject reasoning_content when model is not kimi", () => {
+ const model: Model<"openai-completions"> = {
+ ...getBundledModel("openai", "gpt-4o-mini"),
+ api: "openai-completions",
+ provider: "opencode-go",
+ baseUrl: "https://opencode.ai/zen/go/v1",
+ id: "some-other-model",
+ };
+ const compat = detectCompat(model);
+ expect(compat.requiresReasoningContentForToolCalls).toBe(false);
+ });
+
+ it("does not require reasoning_content when opencode-go model is not reasoning-capable", () => {
+ const compat = detectCompat(openCodeGoModel("some-other-model", false));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(false);
+ });
+
+ it("does not require reasoning_content when opencode-zen model is not reasoning-capable", () => {
+ const compat = detectCompat(openCodeZenModel("some-other-model", false));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(false);
+ });
+
+ it.each(["kimi-k2.5", "kimi-k1.5", "kimi-k2-5"])("matches kimi model id: %s", id => {
+ const compat = detectCompat(kimiOpenCodeModel(id));
expect(compat.requiresReasoningContentForToolCalls).toBe(true);
});
From 9c7a8958c682b16990504500551827320508087d Mon Sep 17 00:00:00 2001
From: sonhyrd <son.hong.do@hyrd.ai>
Date: Mon, 27 Apr 2026 00:29:48 +0700
Subject: [PATCH 4/5] fix(ai/providers): gate reasoning_content stubs on
deepseek models
---
.../providers/openai-completions-compat.ts | 7 ++--
.../ai/src/providers/openai-completions.ts | 4 +--
.../ai/test/openai-completions-compat.test.ts | 36 +++++++++++++++++++
3 files changed, 42 insertions(+), 5 deletions(-)
diff --git a/packages/ai/src/providers/openai-completions-compat.ts b/packages/ai/src/providers/openai-completions-compat.ts
index b4825a31c..bba1cef70 100644
--- a/packages/ai/src/providers/openai-completions-compat.ts
+++ b/packages/ai/src/providers/openai-completions-compat.ts
@@ -54,6 +54,7 @@ export function detectOpenAICompat(model: Model<"openai-completions">, resolvedB
const isKimiModel = model.id.includes("moonshotai/kimi") || /^kimi[-.]/i.test(model.id);
const isAlibaba = provider === "alibaba-coding-plan" || baseUrl.includes("dashscope");
const isQwen = model.id.toLowerCase().includes("qwen");
+ const isDeepSeekModel = model.id.toLowerCase().includes("deepseek");
const isOpenRouter = provider === "openrouter" || baseUrl.includes("openrouter.ai");
const isOpenCode = provider === "opencode-zen" || provider === "opencode-go" || baseUrl.includes("opencode.ai/zen");
@@ -109,12 +110,12 @@ export function detectOpenAICompat(model: Model<"openai-completions">, resolvedB
reasoningContentField: "reasoning_content",
// Backends that 400 follow-up requests when prior assistant tool-call turns lack `reasoning_content`:
// - Kimi: documented invariant on its native API and via OpenCode.
- // - Reasoning-capable models reached through OpenRouter or OpenCode (Zen/Go): DeepSeek V4 Pro,
- // Kimi, and similar models can enforce this server-side whenever the request is in thinking mode.
+ // - DeepSeek reasoning models reached through OpenRouter or OpenCode (Zen/Go): enforced when
+ // thinking mode is enabled on those model families.
// We can't translate Anthropic's redacted/encrypted reasoning into DeepSeek's plaintext form, so
// cross-provider continuations rely on a placeholder — see `convertMessages` for injection rules.
requiresReasoningContentForToolCalls:
- isKimiModel || ((isOpenRouter || isOpenCode) && Boolean(model.reasoning)),
+ isKimiModel || (isDeepSeekModel && (isOpenRouter || isOpenCode) && Boolean(model.reasoning)),
requiresAssistantContentForToolCalls: isKimiModel,
openRouterRouting: undefined,
vercelGatewayRouting: undefined,
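The gate this hunk lands reduces to a single predicate. As a hedged sketch (Python for illustration; the flat helper name and signature are mine, but the string checks mirror the TypeScript flags above):

```python
import re

def requires_reasoning_content(model_id: str, provider: str, base_url: str, reasoning: bool) -> bool:
    """Illustrative rendition of detectOpenAICompat's gate; not the real helper."""
    is_kimi = "moonshotai/kimi" in model_id or re.match(r"kimi[-.]", model_id, re.IGNORECASE) is not None
    is_deepseek = "deepseek" in model_id.lower()
    is_openrouter = provider == "openrouter" or "openrouter.ai" in base_url
    is_opencode = provider in ("opencode-zen", "opencode-go") or "opencode.ai/zen" in base_url
    # Kimi always needs the field; DeepSeek only behind OpenRouter/OpenCode in thinking mode.
    return is_kimi or (is_deepseek and (is_openrouter or is_opencode) and reasoning)
```

Note the baseUrl fallback: a custom `openai` provider pointed at an OpenCode Zen URL still trips the gate, which is exactly what the compat tests below patch 4 exercise.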
diff --git a/packages/ai/src/providers/openai-completions.ts b/packages/ai/src/providers/openai-completions.ts
index e25aeffb3..89a997a0f 100644
--- a/packages/ai/src/providers/openai-completions.ts
+++ b/packages/ai/src/providers/openai-completions.ts
@@ -1213,8 +1213,8 @@ export function convertMessages(
// Inject a `reasoning_content` placeholder on assistant tool-call turns when the backend
// rejects history without it. The compat flag captures the rule:
// - Kimi (native or via OpenCode Zen/Go): chat completion endpoint demands the field.
- // - Reasoning models reached through OpenRouter or OpenCode Zen/Go (e.g. DeepSeek V4 Pro):
- // the upstream thinking-mode validator demands it on every prior assistant turn. omp
+ // - DeepSeek reasoning models reached through OpenRouter or OpenCode Zen/Go: the upstream
+ // thinking-mode validator demands it on every prior assistant turn. omp
// cannot synthesize real reasoning when the conversation was warmed up by another
// provider whose reasoning is redacted/encrypted (Anthropic) or simply absent, so we
// emit a placeholder. Real captured reasoning, when present, is preserved earlier via
diff --git a/packages/ai/test/openai-completions-compat.test.ts b/packages/ai/test/openai-completions-compat.test.ts
index 8b8cef393..c083c2151 100644
--- a/packages/ai/test/openai-completions-compat.test.ts
+++ b/packages/ai/test/openai-completions-compat.test.ts
@@ -333,6 +333,29 @@ describe("kimi model detection via detectCompat", () => {
expect(compat.requiresAssistantContentForToolCalls).toBe(false);
});
+ it("does not require reasoning_content for non-DeepSeek reasoning models via opencode-go", () => {
+ const compat = detectCompat(openCodeGoModel("glm-5", true));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(false);
+ });
+
+ it("does not require reasoning_content for non-DeepSeek reasoning models via opencode-zen", () => {
+ const compat = detectCompat(openCodeZenModel("glm-5", true));
+ expect(compat.requiresReasoningContentForToolCalls).toBe(false);
+ });
+
+ it("does not require reasoning_content when custom openai provider targets opencode zen baseUrl with non-DeepSeek model", () => {
+ const model: Model<"openai-completions"> = {
+ ...getBundledModel("openai", "gpt-4o-mini"),
+ api: "openai-completions",
+ provider: "openai",
+ baseUrl: "https://opencode.ai/zen/v1",
+ id: "glm-5",
+ reasoning: true,
+ };
+ const compat = detectCompat(model);
+ expect(compat.requiresReasoningContentForToolCalls).toBe(false);
+ });
+
it("requires reasoning_content when custom openai provider targets opencode zen baseUrl", () => {
const model: Model<"openai-completions"> = {
...getBundledModel("openai", "gpt-4o-mini"),
@@ -453,6 +476,19 @@ describe("kimi model detection via detectCompat", () => {
expect(compat.requiresReasoningContentForToolCalls).toBe(false);
});
+ it("does not require reasoning_content for non-DeepSeek reasoning models via openrouter", () => {
+ const model: Model<"openai-completions"> = {
+ ...getBundledModel("openai", "gpt-4o-mini"),
+ api: "openai-completions",
+ provider: "openrouter",
+ baseUrl: "https://openrouter.ai/api/v1",
+ id: "openai/gpt-4.1-mini",
+ reasoning: true,
+ };
+ const compat = detectCompat(model);
+ expect(compat.requiresReasoningContentForToolCalls).toBe(false);
+ });
+
it("does not require reasoning_content when opencode-zen model is not reasoning-capable", () => {
const compat = detectCompat(openCodeZenModel("some-other-model", false));
expect(compat.requiresReasoningContentForToolCalls).toBe(false);
From 53a03286cf658bb4aeab67dad3246b7ba80cf244 Mon Sep 17 00:00:00 2001
From: sonhyrd <son.hong.do@hyrd.ai>
Date: Mon, 27 Apr 2026 00:52:22 +0700
Subject: [PATCH 5/5] fix(ai/providers): set content when reasoning placeholder
is injected
---
packages/ai/src/providers/openai-completions.ts | 3 ++-
packages/ai/test/openai-completions-compat.test.ts | 2 ++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/packages/ai/src/providers/openai-completions.ts b/packages/ai/src/providers/openai-completions.ts
index 89a997a0f..b490e254e 100644
--- a/packages/ai/src/providers/openai-completions.ts
+++ b/packages/ai/src/providers/openai-completions.ts
@@ -1206,7 +1206,7 @@ export function convertMessages(
}
const toolCalls = msg.content.filter(b => b.type === "toolCall") as ToolCall[];
- const hasReasoningField =
+ let hasReasoningField =
(assistantMsg as any).reasoning_content !== undefined ||
(assistantMsg as any).reasoning !== undefined ||
(assistantMsg as any).reasoning_text !== undefined;
@@ -1227,6 +1227,7 @@ export function convertMessages(
if (toolCalls.length > 0 && stubsReasoningContent && !hasReasoningField) {
const reasoningField = compat.reasoningContentField ?? "reasoning_content";
(assistantMsg as any)[reasoningField] = ".";
+ hasReasoningField = true;
}
if (toolCalls.length > 0) {
assistantMsg.tool_calls = toolCalls.map((tc, toolCallIndex) => {
diff --git a/packages/ai/test/openai-completions-compat.test.ts b/packages/ai/test/openai-completions-compat.test.ts
index c083c2151..8efae899a 100644
--- a/packages/ai/test/openai-completions-compat.test.ts
+++ b/packages/ai/test/openai-completions-compat.test.ts
@@ -393,6 +393,7 @@ describe("kimi model detection via detectCompat", () => {
const assistant = messages.find(m => m.role === "assistant");
expect(assistant).toBeDefined();
expect(Reflect.get(assistant as object, "reasoning_content")).toBe(".");
+ expect(Reflect.get(assistant as object, "content")).toBe("");
});
it("injects reasoning_content placeholder for reasoning DeepSeek tool-call turns via opencode-zen", () => {
@@ -419,6 +420,7 @@ describe("kimi model detection via detectCompat", () => {
const assistant = messages.find(m => m.role === "assistant");
expect(assistant).toBeDefined();
expect(Reflect.get(assistant as object, "reasoning_content")).toBe(".");
+ expect(Reflect.get(assistant as object, "content")).toBe("");
});
it("injects reasoning_content placeholder when assistant with tool calls has no reasoning field", () => {

View File

@@ -38,6 +38,7 @@ class JellyfinQBittorrentMonitor:
         stream_bitrate_headroom=1.1,
         webhook_port=0,
         webhook_bind="127.0.0.1",
+        gateway_ip=None,
     ):
         self.jellyfin_url = jellyfin_url
         self.qbittorrent_url = qbittorrent_url
@@ -77,6 +78,15 @@ class JellyfinQBittorrentMonitor:
             ipaddress.ip_network("fe80::/10"),  # IPv6 link-local
         ]
+        # Hairpin marker. When a LAN client reaches Jellyfin via the public
+        # hostname, the router NAT-loopbacks the packet and SNATs the source
+        # to itself — the session arrives looking local but still costs WAN
+        # bandwidth. Sessions whose source equals the gateway must therefore
+        # NOT be skipped. None disables the check (pre-hairpin-aware behavior).
+        if gateway_ip is None:
+            gateway_ip = self._discover_default_gateway()
+        self.gateway_ip = gateway_ip
+
     def is_local_ip(self, ip_address: str) -> bool:
         """Check if an IP address is from a local network"""
         try:
@@ -86,6 +96,39 @@ class JellyfinQBittorrentMonitor:
             logger.warning(f"Invalid IP address format: {ip_address}")
             return True  # Treat invalid IPs as local for safety
 
+    def _discover_default_gateway(self) -> str | None:
+        """Read the IPv4 default gateway from /proc/net/route, or None."""
+        try:
+            with open("/proc/net/route") as f:
+                next(f)  # skip header
+                for line in f:
+                    fields = line.split()
+                    if len(fields) < 8 or fields[1] != "00000000":
+                        continue
+                    flags = int(fields[3], 16)
+                    if not flags & 0x2:  # RTF_GATEWAY
+                        continue
+                    gw_bytes = bytes.fromhex(fields[2])[::-1]  # little-endian
+                    if len(gw_bytes) != 4:
+                        continue
+                    return ".".join(str(b) for b in gw_bytes)
+        except (OSError, ValueError) as e:
+            logger.warning(f"Could not autodetect default gateway: {e}")
+        return None
+
+    def is_skippable(self, ip_address: str) -> bool:
+        """True iff this source IP can be ignored when deciding to throttle.
+
+        Truly LAN-direct sessions are skippable (no WAN cost). Hairpin-NAT'd
+        LAN sessions arrive with the LAN gateway as their source — those still
+        cost WAN bandwidth and must NOT be skipped.
+        """
+        if not self.is_local_ip(ip_address):
+            return False
+        if self.gateway_ip and ip_address == self.gateway_ip:
+            return False
+        return True
+
     def signal_handler(self, signum, frame):
         logger.info("Received shutdown signal, cleaning up...")
         self.running = False
@@ -164,7 +207,7 @@
             if (
                 "NowPlayingItem" in session
                 and not session.get("PlayState", {}).get("IsPaused", True)
-                and not self.is_local_ip(session.get("RemoteEndPoint", ""))
+                and not self.is_skippable(session.get("RemoteEndPoint", ""))
             ):
                 item = session["NowPlayingItem"]
                 item_type = item.get("Type", "").lower()
@@ -354,6 +397,9 @@
         logger.info(f"Default stream bitrate: {self.default_stream_bitrate} bps")
         logger.info(f"Minimum torrent speed: {self.min_torrent_speed} KB/s")
         logger.info(f"Stream bitrate headroom: {self.stream_bitrate_headroom}x")
+        logger.info(
+            f"LAN gateway (hairpin marker): {self.gateway_ip or 'none / autodetect failed'}"
+        )
         if self.webhook_port:
             logger.info(f"Webhook receiver: {self.webhook_bind}:{self.webhook_port}")
@@ -484,6 +530,7 @@ if __name__ == "__main__":
    stream_bitrate_headroom = float(os.getenv("STREAM_BITRATE_HEADROOM", "1.1"))
    webhook_port = int(os.getenv("WEBHOOK_PORT", "0"))
    webhook_bind = os.getenv("WEBHOOK_BIND", "127.0.0.1")
+   gateway_ip = os.getenv("LAN_GATEWAY_IP") or None

    monitor = JellyfinQBittorrentMonitor(
        jellyfin_url=jellyfin_url,
@@ -499,6 +546,7 @@
        stream_bitrate_headroom=stream_bitrate_headroom,
        webhook_port=webhook_port,
        webhook_bind=webhook_bind,
+       gateway_ip=gateway_ip,
    )

    monitor.run()
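The hairpin rule this diff introduces can be exercised standalone. A minimal sketch, assuming a trimmed-down network list and a hard-coded gateway (the real monitor reads the gateway from `/proc/net/route` or `LAN_GATEWAY_IP`; the names here are illustrative):

```python
import ipaddress

# Minimal stand-in for the monitor's local-network table (illustrative only).
LOCAL_NETWORKS = [ipaddress.ip_network("192.168.0.0/16"), ipaddress.ip_network("10.0.0.0/8")]
GATEWAY = "192.168.1.1"  # assumed LAN gateway for this sketch

def is_local_ip(ip: str) -> bool:
    try:
        addr = ipaddress.ip_address(ip)
    except ValueError:
        return True  # mirror the monitor: treat invalid IPs as local for safety
    return any(addr in net for net in LOCAL_NETWORKS)

def is_skippable(ip: str) -> bool:
    # LAN-direct traffic is free; a hairpin'd session arrives with the
    # gateway as its source and still crosses the WAN, so it must throttle.
    return is_local_ip(ip) and ip != GATEWAY

# Decoding a /proc/net/route gateway field: little-endian hex to dotted quad.
gw = ".".join(str(b) for b in bytes.fromhex("0101A8C0")[::-1])  # "192.168.1.1"
```

An ordinary LAN client (e.g. `192.168.1.50`) is skippable; a session whose source equals the gateway is not, even though the address is RFC1918.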

View File

@@ -428,6 +428,73 @@ pkgs.testers.runNixOSTest {
local_playback["PositionTicks"] = 50000000 local_playback["PositionTicks"] = 50000000
server.succeed(f"curl -sf -X POST 'http://localhost:8096/Sessions/Playing/Stopped' -d '{json.dumps(local_playback)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{local_auth}, Token={local_token}'") server.succeed(f"curl -sf -X POST 'http://localhost:8096/Sessions/Playing/Stopped' -d '{json.dumps(local_playback)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{local_auth}, Token={local_token}'")
with subtest("Hairpin'd LAN session (source IP = configured gateway) DOES throttle"):
# Simulates a LAN client reaching Jellyfin via the public hostname:
# the router SNATs the source to itself, so Jellyfin sees the gateway
# IP and IsInLocalNetwork=True even though WAN bandwidth is in play.
# We use 127.0.0.1 as the "gateway" in this VM because the localhost
# curl below produces source 127.0.0.1 from Jellyfin's view.
server.succeed("systemctl stop monitor-test || true")
time.sleep(1)
server.succeed(f"""
systemd-run --unit=monitor-hairpin \
--setenv=JELLYFIN_URL=http://localhost:8096 \
--setenv=JELLYFIN_API_KEY={token} \
--setenv=QBITTORRENT_URL=http://localhost:8080 \
--setenv=CHECK_INTERVAL=1 \
--setenv=STREAMING_START_DELAY=1 \
--setenv=STREAMING_STOP_DELAY=1 \
--setenv=TOTAL_BANDWIDTH_BUDGET=50000000 \
--setenv=SERVICE_BUFFER=2000000 \
--setenv=DEFAULT_STREAM_BITRATE=10000000 \
--setenv=MIN_TORRENT_SPEED=100 \
--setenv=LAN_GATEWAY_IP=127.0.0.1 \
{python} {monitor}
""")
time.sleep(2)
assert not is_throttled(), "Should start unthrottled (no streams yet)"
hairpin_auth = 'MediaBrowser Client="Hairpin Client", DeviceId="hairpin-2222", Device="HairpinDevice", Version="1.0"'
hairpin_auth_result = json.loads(server.succeed(
f"curl -sf -X POST 'http://localhost:8096/Users/AuthenticateByName' -d '@${jfLib.payloads.auth}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{hairpin_auth}'"
))
hairpin_token = hairpin_auth_result["AccessToken"]
hairpin_playback = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-hairpin",
"CanSeek": True,
"IsPaused": False,
}
server.succeed(f"curl -sf -X POST 'http://localhost:8096/Sessions/Playing' -d '{json.dumps(hairpin_playback)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{hairpin_auth}, Token={hairpin_token}'")
time.sleep(3)
assert is_throttled(), "Hairpin'd session (source=gateway) should throttle even though source is RFC1918"
# Cleanup: stop the playback and the override-monitor, restore the normal one.
hairpin_playback["PositionTicks"] = 50000000
server.succeed(f"curl -sf -X POST 'http://localhost:8096/Sessions/Playing/Stopped' -d '{json.dumps(hairpin_playback)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{hairpin_auth}, Token={hairpin_token}'")
time.sleep(2)
assert not is_throttled(), "Should unthrottle after hairpin'd playback stops"
server.succeed("systemctl stop monitor-hairpin || true")
time.sleep(1)
server.succeed(f"""
systemd-run --unit=monitor-test \
--setenv=JELLYFIN_URL=http://localhost:8096 \
--setenv=JELLYFIN_API_KEY={token} \
--setenv=QBITTORRENT_URL=http://localhost:8080 \
--setenv=CHECK_INTERVAL=1 \
--setenv=STREAMING_START_DELAY=1 \
--setenv=STREAMING_STOP_DELAY=1 \
--setenv=TOTAL_BANDWIDTH_BUDGET=50000000 \
--setenv=SERVICE_BUFFER=2000000 \
--setenv=DEFAULT_STREAM_BITRATE=10000000 \
--setenv=MIN_TORRENT_SPEED=100 \
{python} {monitor}
""")
time.sleep(2)
# === WEBHOOK TESTS ===
#
# Configure the Jellyfin Webhook plugin to target the monitor, then verify
@@ -589,7 +656,7 @@ pkgs.testers.runNixOSTest {
server.succeed("systemctl restart jellyfin.service")
server.wait_for_unit("jellyfin.service")
server.wait_for_open_port(8096)
-server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=60)
+server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=180)
# During Jellyfin restart, monitor can't reach Jellyfin
# After restart, sessions are cleared - monitor should eventually unthrottle
@@ -645,7 +712,7 @@ pkgs.testers.runNixOSTest {
server.succeed("systemctl start jellyfin.service")
server.wait_for_unit("jellyfin.service")
server.wait_for_open_port(8096)
-server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=60)
+server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=180)
# After Jellyfin comes back, sessions are gone - should unthrottle
time.sleep(3)
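The restart subtests depend on the monitor recomputing the torrent cap from the same environment knobs used throughout the test (`TOTAL_BANDWIDTH_BUDGET`, `SERVICE_BUFFER`, `DEFAULT_STREAM_BITRATE`, `MIN_TORRENT_SPEED`). One plausible reading of how those knobs combine, sketched for illustration only (the function name and exact formula are assumptions, not the monitor's actual implementation):

```python
def torrent_cap(budget: int, buffer: int, stream_bitrates: list[int],
                default_bitrate: int, min_speed: int) -> int:
    """Hypothetical headroom calculation: reserve a service buffer plus
    each active stream's bitrate (falling back to a default when
    Jellyfin doesn't report one), and never squeeze torrents below the
    configured floor. All values in bits per second."""
    reserved = buffer + sum(b or default_bitrate for b in stream_bitrates)
    return max(budget - reserved, min_speed)

# With the test's numbers: 50 Mb/s budget, 2 Mb/s buffer, one stream
# with no reported bitrate (falls back to the 10 Mb/s default).
print(torrent_cap(50_000_000, 2_000_000, [0], 10_000_000, 100))  # 38000000
```

Under this reading, one unreported stream leaves 38 Mb/s for qBittorrent, and with no streams at all the cap returns to the full 48 Mb/s, which is what `is_throttled()` flips on after each `time.sleep`.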