Compare commits


23 Commits

Author SHA1 Message Date
4bc5d57fa6 jellyfin: restartTriggers on webhook plugin so install runs at activation
The jellyfin-webhook-install oneshot has 'wantedBy = jellyfin.service',
which only runs it when jellyfin (re)starts. On first rollout to a host
where jellyfin is already running, the unit gets added but never fires,
leaving the Webhook plugin files absent -- jellyfin-webhook-configure
then gets 404 from /Plugins/$GUID/Configuration and deploy-rs rolls back.

Pinning jellyfin.restartTriggers to the plugin package + install script
forces a restart whenever either derivation changes, which pulls install
in via the existing before/wantedBy chain.
2026-04-17 22:08:29 -04:00
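The wiring described in this message might look roughly like the following Nix sketch (the attribute names `webhookPlugin` and `webhookInstall` are illustrative placeholders, not the repo's actual identifiers):

```nix
{
  # Restart jellyfin.service whenever either derivation changes; the
  # jellyfin-webhook-install oneshot then fires via its existing
  # before/wantedBy relationship to jellyfin.service.
  systemd.services.jellyfin.restartTriggers = [
    webhookPlugin # plugin package derivation (placeholder name)
    webhookInstall # install script derivation (placeholder name)
  ];

  systemd.services.jellyfin-webhook-install = {
    wantedBy = [ "jellyfin.service" ];
    before = [ "jellyfin.service" ];
    serviceConfig.Type = "oneshot";
    # ... installs the Webhook plugin files ...
  };
}
```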
1403c9d3bc jellyfin-qbittorrent-monitor: add webhook receiver for instant throttling
Some checks failed
Build and Deploy / deploy (push) Failing after 2m9s
2026-04-17 19:47:29 -04:00
48ac68c297 jellyfin: add webhook plugin helper 2026-04-17 19:47:26 -04:00
fc548a137f patches/nixpkgs: add jellyfin declarative network.xml options 2026-04-17 19:47:23 -04:00
9ea45d4558 hardware: tighten mq-deadline read_expire for jellyfin coexistence 2026-04-17 19:47:20 -04:00
cebdd3ea96 arr: fix prowlarrUrl for cross-netns reachability
All checks were successful
Build and Deploy / deploy (push) Successful in 1m47s
Prowlarr runs in the wg VPN namespace; Sonarr/Radarr run in the host
namespace. Configuring the Prowlarr sync with prowlarrUrl=localhost:9696
made Sonarr/Radarr try to connect to their own localhost, where
Prowlarr does not exist — the host netns. Every indexer sync emitted
'Prowlarr URL is invalid' with Connection refused (localhost:9696).

Use vpnNamespaces.wg.namespaceAddress (192.168.15.1) so host-netns
clients hit the wg-side veth where Prowlarr is listening.

Also re-enables healthChecks on prowlarr-init: the /applications/testall
endpoint now validates clean (manually verified via API).
2026-04-17 00:53:24 -04:00
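As a sketch, the fix amounts to something like this (the `prowlarrUrl` option and the `vpnNamespaces` module are this repo's own; the exact option paths are assumptions):

```nix
{
  # Before: each arr service resolved localhost in its *own* (host) netns,
  # where Prowlarr is not listening -> "Prowlarr URL is invalid".
  # prowlarrUrl = "http://localhost:9696";

  # After: the wg namespace's veth address is reachable from the host netns.
  prowlarrUrl = "http://${config.vpnNamespaces.wg.namespaceAddress}:9696"; # 192.168.15.1
}
```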
df57d636f5 arr: declare critical config.xml elements via configXml
All checks were successful
Build and Deploy / deploy (push) Successful in 2m43s
Pin <Port>, <BindAddress>, and <EnableSsl> in each arr service's
config.xml through arr-init's new configXml option. A preStart hook
ensures these elements exist before the service reads its config,
fixing the recurring Prowlarr bug where <Port> was absent from
config.xml and the service ran without binding any socket.

Updates arr-init lock to 6dde2a3.
2026-04-17 00:47:08 -04:00
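A hedged sketch of what the new configXml usage might look like (the exact arr-init option path and value types are assumptions, not verified against arr-init rev 6dde2a3):

```nix
{
  # Elements arr-init's preStart hook guarantees exist in config.xml
  # before the service first reads it.
  services.arr-init.prowlarr.configXml = {
    Port = 9696; # the element whose absence caused the no-bind bug
    BindAddress = "*";
    EnableSsl = "False";
  };
}
```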
2f09c800e0 update arr-init
All checks were successful
Build and Deploy / deploy (push) Successful in 3m43s
2026-04-17 00:38:44 -04:00
2c67b9729b arr-init: fix prowlarr health check failure
All checks were successful
Build and Deploy / deploy (push) Successful in 2m59s
Disable health checks on Prowlarr -- the synced-app testall endpoint
requires Sonarr/Radarr to reverse-connect to prowlarrUrl, which is
unreachable across the wg namespace boundary.

Also add networkNamespaceService = "wg" for the new configurable
namespace service dependency (replaces old hardcoded wg.service).
2026-04-16 17:45:19 -04:00
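In sketch form (option paths assumed from the message, not verified against arr-init):

```nix
{
  services.arr-init.prowlarr = {
    # testall requires Sonarr/Radarr to reverse-connect across the wg
    # namespace boundary, which fails -- so skip the health check.
    healthChecks = false;
    # Configurable netns dependency; replaces the hardcoded wg.service.
    networkNamespaceService = "wg";
  };
}
```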
7d77926f8a update arr-init
Some checks failed
Build and Deploy / deploy (push) Failing after 4m43s
2026-04-16 17:33:54 -04:00
2aa401a9ef update
Some checks failed
Build and Deploy / deploy (push) Failing after 3m7s
2026-04-16 16:47:27 -04:00
92f44d6c71 Reapply "minecraft: tweak jvm args"
All checks were successful
Build and Deploy / deploy (push) Successful in 55s
This reverts commit 82a383482e.
2026-04-16 14:35:28 -04:00
daae941d36 minecraft: 1.21.1 -> 26.1.2 2026-04-16 14:35:23 -04:00
5990319445 jellyfin: fix caddy reverse proxy
All checks were successful
Build and Deploy / deploy (push) Successful in 2m46s
2026-04-16 01:30:10 -04:00
55fda4b5ee update (including llamacpp)
All checks were successful
Build and Deploy / deploy (push) Successful in 2m11s
2026-04-15 21:30:06 -04:00
20ca945436 qbt: create timer to flush WAL
All checks were successful
Build and Deploy / deploy (push) Successful in 2m45s
2026-04-15 18:46:26 -04:00
aecd9002b0 zfs tuning 2026-04-15 18:25:56 -04:00
48efd7fcf7 qbittorrent: fix (?) perms 2026-04-15 18:25:56 -04:00
0289ce0856 xmrig-auto-pause: tweak resume_threshold 2026-04-15 18:25:56 -04:00
5b98e6197e kernel: rollback to 6.12
Major ZFS issue causing deadlocks on my system:
https://github.com/openzfs/zfs/issues/18426
2026-04-15 18:25:55 -04:00
a0085187a9 fix systemd-tmpfiles
All checks were successful
Build and Deploy / deploy (push) Successful in 3m14s
2026-04-14 21:59:08 -04:00
0c70c2b2b4 add infra for providing updates to yarn 2026-04-14 20:55:39 -04:00
f28dd190bf move off of hardened kernel to latest LTS 2026-04-14 20:04:26 -04:00
17 changed files with 1128 additions and 91 deletions


@@ -133,8 +133,10 @@
   boot.kernel.sysctl."vm.nr_hugepages" = service_configs.hugepages_2m.total_pages;
   boot = {
-    # 6.12 LTS until 2026
-    kernelPackages = pkgs.linuxPackages_6_12_hardened;
+    # 6.12 LTS until 2027-03. Kernel 6.18 causes a reproducible ZFS deadlock
+    # in dbuf_evict due to page allocator changes (__free_frozen_pages).
+    # https://github.com/openzfs/zfs/issues/18426
+    kernelPackages = pkgs.linuxPackages_6_12;
     loader = {
       # Use the systemd-boot EFI boot loader.

flake.lock (generated)

@@ -27,16 +27,17 @@
     },
     "arr-init": {
       "inputs": {
+        "flake-utils": "flake-utils",
         "nixpkgs": [
           "nixpkgs"
         ]
       },
       "locked": {
-        "lastModified": 1776124758,
-        "narHash": "sha256-bWLlqMPM5bh6KzENwWN8gIVXm41ptR6/1/k472yzjQo=",
+        "lastModified": 1776401121,
+        "narHash": "sha256-BELV1YMBuLL0aQNQ3SLvSLq8YN5h2o1jcrwz1+Zt32Q=",
         "ref": "refs/heads/main",
-        "rev": "60fcce47df643c02d5a41d47719c2cdb2c0f327e",
-        "revCount": 13,
+        "rev": "6dde2a3e0d087208b8084b61113707c5533c4c2d",
+        "revCount": 19,
         "type": "git",
         "url": "ssh://gitea@git.gardling.com/titaniumtown/arr-init"
       },
@@ -193,7 +194,25 @@
     },
     "flake-utils": {
       "inputs": {
-        "systems": "systems_5"
+        "systems": "systems_2"
+      },
+      "locked": {
+        "lastModified": 1731533236,
+        "narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
+        "owner": "numtide",
+        "repo": "flake-utils",
+        "rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
+        "type": "github"
+      },
+      "original": {
+        "owner": "numtide",
+        "repo": "flake-utils",
+        "type": "github"
+      }
+    },
+    "flake-utils_2": {
+      "inputs": {
+        "systems": "systems_6"
       },
       "locked": {
         "lastModified": 1731533236,
@@ -304,11 +323,11 @@
         "rust-overlay": "rust-overlay"
       },
       "locked": {
-        "lastModified": 1775866084,
-        "narHash": "sha256-mWn8D/oXXAaqeFFFRorKHvTLw5V9M8eYzAWRr4iffag=",
+        "lastModified": 1776248416,
+        "narHash": "sha256-TC6yzbCAex1pDfqUZv9u8fVm8e17ft5fNrcZ0JRDOIQ=",
         "owner": "nix-community",
         "repo": "lanzaboote",
-        "rev": "29d2cca7fc3841708c1d48e2d1272f79db1538b6",
+        "rev": "18e9e64bae15b828c092658335599122a6db939b",
         "type": "github"
       },
       "original": {
@@ -325,11 +344,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1775754125,
-        "narHash": "sha256-4udYhEvii0xPmRiKXYWLhPakPDd1mJppnEFY6uWdv8s=",
+        "lastModified": 1776301820,
+        "narHash": "sha256-Yr3JRZ05PNmX4sR2Ak7e0jT+oCQgTAAML7FUoyTmitk=",
         "owner": "TheTom",
         "repo": "llama-cpp-turboquant",
-        "rev": "8590cbff961dbaf1d3a9793fd11d402e248869b9",
+        "rev": "1073622985bb68075472474b4b0fdfcdabcfc9d0",
         "type": "github"
       },
       "original": {
@@ -365,14 +384,14 @@
         "nixpkgs": [
           "nixpkgs"
         ],
-        "systems": "systems_3"
+        "systems": "systems_4"
       },
       "locked": {
-        "lastModified": 1776051551,
-        "narHash": "sha256-zqDhVyUtctq7HlpMC9cdR277ner0L/f7SkC3oKbZwy0=",
+        "lastModified": 1776310483,
+        "narHash": "sha256-xMFl+umxGmo5VEgcZcXT5Dk9sXU5WyTRz1Olpywr/60=",
         "owner": "Infinidoge",
         "repo": "nix-minecraft",
-        "rev": "c5eb01b60873e331265779028a839cd2b5237874",
+        "rev": "74abd91054e2655d6c392428a27e5d27edd5e6bf",
         "type": "github"
       },
       "original": {
@@ -399,11 +418,11 @@
     },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1776067740,
-        "narHash": "sha256-B35lpsqnSZwn1Lmz06BpwF7atPgFmUgw1l8KAV3zpVQ=",
+        "lastModified": 1776221942,
+        "narHash": "sha256-FbQAeVNi7G4v3QCSThrSAAvzQTmrmyDLiHNPvTF2qFM=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "7e495b747b51f95ae15e74377c5ce1fe69c1765f",
+        "rev": "1766437c5509f444c1b15331e82b8b6a9b967000",
         "type": "github"
       },
       "original": {
@@ -503,7 +522,7 @@
         "nixpkgs": [
           "nixpkgs"
         ],
-        "systems": "systems_4"
+        "systems": "systems_5"
       },
       "locked": {
         "lastModified": 1771989937,
@@ -624,11 +643,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1776047941,
-        "narHash": "sha256-XjIqkHJjn5e5UbwS2Nl63uBOF1AaC5coRiO+ukENAmM=",
+        "lastModified": 1776306894,
+        "narHash": "sha256-l4N3O1cfXiQCHJGspAkg6WlZyOFBTbLXhi8Anf8jB0g=",
         "owner": "nix-community",
         "repo": "srvos",
-        "rev": "df399d4ba5d7f4ddd8dae16e5ace5a70e958153d",
+        "rev": "01d98209264c78cb323b636d7ab3fe8e7a8b60c7",
         "type": "github"
       },
       "original": {
@@ -712,14 +731,29 @@
         "type": "github"
       }
     },
+    "systems_6": {
+      "locked": {
+        "lastModified": 1681028828,
+        "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
+        "owner": "nix-systems",
+        "repo": "default",
+        "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
+        "type": "github"
+      },
+      "original": {
+        "owner": "nix-systems",
+        "repo": "default",
+        "type": "github"
+      }
+    },
     "trackerlist": {
       "flake": false,
       "locked": {
-        "lastModified": 1776118185,
-        "narHash": "sha256-7gh0sONRGuZIU0ziZeVRZITFXcbcNJfcvE//OBZxiiU=",
+        "lastModified": 1776290985,
+        "narHash": "sha256-eNWDOLBA0vk1TiKqse71siIAgLycjvBFDw35eAtnUPs=",
         "owner": "ngosang",
         "repo": "trackerslist",
-        "rev": "68e3e822f392fa52b0af6ebf132af51c2b9b4e0c",
+        "rev": "9bb380b3c2a641a3289f92dedef97016f2e47f36",
         "type": "github"
       },
       "original": {
@@ -730,7 +764,7 @@
     },
     "utils": {
       "inputs": {
-        "systems": "systems_2"
+        "systems": "systems_3"
      },
       "locked": {
         "lastModified": 1731533236,
@@ -779,7 +813,7 @@
     },
     "ytbn-graphing-software": {
       "inputs": {
-        "flake-utils": "flake-utils",
+        "flake-utils": "flake-utils_2",
         "nixpkgs": "nixpkgs_3",
         "rust-overlay": "rust-overlay_2"
       },


@@ -12,7 +12,7 @@ let
       parent=''${1%%[0-9]*}
       dev="/sys/block/$parent"
       [ -d "$dev/queue/iosched" ] || exit 0
-      echo 15000 > "$dev/queue/iosched/read_expire"
+      echo 500 > "$dev/queue/iosched/read_expire"
       echo 15000 > "$dev/queue/iosched/write_expire"
       echo 128 > "$dev/queue/iosched/fifo_batch"
       echo 16 > "$dev/queue/iosched/writes_starved"
@@ -36,11 +36,17 @@ in
   hardware.cpu.amd.updateMicrocode = true;
   hardware.enableRedistributableFirmware = true;
-  # HDD I/O tuning for torrent seeding workload (high-concurrency random reads).
+  # HDD I/O tuning for torrent seeding workload (high-concurrency random reads)
+  # sharing the pool with latency-sensitive sequential reads (Jellyfin playback).
   #
   # mq-deadline sorts requests into elevator sweeps, reducing seek distance.
-  # Aggressive deadlines (15s) let the scheduler accumulate more ops before dispatching,
-  # maximizing coalescence — latency is irrelevant since torrent peers tolerate 30-60s.
+  # read_expire=500ms keeps reads bounded so a Jellyfin segment can't queue for
+  # seconds behind a torrent burst; write_expire=15s lets the scheduler batch
+  # writes for coalescence (torrent writes are async and tolerate delay).
+  # The bulk of read coalescence already happens above the scheduler via ZFS
+  # aggregation (zfs_vdev_aggregation_limit=4M, read_gap_limit=128K,
+  # async_read_max=32), so the scheduler deadline only needs to be large enough
+  # to keep the elevator sweep coherent -- 500ms is plenty on rotational disks.
   # fifo_batch=128 keeps sweeps long; writes_starved=16 heavily favors reads.
   # 4 MiB readahead matches libtorrent piece extent affinity for sequential prefetch.
   #


@@ -13,12 +13,89 @@
   # disable coredumps
   systemd.coredump.enable = false;
-  # The hardened kernel defaults kernel.unprivileged_userns_clone to 0, which
-  # prevents the Nix sandbox from mapping UIDs/GIDs. Without this, any derivation
-  # that calls `id` in its build phase (e.g. logrotate checkPhase) fails when not
-  # served from the binary cache. See https://github.com/NixOS/nixpkgs/issues/287194
+  # Needed for Nix sandbox UID/GID mapping inside derivation builds.
+  # See https://github.com/NixOS/nixpkgs/issues/287194
   security.unprivilegedUsernsClone = true;
+  # Disable kexec to prevent replacing the running kernel at runtime.
+  security.protectKernelImage = true;
+  # Kernel hardening boot parameters. These recover most of the runtime-
+  # configurable protections that the linux-hardened patchset provided.
+  boot.kernelParams = [
+    # Zero all page allocator pages on free / alloc. Prevents info leaks
+    # and use-after-free from seeing stale data. Modest CPU overhead.
+    "init_on_alloc=1"
+    "init_on_free=1"
+    # Prevent SLUB allocator from merging caches with similar size/flags.
+    # Keeps different kernel object types in separate slabs, making heap
+    # exploitation (type confusion, spray, use-after-free) significantly harder.
+    "slab_nomerge"
+    # Randomize order of pages returned by the buddy allocator.
+    "page_alloc.shuffle=1"
+    # Disable debugfs entirely (exposes kernel internals).
+    "debugfs=off"
+    # Disable legacy vsyscall emulation (unused by any modern glibc).
+    "vsyscall=none"
+    # Strict IOMMU TLB invalidation (no batching). Prevents DMA-capable
+    # devices from accessing stale mappings after unmap.
+    "iommu.strict=1"
+  ];
+  boot.kernel.sysctl = {
+    # Immediately reboot on kernel oops (don't leave a compromised
+    # kernel running). Negative value = reboot without delay.
+    "kernel.panic" = -1;
+    # Hide kernel pointers from all processes, including CAP_SYSLOG.
+    # Prevents info leaks used to defeat KASLR.
+    "kernel.kptr_restrict" = 2;
+    # Disable bpf() JIT compiler (eliminates JIT spray attack vector).
+    "net.core.bpf_jit_enable" = false;
+    # Disable ftrace (kernel function tracer) at runtime.
+    "kernel.ftrace_enabled" = false;
+    # Strict reverse-path filtering: drop packets arriving on an interface
+    # where the source address isn't routable back via that interface.
+    "net.ipv4.conf.all.rp_filter" = 1;
+    "net.ipv4.conf.default.rp_filter" = 1;
+    "net.ipv4.conf.all.log_martians" = true;
+    "net.ipv4.conf.default.log_martians" = true;
+    # Ignore ICMP redirects (prevents route table poisoning).
+    "net.ipv4.conf.all.accept_redirects" = false;
+    "net.ipv4.conf.all.secure_redirects" = false;
+    "net.ipv4.conf.default.accept_redirects" = false;
+    "net.ipv4.conf.default.secure_redirects" = false;
+    "net.ipv6.conf.all.accept_redirects" = false;
+    "net.ipv6.conf.default.accept_redirects" = false;
+    # Don't send ICMP redirects (we are not a router).
+    "net.ipv4.conf.all.send_redirects" = false;
+    "net.ipv4.conf.default.send_redirects" = false;
+    # Ignore broadcast ICMP (SMURF amplification mitigation).
+    "net.ipv4.icmp_echo_ignore_broadcasts" = true;
+    # Filesystem hardening: prevent hardlink/symlink-based attacks.
+    # protected_hardlinks/symlinks: block unprivileged creation of hard/symlinks
+    # to files the user doesn't own (prevents TOCTOU privilege escalation).
+    # protected_fifos/regular (level 2): restrict opening FIFOs and regular files
+    # in world-writable sticky directories to owner/group match only.
+    # Also required for systemd-tmpfiles to chmod hardlinked files.
+    "fs.protected_hardlinks" = true;
+    "fs.protected_symlinks" = true;
+    "fs.protected_fifos" = 2;
+    "fs.protected_regular" = 2;
+  };
   services = {
     dbus.implementation = "broker";
     /*


@@ -1,15 +1,39 @@
 {
   config,
+  lib,
   service_configs,
   pkgs,
   ...
 }:
+let
+  # Total RAM in bytes (from /proc/meminfo: 65775836 KiB).
+  totalRamBytes = 65775836 * 1024;
+  # Hugepage reservations that the kernel carves out before ZFS can use them.
+  hugepages2mBytes = service_configs.hugepages_2m.total_pages * 2 * 1024 * 1024;
+  hugepages1gBytes = 3 * 1024 * 1024 * 1024; # 3x 1G pages for RandomX (xmrig.nix)
+  totalHugepageBytes = hugepages2mBytes + hugepages1gBytes;
+  # ARC max: 60% of RAM remaining after hugepages. Leaves headroom for
+  # application RSS (PostgreSQL, qBittorrent, Jellyfin, Grafana, etc.),
+  # kernel slabs, and page cache.
+  arcMaxBytes = (totalRamBytes - totalHugepageBytes) * 60 / 100;
+in
 {
-  boot.zfs.package = pkgs.zfs;
+  boot.zfs.package = pkgs.zfs_2_4;
   boot.initrd.kernelModules = [ "zfs" ];
   boot.kernelParams = [
-    "zfs.zfs_txg_timeout=120" # longer TXG open time = larger sequential writes
+    # 120s TXG timeout: batch more dirty data per transaction group so the
+    # HDD pool (hdds) writes larger, sequential I/Os instead of many small syncs.
+    # This is a global setting (no per-pool control); the SSD pool (tank) syncs
+    # infrequently but handles it fine since SSDs don't suffer from seek overhead.
+    "zfs.zfs_txg_timeout=120"
+    # Cap ARC to prevent it from claiming memory reserved for hugepages.
+    # Without this, ZFS auto-sizes c_max to ~62 GiB on a 64 GiB system,
+    # ignoring the 11.5 GiB of hugepage reservations.
+    "zfs.zfs_arc_max=${toString arcMaxBytes}"
     # vdev I/O scheduler: feed more concurrent reads to the block scheduler so
     # mq-deadline has a larger pool of requests to sort and merge into elevator sweeps.


@@ -0,0 +1,443 @@
From f0582558f0a8b0ef543b3251c4a07afab89fde63 Mon Sep 17 00:00:00 2001
From: Simon Gardling <titaniumtown@proton.me>
Date: Fri, 17 Apr 2026 19:37:11 -0400
Subject: [PATCH] nixos/jellyfin: add declarative network.xml options
Adds services.jellyfin.network.* (baseUrl, ports, IPv4/6, LAN subnets,
known proxies, remote IP filter, etc.) and services.jellyfin.forceNetworkConfig,
mirroring the existing hardwareAcceleration / forceEncodingConfig pattern.
Motivation: running Jellyfin behind a reverse proxy requires configuring
KnownProxies (so the real client IP is extracted from X-Forwarded-For)
and LocalNetworkSubnets (so LAN clients are correctly classified and not
subject to RemoteClientBitrateLimit). These settings previously had no
declarative option -- they could only be set via the web dashboard or
by hand-editing network.xml, with no guarantee they would survive a
reinstall or be consistent across deployments.
Implementation:
- Adds a networkXmlText template alongside the existing encodingXmlText.
- Factors the force-vs-soft install logic out of preStart into a
small 'manage_config_xml' shell helper; encoding.xml and network.xml
now share the same install/backup semantics.
- Extends the VM test with a machineWithNetworkConfig node and a
subtest that verifies the declared values land in network.xml,
Jellyfin parses them at startup, and the backup-on-overwrite path
works (same shape as the existing 'Force encoding config' subtest).
---
nixos/modules/services/misc/jellyfin.nix | 303 ++++++++++++++++++++---
nixos/tests/jellyfin.nix | 50 ++++
2 files changed, 317 insertions(+), 36 deletions(-)
diff --git a/nixos/modules/services/misc/jellyfin.nix b/nixos/modules/services/misc/jellyfin.nix
index 5c08fc478e45..387da907c652 100644
--- a/nixos/modules/services/misc/jellyfin.nix
+++ b/nixos/modules/services/misc/jellyfin.nix
@@ -26,8 +26,10 @@ let
bool
enum
ints
+ listOf
nullOr
path
+ port
str
submodule
;
@@ -68,6 +70,41 @@ let
</EncodingOptions>
'';
encodingXmlFile = pkgs.writeText "encoding.xml" encodingXmlText;
+ stringListToXml =
+ tag: items:
+ if items == [ ] then
+ "<${tag} />"
+ else
+ "<${tag}>\n ${
+ concatMapStringsSep "\n " (item: "<string>${escapeXML item}</string>") items
+ }\n </${tag}>";
+ networkXmlText = ''
+ <?xml version="1.0" encoding="utf-8"?>
+ <NetworkConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
+ <BaseUrl>${escapeXML cfg.network.baseUrl}</BaseUrl>
+ <EnableHttps>${boolToString cfg.network.enableHttps}</EnableHttps>
+ <RequireHttps>${boolToString cfg.network.requireHttps}</RequireHttps>
+ <InternalHttpPort>${toString cfg.network.internalHttpPort}</InternalHttpPort>
+ <InternalHttpsPort>${toString cfg.network.internalHttpsPort}</InternalHttpsPort>
+ <PublicHttpPort>${toString cfg.network.publicHttpPort}</PublicHttpPort>
+ <PublicHttpsPort>${toString cfg.network.publicHttpsPort}</PublicHttpsPort>
+ <AutoDiscovery>${boolToString cfg.network.autoDiscovery}</AutoDiscovery>
+ <EnableUPnP>${boolToString cfg.network.enableUPnP}</EnableUPnP>
+ <EnableIPv4>${boolToString cfg.network.enableIPv4}</EnableIPv4>
+ <EnableIPv6>${boolToString cfg.network.enableIPv6}</EnableIPv6>
+ <EnableRemoteAccess>${boolToString cfg.network.enableRemoteAccess}</EnableRemoteAccess>
+ ${stringListToXml "LocalNetworkSubnets" cfg.network.localNetworkSubnets}
+ ${stringListToXml "LocalNetworkAddresses" cfg.network.localNetworkAddresses}
+ ${stringListToXml "KnownProxies" cfg.network.knownProxies}
+ <IgnoreVirtualInterfaces>${boolToString cfg.network.ignoreVirtualInterfaces}</IgnoreVirtualInterfaces>
+ ${stringListToXml "VirtualInterfaceNames" cfg.network.virtualInterfaceNames}
+ <EnablePublishedServerUriByRequest>${boolToString cfg.network.enablePublishedServerUriByRequest}</EnablePublishedServerUriByRequest>
+ ${stringListToXml "PublishedServerUriBySubnet" cfg.network.publishedServerUriBySubnet}
+ ${stringListToXml "RemoteIPFilter" cfg.network.remoteIPFilter}
+ <IsRemoteIPFilterBlacklist>${boolToString cfg.network.isRemoteIPFilterBlacklist}</IsRemoteIPFilterBlacklist>
+ </NetworkConfiguration>
+ '';
+ networkXmlFile = pkgs.writeText "network.xml" networkXmlText;
codecListToType =
desc: list:
submodule {
@@ -205,6 +242,196 @@ in
'';
};
+ network = {
+ baseUrl = mkOption {
+ type = str;
+ default = "";
+ example = "/jellyfin";
+ description = ''
+ Prefix added to Jellyfin's internal URLs when it sits behind a reverse proxy at a sub-path.
+ Leave empty when Jellyfin is served at the root of its host.
+ '';
+ };
+
+ enableHttps = mkOption {
+ type = bool;
+ default = false;
+ description = ''
+ Serve HTTPS directly from Jellyfin. Usually unnecessary when terminating TLS in a reverse proxy.
+ '';
+ };
+
+ requireHttps = mkOption {
+ type = bool;
+ default = false;
+ description = ''
+ Redirect plaintext HTTP requests to HTTPS. Only meaningful when {option}`enableHttps` is true.
+ '';
+ };
+
+ internalHttpPort = mkOption {
+ type = port;
+ default = 8096;
+ description = "TCP port Jellyfin binds for HTTP.";
+ };
+
+ internalHttpsPort = mkOption {
+ type = port;
+ default = 8920;
+ description = "TCP port Jellyfin binds for HTTPS. Only used when {option}`enableHttps` is true.";
+ };
+
+ publicHttpPort = mkOption {
+ type = port;
+ default = 8096;
+ description = "HTTP port Jellyfin advertises in server discovery responses and published URIs.";
+ };
+
+ publicHttpsPort = mkOption {
+ type = port;
+ default = 8920;
+ description = "HTTPS port Jellyfin advertises in server discovery responses and published URIs.";
+ };
+
+ autoDiscovery = mkOption {
+ type = bool;
+ default = true;
+ description = "Respond to LAN client auto-discovery broadcasts (UDP 7359).";
+ };
+
+ enableUPnP = mkOption {
+ type = bool;
+ default = false;
+ description = "Attempt to open the public ports on the router via UPnP.";
+ };
+
+ enableIPv4 = mkOption {
+ type = bool;
+ default = true;
+ description = "Listen on IPv4.";
+ };
+
+ enableIPv6 = mkOption {
+ type = bool;
+ default = true;
+ description = "Listen on IPv6.";
+ };
+
+ enableRemoteAccess = mkOption {
+ type = bool;
+ default = true;
+ description = ''
+ Allow connections from clients outside the subnets listed in {option}`localNetworkSubnets`.
+ When false, Jellyfin rejects non-local requests regardless of reverse proxy configuration.
+ '';
+ };
+
+ localNetworkSubnets = mkOption {
+ type = listOf str;
+ default = [ ];
+ example = [
+ "192.168.1.0/24"
+ "10.0.0.0/8"
+ ];
+ description = ''
+ CIDR ranges (or bare IPs) that Jellyfin classifies as the local network.
+ Clients originating from these ranges -- as seen after {option}`knownProxies` X-Forwarded-For
+ unwrapping -- are not subject to {option}`services.jellyfin` remote-client bitrate limits.
+ '';
+ };
+
+ localNetworkAddresses = mkOption {
+ type = listOf str;
+ default = [ ];
+ example = [ "192.168.1.50" ];
+ description = ''
+ Specific interface addresses Jellyfin binds to. Leave empty to bind all interfaces.
+ '';
+ };
+
+ knownProxies = mkOption {
+ type = listOf str;
+ default = [ ];
+ example = [ "127.0.0.1" ];
+ description = ''
+ Addresses of reverse proxies trusted to forward the real client IP via `X-Forwarded-For`.
+ Without this, Jellyfin sees the proxy's address for every request and cannot apply
+ {option}`localNetworkSubnets` classification to the true client.
+ '';
+ };
+
+ ignoreVirtualInterfaces = mkOption {
+ type = bool;
+ default = true;
+ description = "Skip virtual network interfaces (matching {option}`virtualInterfaceNames`) during auto-bind.";
+ };
+
+ virtualInterfaceNames = mkOption {
+ type = listOf str;
+ default = [ "veth" ];
+ description = "Interface name prefixes treated as virtual when {option}`ignoreVirtualInterfaces` is true.";
+ };
+
+ enablePublishedServerUriByRequest = mkOption {
+ type = bool;
+ default = false;
+ description = ''
+ Derive the server's public URI from the incoming request's Host header instead of any
+ configured {option}`publishedServerUriBySubnet` entry.
+ '';
+ };
+
+ publishedServerUriBySubnet = mkOption {
+ type = listOf str;
+ default = [ ];
+ example = [ "192.168.1.0/24=http://jellyfin.lan:8096" ];
+ description = ''
+ Per-subnet overrides for the URI Jellyfin advertises to clients, in `subnet=uri` form.
+ '';
+ };
+
+ remoteIPFilter = mkOption {
+ type = listOf str;
+ default = [ ];
+ example = [ "203.0.113.0/24" ];
+ description = ''
+ IPs or CIDRs used as the allow- or denylist for remote access.
+ Behaviour is controlled by {option}`isRemoteIPFilterBlacklist`.
+ '';
+ };
+
+ isRemoteIPFilterBlacklist = mkOption {
+ type = bool;
+ default = false;
+ description = ''
+ When true, {option}`remoteIPFilter` is a denylist; when false, it is an allowlist
+ (and an empty list allows all remote addresses).
+ '';
+ };
+ };
+
+ forceNetworkConfig = mkOption {
+ type = bool;
+ default = false;
+ description = ''
+ Whether to overwrite Jellyfin's `network.xml` configuration file on each service start.
+
+ When enabled, the network configuration specified in {option}`services.jellyfin.network`
+ is applied on every service restart. A backup of the existing `network.xml` will be
+ created at `network.xml.backup-$timestamp`.
+
+ ::: {.warning}
+ Enabling this option means that any changes made to networking settings through
+ Jellyfin's web dashboard will be lost on the next service restart. The NixOS configuration
+ becomes the single source of truth for network settings.
+ :::
+
+ When disabled (the default), the network configuration is only written if no `network.xml`
+ exists yet. This allows settings to be changed through Jellyfin's web dashboard and persist
+ across restarts, but means the NixOS configuration options will be ignored after the initial setup.
+ '';
+ };
+
transcoding = {
maxConcurrentStreams = mkOption {
type = nullOr ints.positive;
@@ -384,46 +611,50 @@ in
wants = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ];
- preStart = mkIf cfg.hardwareAcceleration.enable (
- ''
- configDir=${escapeShellArg cfg.configDir}
- encodingXml="$configDir/encoding.xml"
- ''
- + (
- if cfg.forceEncodingConfig then
- ''
- if [[ -e $encodingXml ]]; then
+ preStart =
+ let
+ # manage_config_xml <source> <destination> <force> <description>
+ #
+ # Installs a NixOS-declared XML config at <destination>, preserving
+ # any existing file as a timestamped backup when <force> is true.
+ # With <force>=false, leaves existing files untouched and warns if
+ # the on-disk content differs from the declared content.
+ helper = ''
+ manage_config_xml() {
+ local src="$1" dest="$2" force="$3" desc="$4"
+ if [[ -e "$dest" ]]; then
# this intentionally removes trailing newlines
- currentText="$(<"$encodingXml")"
- configuredText="$(<${encodingXmlFile})"
- if [[ $currentText == "$configuredText" ]]; then
- # don't need to do anything
- exit 0
- else
- encodingXmlBackup="$configDir/encoding.xml.backup-$(date -u +"%FT%H_%M_%SZ")"
- mv --update=none-fail -T "$encodingXml" "$encodingXmlBackup"
+ local currentText configuredText
+ currentText="$(<"$dest")"
+ configuredText="$(<"$src")"
+ if [[ "$currentText" == "$configuredText" ]]; then
+ return 0
fi
- fi
- cp --update=none-fail -T ${encodingXmlFile} "$encodingXml"
- chmod u+w "$encodingXml"
- ''
- else
- ''
- if [[ -e $encodingXml ]]; then
- # this intentionally removes trailing newlines
- currentText="$(<"$encodingXml")"
- configuredText="$(<${encodingXmlFile})"
- if [[ $currentText != "$configuredText" ]]; then
- echo "WARN: $encodingXml already exists and is different from the configured settings. transcoding options NOT applied." >&2
- echo "WARN: Set config.services.jellyfin.forceEncodingConfig = true to override." >&2
+ if [[ "$force" == true ]]; then
+ local backup
+ backup="$dest.backup-$(date -u +"%FT%H_%M_%SZ")"
+ mv --update=none-fail -T "$dest" "$backup"
+ else
+ echo "WARN: $dest already exists and is different from the configured settings. $desc options NOT applied." >&2
+ echo "WARN: Set the corresponding force*Config option to override." >&2
+ return 0
fi
- else
- cp --update=none-fail -T ${encodingXmlFile} "$encodingXml"
- chmod u+w "$encodingXml"
fi
- ''
- )
- );
+ cp --update=none-fail -T "$src" "$dest"
+ chmod u+w "$dest"
+ }
+ configDir=${escapeShellArg cfg.configDir}
+ '';
+ in
+ (
+ helper
+ + optionalString cfg.hardwareAcceleration.enable ''
+ manage_config_xml ${encodingXmlFile} "$configDir/encoding.xml" ${boolToString cfg.forceEncodingConfig} transcoding
+ ''
+ + ''
+ manage_config_xml ${networkXmlFile} "$configDir/network.xml" ${boolToString cfg.forceNetworkConfig} network
+ ''
+ );
# This is mostly follows: https://github.com/jellyfin/jellyfin/blob/master/fedora/jellyfin.service
# Upstream also disable some hardenings when running in LXC, we do the same with the isContainer option
diff --git a/nixos/tests/jellyfin.nix b/nixos/tests/jellyfin.nix
index 4896c13d4eca..0c9191960f78 100644
--- a/nixos/tests/jellyfin.nix
+++ b/nixos/tests/jellyfin.nix
@@ -63,6 +63,26 @@
environment.systemPackages = with pkgs; [ ffmpeg ];
virtualisation.diskSize = 3 * 1024;
};
+
+ machineWithNetworkConfig = {
+ services.jellyfin = {
+ enable = true;
+ forceNetworkConfig = true;
+ network = {
+ localNetworkSubnets = [
+ "192.168.1.0/24"
+ "10.0.0.0/8"
+ ];
+ knownProxies = [ "127.0.0.1" ];
+ enableUPnP = false;
+ enableIPv6 = false;
+ remoteIPFilter = [ "203.0.113.5" ];
+ isRemoteIPFilterBlacklist = true;
+ };
+ };
+ environment.systemPackages = with pkgs; [ ffmpeg ];
+ virtualisation.diskSize = 3 * 1024;
+ };
};
# Documentation of the Jellyfin API: https://api.jellyfin.org/
@@ -122,6 +142,36 @@
# Verify the new encoding.xml does not have the marker (was overwritten)
machineWithForceConfig.fail("grep -q 'MARKER' /var/lib/jellyfin/config/encoding.xml")
+ # Test forceNetworkConfig and network.xml generation
+ with subtest("Force network config writes declared values and backs up on overwrite"):
+ wait_for_jellyfin(machineWithNetworkConfig)
+
+ # Verify network.xml exists and contains the declared values
+ machineWithNetworkConfig.succeed("test -f /var/lib/jellyfin/config/network.xml")
+ machineWithNetworkConfig.succeed("grep -F '<string>192.168.1.0/24</string>' /var/lib/jellyfin/config/network.xml")
+ machineWithNetworkConfig.succeed("grep -F '<string>10.0.0.0/8</string>' /var/lib/jellyfin/config/network.xml")
+ machineWithNetworkConfig.succeed("grep -F '<string>127.0.0.1</string>' /var/lib/jellyfin/config/network.xml")
+ machineWithNetworkConfig.succeed("grep -F '<string>203.0.113.5</string>' /var/lib/jellyfin/config/network.xml")
+ machineWithNetworkConfig.succeed("grep -F '<IsRemoteIPFilterBlacklist>true</IsRemoteIPFilterBlacklist>' /var/lib/jellyfin/config/network.xml")
+ machineWithNetworkConfig.succeed("grep -F '<EnableIPv6>false</EnableIPv6>' /var/lib/jellyfin/config/network.xml")
+ machineWithNetworkConfig.succeed("grep -F '<EnableUPnP>false</EnableUPnP>' /var/lib/jellyfin/config/network.xml")
+
+ # Stop service before modifying config
+ machineWithNetworkConfig.succeed("systemctl stop jellyfin.service")
+
+ # Plant a marker so we can prove the backup-and-overwrite path runs
+ machineWithNetworkConfig.succeed("echo '<!-- NETMARKER -->' > /var/lib/jellyfin/config/network.xml")
+
+ # Restart the service to trigger the backup
+ machineWithNetworkConfig.succeed("systemctl restart jellyfin.service")
+ wait_for_jellyfin(machineWithNetworkConfig)
+
+ # Verify the marked content was preserved as a timestamped backup
+ machineWithNetworkConfig.succeed("grep -q 'NETMARKER' /var/lib/jellyfin/config/network.xml.backup-*")
+
+ # Verify the new network.xml does not have the marker (was overwritten)
+ machineWithNetworkConfig.fail("grep -q 'NETMARKER' /var/lib/jellyfin/config/network.xml")
+
auth_header = 'MediaBrowser Client="NixOS Integration Tests", DeviceId="1337", Device="Apple II", Version="20.09"'
--
2.53.0

View File

@@ -81,6 +81,12 @@ rec {
     port = 6011;
     proto = "tcp";
   };
+  # Webhook receiver for the Jellyfin-qBittorrent monitor — Jellyfin pushes
+  # playback events here so throttling reacts without waiting for the poll.
+  jellyfin_qbittorrent_monitor_webhook = {
+    port = 9898;
+    proto = "tcp";
+  };
   bitmagnet = {
     port = 3333;
     proto = "tcp";

View File

@@ -8,13 +8,26 @@
   dataDir = service_configs.prowlarr.dataDir;
   apiVersion = "v1";
   networkNamespacePath = "/run/netns/wg";
+  networkNamespaceService = "wg";
+  # Guarantee critical config.xml elements before startup. Prowlarr has a
+  # history of losing <Port> from config.xml, causing the service to run
+  # without binding any socket. See arr-init's configXml for details.
+  configXml = {
+    Port = service_configs.ports.private.prowlarr.port;
+    BindAddress = "*";
+    EnableSsl = false;
+  };
+  # Prowlarr runs in the wg netns; Sonarr/Radarr run in the host netns.
+  # From the host netns, Prowlarr is reachable at the wg namespace address,
+  # not at localhost (which resolves to the host's own netns).
+  # Health checks can now run — the reverse connection is reachable.
   healthChecks = true;
   syncedApps = [
     {
       name = "Sonarr";
       implementation = "Sonarr";
       configContract = "SonarrSettings";
-      prowlarrUrl = "http://localhost:${builtins.toString service_configs.ports.private.prowlarr.port}";
+      prowlarrUrl = "http://${config.vpnNamespaces.wg.namespaceAddress}:${builtins.toString service_configs.ports.private.prowlarr.port}";
       baseUrl = "http://${config.vpnNamespaces.wg.bridgeAddress}:${builtins.toString service_configs.ports.private.sonarr.port}";
       apiKeyFrom = "${service_configs.sonarr.dataDir}/config.xml";
       serviceName = "sonarr";
@@ -23,7 +36,7 @@
       name = "Radarr";
       implementation = "Radarr";
       configContract = "RadarrSettings";
-      prowlarrUrl = "http://localhost:${builtins.toString service_configs.ports.private.prowlarr.port}";
+      prowlarrUrl = "http://${config.vpnNamespaces.wg.namespaceAddress}:${builtins.toString service_configs.ports.private.prowlarr.port}";
       baseUrl = "http://${config.vpnNamespaces.wg.bridgeAddress}:${builtins.toString service_configs.ports.private.radarr.port}";
       apiKeyFrom = "${service_configs.radarr.dataDir}/config.xml";
       serviceName = "radarr";
@@ -37,6 +50,11 @@
   port = service_configs.ports.private.sonarr.port;
   dataDir = service_configs.sonarr.dataDir;
   healthChecks = true;
+  configXml = {
+    Port = service_configs.ports.private.sonarr.port;
+    BindAddress = "*";
+    EnableSsl = false;
+  };
   rootFolders = [ service_configs.media.tvDir ];
   naming = {
     renameEpisodes = true;
@@ -69,6 +87,11 @@
   port = service_configs.ports.private.radarr.port;
   dataDir = service_configs.radarr.dataDir;
   healthChecks = true;
+  configXml = {
+    Port = service_configs.ports.private.radarr.port;
+    BindAddress = "*";
+    EnableSsl = false;
+  };
   rootFolders = [ service_configs.media.moviesDir ];
   naming = {
     renameMovies = true;

View File

@@ -17,8 +17,22 @@
   settings.bind = "127.0.0.1:${toString service_configs.ports.private.harmonia.port}";
 };
+# serve latest deploy store paths (unauthenticated — just a path string)
+# CI writes to /var/lib/dotfiles-deploy/<hostname> after building
 services.caddy.virtualHosts."nix-cache.${service_configs.https.domain}".extraConfig = ''
+  handle_path /deploy/* {
+    root * /var/lib/dotfiles-deploy
+    file_server
+  }
+  handle {
     import ${config.age.secrets.nix-cache-auth.path}
     reverse_proxy :${toString service_configs.ports.private.harmonia.port}
+  }
 '';
+# directory for CI to record latest deploy store paths
+systemd.tmpfiles.rules = [
+  "d /var/lib/dotfiles-deploy 0755 gitea-runner gitea-runner"
+];
 }

View File

@@ -5,14 +5,80 @@
   lib,
   ...
 }:
+let
+  webhookPlugin = import ./jellyfin-webhook-plugin.nix { inherit pkgs lib; };
+  jellyfinPort = service_configs.ports.private.jellyfin.port;
+  webhookPort = service_configs.ports.private.jellyfin_qbittorrent_monitor_webhook.port;
+in
 lib.mkIf config.services.jellyfin.enable {
+  # Materialise the Jellyfin Webhook plugin into Jellyfin's plugins dir before
+  # Jellyfin starts. Jellyfin rewrites meta.json at runtime, so a read-only
+  # nix-store symlink would EACCES -- we copy instead.
+  #
+  # `wantedBy = [ "jellyfin.service" ]` alone is insufficient on initial rollout:
+  # if jellyfin is already running at activation time, systemd won't start the
+  # oneshot until the next jellyfin restart. `restartTriggers` on jellyfin pinned
+  # to the plugin package + install script forces that restart whenever either
+  # changes, which invokes this unit via the `before`/`wantedBy` chain.
+  systemd.services.jellyfin-webhook-install = {
+    before = [ "jellyfin.service" ];
+    wantedBy = [ "jellyfin.service" ];
+    serviceConfig = {
+      Type = "oneshot";
+      RemainAfterExit = true;
+      User = config.services.jellyfin.user;
+      Group = config.services.jellyfin.group;
+      ExecStart = webhookPlugin.mkInstallScript {
+        pluginsDir = "${config.services.jellyfin.dataDir}/plugins";
+      };
+    };
+  };
+  systemd.services.jellyfin.restartTriggers = [
+    webhookPlugin.package
+    (webhookPlugin.mkInstallScript {
+      pluginsDir = "${config.services.jellyfin.dataDir}/plugins";
+    })
+  ];
+  # After Jellyfin starts, POST the plugin configuration so the webhook
+  # targets the monitor's receiver. Idempotent; runs on every boot.
+  systemd.services.jellyfin-webhook-configure = {
+    after = [ "jellyfin.service" ];
+    wants = [ "jellyfin.service" ];
+    before = [ "jellyfin-qbittorrent-monitor.service" ];
+    wantedBy = [ "multi-user.target" ];
+    serviceConfig = {
+      Type = "oneshot";
+      RemainAfterExit = true;
+      DynamicUser = true;
+      LoadCredential = "jellyfin-api-key:${config.age.secrets.jellyfin-api-key.path}";
+      ExecStart = webhookPlugin.mkConfigureScript {
+        jellyfinUrl = "http://127.0.0.1:${toString jellyfinPort}";
+        webhooks = [
+          {
+            name = "qBittorrent Monitor";
+            uri = "http://127.0.0.1:${toString webhookPort}/";
+            notificationTypes = [
+              "PlaybackStart"
+              "PlaybackProgress"
+              "PlaybackStop"
+            ];
+          }
+        ];
+      };
+    };
+  };
   systemd.services."jellyfin-qbittorrent-monitor" = {
     description = "Monitor Jellyfin streaming and control qBittorrent rate limits";
     after = [
       "network.target"
       "jellyfin.service"
       "qbittorrent.service"
+      "jellyfin-webhook-configure.service"
     ];
+    wants = [ "jellyfin-webhook-configure.service" ];
     wantedBy = [ "multi-user.target" ];
     serviceConfig = {
@@ -44,7 +110,7 @@ lib.mkIf config.services.jellyfin.enable {
     };
     environment = {
-      JELLYFIN_URL = "http://localhost:${builtins.toString service_configs.ports.private.jellyfin.port}";
+      JELLYFIN_URL = "http://localhost:${builtins.toString jellyfinPort}";
       QBITTORRENT_URL = "http://${config.vpnNamespaces.wg.namespaceAddress}:${builtins.toString service_configs.ports.private.torrent.port}";
       CHECK_INTERVAL = "30";
       # Bandwidth budget configuration
@@ -53,6 +119,9 @@
       DEFAULT_STREAM_BITRATE = "10000000"; # 10 Mbps fallback when bitrate unknown (bps)
       MIN_TORRENT_SPEED = "100"; # KB/s - below this, pause torrents instead
       STREAM_BITRATE_HEADROOM = "1.1"; # multiplier per stream for bitrate fluctuations
+      # Webhook receiver: Jellyfin Webhook plugin POSTs events here to throttle immediately.
+      WEBHOOK_BIND = "127.0.0.1";
+      WEBHOOK_PORT = toString webhookPort;
     };
   };
 }

View File

@@ -7,6 +7,8 @@ import sys
 import signal
 import json
 import ipaddress
+import threading
+from http.server import HTTPServer, BaseHTTPRequestHandler

 logging.basicConfig(
     level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
@@ -34,6 +36,8 @@ class JellyfinQBittorrentMonitor:
         default_stream_bitrate=10000000,
         min_torrent_speed=100,
         stream_bitrate_headroom=1.1,
+        webhook_port=0,
+        webhook_bind="127.0.0.1",
     ):
         self.jellyfin_url = jellyfin_url
         self.qbittorrent_url = qbittorrent_url
@@ -57,6 +61,12 @@
         self.streaming_stop_delay = streaming_stop_delay
         self.last_state_change = 0
+        # Webhook receiver: allows Jellyfin to push events instead of waiting for the poll
+        self.webhook_port = webhook_port
+        self.webhook_bind = webhook_bind
+        self.wake_event = threading.Event()
+        self.webhook_server = None
         # Local network ranges (RFC 1918 private networks + localhost)
         self.local_networks = [
             ipaddress.ip_network("10.0.0.0/8"),
@@ -79,9 +89,56 @@
     def signal_handler(self, signum, frame):
         logger.info("Received shutdown signal, cleaning up...")
         self.running = False
+        if self.webhook_server is not None:
+            # shutdown() blocks until serve_forever returns; run from a thread so we don't deadlock
+            threading.Thread(target=self.webhook_server.shutdown, daemon=True).start()
         self.restore_normal_limits()
         sys.exit(0)

+    def wake(self) -> None:
+        """Signal the main loop to re-evaluate state immediately."""
+        self.wake_event.set()
+
+    def sleep_or_wake(self, seconds: float) -> None:
+        """Wait up to `seconds`, returning early if a webhook wakes the loop."""
+        self.wake_event.wait(seconds)
+        self.wake_event.clear()
+
+    def start_webhook_server(self) -> None:
+        """Start a background HTTP server that wakes the monitor on any POST."""
+        if not self.webhook_port:
+            return
+        monitor = self
+
+        class WebhookHandler(BaseHTTPRequestHandler):
+            def do_POST(self):  # noqa: N802
+                length = int(self.headers.get("Content-Length", "0") or "0")
+                body = self.rfile.read(min(length, 65536)) if length else b""
+                event = "unknown"
+                try:
+                    if body:
+                        event = json.loads(body).get("NotificationType", "unknown")
+                except (json.JSONDecodeError, ValueError):
+                    pass
+                logger.info(f"Webhook received: {event}")
+                self.send_response(204)
+                self.end_headers()
+                monitor.wake()
+
+            def log_message(self, format, *args):
+                return  # suppress default access log
+
+        self.webhook_server = HTTPServer(
+            (self.webhook_bind, self.webhook_port), WebhookHandler
+        )
+        threading.Thread(
+            target=self.webhook_server.serve_forever, daemon=True, name="webhook-server"
+        ).start()
+        logger.info(
+            f"Webhook receiver listening on http://{self.webhook_bind}:{self.webhook_port}"
+        )
+
     def check_jellyfin_sessions(self) -> list[dict]:
         headers = (
             {"X-Emby-Token": self.jellyfin_api_key} if self.jellyfin_api_key else {}
@@ -297,10 +354,14 @@
         logger.info(f"Default stream bitrate: {self.default_stream_bitrate} bps")
         logger.info(f"Minimum torrent speed: {self.min_torrent_speed} KB/s")
         logger.info(f"Stream bitrate headroom: {self.stream_bitrate_headroom}x")
+        if self.webhook_port:
+            logger.info(f"Webhook receiver: {self.webhook_bind}:{self.webhook_port}")

         signal.signal(signal.SIGINT, self.signal_handler)
         signal.signal(signal.SIGTERM, self.signal_handler)
+        self.start_webhook_server()

         while self.running:
             try:
                 self.sync_qbittorrent_state()
@@ -309,7 +370,7 @@
                 active_streams = self.check_jellyfin_sessions()
             except ServiceUnavailable:
                 logger.warning("Jellyfin unavailable, maintaining current state")
-                time.sleep(self.check_interval)
+                self.sleep_or_wake(self.check_interval)
                 continue

             streaming_active = len(active_streams) > 0
@@ -394,13 +455,13 @@
                 self.current_state = desired_state
                 self.last_active_streams = active_streams

-                time.sleep(self.check_interval)
+                self.sleep_or_wake(self.check_interval)
             except KeyboardInterrupt:
                 break
             except Exception as e:
                 logger.error(f"Unexpected error in monitoring loop: {e}")
-                time.sleep(self.check_interval)
+                self.sleep_or_wake(self.check_interval)

         self.restore_normal_limits()
         logger.info("Monitor stopped")
@@ -421,6 +482,8 @@ if __name__ == "__main__":
     default_stream_bitrate = int(os.getenv("DEFAULT_STREAM_BITRATE", "10000000"))
     min_torrent_speed = int(os.getenv("MIN_TORRENT_SPEED", "100"))
     stream_bitrate_headroom = float(os.getenv("STREAM_BITRATE_HEADROOM", "1.1"))
+    webhook_port = int(os.getenv("WEBHOOK_PORT", "0"))
+    webhook_bind = os.getenv("WEBHOOK_BIND", "127.0.0.1")

     monitor = JellyfinQBittorrentMonitor(
         jellyfin_url=jellyfin_url,
@@ -434,6 +497,8 @@
         default_stream_bitrate=default_stream_bitrate,
         min_torrent_speed=min_torrent_speed,
         stream_bitrate_headroom=stream_bitrate_headroom,
+        webhook_port=webhook_port,
+        webhook_bind=webhook_bind,
     )

     monitor.run()
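
The `sleep_or_wake`/`wake` pair above is a standard Event-based interruptible sleep: the loop blocks on `Event.wait(timeout)` instead of `time.sleep`, so a webhook can cut the wait short. A self-contained sketch of the pattern, outside the monitor:

```python
import threading
import time

wake_event = threading.Event()

def sleep_or_wake(seconds: float) -> None:
    # wait() returns as soon as the event is set, or after the timeout;
    # clear() re-arms the event for the next loop iteration.
    wake_event.wait(seconds)
    wake_event.clear()

# Simulate a webhook arriving 0.1s into a 5s poll interval.
threading.Timer(0.1, wake_event.set).start()
start = time.monotonic()
sleep_or_wake(5.0)
print(f"woke after {time.monotonic() - start:.2f}s")  # well under 5s
```

One subtlety worth noting: because `clear()` runs after the wait, a webhook that fires during the loop body (not during the wait) still leaves the event set, so the next `sleep_or_wake` returns immediately rather than losing the wakeup.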

View File

@@ -0,0 +1,105 @@
{ pkgs, lib }:
let
  pluginVersion = "18.0.0.0";
  # GUID from the plugin's meta.json; addresses it on /Plugins/<guid>/Configuration.
  pluginGuid = "71552a5a-5c5c-4350-a2ae-ebe451a30173";

  package = pkgs.stdenvNoCC.mkDerivation {
    pname = "jellyfin-plugin-webhook";
    version = pluginVersion;
    src = pkgs.fetchurl {
      url = "https://repo.jellyfin.org/files/plugin/webhook/webhook_${pluginVersion}.zip";
      hash = "sha256-LFFojiPnBGl9KJ0xVyPBnCmatcaeVbllRwRkz5Z3dqI=";
    };
    nativeBuildInputs = [ pkgs.unzip ];
    unpackPhase = ''unzip "$src"'';
    installPhase = ''
      mkdir -p "$out"
      cp *.dll meta.json "$out/"
    '';
    dontFixup = true; # managed .NET assemblies must not be patched
  };

  # Minimal Handlebars template, base64 encoded. The monitor only needs the POST;
  # NotificationType is parsed for the debug log line.
  # Decoded: {"NotificationType":"{{NotificationType}}"}
  templateB64 = "eyJOb3RpZmljYXRpb25UeXBlIjoie3tOb3RpZmljYXRpb25UeXBlfX0ifQ==";

  # Build a PluginConfiguration payload accepted by Jellyfin's JSON deserializer.
  # Each webhook is `{ name, uri, notificationTypes }`.
  mkConfigJson =
    webhooks:
    builtins.toJSON {
      ServerUrl = "";
      GenericOptions = map (w: {
        NotificationTypes = w.notificationTypes;
        WebhookName = w.name;
        WebhookUri = w.uri;
        EnableMovies = true;
        EnableEpisodes = true;
        EnableVideos = true;
        EnableWebhook = true;
        Template = templateB64;
        Headers = [
          {
            Key = "Content-Type";
            Value = "application/json";
          }
        ];
      }) webhooks;
    };

  # Oneshot that POSTs the plugin configuration. Retries past the window
  # between Jellyfin API health and plugin registration.
  mkConfigureScript =
    { jellyfinUrl, webhooks }:
    pkgs.writeShellScript "jellyfin-webhook-configure" ''
      set -euo pipefail
      export PATH=${
        lib.makeBinPath [
          pkgs.coreutils
          pkgs.curl
        ]
      }
      URL=${lib.escapeShellArg jellyfinUrl}
      AUTH="Authorization: MediaBrowser Token=\"$(cat "$CREDENTIALS_DIRECTORY/jellyfin-api-key")\""
      CONFIG=${lib.escapeShellArg (mkConfigJson webhooks)}
      for _ in $(seq 1 120); do curl -sf -o /dev/null "$URL/health" && break; sleep 1; done
      curl -sf -o /dev/null "$URL/health"
      for _ in $(seq 1 60); do
        if printf '%s' "$CONFIG" | curl -sf -X POST \
          -H "$AUTH" -H "Content-Type: application/json" --data-binary @- \
          "$URL/Plugins/${pluginGuid}/Configuration"; then
          echo "Jellyfin webhook plugin configured"; exit 0
        fi
        sleep 1
      done
      echo "Failed to configure webhook plugin" >&2; exit 1
    '';

  # Materialise a writable copy of the plugin. Jellyfin rewrites meta.json at
  # runtime, so a read-only nix-store symlink would EACCES.
  mkInstallScript =
    { pluginsDir }:
    pkgs.writeShellScript "jellyfin-webhook-install" ''
      set -euo pipefail
      export PATH=${lib.makeBinPath [ pkgs.coreutils ]}
      dst=${lib.escapeShellArg "${pluginsDir}/Webhook_${pluginVersion}"}
      mkdir -p ${lib.escapeShellArg pluginsDir}
      rm -rf "$dst" && mkdir -p "$dst"
      cp ${package}/*.dll ${package}/meta.json "$dst/"
      chmod u+rw "$dst"/*
    '';
in
{
  inherit
    package
    pluginVersion
    pluginGuid
    mkConfigureScript
    mkInstallScript
    ;
}
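
The `templateB64` value can be checked against the comment's claim with a few lines of Python:

```python
import base64

template_b64 = "eyJOb3RpZmljYXRpb25UeXBlIjoie3tOb3RpZmljYXRpb25UeXBlfX0ifQ=="
decoded = base64.b64decode(template_b64).decode("utf-8")
print(decoded)  # {"NotificationType":"{{NotificationType}}"}
```

The Webhook plugin stores its Handlebars template base64-encoded in its configuration, so the Nix file ships the encoded form directly rather than encoding at eval time.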

View File

@@ -26,6 +26,14 @@
 services.caddy.virtualHosts."jellyfin.${service_configs.https.domain}".extraConfig = ''
   reverse_proxy :${builtins.toString service_configs.ports.private.jellyfin.port} {
+    # Disable response buffering for streaming. Caddy's default partial
+    # buffering delays fMP4-HLS segments and direct-play responses where
+    # Content-Length is known (so auto-flush doesn't trigger).
+    flush_interval -1
+    transport http {
+      # Localhost: compression wastes CPU re-encoding already-compressed media.
+      compression off
+    }
     header_up X-Real-IP {remote_host}
     header_up X-Forwarded-For {remote_host}
     header_up X-Forwarded-Proto {scheme}

View File

@@ -37,15 +37,21 @@
   servers.${service_configs.minecraft.server_name} = {
     enable = true;
-    package = pkgs.fabricServers.fabric-1_21_11;
+    package = pkgs.fabricServers.fabric-26_1_2.override { jre_headless = pkgs.openjdk25_headless; };
     jvmOpts = lib.concatStringsSep " " [
       # Memory
       "-Xmx${builtins.toString service_configs.minecraft.memory.heap_size_m}M"
       "-Xms${builtins.toString service_configs.minecraft.memory.heap_size_m}M"
       # GC
       "-XX:+UseZGC"
       "-XX:+ZGenerational"
+      # added in new minecraft version
+      "-XX:+UseCompactObjectHeaders"
+      "-XX:+UseStringDeduplication"
       # Base JVM optimizations (brucethemoose/Minecraft-Performance-Flags-Benchmarks)
       "-XX:+UnlockExperimentalVMOptions"
       "-XX:+UnlockDiagnosticVMOptions"
@@ -67,6 +73,7 @@
       "-XX:NonProfiledCodeHeapSize=194M"
       "-XX:NmethodSweepActivity=1"
       "-XX:+UseVectorCmov"
       # Large pages (requires vm.nr_hugepages sysctl)
       "-XX:+UseLargePages"
       "-XX:LargePageSizeInBytes=${builtins.toString service_configs.minecraft.memory.large_page_size_m}M"
@@ -92,71 +99,68 @@
       with pkgs;
       builtins.attrValues {
         FabricApi = fetchurl {
-          url = "https://cdn.modrinth.com/data/P7dR8mSH/versions/i5tSkVBH/fabric-api-0.141.3%2B1.21.11.jar";
-          sha512 = "c20c017e23d6d2774690d0dd774cec84c16bfac5461da2d9345a1cd95eee495b1954333c421e3d1c66186284d24a433f6b0cced8021f62e0bfa617d2384d0471";
+          url = "https://cdn.modrinth.com/data/P7dR8mSH/versions/fm7UYECV/fabric-api-0.145.4%2B26.1.2.jar";
+          sha512 = "ffd5ef62a745f76cd2e5481252cb7bc67006c809b4f436827d05ea22c01d19279e94a3b24df3d57e127af1cd08440b5de6a92a4ea8f39b2dcbbe1681275564c3";
         };
-        FerriteCore = fetchurl {
-          url = "https://cdn.modrinth.com/data/uXXizFIs/versions/Ii0gP3D8/ferritecore-8.2.0-fabric.jar";
-          sha512 = "3210926a82eb32efd9bcebabe2f6c053daf5c4337eebc6d5bacba96d283510afbde646e7e195751de795ec70a2ea44fef77cb54bf22c8e57bb832d6217418869";
-        };
+        # No 26.1.2 version available
+        # FerriteCore = fetchurl {
+        #   url = "https://cdn.modrinth.com/data/uXXizFIs/versions/d5ddUdiB/ferritecore-9.0.0-fabric.jar";
+        #   sha512 = "d81fa97e11784c19d42f89c2f433831d007603dd7193cee45fa177e4a6a9c52b384b198586e04a0f7f63cd996fed713322578bde9a8db57e1188854ae5cbe584";
+        # };
         Lithium = fetchurl {
-          url = "https://cdn.modrinth.com/data/gvQqBUqZ/versions/Ow7wA0kG/lithium-fabric-0.21.4%2Bmc1.21.11.jar";
-          sha512 = "f14a5c3d2fad786347ca25083f902139694f618b7c103947f2fd067a7c5ee88a63e1ef8926f7d693ea79ed7d00f57317bae77ef9c2d630bf5ed01ac97a752b94";
+          url = "https://cdn.modrinth.com/data/gvQqBUqZ/versions/v2xoRvRP/lithium-fabric-0.24.1%2Bmc26.1.2.jar";
+          sha512 = "8711bc8c6f39be4c8511becb7a68e573ced56777bd691639f2fc62299b35bb4ccd2efe4a39bd9c308084b523be86a5f5c4bf921ab85f7a22bf075d8ea2359621";
         };
         NoChatReports = fetchurl {
-          url = "https://cdn.modrinth.com/data/qQyHxfxd/versions/rhykGstm/NoChatReports-FABRIC-1.21.11-v2.18.0.jar";
-          sha512 = "d2c35cc8d624616f441665aff67c0e366e4101dba243bad25ed3518170942c1a3c1a477b28805cd1a36c44513693b1c55e76bea627d3fced13927a3d67022ccc";
+          url = "https://cdn.modrinth.com/data/qQyHxfxd/versions/2yrLNE3S/NoChatReports-FABRIC-26.1-v2.19.0.jar";
+          sha512 = "94d58a1a4cde4e3b1750bdf724e65c5f4ff3436c2532f36a465d497d26bf59f5ac996cddbff8ecdfed770c319aa2f2dcc9c7b2d19a35651c2a7735c5b2124dad";
         };
         squaremap = fetchurl {
-          url = "https://cdn.modrinth.com/data/PFb7ZqK6/versions/BW8lMXBi/squaremap-fabric-mc1.21.11-1.3.12.jar";
-          sha512 = "f62eb791a3f5812eb174565d318f2e6925353f846ef8ac56b4e595f481494e0c281f26b9e9fcfdefa855093c96b735b12f67ee17c07c2477aa7a3439238670d9";
+          url = "https://cdn.modrinth.com/data/PFb7ZqK6/versions/UBN6MFvH/squaremap-fabric-mc26.1.2-1.3.13.jar";
+          sha512 = "97bc130184b5d0ddc4ff98a15acef6203459d982e0e2afbd49a2976d546c55a86ef22b841378b51dd782be9b2cfbe4cfa197717f2b7f6800fd8b4ff4df6e564f";
         };
         scalablelux = fetchurl {
-          url = "https://cdn.modrinth.com/data/Ps1zyz6x/versions/PV9KcrYQ/ScalableLux-0.1.6%2Bfabric.c25518a-all.jar";
-          sha512 = "729515c1e75cf8d9cd704f12b3487ddb9664cf9928e7b85b12289c8fbbc7ed82d0211e1851375cbd5b385820b4fedbc3f617038fff5e30b302047b0937042ae7";
+          url = "https://cdn.modrinth.com/data/Ps1zyz6x/versions/gYbHVCz8/ScalableLux-0.2.0%2Bfabric.2b63825-all.jar";
+          sha512 = "48565a4d8a1cbd623f0044086d971f2c0cf1c40e1d0b6636a61d41512f4c1c1ddff35879d9dba24b088a670ee254e2d5842d13a30b6d76df23706fa94ea4a58b";
         };
         c2me = fetchurl {
-          url = "https://cdn.modrinth.com/data/VSNURh3q/versions/QdLiMUjx/c2me-fabric-mc1.21.11-0.3.7%2Balpha.0.7.jar";
-          sha512 = "f9543febe2d649a82acd6d5b66189b6a3d820cf24aa503ba493fdb3bbd4e52e30912c4c763fe50006f9a46947ae8cd737d420838c61b93429542573ed67f958e";
+          url = "https://cdn.modrinth.com/data/VSNURh3q/versions/yrNQQ1AQ/c2me-fabric-mc26.1.2-0.3.7%2Balpha.0.65.jar";
+          sha512 = "6666ebaa3bfa403e386776590fc845b7c306107d37ebc7b1be3b057893fbf9f933abb2314c171d7fe19c177cf8823cb47fdc32040d34a9704f5ab656dd5d93f8";
         };
-        krypton = fetchurl {
-          url = "https://cdn.modrinth.com/data/fQEb0iXm/versions/O9LmWYR7/krypton-0.2.10.jar";
-          sha512 = "4dcd7228d1890ddfc78c99ff284b45f9cf40aae77ef6359308e26d06fa0d938365255696af4cc12d524c46c4886cdcd19268c165a2bf0a2835202fe857da5cab";
-        };
-        better-fabric-console = fetchurl {
-          url = "https://cdn.modrinth.com/data/Y8o1j1Sf/versions/6aIKl5wy/better-fabric-console-mc1.21.11-1.2.9.jar";
-          sha512 = "427247dafd99df202ee10b4bf60ffcbbecbabfadb01c167097ffb5b85670edb811f4d061c2551be816295cbbc6b8ec5ec464c14a6ff41912ef1f6c57b038d320";
-        };
-        disconnect-packet-fix = fetchurl {
-          url = "https://cdn.modrinth.com/data/rd9rKuJT/versions/Gv74xveQ/disconnect-packet-fix-fabric-2.0.0.jar";
-          sha512 = "1fd6f09a41ce36284e1a8e9def53f3f6834d7201e69e54e24933be56445ba569fbc26278f28300d36926ba92db6f4f9c0ae245d23576aaa790530345587316db";
-        };
+        # No 26.1 version available
+        # krypton = fetchurl {
+        #   url = "https://cdn.modrinth.com/data/fQEb0iXm/versions/O9LmWYR7/krypton-0.2.10.jar";
+        #   sha512 = "4dcd7228d1890ddfc78c99ff284b45f9cf40aae77ef6359308e26d06fa0d938365255696af4cc12d524c46c4886cdcd19268c165a2bf0a2835202fe857da5cab";
+        # };
+        # No 26.1.2 version available
+        # disconnect-packet-fix = fetchurl {
+        #   url = "https://cdn.modrinth.com/data/rd9rKuJT/versions/x9gVeaTU/disconnect-packet-fix-fabric-2.1.0.jar";
+        #   sha512 = "bf84d02bdcd737706df123e452dd31ef535580fa4ced6af1e4ceea022fef94e4764775253e970b8caa1292e2fa00eb470557f70b290fafdb444479fa801b07a1";
+        # };
         packet-fixer = fetchurl {
-          url = "https://cdn.modrinth.com/data/c7m1mi73/versions/CUh1DWeO/packetfixer-fabric-3.3.4-1.21.11.jar";
-          sha512 = "33331b16cb40c5e6fbaade3cacc26f3a0e8fa5805a7186f94d7366a0e14dbeee9de2d2e8c76fa71f5e9dd24eb1c261667c35447e32570ea965ca0f154fdfba0a";
+          url = "https://cdn.modrinth.com/data/c7m1mi73/versions/M8PqPQr4/packetfixer-fabric-3.3.4-26.1.2.jar";
+          sha512 = "698020edba2a1fd80bb282bfd4832a00d6447b08eaafbc2e16a8f3bf89e187fc9a622c92dfe94ae140dd485fc0220a86890f12158ec08054e473fef8337829bc";
         };
-        # fork of Modernfix for 1.21.11 (upstream will support 26.1)
+        # mVUS fork: upstream ModernFix no longer ships Fabric builds
         modernfix = fetchurl {
-          url = "https://cdn.modrinth.com/data/TjSm1wrD/versions/JwSO8JCN/modernfix-5.25.2-build.4.jar";
-          sha512 = "0d65c05ac0475408c58ef54215714e6301113101bf98bfe4bb2ba949fbfddd98225ac4e2093a5f9206a9e01ba80a931424b237bdfa3b6e178c741ca6f7f8c6a3";
+          url = "https://cdn.modrinth.com/data/TjSm1wrD/versions/dqQ7mabN/modernfix-5.26.2-build.1.jar";
+          sha512 = "fbef93c2dabf7bcd0ccd670226dfc4958f7ebe5d8c2b1158e88a65e6954a40f595efd58401d2a3dbb224660dca5952199cf64df29100e7bd39b1b1941290b57b";
         };
         debugify = fetchurl {
-          url = "https://cdn.modrinth.com/data/QwxR6Gcd/versions/8Q49lnaU/debugify-1.21.11%2B1.0.jar";
-          sha512 = "04d82dd33f44ced37045f1f9a54ad4eacd70861ff74a8800f2d2df358579e6cb0ea86a34b0086b3e87026b1a0691dd6594b4fdc49f89106466eea840518beb03";
+          url = "https://cdn.modrinth.com/data/QwxR6Gcd/versions/mfTTfiKn/debugify-26.1.2%2B1.0.jar";
+          sha512 = "63db82f2163b9f7fc27ebea999ffcd7a961054435b3ed7d8bf32d905b5f60ce81715916b7fd4e9509dd23703d5492059f3ce7e5f176402f8ed4f985a415553f4";
         };
       }
     );
   };

View File

@@ -26,11 +26,12 @@ lib.mkIf config.services.xmrig.enable {
      environment = {
        POLL_INTERVAL = "3";
        GRACE_PERIOD = "15";
-        # This server's background services (qbittorrent, monero, bazarr, etc.)
-        # produce 5-14% non-nice CPU during normal operation. Thresholds must
-        # sit above that noise floor.
+        # Background services (qbittorrent, bitmagnet, postgresql, etc.) produce
+        # 15-25% non-nice CPU during normal operation. The stop threshold must
+        # sit above transient spikes; the resume threshold must be below the
+        # steady-state floor to avoid restarting xmrig while services are active.
        CPU_STOP_THRESHOLD = "40";
-        CPU_RESUME_THRESHOLD = "30";
+        CPU_RESUME_THRESHOLD = "10";
        STARTUP_COOLDOWN = "10";
        STATE_DIR = "/var/lib/xmrig-auto-pause";
      };
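The 40/10 split above is a hysteresis band, not a single threshold: xmrig stops once other load crosses 40% and only resumes once it falls back under 10%, safely below the 15-25% steady-state floor. A minimal sketch of that decision rule (a hypothetical helper, not the actual auto-pause script):

```python
def next_state(running: bool, other_cpu: float,
               stop_at: float = 40.0, resume_at: float = 10.0) -> bool:
    """Return whether xmrig should run, given non-xmrig CPU usage in percent.

    The wide gap between stop_at and resume_at is deliberate: with a
    15-25% steady-state floor, a resume threshold of 30 would flap.
    """
    if running and other_cpu >= stop_at:
        return False   # yield to real workloads
    if not running and other_cpu < resume_at:
        return True    # system is quiet again
    return running     # inside the hysteresis band: hold current state

# A stopped miner stays stopped at a 20% steady-state floor:
assert next_state(False, 20.0) is False
# ...and only resumes once other load drops below resume_at:
assert next_state(False, 5.0) is True
```

Under the old 30% resume threshold, `next_state(False, 20.0)` would have returned True, restarting xmrig straight into contention with the background services.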

View File

@@ -23,7 +23,9 @@ in
    (lib.serviceFilePerms "qbittorrent" [
      # 0770: group (media) needs write to delete files during upgrades —
      # Radarr/Sonarr must unlink the old file before placing the new one.
-      "Z ${config.services.qbittorrent.serverConfig.Preferences.Downloads.SavePath} 0770 ${config.services.qbittorrent.user} ${service_configs.media_group}"
+      # Non-recursive (z not Z): UMask=0007 ensures new files get correct perms.
+      # A recursive Z rule would walk millions of files on the HDD pool at every boot.
+      "z ${config.services.qbittorrent.serverConfig.Preferences.Downloads.SavePath} 0770 ${config.services.qbittorrent.user} ${service_configs.media_group}"
      "z ${config.services.qbittorrent.serverConfig.Preferences.Downloads.TempPath} 0700 ${config.services.qbittorrent.user} ${config.services.qbittorrent.group}"
      "Z ${config.services.qbittorrent.profileDir} 0700 ${config.services.qbittorrent.user} ${config.services.qbittorrent.group}"
    ])
@@ -162,6 +164,35 @@ in
      _: path: "d ${path} 0770 ${config.services.qbittorrent.user} ${service_configs.media_group} -"
    ) service_configs.torrent.categories;
+  # Periodically checkpoint qBittorrent's SQLite WAL (Write-Ahead Log).
+  # qBittorrent holds a read transaction open for its entire lifetime,
+  # preventing SQLite's auto-checkpoint from running. The WAL grows
+  # unbounded (observed: 405 MB) and must be replayed on next startup,
+  # causing 10+ minute "internal preparations" hangs.
+  # A second sqlite3 connection can checkpoint concurrently and safely.
+  # See: https://github.com/qbittorrent/qBittorrent/issues/20433
+  systemd.services.qbittorrent-wal-checkpoint = {
+    description = "Checkpoint qBittorrent SQLite WAL";
+    after = [ "qbittorrent.service" ];
+    requires = [ "qbittorrent.service" ];
+    serviceConfig = {
+      Type = "oneshot";
+      ExecStart = "${pkgs.sqlite}/bin/sqlite3 ${config.services.qbittorrent.profileDir}/qBittorrent/data/torrents.db 'PRAGMA wal_checkpoint(TRUNCATE);'";
+      User = config.services.qbittorrent.user;
+      Group = config.services.qbittorrent.group;
+    };
+  };
+  systemd.timers.qbittorrent-wal-checkpoint = {
+    description = "Periodically checkpoint qBittorrent SQLite WAL";
+    wantedBy = [ "timers.target" ];
+    timerConfig = {
+      OnUnitActiveSec = "4h";
+      OnBootSec = "30min";
+      RandomizedDelaySec = "10min";
+    };
+  };
  users.users.${config.services.qbittorrent.user}.extraGroups = [
    service_configs.media_group
  ];
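The "second sqlite3 connection" claim in the comment above can be demonstrated outside qBittorrent entirely. A small sketch using Python's sqlite3 module on a throwaway database (the table name is made up, and unlike qBittorrent this writer holds no long-lived read transaction):

```python
import os
import sqlite3
import tempfile

# A WAL-mode database standing in for qBittorrent's torrents.db.
path = os.path.join(tempfile.mkdtemp(), "torrents.db")
writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL;")
writer.execute("CREATE TABLE resume_data (x)")
writer.executemany("INSERT INTO resume_data VALUES (?)", [(i,) for i in range(1000)])
writer.commit()
assert os.path.getsize(path + "-wal") > 0  # frames have accumulated in the WAL

# A second, independent connection checkpoints while the first stays open --
# the same thing the qbittorrent-wal-checkpoint oneshot does with the CLI.
other = sqlite3.connect(path)
busy, _wal_frames, _moved = other.execute("PRAGMA wal_checkpoint(TRUNCATE);").fetchone()
assert busy == 0                            # checkpoint completed
assert os.path.getsize(path + "-wal") == 0  # TRUNCATE shrank the WAL to zero bytes
```

If a reader were mid-transaction, TRUNCATE would report busy rather than corrupt anything, which is why running the checkpoint from a timer is safe.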

View File

@@ -6,6 +6,21 @@
}:
let
  jfLib = import ./jellyfin-test-lib.nix { inherit pkgs lib; };
+  webhookPlugin = import ../services/jellyfin/jellyfin-webhook-plugin.nix { inherit pkgs lib; };
+  configureWebhook = webhookPlugin.mkConfigureScript {
+    jellyfinUrl = "http://localhost:8096";
+    webhooks = [
+      {
+        name = "qBittorrent Monitor";
+        uri = "http://127.0.0.1:9898/";
+        notificationTypes = [
+          "PlaybackStart"
+          "PlaybackProgress"
+          "PlaybackStop"
+        ];
+      }
+    ];
+  };
in
pkgs.testers.runNixOSTest {
  name = "jellyfin-qbittorrent-monitor";
@@ -69,11 +84,30 @@ pkgs.testers.runNixOSTest {
        }
      ];
-      # Create directories for qBittorrent
+      # Create directories for qBittorrent.
      systemd.tmpfiles.rules = [
        "d /var/lib/qbittorrent/downloads 0755 qbittorrent qbittorrent"
        "d /var/lib/qbittorrent/incomplete 0755 qbittorrent qbittorrent"
      ];
+      # Install the Jellyfin Webhook plugin before Jellyfin starts, mirroring
+      # the production module. Jellyfin rewrites meta.json at runtime so a
+      # read-only nix-store symlink would fail — we materialise a writable copy.
+      systemd.services."jellyfin-webhook-install" = {
+        description = "Install Jellyfin Webhook plugin files";
+        before = [ "jellyfin.service" ];
+        wantedBy = [ "jellyfin.service" ];
+        serviceConfig = {
+          Type = "oneshot";
+          RemainAfterExit = true;
+          User = "jellyfin";
+          Group = "jellyfin";
+          UMask = "0077";
+          ExecStart = webhookPlugin.mkInstallScript {
+            pluginsDir = "/var/lib/jellyfin/plugins";
+          };
+        };
+      };
    };
    # Public test IP (RFC 5737 TEST-NET-3) so Jellyfin sees it as external
@@ -394,6 +428,97 @@ pkgs.testers.runNixOSTest {
    local_playback["PositionTicks"] = 50000000
    server.succeed(f"curl -sf -X POST 'http://localhost:8096/Sessions/Playing/Stopped' -d '{json.dumps(local_playback)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{local_auth}, Token={local_token}'")
+    # === WEBHOOK TESTS ===
+    #
+    # Configure the Jellyfin Webhook plugin to target the monitor, then verify
+    # that the real plugin-to-monitor path reacts faster than any possible
+    # poll. CHECK_INTERVAL=30 rules out polling as the cause.
+    WEBHOOK_PORT = 9898
+    WEBHOOK_CREDS = "/tmp/webhook-creds"
+
+    # Start a webhook-enabled monitor with a long poll interval.
+    server.succeed("systemctl stop monitor-test || true")
+    time.sleep(1)
+    server.succeed(f"""
+        systemd-run --unit=monitor-webhook \
+            --setenv=JELLYFIN_URL=http://localhost:8096 \
+            --setenv=JELLYFIN_API_KEY={token} \
+            --setenv=QBITTORRENT_URL=http://localhost:8080 \
+            --setenv=CHECK_INTERVAL=30 \
+            --setenv=STREAMING_START_DELAY=1 \
+            --setenv=STREAMING_STOP_DELAY=1 \
+            --setenv=TOTAL_BANDWIDTH_BUDGET=50000000 \
+            --setenv=SERVICE_BUFFER=2000000 \
+            --setenv=DEFAULT_STREAM_BITRATE=10000000 \
+            --setenv=MIN_TORRENT_SPEED=100 \
+            --setenv=WEBHOOK_PORT={WEBHOOK_PORT} \
+            --setenv=WEBHOOK_BIND=127.0.0.1 \
+            {python} {monitor}
+    """)
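The TOTAL_BANDWIDTH_BUDGET / SERVICE_BUFFER / DEFAULT_STREAM_BITRATE / MIN_TORRENT_SPEED knobs suggest a reservation scheme: each active stream's bitrate (falling back to a default when unreported) plus a service buffer is carved out of the budget, and the remainder goes to torrents. The monitor's actual formula is not part of this diff; the function below is a plausible sketch only, in the same units the env vars use:

```python
def torrent_limit(stream_bitrates: list[int],
                  budget: int = 50_000_000,
                  service_buffer: int = 2_000_000,
                  default_bitrate: int = 10_000_000,
                  min_speed: int = 100) -> int:
    """Bandwidth left over for qBittorrent after reserving for active streams.

    Streams whose bitrate Jellyfin does not report (0) are costed at
    default_bitrate; the result never drops below min_speed.
    """
    reserved = sum(b or default_bitrate for b in stream_bitrates)
    return max(budget - service_buffer - reserved, min_speed)

assert torrent_limit([]) == 48_000_000       # idle: budget minus buffer
assert torrent_limit([0]) == 38_000_000      # unknown bitrate -> default
assert torrent_limit([0] * 10) == 100        # oversubscribed: clamp to floor
```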
+    server.wait_until_succeeds(f"ss -ltn | grep -q ':{WEBHOOK_PORT}'", timeout=15)
+    time.sleep(2)
+    assert not is_throttled(), "Should start unthrottled"
+
+    # Drop the admin token where the configure script expects it (production uses agenix).
+    server.succeed(f"mkdir -p {WEBHOOK_CREDS} && echo '{token}' > {WEBHOOK_CREDS}/jellyfin-api-key")
+    server.succeed(
+        f"systemd-run --wait --unit=webhook-configure-test "
+        f"--setenv=CREDENTIALS_DIRECTORY={WEBHOOK_CREDS} "
+        f"${configureWebhook}"
+    )
+
+    with subtest("Real PlaybackStart event throttles via the plugin"):
+        playback_start = {
+            "ItemId": movie_id,
+            "MediaSourceId": media_source_id,
+            "PlaySessionId": "test-plugin-start",
+            "CanSeek": True,
+            "IsPaused": False,
+        }
+        start_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing' -d '{json.dumps(playback_start)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
+        client.succeed(start_cmd)
+        server.wait_until_succeeds(
+            "curl -sf http://localhost:8080/api/v2/transfer/speedLimitsMode | grep -q '^1$'",
+            timeout=5,
+        )
+
+    # Let STREAMING_STOP_DELAY (1s) elapse so the upcoming stop is not swallowed by hysteresis.
+    time.sleep(2)
+
+    with subtest("Real PlaybackStop event unthrottles via the plugin"):
+        playback_stop = {
+            "ItemId": movie_id,
+            "MediaSourceId": media_source_id,
+            "PlaySessionId": "test-plugin-start",
+            "PositionTicks": 50000000,
+        }
+        stop_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing/Stopped' -d '{json.dumps(playback_stop)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
+        client.succeed(stop_cmd)
+        server.wait_until_succeeds(
+            "curl -sf http://localhost:8080/api/v2/transfer/speedLimitsMode | grep -q '^0$'",
+            timeout=10,
+        )
+
+    # Restore fast-polling monitor for the service-restart tests below.
+    server.succeed("systemctl stop monitor-webhook || true")
+    time.sleep(1)
+    server.succeed(f"""
+        systemd-run --unit=monitor-test \
+            --setenv=JELLYFIN_URL=http://localhost:8096 \
+            --setenv=JELLYFIN_API_KEY={token} \
+            --setenv=QBITTORRENT_URL=http://localhost:8080 \
+            --setenv=CHECK_INTERVAL=1 \
+            --setenv=STREAMING_START_DELAY=1 \
+            --setenv=STREAMING_STOP_DELAY=1 \
+            --setenv=TOTAL_BANDWIDTH_BUDGET=50000000 \
+            --setenv=SERVICE_BUFFER=2000000 \
+            --setenv=DEFAULT_STREAM_BITRATE=10000000 \
+            --setenv=MIN_TORRENT_SPEED=100 \
+            {python} {monitor}
+    """)
+    time.sleep(2)
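The path exercised above reduces to a tiny HTTP receiver that flips throttling the moment a Playback* event arrives, instead of waiting for the next poll tick. A self-contained sketch (this is not the monitor's code, and it assumes the Webhook plugin posts JSON carrying a NotificationType field):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

throttled = threading.Event()  # stands in for qBittorrent's speedLimitsMode

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body).get("NotificationType", "")
        # React on the event itself -- no CHECK_INTERVAL latency involved.
        if event == "PlaybackStart":
            throttled.set()
        elif event == "PlaybackStop":
            throttled.clear()
        self.send_response(204)
        self.end_headers()

    def log_message(self, *_):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), WebhookHandler)  # the test binds 9898
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def post(notification_type):
    payload = json.dumps({"NotificationType": notification_type}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

post("PlaybackStart")
assert throttled.is_set()       # throttled the instant the event lands
post("PlaybackStop")
assert not throttled.is_set()   # and released just as quickly
server.shutdown()
```

Binding to 127.0.0.1 mirrors WEBHOOK_BIND in the test: only the local Jellyfin plugin can reach the receiver.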
    # === SERVICE RESTART TESTS ===
    with subtest("qBittorrent restart during throttled state re-applies throttling"):