Compare commits


4 Commits

SHA1 Message Date
0dc307874d update 2026-03-15 13:25:54 -04:00
bd192e4803 fix arr services 2026-03-15 13:23:49 -04:00
6318e90a36 qbt: things 2026-03-15 02:50:19 -04:00
4d1ec4b6e1 AGENTS.md: init 2026-03-15 02:38:13 -04:00
5 changed files with 161 additions and 32 deletions

AGENTS.md (new file, 136 lines)

@@ -0,0 +1,136 @@
# AGENTS.md - server-config (NixOS server "muffin")
## Overview
NixOS flake-based server configuration for host **muffin** (deployed to `root@server-public`).
Uses deploy-rs for remote deployment, disko for disk management, impermanence (tmpfs root),
agenix for secrets, lanzaboote for secure boot, and ZFS for data storage.
## Target Hardware
- **CPU**: AMD Ryzen 5 5600X (6C/12T, Zen 3 / `znver3`)
- **RAM**: 64 GB DDR4, no swap
- **Motherboard**: ASRock B550M Pro4
- **Boot drive**: WD_BLACK SN770 1TB NVMe (f2fs: 20G /persistent, 911G /nix; root is tmpfs)
- **SSD pool `tank`**: 4x 2TB SATA SSDs (raidz2, 7.27T raw, ~2.2T free) -- services, backups, music
- **HDD pool `hdds`**: 4x 18TB Seagate Exos X18 (raidz1, 65.5T raw, ~17.9T free) -- torrents, monero
- **USB**: 8GB VFAT drive mounted at /mnt/usb-secrets (agenix identity key)
- **GPU**: Intel (integrated, xe driver) -- used for Jellyfin hardware transcoding
- **NIC**: enp4s0 (static 192.168.1.50/24)
## Build / Deploy / Test Commands
```bash
# Format code (nixfmt-tree)
nix fmt
# Build the system configuration (check for eval errors)
nix build .#nixosConfigurations.muffin.config.system.build.toplevel -L
# Deploy to server
nix run .#deploy -- .#muffin
# Run ALL tests (NixOS VM tests, takes a long time)
nix build .#packages.x86_64-linux.tests -L
# Run a SINGLE test by name (preferred during development)
nix build .#test-zfsTest -L
nix build .#test-testTest -L
nix build .#test-fail2banSshTest -L
nix build .#test-ntfyAlertsTest -L
nix build .#test-filePermsTest -L
# Pattern: nix build .#test-<testName> -L
# Test names are defined in tests/tests.nix (keys of the returned attrset)
# Check flake outputs (list what's available)
nix flake show
# Evaluate without building (fast syntax/eval check)
nix eval .#nixosConfigurations.muffin.config.system.build.toplevel --no-build 2>&1 | head -5
```
## Code Style
### Nix Formatting
- **Formatter**: `nixfmt-tree` (declared in flake.nix). Always run `nix fmt` before committing.
- **Indentation**: 2 spaces (enforced by nixfmt-tree).
### Module Pattern
Every `.nix` file is a function taking an attrset with named args and `...`:
```nix
{
config,
lib,
pkgs,
service_configs,
...
}:
{
# module body
}
```
- Function args on separate lines, one per line, with trailing comma.
- Opening brace on its own line for multi-line arg lists.
- Use `service_configs` (from `service-configs.nix`) for all ports, paths, domains -- never hardcode.
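Following this pattern, a minimal module that pulls its port from `service-configs.nix` could look roughly like this (the `jellyfin` attribute paths here are illustrative, not copied from the repo):

```nix
{
  config,
  lib,
  pkgs,
  service_configs,
  ...
}:
{
  # Illustrative only: reference the centralized config instead of hardcoding a port.
  services.jellyfin.enable = true;
  networking.firewall.allowedTCPPorts = [ service_configs.ports.jellyfin ];
}
```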
### Service File Convention
Each service file in `services/` follows this structure:
1. `imports` block with `lib.serviceMountWithZpool` and optionally `lib.serviceFilePerms`
2. Service configuration (`services.<name> = { ... }`)
3. Caddy reverse proxy vhost (`services.caddy.virtualHosts."subdomain.${service_configs.https.domain}"`)
4. Firewall rules if needed (`networking.firewall.allowed{TCP,UDP}Ports`)
5. fail2ban jail if the service has authentication (`services.fail2ban.jails.<name>`)
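A skeletal service file following steps 1-5 might be sketched like this (the service name `foo`, its dataset, and its options are hypothetical placeholders; consult existing files in `services/` for real examples):

```nix
{
  config,
  lib,
  service_configs,
  ...
}:
{
  # 1. Mounts + permissions for a hypothetical SSD-backed service "foo"
  imports = [
    (lib.serviceMountWithZpool "foo" "tank" [ service_configs.foo.dataDir ])
    (lib.serviceFilePerms "foo" [ "Z ${service_configs.foo.dataDir} 0700 foo foo" ])
  ];
  # 2. Service configuration
  services.foo = {
    enable = true;
    port = service_configs.ports.foo;
  };
  # 3. Caddy reverse proxy vhost
  services.caddy.virtualHosts."foo.${service_configs.https.domain}".extraConfig = ''
    reverse_proxy localhost:${builtins.toString service_configs.ports.foo}
  '';
  # 4. Firewall rules (only if needed)
  networking.firewall.allowedTCPPorts = [ service_configs.ports.foo ];
  # 5. fail2ban jail (only if the service has authentication)
  services.fail2ban.jails.foo.settings = { };
}
```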
### Custom Lib Functions (modules/lib.nix)
- `lib.serviceMountWithZpool serviceName zpoolName [dirs]` -- ensures ZFS datasets are mounted before service starts, validates pool membership
- `lib.serviceFilePerms serviceName [tmpfilesRules]` -- sets file permissions via systemd-tmpfiles before service starts
- `lib.optimizePackage pkg` -- applies `-O3 -march=znver3 -mtune=znver3` compiler flags
- `lib.vpnNamespaceOpenPort port serviceName` -- confines service to WireGuard VPN namespace
### Naming Conventions
- **Files**: lowercase with hyphens (`jellyfin-qbittorrent-monitor.nix`)
- **Test names**: camelCase with `Test` suffix in `tests/tests.nix` (`fail2banSshTest`, `zfsTest`)
- **Ports**: all declared in `service-configs.nix` under `ports.*`, referenced as `service_configs.ports.<name>`
- **ZFS datasets**: `tank/services/<name>` for SSD-backed, `hdds/services/<name>` for HDD-backed
- **Commit messages**: terse, lowercase; prefix with service/module name when scoped (`caddy: add redirect`, `zfs: remove unneeded options`). Generic changes use `update` or short description.
### Secrets
- **git-crypt**: `secrets/` directory and `usb-secrets/usb-secrets-key*` are encrypted (see `.gitattributes`)
- **agenix**: secrets declared in `modules/age-secrets.nix`, decrypted at runtime to `/run/agenix/`
- **Identity**: USB drive at `/mnt/usb-secrets/usb-secrets-key`
- Never read or commit plaintext secrets. Never log secret values.
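A new secret would typically be declared along these lines in `modules/age-secrets.nix` (the `example` name and mode are illustrative; agenix decrypts each declared secret to `/run/agenix/<name>`):

```nix
{
  # Decrypted at runtime to /run/agenix/example using the USB identity key
  age.secrets.example = {
    file = ../secrets/example.age;
    mode = "0400";
  };
}
```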
### Important Patterns
- **Impermanence**: Root `/` is tmpfs. Only `/persistent`, `/nix`, and ZFS mounts survive reboots. Any new persistent state must be declared in `modules/impermanence.nix`.
- **Port uniqueness**: `flake.nix` has an assertion that all ports in `service_configs.ports` are unique. Always add new ports there.
- **Hugepages**: Services needing large pages declare their budget in `service-configs.nix` under `hugepages_2m.services`. The kernel sysctl is set automatically from the total.
- **Domain**: Primary domain is `sigkill.computer`. Old domain `gardling.com` redirects automatically.
- **Hardened kernel**: Uses `linuxPackages_6_12_hardened`. Security-sensitive defaults apply.
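For instance, persisting a new state directory under the impermanence setup would be declared roughly like this in `modules/impermanence.nix` (the path is illustrative):

```nix
{
  environment.persistence."/persistent" = {
    # Illustrative: this directory survives reboots despite the tmpfs root
    directories = [ "/var/lib/example-service" ];
  };
}
```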
### Test Pattern
Tests use `pkgs.testers.runNixOSTest` (NixOS VM tests):
```nix
{ config, lib, pkgs, ... }:
pkgs.testers.runNixOSTest {
name = "descriptive-test-name";
nodes.machine = { pkgs, ... }: {
imports = [ /* modules under test */ ];
# VM config
};
testScript = ''
start_all()
machine.wait_for_unit("multi-user.target")
# Python test script using machine.succeed/machine.fail
'';
}
```
- Register new tests in `tests/tests.nix` with `handleTest ./filename.nix`
- Tests needing the overlay should use `pkgs.appendOverlays [ (import ../modules/overlays.nix) ]`
- Test scripts are Python; use `machine.succeed(...)`, `machine.fail(...)`, `assert`, `subtest`
## SSH Access
```bash
ssh root@server-public # deploy user
ssh primary@server-public # normal user (doas instead of sudo)
```

flake.lock (generated, 20 lines changed)

@@ -32,11 +32,11 @@
      ]
    },
    "locked": {
-      "lastModified": 1772566016,
+      "lastModified": 1773595529,
-      "narHash": "sha256-+LExMzFXahgoFYayrmqK4MhT9QkUplVwurcTDjk8kGI=",
+      "narHash": "sha256-fueNL7zKHnWNkOCNTjDowUfsUS4ERXrrL+HAWDKrhsQ=",
      "ref": "refs/heads/main",
-      "rev": "4cc1ae4e00844d2bd80da3869f17649c1cef3f8a",
+      "rev": "7c0a6176407a228a8363723f14f4d41c7cf1ea29",
-      "revCount": 3,
+      "revCount": 4,
      "type": "git",
      "url": "ssh://gitea@git.gardling.com/titaniumtown/arr-init"
    },
@@ -300,11 +300,11 @@
    },
    "nixos-hardware": {
      "locked": {
-        "lastModified": 1772972630,
+        "lastModified": 1773533765,
-        "narHash": "sha256-mUJxsNOrBMNOUJzN0pfdVJ1r2pxeqm9gI/yIKXzVVbk=",
+        "narHash": "sha256-qonGfS2lzCgCl59Zl63jF6dIRRpvW3AJooBGMaXjHiY=",
        "owner": "NixOS",
        "repo": "nixos-hardware",
-        "rev": "3966ce987e1a9a164205ac8259a5fe8a64528f72",
+        "rev": "f8e82243fd601afb9f59ad230958bd073795cbfe",
        "type": "github"
      },
@@ -316,11 +316,11 @@
    },
    "nixpkgs": {
      "locked": {
-        "lastModified": 1773375660,
+        "lastModified": 1773524153,
-        "narHash": "sha256-SEzUWw2Rf5Ki3bcM26nSKgbeoqi2uYy8IHVBqOKjX3w=",
+        "narHash": "sha256-Jms57zzlFf64ayKzzBWSE2SGvJmK+NGt8Gli71d9kmY=",
        "owner": "NixOS",
        "repo": "nixpkgs",
-        "rev": "3e20095fe3c6cbb1ddcef89b26969a69a1570776",
+        "rev": "e9f278faa1d0c2fc835bd331d4666b59b505a410",
        "type": "github"
      },
      "original": {


@@ -8,6 +8,7 @@
       dataDir = service_configs.prowlarr.dataDir;
       apiVersion = "v1";
       networkNamespacePath = "/run/netns/wg";
+      healthChecks = true;
       syncedApps = [
         {
           name = "Sonarr";
@@ -57,6 +58,7 @@
       serviceName = "sonarr";
       port = service_configs.ports.sonarr;
       dataDir = service_configs.sonarr.dataDir;
+      healthChecks = true;
       rootFolders = [ service_configs.media.tvDir ];
       downloadClients = [
         {
@@ -78,6 +80,7 @@
       serviceName = "radarr";
       port = service_configs.ports.radarr;
       dataDir = service_configs.radarr.dataDir;
+      healthChecks = true;
       rootFolders = [ service_configs.media.moviesDir ];
       downloadClients = [
         {


@@ -11,6 +11,9 @@
       service_configs.prowlarr.dataDir
     ])
     (lib.vpnNamespaceOpenPort service_configs.ports.prowlarr "prowlarr")
+    (lib.serviceFilePerms "prowlarr" [
+      "Z ${service_configs.prowlarr.dataDir} 0700 prowlarr prowlarr"
+    ])
   ];
   services.prowlarr = {
@@ -19,10 +22,6 @@
     settings.server.port = service_configs.ports.prowlarr;
   };
-  systemd.services.prowlarr.serviceConfig = {
-    ExecStartPre = "+${pkgs.coreutils}/bin/chown -R prowlarr /var/lib/prowlarr";
-  };
   services.caddy.virtualHosts."prowlarr.${service_configs.https.domain}".extraConfig = ''
     import ${config.age.secrets.caddy_auth.path}
     reverse_proxy ${config.vpnNamespaces.wg.namespaceAddress}:${builtins.toString service_configs.ports.prowlarr}


@@ -54,11 +54,11 @@
   serverConfig.BitTorrent = {
     Session = {
       MaxConnectionsPerTorrent = 100;
-      MaxUploadsPerTorrent = 15;
+      MaxUploadsPerTorrent = 50;
       MaxConnections = -1;
       MaxUploads = -1;
-      MaxActiveCheckingTorrents = 2; # reduce disk pressure from concurrent hash checks
+      MaxActiveCheckingTorrents = 2;
       # queueing
       QueueingSystemEnabled = true;
@@ -89,7 +89,7 @@
       inherit (config.services.qbittorrent.serverConfig.Preferences.Downloads) TempPath;
       TempPathEnabled = true;
-      ConnectionSpeed = 100;
+      ConnectionSpeed = 200; # half-open connections/s; faster peer discovery
       SaveResumeDataInterval = 300; # save resume data every 5 min (default 60s)
       ResumeDataStorageType = "SQLite"; # SQLite is more efficient than legacy per-file .fastresume storage
@@ -100,29 +100,20 @@
       DisableAutoTMMTriggers.DefaultSavePathChanged = false;
       ChokingAlgorithm = "RateBased";
-      SeedChokingAlgorithm = "FastestUpload"; # unchoke peers we upload to fastest
       PieceExtentAffinity = true;
       SuggestMode = true;
-      # max_queued_disk_bytes: the max bytes waiting in the disk I/O queue.
-      # When this limit is reached, peer connections stop reading from their
-      # sockets until the disk thread catches up -- causing the spike-then-zero
-      # pattern. Default is 1MB; high_performance_seed() uses 7MB.
-      # 64MB is above the preset but justified for slow raidz1 HDD random writes
-      # where ZFS txg commits cause periodic I/O stalls.
-      DiskQueueSize = 67108864; # 64MB
       # POSIX-compliant disk I/O: uses pread/pwrite instead of mmap.
       # On ZFS, mmap forces data into BOTH ARC and Linux page cache (double-caching),
       # wasting RAM. pread/pwrite goes only through ARC, maximizing its effectiveness.
-      # Saved 26 gb of memory!!
       DiskIOType = "Posix";
-      # === Network buffer tuning (from libtorrent high_performance_seed preset) ===
-      # "always stuff at least 1 MiB down each peer pipe, to quickly ramp up send rates"
-      SendBufferLowWatermark = 1024; # 1MB (KiB) -- matches high_performance_seed
-      # "of 500 ms, and a send rate of 4 MB/s, the upper limit should be 2 MB"
-      SendBufferWatermark = 3072; # 3MB (KiB) -- matches high_performance_seed
-      # "put 1.5 seconds worth of data in the send buffer"
+      FilePoolSize = 500; # keep more files open to reduce open/close overhead
+      AioThreads = 24; # 6 cores * 4; better disk I/O parallelism
+      SendBufferLowWatermark = 512; # 512 KiB -- trigger reads sooner to prevent upload stalls
+      SendBufferWatermark = 3072; # 3 MiB -- matches high_performance_seed
       SendBufferWatermarkFactor = 150; # percent -- matches high_performance_seed
     };