Compare commits

...

90 Commits

Author SHA1 Message Date
54f48671bd Revert "lact: disable undervolt"
All checks were successful
Build and Deploy / mreow (push) Successful in 8m5s
Build and Deploy / yarn (push) Successful in 1m9s
Build and Deploy / muffin (push) Successful in 1m7s
This reverts commit 6e69b40b4e.
2026-05-03 01:30:54 -04:00
0df5b74265 steam-config-nix: move to my fork and drop gameMods 2026-05-03 01:30:49 -04:00
e6ac7b433e remove stupid comment from optiscalar config 2026-05-03 00:44:08 -04:00
774d748bfe yarn: drop GE-Proton compat-tool pin from steam-config-nix
nixpkgs' proton-ge-bin (the package wired into
programs.steam.extraCompatPackages via modules/desktop-steam.nix)
registers in Steam's
compat-tool list under its versioned id, currently GE-Proton10-34.
steam-config-nix's README example uses the unversioned string
"GE-Proton", which on a fresh boot wrote that literal value into
localconfig.vdf — Steam resolved it to no installed tool and silently
fell back to bundled Proton 10. FH5 then launched on stock Proton,
which doesn't pick up PROTON_FSR4_UPGRADE the way GE does.

Drop both `compatTool` (per-app) and `defaultCompatTool` (global).
The wrapper-based launchOptions.env path is unaffected — env vars
still pass through to whatever Proton Steam ends up using. Tool
selection goes back to manual Steam UI > Properties > Compatibility.

A versioned pin (`compatTool = "GE-Proton10-34";`) would work but
couples the host config to whatever the proton-ge-bin nixpkgs entry
ships this week; not worth the maintenance.
2026-05-03 00:42:07 -04:00
e010b4e3c1 game-mods: drop in-house launchOptions writer, hardcode FH5 ini
Replaces three handfuls of custom code with upstream / static data:

- Per-app Steam launch options now declared via
  different-name/steam-config-nix's `programs.steam.config.apps.<n>`
  instead of a custom ~70-line `apply_launch_options` Python function.
  The dropped writer
  was racy: it edited localconfig.vdf without checking for a running
  Steam, so any timer firing while Steam was open would lose its
  changes on the next Steam shutdown. steam-config-nix's `closeSteam`
  flag closes that race.

  Also moves the GE-Proton compat-tool pin to declarative config —
  one fewer manual click in Steam UI to remember.

- `mods.<>.launchOptions` option, the `launchOptionsData` aggregation,
  and `LAUNCH_OPTIONS_DATA` are removed from desktop-game-mods.nix.
  The module now does file-drops only; Steam config lives in its own
  `programs.steam.config` namespace, where it belongs.

  fh5-vkd3d-no-hvv (which existed only to set VKD3D_CONFIG) collapses
  into the FH5 launchOptions block in hosts/yarn/default.nix.

- `unitConfig.X-ConfigHash` on game-mods.service is replaced with
  `restartTriggers`. NixOS already emits `X-Restart-Triggers=<hash>`
  on the unit; the workaround was redundant. The Type=oneshot,
  RemainAfterExit=no semantics make `systemctl restart` re-run
  ExecStart cleanly on hash change.

- The awk pipeline that patched OptiScaler's stock OptiScaler.ini at
  build time is replaced with a hand-written
  hosts/yarn/optiscaler-fh5-rdna3.ini containing only the keys we
  override (5 of them).
  OptiScaler's Config::readString defaults missing keys to "auto"
  (Config.cpp:1568), so a minimal file is sufficient. Side benefits:
  one upstream-source dependency removed, and an upstream key rename
  now surfaces as a behavior change rather than a silent awk no-match.

  Override values + sources:
    Fsr4Update=true              FH5 wiki, FSR4 Linux Setup
    DlssReactiveMaskBias=0.65    FH5 wiki, "Known Issues"
    FsrNonLinearColorSpace=true  FSR4 wiki, "Image Quality"
    EnableFsr2Inputs=false       FH5 wiki, "Known Issues"
    Dxgi=false                   FH5 wiki

- forza-trigger's three custom Python derivations (pydualsense,
  hidapi-usb, fdp) factored out of default.nix into a sibling
  python-packages.nix. Same logic, single-purpose file. Bumping a
  version is now a one-place hash roll.

- pkgs.dualsensectl removed from the daemon's
  environment.systemPackages. Single-shot writes from the CLI get
  clobbered by the BG
  sendReport thread within ~4ms anyway, so the tool is only useful
  with the daemon stopped — not worth the unconditional install.
  Bring it in ad-hoc with `nix-shell -p dualsensectl`.
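The race the dropped writer had, and the atomic-write half of the fix, can be sketched like this (a minimal illustration, not steam-config-nix's actual code; the `pgrep` check and the function names are assumptions):

```python
import os
import subprocess
import tempfile

def steam_running() -> bool:
    # Editing localconfig.vdf while Steam is open is futile: Steam rewrites
    # the file from memory on shutdown. The dropped writer never checked.
    return subprocess.run(["pgrep", "-x", "steam"],
                          capture_output=True).returncode == 0

def atomic_write(path: str, data: str) -> None:
    # Write a sibling temp file, then os.replace: readers never observe a
    # half-written config, and a crash leaves the old file intact.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
```

steam-config-nix's `closeSteam` flag goes further than refusing to write: per its name, it presumably shuts Steam down first, which is what actually closes the race.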
2026-05-03 00:35:49 -04:00
1e8c294a80 forza-trigger: gate throttle on clutch state
User report: with the clutch in (pedal pressed, engine disconnected from
wheels), steering left still produced resistance on R2. The throttle
shouldn't have any feel when it's mechanically irrelevant.

RacingDSX's throttle resistance formula is
`avgAccel = sqrt(0.25*X^2 + 1.0*Z^2)`
derived from the accelerometer alone. It never checks clutch state, so
cornering G-forces keep producing trigger resistance even while the
clutch pedal is floored. Bug.

Fix: when Forza's clutch byte > 128 (clutch fully or mostly disengaged),
bypass the entire throttle path — slip detection and non-slip feedback
both — and release the trigger. Uses the same one-shot 0x05 (active
retract) on transition + steady-state 0x00 (no-op) pattern as the
in-race → not-in-race transition (divergence #4) so we don't get the
trigger-motor whine from re-asserting 0x05 every frame.

Brake is unaffected: brake calipers operate independently of clutch
state, so ABS feel during clutch-in is still correct.

For auto-clutch users this also produces brief (~100 ms) trigger
relaxations during shifts — physically accurate (the engine *is*
momentarily disconnected during a shift) and matches the haptic feel of
a real manual transmission.

Documented as divergence #5 in the module docstring.
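A minimal sketch of the gate (names and call shape assumed; the real daemon's code differs):

```python
def throttle_bypassed(clutch_byte: int, prev_clutch_in: bool, send_mode) -> bool:
    """Return True when the whole throttle path should be skipped this frame.

    clutch_byte is Forza's raw clutch telemetry byte; send_mode receives a
    DualSense trigger mode byte for R2.
    """
    clutch_in = clutch_byte > 128          # fully or mostly disengaged
    if clutch_in:
        # One-shot active retract (0x05) on the transition into clutch-in,
        # then steady-state no-op (0x00) so the trigger motor doesn't whine.
        send_mode(0x05 if not prev_clutch_in else 0x00)
    return clutch_in
```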
2026-05-03 00:35:49 -04:00
c9ddc8f8f2 game-mods: restore BACKUP_SUFFIX, doc launchOptions, fix blank lines
Three small follow-ups to 1751603:

- BACKUP_SUFFIX was lost during the launchOptions refactor. apply_mod
  references it on every non-skip path (new target, drifted bytes, or
  replace mode), so the moment a deployment hit one of those, the
  service would NameError at runtime. The bug was latent on yarn
  because every dropped file's bytes already matched its source, so
  every apply short-circuited at the byte-match check; a manual
  rm libxell.dll + systemctl start reproduced the NameError before
  the fix and showed a successful recreate after.

- Mention launchOptions in the leading file docstring. The Example
  block already covers file ops; the new option had no entry-level
  doc.

- Normalize blank lines between top-level Python defs in the heredoc
  (PEP-8 wants exactly two; we had four between apply_mod and
  apply_launch_options, zero between apply_launch_options and main).
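The latent-bug shape from the first point is easy to reproduce in miniature (hypothetical names): a global referenced only on the non-skip paths fails only when one of those paths finally runs.

```python
# BACKUP_SUFFIX deliberately not defined, mirroring the regression.

def apply_mod(target_exists: bool, bytes_match: bool) -> str:
    if target_exists and bytes_match:
        return "skip"                      # the only path yarn ever took
    # NameError at runtime: Python resolves module globals at call time,
    # so the missing constant goes unnoticed until this line executes.
    return "backup as target" + BACKUP_SUFFIX
```

apply_mod(True, True) succeeds indefinitely; the first drifted or deleted target raises NameError.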
2026-05-03 00:35:49 -04:00
6b72ce2d6d yarn: FH5 OptiScaler FSR 4 + VKD3D upload-hvv workaround
Drops OptiScaler v0.9.1 + a FH5-tuned OptiScaler.ini into the FH5
install dir to unlock FSR 4 INT8 on this RDNA 3 (Navi 32) box.
OptiScaler intercepts FH5's DLSS/XeSS calls and reroutes them through
the bundled FFX SDK. Per the OptiScaler FH5 wiki page: rename
OptiScaler.dll to dxgi.dll, set Dxgi=false, DlssReactiveMaskBias=0.65,
and Fsr4Update=true for the INT8 RDNA 3 path.

Sets Steam launch options PROTON_FSR4_UPGRADE=1 and
DXIL_SPIRV_CONFIG=wmma_rdna3_workaround on fh5-optiscaler (the FSR 4
wiki documents both as required for RDNA 3 on Linux).

fh5-vkd3d-no-hvv is its own mod (no files, just one launchOptions
entry for VKD3D_CONFIG=no_upload_hvv) so the upload-hvv workaround
can be removed when a future Proton release fixes the underlying
issue without disturbing the OptiScaler config.

Extends the intro skip stub to cover the hires variant of the
T10/Microsoft Studios splash; the engine picks SD or hires based on
the installed asset profile, so stub both per PCGamingWiki.
2026-05-03 00:35:49 -04:00
87e606c6b6 optiscaler: package v0.9.1
stdenvNoCC + p7zip extraction; strips installer scripts and README,
keeps Licenses/. dontFixup since the artifacts are Windows DLLs.
meta.license is unfreeRedistributable to reflect the bundled XeSS
(Intel SLA) alongside the GPL-3.0 source.

Wires lib/overlays.nix into mkDesktopHost (was muffin-only) and adds
"optiscaler" to the unfree allowlist on jovian hosts so yarn can
consume it without flipping the global allowUnfree flag.
2026-05-03 00:35:49 -04:00
9250147c36 game-mods: list-merged launchOptions, init mode, writable targets
Three additions on top of the file-replacement scaffolding:

- mode = "init": create-on-first-apply, leave-alone-otherwise. For
  files the application writes back to (configs edited in-game, save
  files). Operator pushes a new template by deleting the target.

- chmod 644 after every copy. shutil.copy2 preserved the source's
  /nix/store mode (0o444), which made dropped DLL configs read-only.
  Apps that wrote back (OptiScaler "Save INI") got EACCES, which in
  OptiScaler's case cascaded into CreateSwapChainForHwnd returning
  E_FAIL and crashed FH5 on launch.

- launchOptions = listOf str. Multiple mods targeting the same
  steamAppId have their lists concatenated (mod-name alphabetical),
  joined with spaces, %command% appended once. Written into Steam's
  per-app block in userdata/<id>/config/localconfig.vdf via vdf
  parse + atomic os.replace. Idempotent.

- X-ConfigHash on the systemd unit so switch-to-configuration switch
  re-runs apply when the manifest changes.
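The list-merge semantics can be stated in a few lines (a sketch, not the module's actual code):

```python
def merged_launch_options(mods: dict[str, list[str]]) -> str:
    """Merge per-mod launch options for one steamAppId.

    Lists concatenate in mod-name alphabetical order, join with spaces,
    and %command% is appended exactly once at the end.
    """
    parts: list[str] = []
    for name in sorted(mods):
        parts.extend(mods[name])
    return " ".join(parts + ["%command%"])
```

So e.g. fh5-optiscaler's env vars and fh5-vkd3d-no-hvv's single VKD3D_CONFIG entry land in one string ending in %command%, regardless of which mod is evaluated first.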
2026-05-03 00:35:49 -04:00
b25cb4a90f forza-trigger: stop emitting mode 0x05 every frame in pre-race idle
The previous fix used canonical Off (mode 0x05) everywhere we wanted the
trigger to feel released — pre-race per-frame, idle timeout, shutdown.
Per Sony's docs (Nielk1 Rev 6) mode 0x05 "actively returns the trigger
stop to the neutral position". Re-asserting it 60 times/sec from main
thread, propagated by pydualsense's BG thread to the controller at
~250 Hz, made the trigger motor audibly whine as the firmware repeatedly
snapped the (already-neutral) trigger back to neutral.

Right answer: hybrid. One-shot 0x05 on the in-race → not-in-race
transition (and on the telemetry-idle timeout) so the firmware actually
retracts the motor; mode 0x00 (TriggerModes.Off, no-op clear) for
steady-state pre-race / idle frames so we're not yelling RESET in the
firmware's ear forever.

Implementation: prev_in_race tracks the last frame's race state. Steady
non-race frames call _apply_normal (mode 0x00); the first frame after a
race-end transition calls _apply_off (mode 0x05). pydualsense's BG
thread holds the 0x05 in memory long enough (one main-thread frame =
~16ms = ~4 BG iterations) to publish it to the controller before main
switches the in-memory state to 0x00.

Restores _apply_normal and DS_MODE_NORMAL that the previous commit
deleted. Updates divergence #4 in the module docstring.
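Stripped to its decision, the hybrid looks like this (a sketch; names follow the description above, not necessarily the file):

```python
def non_race_mode(prev_in_race: bool) -> int:
    # First frame after a race ends: one-shot 0x05 so the firmware actually
    # retracts the motor. Every later non-race frame: 0x00, a no-op clear.
    return 0x05 if prev_in_race else 0x00

def tick(in_race: bool, prev_in_race: bool, apply_mode) -> bool:
    if not in_race:
        apply_mode(non_race_mode(prev_in_race))
    return in_race   # becomes next frame's prev_in_race
```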
2026-05-03 00:35:49 -04:00
bb983a88e2 game-mod: extend module 2026-05-03 00:35:49 -04:00
07583b6f96 steamos: disable steam deck cmdlineConfig for non-steamdeck hosts 2026-05-03 00:35:49 -04:00
876864c854 forza-trigger: actively release trigger and clear lightbar on idle
Two issues in the deployed daemon:

  1. After FH5 exits, the lightbar stayed lit. reset_triggers() touched
     only triggers; pydualsense's BG sendReport thread kept re-publishing
     whatever TouchpadColor we last set, so the controller stayed in the
     last race color forever.

  2. R2 had residual tension in FH5's main menu and on the desktop after
     a race. Pre-race / idle states were emitting RacingDSX's NormalTrigger
     (mode byte 0x00), which per Sony's docs (Nielk1 Rev6) only clears
     state without retracting the trigger motor; mode 0x05 (canonical Off
     / Reset) actively returns the trigger to neutral. RacingDSX-on-Windows
     gets away with 0x00 because something else (Steam Input or the OS)
     reliably resets the motor on focus loss; on Linux nothing does.

Fixes:
  - Drop _apply_normal/DS_MODE_NORMAL. Use _apply_off (mode 0x05) for every
    'release the trigger' intent: pre-race per-frame, idle timeout, mid-race
    zero-strength fallback, shutdown.
  - Add reset_lightbar() that writes RGB(0,0,0).
  - Track have_telemetry and fire the idle-timeout branch whenever
    telemetry has been silent for IDLE_TIMEOUT_S, regardless of in_race.
    Reset both triggers and lightbar in that branch.

Documented as divergence #4 in the module docstring.
2026-05-03 00:35:49 -04:00
6e69b40b4e lact: disable undervolt 2026-05-03 00:35:49 -04:00
de0b5a6009 game-mods: init
Add override for fh5 startup video
2026-05-03 00:35:49 -04:00
bb640b4b53 omp: remove patch 2026-05-03 00:35:49 -04:00
7749149c5d lact: -130 -> -120 2026-05-03 00:35:49 -04:00
6aff8c878a update 2026-05-03 00:35:48 -04:00
fa741d9c29 lact: -150 -> -130 2026-05-03 00:35:48 -04:00
31c309af1f yarn: forza dualsense adaptive trigger bridge 2026-05-03 00:35:48 -04:00
975c4f7af1 yarn: declarative lact config 2026-05-03 00:35:48 -04:00
06a192c57f yarn: PROPERLY enable amdgpu overdrive 2026-05-03 00:35:48 -04:00
c7416f114b AGENTS.md: yarn is zen 3, not zen 5
ASUS ROG STRIX B550-I GAMING is AM4 (zen 2/3 only). lspci reports
Matisse/Vermeer data fabric → Vermeer = ryzen 5000 = zen 3.
2026-05-03 00:35:48 -04:00
12b038cba7 yarn: rotate tpm identity after fTPM reset
BIOS 2423→4101 update on yarn required an fTPM reset, which broke the
sealed age identity at /var/lib/agenix/tpm-identity. Bootstrapped a new
identity against the new SRK and rotated yarn's recipient.

age-plugin-tpm 1.0+ emits age1tag1… (p256tag) recipients by default and
refuses to encrypt to legacy age1tpm1… ones, so rotated mreow's recipient
to the same encoding (same key, new bech32 HRP) and added an
age-plugin-tag→age-plugin-tpm symlink in the rage wrapper so rage's
plugin dispatch finds the binary under the new prefix. Stripped the
trailing host labels from the tpm recipient strings — rage's stricter
bech32 parser now rejects the trailing whitespace; labels live in
adjacent Nix comments instead.
2026-05-03 00:35:48 -04:00
394b890008 yarn: add impermanence for bluetooth devices (doesn't forget them now) 2026-05-03 00:35:48 -04:00
5637eccc8d oo7-daemon: cherry-pick PR #443 to use credential on first run
oo7-server 0.6.0 only feeds the systemd / PAM secret to existing
keyrings discovered on disk. On first run no keyring exists yet, the
daemon creates an empty 'Login' collection via LockedKeyring::open,
the credential is silently ignored, and any client Unlock() routes to
a prompt that nothing on a niri desktop can satisfy.

Patches/oo7-server/0001-... is upstream commit cf7b9a9 (PR #443)
regenerated relative to the package's sourceRoot ('server/'). It
switches the auto-created default-keyring path to UnlockedKeyring::open
when a secret is available.

The override threads the patch through pkgs.oo7-server.overrideAttrs
in modules/desktop-oo7-daemon.nix and uses the patched derivation for
both services.dbus.packages and systemd.packages so the user unit and
D-Bus activation file land from the same store path. Cargo.lock is
untouched, so the existing cargoDeps hash stays valid.

Drop the override once nixpkgs ships an oo7-server release that
includes the fix (anything past 0.6.0).
2026-05-03 00:35:47 -04:00
8b2a18c8c0 oo7-daemon: unlock the Login keyring via systemd credential
oo7-daemon was running but its 'Login' keyring stayed locked because
nothing supplied a master password, so libsecret clients (flare in
particular) blocked indefinitely on keyring.unlock().

The upstream user unit declares
  ImportCredential=oo7.keyring-encryption-password
which picks up matching credentials from systemd's per-service
credential machinery. Wire LoadCredential=oo7.keyring-encryption-password
to the agenix-decrypted secret so the daemon unlocks at session start
without any prompt.
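In plain systemd terms the wiring amounts to a drop-in like this (illustrative only; the real secret path is whatever agenix decrypts to):

```ini
# oo7-daemon.service drop-in, user scope (sketch, path hypothetical)
[Service]
LoadCredential=oo7.keyring-encryption-password:/run/agenix/oo7-keyring-password
```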

The password itself is a fresh 64-byte urandom value encrypted to all
desktop recipients (admin SSH key + mreow + yarn TPM identities); it's
opaque to the user and never typed manually. Owner is primary so the
user-scope unit's LoadCredential read works without elevating.

Verified the activation script chowns the decrypted file primary:users
mode 0400, the user unit override carries the LoadCredential line, and
the resulting drv builds clean.
2026-05-03 00:35:47 -04:00
f96f5ce8fd desktop: add oo7-daemon as the org.freedesktop.secrets provider
Without a secret-service implementation on the bus, libsecret clients
like flare fail at startup with 'The communication with libsecret
failed'. None of the desktop hosts had one wired up.

oo7-daemon is the matching pure-Rust implementation (same project as
the oo7 crate flare uses internally), without the GNOME plumbing that
gnome-keyring would drag in. Register the package's D-Bus service
file and systemd user unit, start the daemon at user login, and alias
the unit as dbus-org.freedesktop.secrets.service so D-Bus
auto-activation also resolves cleanly when the wantedBy start hasn't
fired yet.

Verified the toplevel build and that the resulting system carries the
oo7-daemon user unit, the dbus alias symlink, and the
default.target.wants entry.
2026-05-03 00:35:47 -04:00
bab097da6b flare: add patched flare-signal with five local feature patches
- patches/flare/000{1..5}-*.patch: typing indicators, formatted
  messages, edited messages, multi-select with delete-for-me, and
  in-channel message search. Mirror the matching commits in
  ~/projects/forks/flare and apply cleanly on top of upstream 0.20.4
  (which is what nixpkgs ships).
- home/profiles/gui.nix: include a flare-signal override that appends
  the patches via overrideAttrs. None of them touch Cargo.lock so the
  cargoDeps hash stays valid; signal-desktop stays alongside it.
2026-05-03 00:35:47 -04:00
8768b285df pi: add android skills 2026-04-30 02:15:24 -04:00
47565c9e95 torrent-audit: only filter out complete torrents
All checks were successful
Build and Deploy / mreow (push) Successful in 2m7s
Build and Deploy / yarn (push) Successful in 45s
Build and Deploy / muffin (push) Successful in 1m11s
2026-04-29 14:42:24 -04:00
365efe3482 update
All checks were successful
Build and Deploy / mreow (push) Successful in 11m31s
Build and Deploy / yarn (push) Successful in 1m8s
Build and Deploy / muffin (push) Successful in 1m19s
2026-04-29 12:57:12 -04:00
994f39d308 arr-search: shuffle and do more
Some checks failed
Build and Deploy / mreow (push) Successful in 1m5s
Build and Deploy / yarn (push) Successful in 48s
Build and Deploy / muffin (push) Failing after 33s
2026-04-29 11:36:18 -04:00
a31c82d184 recyclarr: add fallback SD qualities for old shows 2026-04-29 11:35:32 -04:00
c9d0035cc2 update
Some checks failed
Build and Deploy / mreow (push) Successful in 50s
Build and Deploy / yarn (push) Failing after 3h11m19s
Build and Deploy / muffin (push) Successful in 2m59s
2026-04-28 13:36:19 -04:00
6f86827d6c noctalia: change transparency of background
Some checks failed
Build and Deploy / mreow (push) Successful in 1m16s
Build and Deploy / yarn (push) Failing after 14m35s
Build and Deploy / muffin (push) Successful in 1m7s
2026-04-28 01:24:54 -04:00
f0d7da5141 bluez: fix a2dp (cherry-pick patch)
Some checks failed
Build and Deploy / mreow (push) Successful in 56m32s
Build and Deploy / muffin (push) Has been cancelled
Build and Deploy / yarn (push) Has been cancelled
2026-04-28 00:59:05 -04:00
e6d7e1a73a update
All checks were successful
Build and Deploy / mreow (push) Successful in 14m28s
Build and Deploy / yarn (push) Successful in 1m6s
Build and Deploy / muffin (push) Successful in 1m7s
2026-04-27 23:37:17 -04:00
44a5d01960 yarn: mount /var/lib/agenix in initrd
All checks were successful
Build and Deploy / mreow (push) Successful in 2m16s
Build and Deploy / yarn (push) Successful in 1m3s
Build and Deploy / muffin (push) Successful in 1m6s
agenix activation runs from initrd-nixos-activation-start, which fires
right after /sysroot/persistent is mounted but before impermanence's
stage-2 bind mounts. The TPM identity at /var/lib/agenix/tpm-identity
was therefore unreadable at activation time, and every secret silently
failed to decrypt: 'no readable identities found'. Visible downstream
fallout was pull-update-apply hitting HTTP 401 against the binary cache
because nix-cache-netrc was never written to /run/agenix.

Mark /var/lib/agenix as neededForBoot via a bare fileSystems entry,
mirroring the existing /home/${username} bind. Drop the now-redundant
environment.persistence directory entry to avoid two competing units.
2026-04-27 17:42:40 -04:00
9cf4ba928a ghostty: fix ssh 2026-04-27 17:39:14 -04:00
59e6f7b3b9 doom: disable workspaces 2026-04-27 12:39:29 -04:00
4f98023203 update
All checks were successful
Build and Deploy / mreow (push) Successful in 4m2s
Build and Deploy / yarn (push) Successful in 1m4s
Build and Deploy / muffin (push) Successful in 1m11s
2026-04-27 11:40:09 -04:00
bbdc478e84 omp: update patches
All checks were successful
Build and Deploy / mreow (push) Successful in 13m8s
Build and Deploy / yarn (push) Successful in 1m11s
Build and Deploy / muffin (push) Successful in 7m15s
2026-04-27 01:36:08 -04:00
675fc7f805 update
Some checks failed
Build and Deploy / mreow (push) Failing after 5m10s
Build and Deploy / yarn (push) Failing after 1m1s
Build and Deploy / muffin (push) Has been cancelled
2026-04-27 01:27:13 -04:00
141754ca39 ghostty: fix???
All checks were successful
Build and Deploy / mreow (push) Successful in 1m20s
Build and Deploy / yarn (push) Successful in 54s
Build and Deploy / muffin (push) Successful in 1m14s
2026-04-26 01:11:09 -04:00
4b173ef164 jellyfin-qbittorrent-monitor: fix hairpin handling 2026-04-26 01:03:11 -04:00
3201b5726e update
Some checks failed
Build and Deploy / mreow (push) Successful in 1m44s
Build and Deploy / yarn (push) Successful in 1m3s
Build and Deploy / muffin (push) Failing after 27s
2026-04-26 00:12:30 -04:00
3c7bdc0c42 ghostty: colors
Some checks failed
Build and Deploy / mreow (push) Successful in 1m9s
Build and Deploy / yarn (push) Successful in 1m4s
Build and Deploy / muffin (push) Failing after 30s
2026-04-25 22:36:29 -04:00
2ebb7fc90d ghostty: open in home 2026-04-25 22:34:42 -04:00
72320e2332 ghostty: speedup start 2026-04-25 22:31:21 -04:00
b5a94520fe README.md: i don't use KDE anymore 2026-04-25 22:24:36 -04:00
9ee3547d5d ghostty 2026-04-25 22:21:27 -04:00
ce288ccdb0 update
Some checks failed
Build and Deploy / mreow (push) Successful in 8m39s
Build and Deploy / yarn (push) Successful in 1m6s
Build and Deploy / muffin (push) Failing after 34s
2026-04-25 20:22:48 -04:00
da87f82a66 noctalia: disable startup animation 2026-04-25 20:21:44 -04:00
90f2c27c2c DISABLE KMSCON
Some checks failed
Build and Deploy / mreow (push) Successful in 7m39s
Build and Deploy / yarn (push) Successful in 1m5s
Build and Deploy / muffin (push) Failing after 36s
THIS is what caused issues with greetd, nothing kernel related
2026-04-25 19:20:24 -04:00
450b77140b pi: apply omp patches via prePatch (bun2nix.hook overrides patchPhase)
`bun2nix.hook` (used by upstream omp's package.nix) sets

  patchPhase = bunPatchPhase

at the end of its setup-hook unless `dontUseBunPatch` is already set.
`bunPatchPhase` only runs `patchShebangs` plus a HOME mktemp; it never
iterates over `$patches`. The standard nixpkgs `patches` attribute
therefore went into the derivation env but was silently ignored at
build time, leaving the deployed omp binary unpatched.

Switch to applying the two patches via `prePatch` (which `bunPatchPhase`
does call). Verified with strings(1) over the rebuilt binary that both
patch hunks land:

  /wrong_api_format|...|invalid tool parameters/  (patch 0001)
  stubsReasoningContent ... thinkingFormat == "openrouter"  (patch 0002)
2026-04-25 19:20:08 -04:00
318373c09c pi: patch omp to require reasoning_content for OpenRouter reasoning models
DeepSeek V4 Pro (and similar reasoning models reached via OpenRouter) reject
multi-turn requests in thinking mode with:

  400 The `reasoning_content` in the thinking mode must be passed back
  to the API.

omp's existing kimi placeholder injection (`requiresReasoningContentForToolCalls`)
covered this requirement only for `thinkingFormat == "openai"`. OpenRouter
sets `thinkingFormat == "openrouter"`, so the gate never fired even though
the underlying providers behind OpenRouter (DeepSeek, Kimi, etc.) all enforce
the same invariant.

This patch:

1. Extends `requiresReasoningContentForToolCalls` detection: any
   reasoning-capable model fronted by OpenRouter now sets the flag.
2. Extends the placeholder gate in `convertMessages` to accept
   `thinkingFormat == "openrouter"` alongside `"openai"`.
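Sketched in Python (omp itself is a Bun/JavaScript project; names are paraphrased from the description above, with `isKimi` a hypothetical stand-in for the existing detection):

```python
def requires_reasoning_content_for_tool_calls(model: dict) -> bool:
    # Change 1: keep the existing kimi detection, and additionally set the
    # flag for any reasoning-capable model fronted by OpenRouter.
    if model.get("isKimi"):
        return True
    return bool(model.get("reasoning")) and \
        model.get("thinkingFormat") == "openrouter"

def placeholder_gate_accepts(thinking_format: str) -> bool:
    # Change 2: the placeholder injection in convertMessages used to fire
    # only for "openai"; "openrouter" is now accepted alongside it.
    return thinking_format in ("openai", "openrouter")
```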

Cross-provider continuations are the dominant trigger: a conversation
warmed up by Anthropic Claude (whose reasoning is redacted/encrypted on
the wire) followed by a switch to DeepSeek V4 Pro via OpenRouter. omp
cannot synthesize plaintext `reasoning_content` from Anthropic's
encrypted blocks, so the placeholder satisfies DeepSeek's validator
without fabricating a reasoning trace. Real captured reasoning, when
present, short-circuits the placeholder via `hasReasoningField` and
survives intact.

Side benefit: also closes a latent gap where Kimi-via-OpenRouter
(`thinkingFormat == "openrouter"`) had the compat flag set but the
placeholder gate silently rejected it.

Applies cleanly on top of patch 0001.
2026-04-25 19:20:05 -04:00
d55743a9e7 revert: roll back flake.lock pre-update (niri 8ed0da4 black-screens on amdgpu) 2026-04-25 16:21:28 -04:00
8ab4924948 omp: add patch that fixes deepseek 2026-04-25 15:38:39 -04:00
8bd148dc96 update
All checks were successful
Build and Deploy / mreow (push) Successful in 12m7s
Build and Deploy / yarn (push) Successful in 1m36s
Build and Deploy / muffin (push) Successful in 1m11s
2026-04-25 15:20:34 -04:00
2ab1c855ec Revert "muffin: test, move to 7.0"
All checks were successful
Build and Deploy / mreow (push) Successful in 1m45s
Build and Deploy / yarn (push) Successful in 47s
Build and Deploy / muffin (push) Successful in 1m31s
This reverts commit f67ec5bde6.
2026-04-25 10:50:00 -04:00
f67ec5bde6 muffin: test, move to 7.0
Some checks failed
Build and Deploy / mreow (push) Successful in 1h43m17s
Build and Deploy / yarn (push) Successful in 22m1s
Build and Deploy / muffin (push) Failing after 33s
2026-04-25 02:12:21 -04:00
112b85f3fb update
Some checks failed
Build and Deploy / yarn (push) Has been cancelled
Build and Deploy / muffin (push) Has been cancelled
Build and Deploy / mreow (push) Has been cancelled
2026-04-25 01:45:47 -04:00
86cf624027 Revert "muffin: test, move to 6.18"
All checks were successful
Build and Deploy / mreow (push) Successful in 50s
Build and Deploy / yarn (push) Successful in 44s
Build and Deploy / muffin (push) Successful in 1m2s
This reverts commit 1df3a303f5.
2026-04-24 14:21:40 -04:00
1df3a303f5 muffin: test, move to 6.18
All checks were successful
Build and Deploy / mreow (push) Successful in 1m15s
Build and Deploy / yarn (push) Successful in 43s
Build and Deploy / muffin (push) Successful in 1m29s
2026-04-24 14:08:26 -04:00
07a5276e40 patiodeck: fix disko partition order (fixed-size before 100%) 2026-04-24 01:47:25 -04:00
f3d21f16fb desktop-jovian: unify steam/jovian config across yarn + patiodeck
- modules/desktop-jovian.nix: shared Jovian deck-mode config (unfree
  predicate, jovian.steam, sddm, gamescope override, imports
  desktop-steam-update.nix)
- home/progs/steam-shortcuts.nix: declarative non-Steam shortcuts
  (Prism Launcher); add new entries here for all Jovian hosts
- hosts/yarn/default.nix: reduced to host-specific config only
- hosts/patiodeck/default.nix: same
2026-04-23 22:42:25 -04:00
5b2a1a652a patiodeck: add prism launcher to steam shortcuts 2026-04-23 22:34:58 -04:00
665793668d patiodeck: add steam deck LCD host 2026-04-23 22:34:47 -04:00
5ccd84c77e yarn: fix steamos-update exit code — 7 means no update, not 0
Some checks failed
Build and Deploy / mreow (push) Successful in 1m48s
Build and Deploy / yarn (push) Successful in 4m39s
Build and Deploy / muffin (push) Failing after 31s
Steam interprets exit 0 from 'steamos-update check' as 'update applied
successfully' and shows a persistent 'update available' notification.
The SteamOS convention is exit 7 = no update available.
2026-04-23 20:47:33 -04:00
7721c9d3a2 ssh: remove desktop key
Some checks failed
Build and Deploy / mreow (push) Successful in 1m58s
Build and Deploy / yarn (push) Successful in 47s
Build and Deploy / muffin (push) Failing after 30s
2026-04-23 20:23:37 -04:00
b41a547589 yarn: persist root fish history
Some checks failed
Build and Deploy / mreow (push) Successful in 46s
Build and Deploy / yarn (push) Successful in 51s
Build and Deploy / muffin (push) Failing after 28s
2026-04-23 20:17:02 -04:00
d122842995 secrets: update yarn TPM recipient after tmpfs wipe
Some checks failed
Build and Deploy / mreow (push) Successful in 2m8s
Build and Deploy / yarn (push) Successful in 48s
Build and Deploy / muffin (push) Failing after 29s
2026-04-23 19:56:54 -04:00
d65d991118 secrets: add mreow + yarn TPM recipients, re-encrypt desktop secrets
Some checks failed
Build and Deploy / mreow (push) Successful in 2m56s
Build and Deploy / yarn (push) Successful in 1m49s
Build and Deploy / muffin (push) Failing after 31s
2026-04-23 19:45:57 -04:00
06ccc337c1 secrets: proper agenix for desktop hosts via TPM identity
- modules/desktop-age-secrets.nix: agenix + rage wrapped with age-plugin-tpm,
  TPM identity primary, admin SSH key fallback for recovery/pre-bootstrap
- modules/desktop-lanzaboote-agenix.nix: extract secureboot.tar at activation
- modules/desktop-networkmanager.nix: revert to simple import of git-crypt file
- modules/server-age-secrets.nix: renamed from age-secrets.nix
- modules/desktop-common.nix: wire netrc + password-hash to agenix paths
- hosts/yarn/impermanence.nix: persist /var/lib/agenix across tmpfs wipes
- secrets/secrets.nix: recipient declarations (admin + tpm + muffin USB)
- secrets/desktop/*.age: secureboot.tar, nix-cache-netrc, password-hash
- scripts/bootstrap-desktop-tpm.sh: generate TPM identity + print recipient
2026-04-23 19:24:34 -04:00
a3f7a19cc2 update
All checks were successful
Build and Deploy / mreow (push) Successful in 3m39s
Build and Deploy / yarn (push) Successful in 1m3s
Build and Deploy / muffin (push) Successful in 2m26s
2026-04-23 14:23:17 -04:00
e019f2d4fb secrets overhaul: use tpm for laptop (need to migrate desktop later) 2026-04-23 14:22:37 -04:00
22282691e7 grafana: add minecraft server stats 2026-04-23 01:17:10 -04:00
bc3652c782 kernel: cleanup + add back intel gpu (for future server unification)
All checks were successful
Build and Deploy / mreow (push) Successful in 1h25m37s
Build and Deploy / yarn (push) Successful in 1m3s
Build and Deploy / muffin (push) Successful in 1m6s
2026-04-23 00:23:21 -04:00
0a8b863e4b gitea: fix actions visibility
All checks were successful
Build and Deploy / mreow (push) Successful in 2m39s
Build and Deploy / yarn (push) Successful in 1m48s
Build and Deploy / muffin (push) Successful in 1m14s
2026-04-22 23:02:53 -04:00
0901f5edf0 deploy: potentially fix self-deploy issue? 2026-04-22 23:02:38 -04:00
a1924849d6 pi: edit AGENTS.md
Some checks failed
Build and Deploy / mreow (push) Successful in 51s
Build and Deploy / yarn (push) Successful in 54s
Build and Deploy / muffin (push) Failing after 27s
2026-04-22 21:28:20 -04:00
fdd5c5fba0 gitea: hide actions when not logged in
All checks were successful
Build and Deploy / mreow (push) Successful in 56s
Build and Deploy / yarn (push) Successful in 52s
Build and Deploy / muffin (push) Successful in 1m1s
2026-04-22 21:23:47 -04:00
d00ff42e8e site-config: dedupe cross-host values, fix stale dark-reader urls, drop desktop 1g hugepages
new site-config.nix holds values previously duplicated across hosts:
  domain, old_domain, contact_email, timezone, binary_cache (url + pubkey),
  dns_servers, lan (cidr + gateway), hosts.{muffin,yarn} (ip/alias/ssh_host_key),
  ssh_keys.{laptop,desktop,ci_deploy}.

threaded through specialArgs on all three hosts + home-manager extraSpecialArgs +
homeConfigurations.primary + serverLib. service-configs.nix now takes
{ site_config } as a function arg and drops its https namespace; per-service
domains (gitea/matrix/ntfy/mollysocket/livekit/firefox-sync/grafana) are
derived from site_config.domain. ~15 service files and 6 vm tests migrated.

breakage fixes rolled in:
 - home/progs/zen/dark-reader.nix: 5 stale *.gardling.com entries in
   disabledFor rewritten to *.sigkill.computer (caddy 301s the old names so
   these never fired and the new sigkill urls were getting dark-reader applied)
 - modules/desktop-common.nix: drop unused hugepagesz=1G/hugepages=3
   kernelParams (no consumer on mreow or yarn; xmrig on muffin still reserves
   its own via services/monero/xmrig.nix)

verification: muffin toplevel is bit-identical to pre-refactor baseline.
mreow/yarn toplevels differ only in boot.json kernelParams + darkreader
storage.js (nix-diff verified). deployGuardTest and fail2banVaultwardenTest
(latter exercises site_config.domain via bitwarden.nix) pass.
2026-04-22 20:48:29 -04:00
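The shape this commit describes is one attrset threaded through `specialArgs`. A minimal sketch follows; the field names are the ones listed in the commit message, the domain and cache URL appear elsewhere in this repo, and every other concrete value is a placeholder:

```nix
# site-config.nix (sketch) — values previously duplicated across hosts
{
  domain = "sigkill.computer";
  old_domain = "gardling.com";                 # caddy 301s the old names
  contact_email = "admin@example.org";         # placeholder
  timezone = "America/New_York";               # placeholder
  binary_cache = {
    url = "https://nix-cache.sigkill.computer";
    pubkey = "nix-cache.sigkill.computer:..."; # elided
  };
  hosts.muffin = { ip = "192.0.2.10"; };       # placeholder address
}

# flake.nix (sketch) — thread it to every module as a function argument:
#   nixosSystem {
#     specialArgs = { site_config = import ./site-config.nix; };
#     ...
#   }
```

With `site_config` in `specialArgs` (and home-manager's `extraSpecialArgs`), a file like `service-configs.nix` can take `{ site_config }` as an ordinary argument and derive per-service domains from `site_config.domain`, which is exactly the migration the commit performs.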
8cdb9c4381 yarn: improve pull-update-apply script
Some checks failed
Build and Deploy / mreow (push) Successful in 2m3s
Build and Deploy / yarn (push) Successful in 1m3s
Build and Deploy / muffin (push) Failing after 28s
2026-04-22 20:11:22 -04:00
3902ad5de3 yarn: fix jovian-stubs
Some checks failed
Build and Deploy / mreow (push) Successful in 1m9s
Build and Deploy / yarn (push) Successful in 4m36s
Build and Deploy / muffin (push) Failing after 33s
2026-04-22 19:54:00 -04:00
0538907674 yarn: simplify stubs
Some checks failed
Build and Deploy / mreow (push) Successful in 41s
Build and Deploy / yarn (push) Failing after 1m8s
Build and Deploy / muffin (push) Failing after 1m39s
2026-04-22 19:44:53 -04:00
90ce41cd9e gitea: move gitea-runner user declaration to actions-runner.nix
Some checks failed
Build and Deploy / mreow (push) Successful in 55s
Build and Deploy / yarn (push) Failing after 58s
Build and Deploy / muffin (push) Has started running
2026-04-22 19:24:18 -04:00
1be21b6c52 split off terminal utilities 2026-04-22 18:45:00 -04:00
92 changed files with 8327 additions and 942 deletions

.gitignore vendored

@@ -1,2 +1,3 @@
 /result
 /result-*
+__pycache__


@@ -7,7 +7,7 @@ Unified NixOS flake for three hosts:
 | Host | Role | nixpkgs channel | Activation |
 |------|------|----------------|-----------|
 | `mreow` | Framework 13 AMD AI 300 laptop (niri, greetd, swaylock) | `nixos-unstable` | `./deploy.sh` locally |
-| `yarn` | AMD Zen 5 desktop (niri + Jovian-NixOS Steam deck mode, impermanence) | `nixos-unstable` | pull from CI binary cache |
+| `yarn` | AMD Zen 3 desktop (niri + Jovian-NixOS Steam deck mode, impermanence) | `nixos-unstable` | pull from CI binary cache |
 | `muffin` | AMD Zen 3 server (Caddy, ZFS, agenix, deploy-rs, 25+ services) | `nixos-25.11` | deploy-rs from CI |
 One `flake.nix` declares both channels (`nixpkgs` and `nixpkgs-stable`) and composes each host from the correct channel. No single-channel migration is intended.
@@ -36,10 +36,11 @@ lib/
 overlays.nix # jellyfin-exporter, igpu-exporter, reflac, ensureZfsMounts
 patches/nixpkgs/ # applied to nixpkgs-stable for muffin builds
 secrets/
-desktop/ # git-crypt: mreow + yarn share these (wifi, nix-cache-netrc, secureboot.tar, password-hash, disk-password)
+secrets.nix # agenix recipients (who can decrypt each .age)
+desktop/ # agenix *.age (mreow + yarn) + disk-password (install-time only, git-crypt)
 home/ # git-crypt: per-user HM secrets (api keys, steam id)
-server/ # agenix *.age + git-crypt *.nix/*.tar/livekit_keys
-usb-secrets/ # USB-resident agenix identity key (git-crypt inside the repo)
+server/ # agenix *.age + git-crypt *.nix/*.tar/livekit_keys (muffin)
+usb-secrets/ # USB-resident agenix identity for muffin (git-crypt inside the repo)
 ```
 **Never read or write files under `secrets/`.** They are encrypted at rest (git-crypt for plaintext, agenix for `.age`). The git-crypt key is delivered to `muffin` at runtime as `/run/agenix/git-crypt-key-nixos.age`.
@@ -89,7 +90,7 @@ If Nix complains about a missing file, `git add` it first — flakes only see tr
 | `common-` | imported by ALL hosts | `common-doas.nix`, `common-nix.nix`, `common-shell-fish.nix` |
 | `desktop-` | imported by mreow + yarn only | `desktop-common.nix`, `desktop-steam.nix`, `desktop-networkmanager.nix` |
 | `server-` | imported by muffin only | `server-security.nix`, `server-power.nix`, `server-impermanence.nix`, `server-lanzaboote-agenix.nix` |
-| *(none)* | host-specific filename-scoped; see file contents | `age-secrets.nix`, `zfs.nix`, `no-rgb.nix` (yarn + muffin) |
+| *(none)* | host-specific filename-scoped; see file contents | `zfs.nix`, `no-rgb.nix` (yarn + muffin) |
 New modules: pick the narrowest prefix that's true, then add the import explicitly in the host's `default.nix` (there is no auto-discovery).
@@ -117,14 +118,18 @@ New modules: pick the narrowest prefix that's true, then add the import explicit
 ## Secrets
 - **git-crypt** covers `secrets/**` per the root `.gitattributes`. Initialized with a single symmetric key checked into `secrets/server/git-crypt-key-nixos.age` (agenix-encrypted to the USB SSH identity).
-- **agenix** decrypts `secrets/server/*.age` at activation into `/run/agenix/` on muffin.
-- **USB identity**: `/mnt/usb-secrets/usb-secrets-key` on muffin; the age identity path is wired in `modules/usb-secrets.nix`.
-- **Encrypting a new agenix secret** uses the SSH public key directly with `age -R`:
+- **agenix** decrypts `*.age` into `/run/agenix/` at activation on every host:
+- **muffin**: identity is `/mnt/usb-secrets/usb-secrets-key` (ssh-ed25519 on a physical USB). Wired in `modules/usb-secrets.nix`.
+- **mreow + yarn**: identity is `/var/lib/agenix/tpm-identity` (an `age-plugin-tpm` handle sealed by the host's TPM 2.0). Wired in `modules/desktop-age-secrets.nix`; yarn persists `/var/lib/agenix` through impermanence.
+- **Recipients** are declared in `secrets/secrets.nix`. Desktop secrets are encrypted to the admin SSH key + each host's TPM recipient; server secrets stay encrypted to the muffin USB key.
+- **Bootstrap a new desktop**: run `doas scripts/bootstrap-desktop-tpm.sh` on the host. It generates a TPM-sealed identity at `/var/lib/agenix/tpm-identity` and prints an `age1tag1…` recipient (legacy `age1tpm1…` recipients still decrypt but `age-plugin-tpm` 1.0+ refuses to encrypt to them; `modules/desktop-age-secrets.nix` symlinks `age-plugin-tag → age-plugin-tpm` so rage's plugin dispatch finds the binary under both prefixes). Append it to the `tpm` list in `secrets/secrets.nix` (label as a Nix `# host` comment, not as a trailing word inside the recipient string — rage's bech32 parser rejects the trailing whitespace), run `agenix -r` to re-encrypt, commit, `./deploy.sh switch`.
+- **Encrypting a new server secret** uses the SSH public key directly with `age -R`:
 ```sh
 age -R <(ssh-keygen -y -f secrets/usb-secrets/usb-secrets-key) \
   -o secrets/server/<name>.age \
   /path/to/plaintext
 ```
+For desktop secrets, prefer `agenix -e secrets/desktop/<name>.age` from a shell with `age-plugin-tpm` on PATH — it reads `secrets/secrets.nix` and encrypts to every recipient listed there.
 - **DO NOT use `ssh-to-age`**. It produces `X25519` recipient stanzas, which the SSH private key on muffin cannot decrypt (it only decrypts `ssh-ed25519` stanzas produced by `age -R` against the SSH pubkey). Mismatched stanzas show up as `age: error: no identity matched any of the recipients` at deploy time.
 - Never read or commit plaintext secrets. Never log secret values.
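Put together, the recipient wiring in `secrets/secrets.nix` looks roughly like this. This is a sketch in agenix's conventional format; the key strings and the `.age` file names are placeholders, not the repo's real entries:

```nix
# secrets/secrets.nix (sketch) — declares who can decrypt each .age file
let
  usb   = "ssh-ed25519 AAAA...";  # muffin's USB identity pubkey (elided)
  admin = "ssh-ed25519 AAAA...";  # admin SSH key (elided)
  tpm = [
    "age1tag1..." # mreow  <- host label lives in a Nix comment,
    "age1tag1..." # yarn      never inside the recipient string
  ];
in
{
  # desktop secrets: admin key + every TPM recipient
  "desktop/example.age".publicKeys = [ admin ] ++ tpm;
  # server secrets: muffin's USB key only
  "server/git-crypt-key-nixos.age".publicKeys = [ usb ];
}
```

`agenix -e` and `agenix -r` both read this file, which is why appending a new TPM recipient to the `tpm` list is all a new desktop needs.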
@@ -191,11 +196,26 @@ lib.mkIf config.services.<service>.enable {
 Existing registrations live in `services/jellyfin/jellyfin-deploy-guard.nix` (REST `/Sessions` via curl+jq) and `services/minecraft-deploy-guard.nix` (Server List Ping via `mcstatus`). Prefer soft-fail on unreachable — a service that's already down has no users to disrupt.
+## Deploy finalize (muffin)
+`modules/server-deploy-finalize.nix` solves the self-deploy problem: the gitea-actions runner driving CI deploys lives on muffin itself, so a direct `switch-to-configuration switch` restarts the runner mid-activation, killing the SSH session, the CI job, and deploy-rs's magic-rollback handshake. The failure mode is visible as "deploy appears to fail even though the new config landed" (or worse, a rollback storm).
+The fix is a two-phase activation wired into `deploy.nodes.muffin.profiles.system.path` in `flake.nix`:
+1. `switch-to-configuration boot` — bootloader-only, no service restarts. The runner, SSH session, and magic-rollback survive.
+2. `deploy-finalize` — schedules a detached `systemd-run --on-active=N` transient unit (default 60s). The unit is owned by pid1, so it survives the eventual runner restart. If `/run/booted-system/{kernel,initrd,kernel-modules}` differs from the new profile's, the unit runs `systemctl reboot`; otherwise it runs `switch-to-configuration switch`.
+That is, reboot is dynamically gated on kernel/initrd/kernel-modules change. The 60s delay is tuned so the CI job (or manual `./deploy.sh muffin`) has time to emit status/notification steps before the runner is recycled.
+Back-to-back deploys supersede each other: each invocation cancels any still-pending `deploy-finalize-*.timer` before scheduling its own. `deploy-finalize --dry-run` prints the decision without scheduling anything — useful when debugging.
+Prior art: the 3-path `{kernel,initrd,kernel-modules}` diff is lifted from nixpkgs's `system.autoUpgrade` module (the `allowReboot = true` branch) and was packaged the same way in [obsidiansystems/obelisk#957](https://github.com/obsidiansystems/obelisk/pull/957). nixpkgs#185030 tracks lifting it into `switch-to-configuration` proper but has been stale since 2025-07. The self-deploy `systemd-run` detachment is the proposed fix from [deploy-rs#153](https://github.com/serokell/deploy-rs/issues/153), also unmerged upstream.
 ## Technical details
 - **Privilege escalation**: `doas` everywhere; `sudo` is disabled on every host.
 - **Shell**: fish. `bash` login shells re-exec into fish via `programs.bash.interactiveShellInit` (see `modules/common-shell-fish.nix`).
-- **Secure boot**: lanzaboote. Desktops extract keys from `secrets/desktop/secureboot.tar`; muffin extracts from an agenix-decrypted tar (see `modules/server-lanzaboote-agenix.nix`).
+- **Secure boot**: lanzaboote. Every host extracts keys from an agenix-decrypted tar at activation — desktops via `modules/desktop-lanzaboote-agenix.nix`, muffin via `modules/server-lanzaboote-agenix.nix`.
 - **Impermanence**: muffin is tmpfs-root with `/persistent` surviving reboots (`modules/server-impermanence.nix`); yarn binds `/home/primary` from `/persistent` (`hosts/yarn/impermanence.nix`).
 - **Disks**: disko.
 - **Binary cache**: muffin runs harmonia; desktops consume it at `https://nix-cache.sigkill.computer`.
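The reboot gate the deploy-finalize section describes can be sketched as a small shell body inside the module. The three compared paths and `systemd-run --on-active=` come from the text above; the unit naming, argument handling, and everything else here is illustrative, not the module's actual code:

```nix
# modules/server-deploy-finalize.nix (sketch of the decision logic only)
pkgs.writeShellScript "deploy-finalize" ''
  profile="$1"  # the freshly built system profile path
  action="$profile/bin/switch-to-configuration switch"
  for f in kernel initrd kernel-modules; do
    # same 3-path diff nixpkgs system.autoUpgrade uses for allowReboot
    if [ "$(readlink -f "/run/booted-system/$f")" != "$(readlink -f "$profile/$f")" ]; then
      action="systemctl reboot"
    fi
  done
  # supersede any still-pending finalize, then detach the real action
  # under pid1 so the gitea runner restart cannot kill it
  systemctl stop 'deploy-finalize-*.timer' 2>/dev/null || true
  systemd-run --on-active=60 --unit="deploy-finalize-$(date +%s)" $action
''
```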


@@ -12,11 +12,11 @@ Browser: Firefox 🦊 (actually [Zen Browser](https://github.com/zen-browser/des
 Text Editor: [Doom Emacs](https://github.com/doomemacs/doomemacs)
-Terminal: [alacritty](https://github.com/alacritty/alacritty)
+Terminal: [ghostty](https://ghostty.org/)
 Shell: [fish](https://fishshell.com/) with the [pure](https://github.com/pure-fish/pure) prompt
-WM: [niri](https://github.com/YaLTeR/niri) (KDE on my desktop)
+WM: [niri](https://github.com/YaLTeR/niri)
 ### Background
 - Got my background from [here](https://old.reddit.com/r/celestegame/comments/11dtgwg/all_most_of_the_backgrounds_in_celeste_edited/) and used the command `magick input.png -filter Point -resize 2880x1920! output.png` to upscale it bilinearly

flake.lock generated

@@ -25,6 +25,22 @@
"type": "github" "type": "github"
} }
}, },
"android-skills": {
"flake": false,
"locked": {
"lastModified": 1777544816,
"narHash": "sha256-GwaYeUlqCwksqWruHxi4b4LL1wUmuabYGpTxTzSPGMM=",
"owner": "android",
"repo": "skills",
"rev": "55f41f97a9716ca5e692cb0ca6f1bd78cfc4417e",
"type": "github"
},
"original": {
"owner": "android",
"repo": "skills",
"type": "github"
}
},
"arr-init": { "arr-init": {
"inputs": { "inputs": {
"flake-utils": "flake-utils", "flake-utils": "flake-utils",
@@ -77,7 +93,6 @@
"llm-agents", "llm-agents",
"flake-parts" "flake-parts"
], ],
"import-tree": "import-tree",
"nixpkgs": [ "nixpkgs": [
"llm-agents", "llm-agents",
"nixpkgs" "nixpkgs"
@@ -92,16 +107,16 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1776182890, "lastModified": 1777369708,
"narHash": "sha256-+/VOe8XGq5klpU+I19D+3TcaR7o+Cwbq67KNF7mcFak=", "narHash": "sha256-1xW7cRZNsFNPQD+cE0fwnLVStnDth0HSoASEIFeT7uI=",
"owner": "Mic92", "owner": "nix-community",
"repo": "bun2nix", "repo": "bun2nix",
"rev": "648d293c51e981aec9cb07ba4268bc19e7a8c575", "rev": "e659e1cc4b8e1b21d0aa85f1c481f9db61ecfa98",
"type": "github" "type": "github"
}, },
"original": { "original": {
"owner": "Mic92", "owner": "nix-community",
"ref": "catalog-support", "ref": "staging-2.1.0",
"repo": "bun2nix", "repo": "bun2nix",
"type": "github" "type": "github"
} }
@@ -109,11 +124,11 @@
"cachyos-kernel": { "cachyos-kernel": {
"flake": false, "flake": false,
"locked": { "locked": {
"lastModified": 1776608760, "lastModified": 1776881435,
"narHash": "sha256-ehDv8bF7k/2Kf4b8CCoSm51U/MOoFuLsRXqe5wZ57sE=", "narHash": "sha256-j8AobLjMzeKJugseObrVC4O5k7/aZCWoft2sCS3jWYs=",
"owner": "CachyOS", "owner": "CachyOS",
"repo": "linux-cachyos", "repo": "linux-cachyos",
"rev": "7e06e29005853bbaaa3b1c1067f915d6e0db728a", "rev": "1c61dfd1c3ad7762faa0db8b06c6af6c59cc4340",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -125,11 +140,11 @@
"cachyos-kernel-patches": { "cachyos-kernel-patches": {
"flake": false, "flake": false,
"locked": { "locked": {
"lastModified": 1776792814, "lastModified": 1777002108,
"narHash": "sha256-39dlIhz9KxUNQFxGpE9SvCviaOWAivdW0XJM8RnPNmg=", "narHash": "sha256-PIZCIf6xUTOUqLFbEGH0mSwu2O/YfeAmYlgdAbP4dhs=",
"owner": "CachyOS", "owner": "CachyOS",
"repo": "kernel-patches", "repo": "kernel-patches",
"rev": "d7d558d0b2e239e27b40bcf1af6fe12e323aa391", "rev": "46476ae2538db486462aef8a9de37d19030cdaf2",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -140,11 +155,11 @@
}, },
"crane": { "crane": {
"locked": { "locked": {
"lastModified": 1776635034, "lastModified": 1777242778,
"narHash": "sha256-OEOJrT3ZfwbChzODfIH4GzlNTtOFuZFWPtW7jIeR8xU=", "narHash": "sha256-VWTeqWeb8Sel/QiWyaPvCa9luAbcGawR+Rw09FJoHz0=",
"owner": "ipetkov", "owner": "ipetkov",
"repo": "crane", "repo": "crane",
"rev": "dc7496d8ea6e526b1254b55d09b966e94673750f", "rev": "ad8b31ad0ba8448bd958d7a5d50d811dc5d271c0",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -199,11 +214,11 @@
"doomemacs": { "doomemacs": {
"flake": false, "flake": false,
"locked": { "locked": {
"lastModified": 1776590253, "lastModified": 1777326848,
"narHash": "sha256-wU0gAHaCCX/sTUvbsgSxXAPxb1xEazfu5PClDe3SbXA=", "narHash": "sha256-7ErKUgw6Ch7hP1oBjMSos8xXRD+rxxjaOldRn+TcClo=",
"owner": "doomemacs", "owner": "doomemacs",
"repo": "doomemacs", "repo": "doomemacs",
"rev": "707da6f7e90f26a4e00e5f8f98f29fd08824e71e", "rev": "6be3337b49867bd86f90fe5ca4beeb6b38afaddb",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -222,11 +237,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1776849698, "lastModified": 1777631609,
"narHash": "sha256-t2I9ZhBuAcaLV1Z65aVd/5BmDFGvyzLY5kpiSedx2uY=", "narHash": "sha256-mVuwqfmX3ev8eRgzRBYkN3UZ9rn1Uo5MRAoX6pCN6q0=",
"owner": "nix-community", "owner": "nix-community",
"repo": "emacs-overlay", "repo": "emacs-overlay",
"rev": "87dff52c245cba0c5103cf89b964e508ed9bb720", "rev": "00b5f387f4f557a0f5a3902023b62aa15d020683",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -266,11 +281,11 @@
}, },
"locked": { "locked": {
"dir": "pkgs/firefox-addons", "dir": "pkgs/firefox-addons",
"lastModified": 1776830588, "lastModified": 1777608175,
"narHash": "sha256-1X4L6+F7DgYTUDah+PDs7IYJiQrb7MwYfateq2fBxGY=", "narHash": "sha256-h2OfjJd3r0zopvSCN3kD2TPv01oz1pNTxurYucIDI8s=",
"owner": "rycee", "owner": "rycee",
"repo": "nur-expressions", "repo": "nur-expressions",
"rev": "f3db83bc13aee22474fab41fa838e50a691dfbc5", "rev": "6c1761398df2be37cffd5891486f15665e18a398",
"type": "gitlab" "type": "gitlab"
}, },
"original": { "original": {
@@ -401,6 +416,27 @@
"type": "github" "type": "github"
} }
}, },
"flake-parts_4": {
"inputs": {
"nixpkgs-lib": [
"steam-config-nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1765835352,
"narHash": "sha256-XswHlK/Qtjasvhd1nOa1e8MgZ8GS//jBoTqWtrS1Giw=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "a34fae9c08a15ad73f295041fec82323541400a9",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-utils": { "flake-utils": {
"inputs": { "inputs": {
"systems": "systems_2" "systems": "systems_2"
@@ -439,7 +475,7 @@
}, },
"flake-utils_3": { "flake-utils_3": {
"inputs": { "inputs": {
"systems": "systems_10" "systems": "systems_11"
}, },
"locked": { "locked": {
"lastModified": 1731533236, "lastModified": 1731533236,
@@ -484,11 +520,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1776891022, "lastModified": 1777659959,
"narHash": "sha256-vEe2f4NEhMvaNDpM1pla4hteaIIGQyAMKUfIBPLasr0=", "narHash": "sha256-ax3229dUvNuwTQwo2o68kOQ24dvOlJ/BrVYY4miD1bI=",
"owner": "nix-community", "owner": "nix-community",
"repo": "home-manager", "repo": "home-manager",
"rev": "508daf831ab8d1b143d908239c39a7d8d39561b2", "rev": "5c1b74905c7261e8280dcda3623dbe677a1bc158",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -541,21 +577,6 @@
"type": "github" "type": "github"
} }
}, },
"import-tree": {
"locked": {
"lastModified": 1763762820,
"narHash": "sha256-ZvYKbFib3AEwiNMLsejb/CWs/OL/srFQ8AogkebEPF0=",
"owner": "vic",
"repo": "import-tree",
"rev": "3c23749d8013ec6daa1d7255057590e9ca726646",
"type": "github"
},
"original": {
"owner": "vic",
"repo": "import-tree",
"type": "github"
}
},
"jovian-nixos": { "jovian-nixos": {
"inputs": { "inputs": {
"nix-github-actions": "nix-github-actions", "nix-github-actions": "nix-github-actions",
@@ -564,11 +585,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1776874528, "lastModified": 1777614199,
"narHash": "sha256-X4Y2vMbVBuyUQzbZnl72BzpZMYUsWdE78JuDg2ySDxE=", "narHash": "sha256-k8fgidVoDNQTZWGLdhe6kLgpsLcydhPzal5YKVwxD2U=",
"owner": "Jovian-Experiments", "owner": "Jovian-Experiments",
"repo": "Jovian-NixOS", "repo": "Jovian-NixOS",
"rev": "4c8ccc482a3665fb4a3b2cadbbe7772fb7cc2629", "rev": "79f3e3cc5c643138b7b3405c42681451be85d838",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -610,11 +631,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1776797459, "lastModified": 1777299656,
"narHash": "sha256-utv296Xwk0PwjONe9dsyKx+9Z5xAB70aAsMI//aakpg=", "narHash": "sha256-c0r3xXp2+xFJwkryS+nhyQwoACbFzSt4C1TVs3QMh8E=",
"owner": "nix-community", "owner": "nix-community",
"repo": "lanzaboote", "repo": "lanzaboote",
"rev": "4eda91dd5abd2157a2c7bfb33142fc64da668b0a", "rev": "079c608988c2747db3902c9de033572cd50e8656",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -631,11 +652,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1776862155, "lastModified": 1777651934,
"narHash": "sha256-EDvbwsGNE/N5ul+9ul1dJP3Ouf72+Ub2C0UMbDWcxyQ=", "narHash": "sha256-HxXrm8DyUf2g8hG8R0Ly7cgS6TM9gk70rmVT0DA1q2E=",
"owner": "TheTom", "owner": "TheTom",
"repo": "llama-cpp-turboquant", "repo": "llama-cpp-turboquant",
"rev": "9e3fb40e8bc0f873ad4d3d8329b17dacff28e4ca", "rev": "4f331667d9badcc71ab864f1e298591061d82050",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -657,11 +678,11 @@
"treefmt-nix": "treefmt-nix" "treefmt-nix": "treefmt-nix"
}, },
"locked": { "locked": {
"lastModified": 1776883427, "lastModified": 1777637197,
"narHash": "sha256-prHCm++hniRcoqzvWTEFyAiLKT6m+EUVCRaDLrsuEgM=", "narHash": "sha256-RSevrcyS4z2Fx4+fk2NoWCvnxG3Z8lws3uemRJ3XaWc=",
"owner": "numtide", "owner": "numtide",
"repo": "llm-agents.nix", "repo": "llm-agents.nix",
"rev": "6fd26c9cb50d9549f3791b3d35e4f72f97677103", "rev": "7381a70995f62d5f54545539765b8d638984b43c",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -704,11 +725,11 @@
"xwayland-satellite-unstable": "xwayland-satellite-unstable" "xwayland-satellite-unstable": "xwayland-satellite-unstable"
}, },
"locked": { "locked": {
"lastModified": 1776879043, "lastModified": 1777633931,
"narHash": "sha256-M9RjuowtoqQbFRdQAm2P6GjFwgHjRcnWYcB7ChSjDms=", "narHash": "sha256-306tONvDv0lhoT7Ge42ghjxPE2ndB3wTKwwtyZS2qJE=",
"owner": "sodiboo", "owner": "sodiboo",
"repo": "niri-flake", "repo": "niri-flake",
"rev": "535ebbe038039215a5d1c6c0c67f833409a5be96", "rev": "c291d31da4a27a31b08fab5a468c086888095a3f",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -737,11 +758,11 @@
"niri-unstable": { "niri-unstable": {
"flake": false, "flake": false,
"locked": { "locked": {
"lastModified": 1776853441, "lastModified": 1777627080,
"narHash": "sha256-mSxfoEs7DiDhMCBzprI/1K7UXzMISuGq0b7T06LVJXE=", "narHash": "sha256-9xzxgWsZZRbiMDa6iSZfD1dZGlUvsHp2aawWM5LK6F8=",
"owner": "YaLTeR", "owner": "YaLTeR",
"repo": "niri", "repo": "niri",
"rev": "74d2b18603366b98ec9045ecf4a632422f472365", "rev": "5f6f131b24826a01374d5cd87b281bd7ea181537",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -761,11 +782,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1776796985, "lastModified": 1777227006,
"narHash": "sha256-cNFg3H09sBZl1v9ds6PDHfLCUTDJbefGMSv+WxFs+9c=", "narHash": "sha256-A7GcOXjfo2xmZ3ERgN0j6GcqaVzqIf5zpYQcdfDaMr0=",
"owner": "xddxdd", "owner": "xddxdd",
"repo": "nix-cachyos-kernel", "repo": "nix-cachyos-kernel",
"rev": "ac5956bbceb022998fc1dd0001322f10ef1e6dda", "rev": "0f7e2bea4088227a80502557f6c0e3b74949d6b5",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -787,11 +808,11 @@
"systems": "systems_6" "systems": "systems_6"
}, },
"locked": { "locked": {
"lastModified": 1776851701, "lastModified": 1777629537,
"narHash": "sha256-tdtOcU2Hz/eLqAhkzUcEocgX0WpjKSbl2SkVjOZGZw0=", "narHash": "sha256-fBpoc/+nYcE2TCyNPzfy5OPZCyvp5GARc6r4R0EYXUs=",
"owner": "marienz", "owner": "marienz",
"repo": "nix-doom-emacs-unstraightened", "repo": "nix-doom-emacs-unstraightened",
"rev": "7ac65a49eec5e3f87d27396e645eddbf9dc626de", "rev": "ff8ef70369fcfb398577cd866b415f649fc68022",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -802,11 +823,11 @@
}, },
"nix-flatpak": { "nix-flatpak": {
"locked": { "locked": {
"lastModified": 1776625032, "lastModified": 1777402031,
"narHash": "sha256-edvwHiFhgOiwywt6/Iwe+sSn6ybhU3WZGnIoiGcKjfQ=", "narHash": "sha256-6gkfl9y3+ti0Z6dgby8/R4/DRT8sWU0I0TLCIxwWtjk=",
"owner": "gmodena", "owner": "gmodena",
"repo": "nix-flatpak", "repo": "nix-flatpak",
"rev": "479e19f1decb390aa5b75cae13ddf87d763c74cc", "rev": "22a3adbe7c5c8c8a10a635d32c9ef7fc01a6e4b8",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -846,11 +867,11 @@
"systems": "systems_7" "systems": "systems_7"
}, },
"locked": { "locked": {
"lastModified": 1776828595, "lastModified": 1777608106,
"narHash": "sha256-LkFpFnPTK6H0gwyfYezN3kEKHVxjSdPp/tBnrQRFP3E=", "narHash": "sha256-wiBYCs2swNJefX1xH7tiyZLAw9ZmHZQ5DRo8VeFW6fg=",
"owner": "Infinidoge", "owner": "Infinidoge",
"repo": "nix-minecraft", "repo": "nix-minecraft",
"rev": "28f0f2369655a5910e810c35c698dfaa9ccec692", "rev": "6643116cd25bd53641a9724db8a530e36899484d",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -861,11 +882,11 @@
}, },
"nixos-hardware": { "nixos-hardware": {
"locked": { "locked": {
"lastModified": 1776830795, "lastModified": 1776983936,
"narHash": "sha256-PAfvLwuHc1VOvsLcpk6+HDKgMEibvZjCNvbM1BJOA7o=", "narHash": "sha256-ZOQyNqSvJ8UdrrqU1p7vaFcdL53idK+LOM8oRWEWh6o=",
"owner": "NixOS", "owner": "NixOS",
"repo": "nixos-hardware", "repo": "nixos-hardware",
"rev": "72674a6b5599e844c045ae7449ba91f803d44ebc", "rev": "2096f3f411ce46e88a79ae4eafcfc9df8ed41c61",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -877,11 +898,11 @@
}, },
"nixpkgs": { "nixpkgs": {
"locked": { "locked": {
"lastModified": 1776548001, "lastModified": 1777268161,
"narHash": "sha256-ZSK0NL4a1BwVbbTBoSnWgbJy9HeZFXLYQizjb2DPF24=", "narHash": "sha256-bxrdOn8SCOv8tN4JbTF/TXq7kjo9ag4M+C8yzzIRYbE=",
"owner": "NixOS", "owner": "NixOS",
"repo": "nixpkgs", "repo": "nixpkgs",
"rev": "b12141ef619e0a9c1c84dc8c684040326f27cdcc", "rev": "1c3fe55ad329cbcb28471bb30f05c9827f724c76",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -937,11 +958,11 @@
}, },
"nixpkgs-stable": { "nixpkgs-stable": {
"locked": { "locked": {
"lastModified": 1776734388, "lastModified": 1777428379,
"narHash": "sha256-vl3dkhlE5gzsItuHoEMVe+DlonsK+0836LIRDnm6MXQ=", "narHash": "sha256-ypxFOeDz+CqADEQNL72haqGjvZQdBR5Vc7pyx2JDttI=",
"owner": "NixOS", "owner": "NixOS",
"repo": "nixpkgs", "repo": "nixpkgs",
"rev": "10e7ad5bbcb421fe07e3a4ad53a634b0cd57ffac", "rev": "755f5aa91337890c432639c60b6064bb7fe67769",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -991,11 +1012,11 @@
"noctalia-qs": "noctalia-qs" "noctalia-qs": "noctalia-qs"
}, },
"locked": { "locked": {
"lastModified": 1776888984, "lastModified": 1777427472,
"narHash": "sha256-Up2F/eoMuPUsZnPVYdH5TMHe1TBP2Ue1QuWd0vWZoxY=", "narHash": "sha256-kqcfLdxb+CqTroMErCScvx6YQcZYJcf6X+z5I8kBJK8=",
"owner": "noctalia-dev", "owner": "noctalia-dev",
"repo": "noctalia-shell", "repo": "noctalia-shell",
"rev": "2c1808f9f8937fc0b82c54af513f7620fec56d71", "rev": "9f8dd48c8df5ab1f7f87ddf9842627e1e5682186",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -1014,11 +1035,11 @@
"treefmt-nix": "treefmt-nix_2" "treefmt-nix": "treefmt-nix_2"
}, },
"locked": { "locked": {
"lastModified": 1776585574, "lastModified": 1777380063,
"narHash": "sha256-j35EWhKoGhKrfcXcAOpoRVgXEPQt41Eukji/h59cnjk=", "narHash": "sha256-q5mWOEICcZzr+KnjIwDHV9EXiBxOC9cnBpxZbDAViU8=",
"owner": "noctalia-dev", "owner": "noctalia-dev",
"repo": "noctalia-qs", "repo": "noctalia-qs",
"rev": "75d180c28a9ab4470e980f3d6f706ad6c5213add", "rev": "8742a7a748c43bf44eb6862a8ebd3591ed71502d",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -1037,11 +1058,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1775585728, "lastModified": 1776796298,
"narHash": "sha256-8Psjt+TWvE4thRKktJsXfR6PA/fWWsZ04DVaY6PUhr4=", "narHash": "sha256-PcRvlWayisPSjd0UcRQbhG8Oqw78AcPE6x872cPRHN8=",
"owner": "cachix", "owner": "cachix",
"repo": "pre-commit-hooks.nix", "repo": "pre-commit-hooks.nix",
"rev": "580633fa3fe5fc0379905986543fd7495481913d", "rev": "3cfd774b0a530725a077e17354fbdb87ea1c4aad",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -1075,6 +1096,7 @@
"root": { "root": {
"inputs": { "inputs": {
"agenix": "agenix", "agenix": "agenix",
"android-skills": "android-skills",
"arr-init": "arr-init", "arr-init": "arr-init",
"deploy-rs": "deploy-rs", "deploy-rs": "deploy-rs",
"disko": "disko", "disko": "disko",
@@ -1102,6 +1124,7 @@
"rust-overlay": "rust-overlay", "rust-overlay": "rust-overlay",
"senior_project-website": "senior_project-website", "senior_project-website": "senior_project-website",
"srvos": "srvos", "srvos": "srvos",
"steam-config-nix": "steam-config-nix",
"trackerlist": "trackerlist", "trackerlist": "trackerlist",
"vpn-confinement": "vpn-confinement", "vpn-confinement": "vpn-confinement",
"website": "website", "website": "website",
@@ -1133,11 +1156,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1776827647, "lastModified": 1777605393,
"narHash": "sha256-sYixYhp5V8jCajO8TRorE4fzs7IkL4MZdfLTKgkPQBk=", "narHash": "sha256-Hjp0VOOHgHcTrX23iVvnfAudPcuCmfkfpQNFwv2v/ks=",
"owner": "oxalica", "owner": "oxalica",
"repo": "rust-overlay", "repo": "rust-overlay",
"rev": "40e6ccc06e1245a4837cbbd6bdda64e21cc67379", "rev": "ff88db34cfa486fc4964a6991cab1678d82eee8c",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -1190,11 +1213,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1776653059, "lastModified": 1777557025,
"narHash": "sha256-K3tWnUj6FXaK95sBUajedutJrFVrOzYhvrQwQjJ0FbU=", "narHash": "sha256-jvp1VDDPL3Tj2OuhAW6g9EuCyD//cGt47vmo6Mko0L4=",
"owner": "nix-community", "owner": "nix-community",
"repo": "srvos", "repo": "srvos",
"rev": "4968d2a44c84edfc9a38a2494cc7f85ad2c7122b", "rev": "9e2d48327b65b4020fd07acbd26b43ed9215c9ce",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -1203,6 +1226,29 @@
"type": "github" "type": "github"
} }
}, },
"steam-config-nix": {
"inputs": {
"flake-parts": "flake-parts_4",
"nixpkgs": [
"nixpkgs"
],
"systems": "systems_10"
},
"locked": {
"lastModified": 1777785758,
"narHash": "sha256-lPCklrUYn8ZydCaHb33YcWSLV05/j2ukPiY8fkhIRCg=",
"owner": "Titaniumtown",
"repo": "steam-config-nix",
"rev": "ef8f67a02da61595314c76978a01546500978106",
"type": "github"
},
"original": {
"owner": "Titaniumtown",
"ref": "pr/write-files",
"repo": "steam-config-nix",
"type": "github"
}
},
"systems": { "systems": {
"locked": { "locked": {
"lastModified": 1681028828, "lastModified": 1681028828,
@@ -1233,6 +1279,21 @@
"type": "github" "type": "github"
} }
}, },
"systems_11": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_2": { "systems_2": {
"locked": { "locked": {
"lastModified": 1681028828, "lastModified": 1681028828,
@@ -1356,11 +1417,11 @@
"trackerlist": { "trackerlist": {
"flake": false, "flake": false,
"locked": { "locked": {
"lastModified": 1776809383, "lastModified": 1777500579,
"narHash": "sha256-r4V5l+Yk3jxVfZNQk2Ddu8Vlyshd9FWcnGGFyaL4UCw=", "narHash": "sha256-GnY1tjfEHEskzz4XqN8TfNtzH8gdIUpd6kXM7snRNYc=",
"owner": "ngosang", "owner": "ngosang",
"repo": "trackerslist", "repo": "trackerslist",
"rev": "37d5c0552c25abf50f05cc6b377345e65a588dc2", "rev": "83732622eddacee296945a2e7150534fa90b580e",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -1524,11 +1585,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1776844129, "lastModified": 1777564084,
"narHash": "sha256-DaYSEBVzTvUhTuoVe70NHphoq5JKUHqUhlNlN5XnTuU=", "narHash": "sha256-O9VRkxg+2j+sh+c73wi4VeIBECoqW2PlnCR9Qe1nQKA=",
"owner": "0xc000022070", "owner": "0xc000022070",
"repo": "zen-browser-flake", "repo": "zen-browser-flake",
"rev": "90706e6ab801e4fb7bc53343db67583631936192", "rev": "d93443c0f6fdb3b179bed68856f322dba4842612",
"type": "github" "type": "github"
}, },
"original": { "original": {


@@ -82,6 +82,18 @@
       url = "github:ChrisOboe/json2steamshortcut";
       inputs.nixpkgs.follows = "nixpkgs";
     };
+    steam-config-nix = {
+      # tracking the pr/write-files branch (per-app `files` option) until
+      # it lands upstream in different-name/steam-config-nix.
+      url = "github:Titaniumtown/steam-config-nix/pr/write-files";
+      inputs.nixpkgs.follows = "nixpkgs";
+    };
+    # Google's official agent-skills for Android development (Apache 2.0).
+    # Consumed by home/progs/pi.nix and exposed under ~/.omp/agent/skills/.
+    android-skills = {
+      url = "github:android/skills";
+      flake = false;
+    };
     # Server (muffin) — follows nixpkgs-stable
     nix-minecraft = {
@@ -163,6 +175,9 @@
       niriPackage = inputs.niri.packages.${system}.niri-unstable;
+      # --- Desktop-channel pkgs (used by portable homeConfigurations) ---
+      desktopPkgs = import nixpkgs { inherit system; };
       # --- Server (muffin) plumbing ---
       bootstrapPkgs = import nixpkgs-stable { inherit system; };
       patchedStableSrc = bootstrapPkgs.applyPatches {
@@ -177,17 +192,20 @@
         targetPlatform = system;
         buildPlatform = builtins.currentSystem;
       };
-      serviceConfigs = import ./hosts/muffin/service-configs.nix;
+      siteConfig = import ./site-config.nix;
+      serviceConfigs = import ./hosts/muffin/service-configs.nix { site_config = siteConfig; };
       serverLib = import ./lib {
         inherit inputs;
         lib = nixpkgs-stable.lib;
         pkgs = serverPkgs;
         service_configs = serviceConfigs;
+        site_config = siteConfig;
       };
       testSuite = import ./tests/tests.nix {
         pkgs = serverPkgs;
         lib = serverLib;
         inherit inputs;
+        site_config = siteConfig;
         config = self.nixosConfigurations.muffin.config;
       };
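The commit threads a new `siteConfig` value (imported from `./site-config.nix`) through `specialArgs`, `extraSpecialArgs`, and the server lib. The file itself is not part of this diff; judging from the attributes referenced across the commit (`hosts.muffin.ip`/`alias`/`ssh_host_key`, `hosts.yarn.ip`/`alias`, `domain`, `old_domain`, `contact_email`), its shape is plausibly something like the sketch below. This is an inference, not the real file — the IPs and aliases are taken from the literals this commit replaces, the key and email are placeholders:

```nix
# hypothetical shape of site-config.nix, inferred from call sites in this diff
{
  domain = "sigkill.computer";
  old_domain = "gardling.com";
  contact_email = "user@example.com"; # placeholder
  hosts = {
    muffin = {
      ip = "192.168.1.50";        # was the hardcoded networking.hosts entry
      alias = "server-public";
      ssh_host_key = "ssh-ed25519 AAAA..."; # pinned host public key (placeholder)
    };
    yarn = {
      ip = "192.168.1.223";       # was the hardcoded "desktop" entry
      alias = "desktop";
    };
  };
}
```

Centralizing these in one attrset is what lets the known_hosts block and deploy target below consume them instead of repeating string literals.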
@@ -200,6 +218,7 @@
         specialArgs = {
           inherit inputs username hostname;
           niri-package = niriPackage;
+          site_config = siteConfig;
         };
         modules = [
           home-manager.nixosModules.home-manager
@@ -219,10 +238,13 @@
               niri-package = niriPackage;
               homeDirectory = "/home/${username}";
               stateVersion = config.system.stateVersion;
+              site_config = siteConfig;
             };
             home-manager.users.${username} = import ./hosts/${hostname}/home.nix;
           }
           )
+          { nixpkgs.overlays = [ (import ./lib/overlays.nix) ]; }
+          inputs.steam-config-nix.nixosModules.default
           ./hosts/${hostname}/default.nix
         ];
       };
@@ -238,6 +260,7 @@
           hostname = "muffin";
           eth_interface = "enp4s0";
           service_configs = serviceConfigs;
+          site_config = siteConfig;
           lib = serverLib;
         };
         modules = [
@@ -346,6 +369,9 @@
           (
             { ... }:
             {
+              home-manager.extraSpecialArgs = {
+                site_config = siteConfig;
+              };
               home-manager.users.${username} = import ./hosts/muffin/home.nix;
             }
           )
@@ -364,11 +390,33 @@
       nixosConfigurations = {
         mreow = mkDesktopHost "mreow";
         yarn = mkDesktopHost "yarn";
+        patiodeck = mkDesktopHost "patiodeck";
         muffin = muffinHost;
       };
+      # Standalone home-manager profile — usable on any x86_64-linux machine
+      # with nix installed (NixOS or not). Activate with:
+      #   nix run home-manager/master -- switch --flake ".#primary"
+      # Ships the shared terminal profile (fish, helix, modern CLI, git).
+      homeConfigurations.primary = home-manager.lib.homeManagerConfiguration {
+        pkgs = desktopPkgs;
+        extraSpecialArgs = {
+          site_config = siteConfig;
+        };
+        modules = [
+          ./home/profiles/terminal.nix
+          {
+            home = {
+              username = username;
+              homeDirectory = "/home/${username}";
+              stateVersion = "24.11";
+            };
+          }
+        ];
+      };
       deploy.nodes.muffin = {
-        hostname = "server-public";
+        hostname = siteConfig.hosts.muffin.alias;
         profiles.system = {
           sshUser = "root";
           user = "root";
@@ -382,7 +430,27 @@
           # want to avoid when the deploy is supposed to be a no-op blocked by
           # the guard. Blocking before the deploy-rs invocation is the only
           # clean way to leave the running system untouched.
-          path = deploy-rs.lib.${system}.activate.nixos self.nixosConfigurations.muffin;
+          #
+          # Activation uses `switch-to-configuration boot` + a detached finalize
+          # (modules/server-deploy-finalize.nix) rather than the default
+          # `switch`. The gitea-actions runner driving CI deploys lives on
+          # muffin itself; a direct `switch` restarts gitea-runner-muffin mid-
+          # activation, killing the SSH session, the CI job, and deploy-rs's
+          # magic-rollback handshake. `boot` only touches the bootloader — no
+          # service restarts — and deploy-finalize schedules a pid1-owned
+          # transient unit that runs the real `switch` (or `systemctl reboot`
+          # when kernel/initrd/kernel-modules changed) ~60s later, surviving
+          # runner restart because it's decoupled from the SSH session.
+          path =
+            deploy-rs.lib.${system}.activate.custom self.nixosConfigurations.muffin.config.system.build.toplevel
+              ''
+                # matches activate.nixos's workaround for NixOS/nixpkgs#73404
+                cd /tmp
+                $PROFILE/bin/switch-to-configuration boot
+                ${nixpkgs-stable.lib.getExe self.nixosConfigurations.muffin.config.system.build.deployFinalize}
+              '';
         };
       };
@@ -395,6 +463,11 @@
           path = test;
         }) testSuite
       );
+      # Buildenv of every binary in the portable terminal profile. Install
+      # without home-manager via:
+      #   nix profile install ".#cli-tools"
+      cli-tools = self.homeConfigurations.primary.config.home.path;
     }
     // (serverPkgs.lib.mapAttrs' (name: test: {
       name = "test-${name}";


@@ -8,8 +8,7 @@
 {
   imports = [
     ./no-gui.nix
-    # ../progs/ghostty.nix
-    ../progs/alacritty.nix
+    ../progs/ghostty.nix
     ../progs/emacs.nix
     # ../progs/trezor.nix # - broken
     ../progs/flatpak.nix
@@ -87,6 +86,21 @@
     signal-desktop
+    # alternative GTK signal client; carries six local feature patches
+    # under patches/flare/ on top of upstream 0.20.4 (typing indicators,
+    # formatted messages, edited messages, multi-select with delete-for-me,
+    # in-channel message search, and deleted-message placeholders).
+    (pkgs.flare-signal.overrideAttrs (old: {
+      patches = (old.patches or [ ]) ++ [
+        ../../patches/flare/0001-feat-typing-Implement-typing-indicators.patch
+        ../../patches/flare/0002-feat-messages-Implement-formatted-messages.patch
+        ../../patches/flare/0003-feat-messages-Implement-edited-messages.patch
+        ../../patches/flare/0004-feat-messages-Multi-select-messages-and-delete-for-m.patch
+        ../../patches/flare/0005-feat-messages-In-channel-message-search.patch
+        ../../patches/flare/0006-feat-messages-Show-This-message-was-deleted.-placeho.patch
+      ];
+    }))
     # accounting
     # gnucash
@@ -227,4 +241,11 @@
       uris = [ "qemu:///system" ];
     };
   };
+  # macOS-style clipboard aliases — depend on wl-clipboard, so scoped here
+  # rather than in the shared fish config.
+  programs.fish.shellAliases = {
+    pbcopy = "${pkgs.wl-clipboard}/bin/wl-copy";
+    pbpaste = "${pkgs.wl-clipboard}/bin/wl-paste";
+  };
 }


@@ -59,83 +59,16 @@ let
     # jasmin
   ];
-  common_tools = with pkgs; [
-    # hex viewer
-    hexyl
-    # find typos in code
-    typos
-    # replacements for common posix tools
-    eza # ls replacement
-    bat # pretty `cat` clone
-    delta # viewer for `git` and `diff` output
-    dust # pretty `du` version
-    duf # better `df` clone
-    gping # `ping`... but with a graph!!
-    tldr # `man` but more straight-forward and simpler
-    ripgrep # grep, but written in rust, respects .gitignore, and very very fast, command is `rg`
-    fd # alternative to `find`
-    # status tools
-    htop
-    bottom
-    # other tools
-    unzip
-    wget
-    killall
-    file
-    b3sum
-    # "A hexadecimal, binary, and ASCII dump utility with color support"
-    tinyxxd
-    # networking tool
-    lsof
-    # view SMART status of drives
+  # hardware diagnostics — wanted on dev machines, not part of the shared
+  # terminal profile (which is meant to be portable to any machine).
+  hw_diag = with pkgs; [
     smartmontools
-    # adds `sensors` command
     lm_sensors
-    # lspci
     pciutils
-    # convert between various units
-    units
-    jq
-    # DNS things
-    dig
-    bun
   ];
-in
-{
-  imports = [
-    ../progs/fish.nix
-    ../progs/helix.nix
-    ../progs/pi.nix
-    (
-      { ... }:
-      {
-        nixpkgs.overlays = [
-          inputs.rust-overlay.overlays.default
-        ];
-      }
-    )
-  ];
-  home.stateVersion = stateVersion;
-  home.packages =
-    with pkgs;
-    lib.concatLists [
-      [
+  # dev-only tools. Universal CLI (bat, rg, htop, jq, …) lives in terminal.nix.
+  dev_tools = with pkgs; [
     # python formatter
     ruff
@@ -143,23 +76,13 @@ in
     hugo
     go
-    # for benchmaking stuff
-    hyperfine
-    pfetch-rs
     waypipe
     sshfs
-    # nix formatter
-    nixfmt-tree
     # serial viewer
     minicom
-    # "~~matt's~~ my trace route"
-    mtr
     ffmpeg-full
     # microcontroller tooling
@@ -189,10 +112,6 @@ in
     clang
     gdb
-    git-crypt
-    imagemagick
     nixpkgs-review
     nmap
@@ -212,51 +131,52 @@ in
     powerstat
     yt-dlp
-      ]
+    # JS runtime
+    bun
+    # convert between various units
+    units
+  ];
+in
+{
+  imports = [
+    ./terminal.nix
+    ../progs/pi.nix
+    (
+      { ... }:
+      {
+        nixpkgs.overlays = [
+          inputs.rust-overlay.overlays.default
+        ];
+      }
+    )
+  ];
+  home.stateVersion = stateVersion;
+  home.packages = lib.concatLists [
     rust_pkgs
     lsps
     java_tools
-      common_tools
+    hw_diag
+    dev_tools
   ];
+  # fish aliases that depend on packages only present in this profile.
+  # Universal aliases (ls/la/ll/lt, git-size) live in home/progs/fish.nix.
+  programs.fish.shellAliases = {
+    c = "${lib.getExe pkgs.cargo}";
+    cr = "${lib.getExe pkgs.cargo} run";
+    cb = "${lib.getExe pkgs.cargo} build";
+    gcc-native = "${lib.getExe pkgs.gcc} -Q --help=target -mtune=native -march=native | ${lib.getExe pkgs.gnugrep} -E '^\\s+\\-(mtune|march)=' | ${pkgs.coreutils}/bin/tr -d '[:blank:]'";
+  };
   # https://github.com/flamegraph-rs/flamegraph
   home.file.".cargo/config.toml".text = ''
     [target.${lib.strings.removeSuffix "-linux" pkgs.stdenv.hostPlatform.system}-unknown-linux-gnu]
     linker = "${lib.getExe pkgs.clang}"
     rustflags = ["-Clink-arg=-Wl,--no-rosegment"]
   '';
-  # git (self explanatory)
-  programs.git = {
-    enable = true;
-    package = pkgs.git;
-    lfs.enable = true;
-    ignores = [ ".sisyphus" ];
-    settings = {
-      init = {
-        # master -> main
-        defaultBranch = "main";
-      };
-      push.autoSetupRemote = true;
-      user = {
-        name = "Simon Gardling";
-        email = "titaniumtown@proton.me";
-      };
-    };
-    # gpg signing keys
-    signing = {
-      key = "9AB28AC10ECE533D";
-      signByDefault = true;
-    };
-  };
-  # better way to view diffs
-  programs.delta = {
-    enable = true;
-    enableGitIntegration = true;
-  };
 }

home/profiles/terminal.nix Normal file

@@ -0,0 +1,103 @@
# Shared terminal-tools profile.
#
# The set of CLI tooling I want available on every machine I use:
# - mreow + yarn pick this up via home/profiles/no-gui.nix
# - muffin picks this up via hosts/muffin/home.nix
# - any non-NixOS machine picks it up via the homeConfigurations output in flake.nix
#
# Scope is intentionally narrow: the daily-driver shell (fish + helix + modern
# CLI replacements + git). No language toolchains, no hardware-specific admin
# tools, no GUI-adjacent utilities — those belong in profiles layered on top.
{
lib,
site_config,
pkgs,
...
}:
{
imports = [
../progs/fish.nix
../progs/helix.nix
];
home.packages = with pkgs; [
# modern CLI replacements for POSIX basics
eza # ls
bat # cat
delta # diff viewer (also wired into git below)
dust # du
duf # df
gping # ping, with a graph
ripgrep # grep, respects .gitignore
fd # find
tldr # man, simpler
# system / process tools
htop
bottom
lsof
file
killall
unzip
tmux
wget
# network
dig
mtr
# text / data
jq
hexyl
tinyxxd
b3sum
typos
# media (handy from a shell, lightweight enough to be universal)
imagemagick
# universal dev-adjacent
git-crypt
hyperfine
# nix
nixfmt-tree
# shell greeter (invoked from fish's interactiveShellInit)
pfetch-rs
];
# Git: mechanical config + identity lives here so `git` works out of the box
# on every machine. Signing is opt-in via lib.mkDefault so machines without
# my GPG key can override `signing.signByDefault = false` without fighting
# priority.
programs.git = {
enable = true;
package = pkgs.git;
lfs.enable = true;
ignores = [ ".sisyphus" ];
settings = {
init.defaultBranch = "main";
push.autoSetupRemote = true;
user = {
name = "Simon Gardling";
email = site_config.contact_email;
};
};
signing = {
format = "openpgp";
key = lib.mkDefault "9AB28AC10ECE533D";
signByDefault = lib.mkDefault true;
};
};
# Pretty diff viewer, wired into git.
programs.delta = {
enable = true;
enableGitIntegration = true;
};
}


@@ -1,131 +0,0 @@
{ pkgs, ... }:
{
home.sessionVariables = {
TERMINAL = "alacritty";
};
programs.alacritty = {
enable = true;
package = pkgs.alacritty;
settings = {
# some programs can't handle alacritty
env.TERM = "xterm-256color";
window = {
# using a window manager, no decorations needed
decorations = "none";
# semi-transparent
opacity = 0.90;
# padding between the content of the terminal and the edge
padding = {
x = 10;
y = 10;
};
dimensions = {
columns = 80;
lines = 40;
};
};
scrolling = {
history = 1000;
multiplier = 3;
};
font =
let
baseFont = {
family = "JetBrains Mono Nerd Font";
style = "Regular";
};
in
{
size = 12;
normal = baseFont;
bold = baseFont // {
style = "Bold";
};
italic = baseFont // {
style = "Italic";
};
offset.y = 0;
glyph_offset.y = 0;
};
# color scheme
colors =
let
normal = {
black = "0x1b1e28";
red = "0xd0679d";
green = "0x5de4c7";
yellow = "0xfffac2";
blue = "#435c89";
magenta = "0xfcc5e9";
cyan = "0xadd7ff";
white = "0xffffff";
};
bright = {
black = "0xa6accd";
red = normal.red;
green = normal.green;
yellow = normal.yellow;
blue = normal.cyan;
magenta = "0xfae4fc";
cyan = "0x89ddff";
white = normal.white;
};
in
{
inherit normal bright;
primary = {
background = "0x131621";
foreground = bright.black;
};
cursor = {
text = "CellBackground";
cursor = "CellForeground";
};
search =
let
foreground = normal.black;
background = normal.cyan;
in
{
matches = {
inherit foreground background;
};
focused_match = {
inherit foreground background;
};
};
selection = {
text = "CellForeground";
background = "0x303340";
};
vi_mode_cursor = {
text = "CellBackground";
cursor = "CellForeground";
};
};
cursor = {
style = "Underline";
vi_mode_style = "Underline";
};
};
};
}


@@ -50,7 +50,7 @@
        (vc-gutter +pretty)  ; vcs diff in the fringe
        vi-tilde-fringe      ; fringe tildes to mark beyond EOB
        ;;window-select      ; visually switch windows
-       workspaces           ; tab emulation, persistence & separate workspaces
+       ;; workspaces        ; tab emulation, persistence & separate workspaces
        ;;zen                ; distraction-free coding or writing

       :editor


@@ -1,7 +1,12 @@
+# Shared fish configuration — imported from home/profiles/terminal.nix, so it
+# runs on every host (mreow, yarn, muffin, and any machine using the portable
+# homeConfigurations output).
+#
+# Desktop/dev-specific aliases (cargo, gcc, wl-clipboard) are added from the
+# profile that owns their dependencies, not here.
 { pkgs, lib, ... }:
 let
   eza = "${lib.getExe pkgs.eza} --color=always --group-directories-first";
-  cargo = "${lib.getExe pkgs.cargo}";
   coreutils = "${pkgs.coreutils}/bin";
 in
 {
@@ -20,10 +25,6 @@ in
     '';
     shellAliases = {
-      c = cargo;
-      cr = "${cargo} run";
-      cb = "${cargo} build";
       # from DistroTube's dot files: Changing "ls" to "eza"
       ls = "${eza} -al";
       la = "${eza} -a";
@@ -38,12 +39,6 @@ in
        ${coreutils}/sort --numeric-sort --key=2 |
        ${coreutils}/cut -c 1-12,41- |
        ${coreutils}/numfmt --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest'';
-      # aliases for (I think) macos commands
-      pbcopy = "${pkgs.wl-clipboard}/bin/wl-copy";
-      pbpaste = "${pkgs.wl-clipboard}/bin/wl-paste";
-      gcc-native = "${lib.getExe pkgs.gcc} -Q --help=target -mtune=native -march=native | ${lib.getExe pkgs.gnugrep} -E '^\\s+\-(mtune|march)=' | ${coreutils}/tr -d '[:blank:]'";
     };
     shellInit = ''


@@ -1,12 +1,76 @@
-{ pkgs, ... }:
+{ ... }:
 {
   # https://mynixos.com/home-manager/option/programs.ghostty
   programs.ghostty = {
     enable = true;
     enableFishIntegration = true;
+    # custom palette ported verbatim from the previous alacritty config
+    # (poimandres-ish). lives in ~/.config/ghostty/themes/poimandres and is
+    # selected by `theme = "poimandres"` below.
+    themes.poimandres = {
+      palette = [
+        "0=#1b1e28"
+        "1=#d0679d"
+        "2=#5de4c7"
+        "3=#fffac2"
+        "4=#435c89"
+        "5=#fcc5e9"
+        "6=#add7ff"
+        "7=#ffffff"
+        "8=#a6accd"
+        "9=#d0679d"
+        "10=#5de4c7"
+        "11=#fffac2"
+        "12=#add7ff"
+        "13=#fae4fc"
+        "14=#89ddff"
+        "15=#ffffff"
+      ];
+      background = "131621";
+      foreground = "a6accd";
+      cursor-color = "a6accd";
+      cursor-text = "131621";
+      selection-background = "303340";
+      selection-foreground = "a6accd";
+    };
     settings = {
-      theme = "Adventure";
-      background-opacity = 0.7;
+      theme = "poimandres";
+      # font
+      font-family = "JetBrainsMono Nerd Font";
+      font-size = 12;
+      # window
+      window-decoration = false;
+      window-padding-x = 10;
+      window-padding-y = 10;
+      window-width = 80;
+      window-height = 40;
+      # semi-transparent background
+      background-opacity = 0.90;
+      # cursor
+      cursor-style = "underline";
+      # always open new windows at $HOME instead of inheriting whatever cwd the
+      # currently-focused ghostty window has. with gtk-single-instance, the
+      # focused-window inherit rule otherwise sticks the daemon's first cwd to
+      # every subsequent niri Mod+T launch.
+      window-inherit-working-directory = false;
+      working-directory = "home";
+      # ssh into hosts that lack ghostty's terminfo: ssh-terminfo auto-installs
+      # it remotely on first connect (and caches), ssh-env is the fallback that
+      # downgrades TERM to xterm-256color when the install can't run.
+      shell-integration-features = "ssh-env,ssh-terminfo";
+      # keep one daemon alive so subsequent launches (e.g. niri Mod+T) are
+      # instant instead of paying GTK + wgpu init each time. relies on the
+      # dbus-activated systemd user service that the HM module wires up.
+      gtk-single-instance = true;
     };
   };


@@ -17,7 +17,8 @@
     bar = {
       position = "top";
       floating = true;
-      backgroundOpacity = 0.93;
+      backgroundOpacity = 0.0;
+      useSeparateOpacity = true;
     };
     general = {
       animationSpeed = 1.5;
@@ -32,6 +33,7 @@
     };
     wallpaper = {
       enabled = true;
+      skipStartupTransition = true;
     };
   };
 };


@@ -34,22 +34,62 @@ let
       };
     };
   };
+  # Pull Google's official agent-skills (github:android/skills, Apache 2.0).
+  # The upstream tree nests skills as <category>/<name>/SKILL.md (build/agp/…,
+  # jetpack-compose/migration/…, performance/r8-analyzer, etc.). omp expects a
+  # flat layout, so we walk the tree, find every SKILL.md, and mount each
+  # parent directory at ~/.omp/agent/skills/<basename>/. Every leaf basename
+  # in upstream is unique, so flattening is lossless. New skills upstream show
+  # up automatically on `nix flake update --input-name android-skills`.
+  findSkillDirs =
+    path:
+    let
+      entries = builtins.readDir path;
+      hasSkillMd = builtins.pathExists (path + "/SKILL.md");
+      subdirs = lib.filterAttrs (_: t: t == "directory") entries;
+      recurse = lib.concatLists (lib.mapAttrsToList (n: _: findSkillDirs (path + "/${n}")) subdirs);
+    in
+    if hasSkillMd then [ path ] else recurse;
+  androidSkillFiles = lib.listToAttrs (
+    map (
+      dir:
+      lib.nameValuePair ".omp/agent/skills/${builtins.unsafeDiscardStringContext (baseNameOf dir)}" {
+        source = dir;
+      }
+    ) (findSkillDirs inputs.android-skills)
+  );
+  # Browser path for the playwright skill body.
+  playwrightChromium =
+    let
+      browsers = pkgs.playwright-driver.browsers;
+      chromiumDir = builtins.head (
+        builtins.filter (n: builtins.match "chromium-[0-9]+" n != null) (
+          builtins.attrNames browsers.passthru.entries
+        )
+      );
+    in
+    {
+      browsers = "${browsers}";
+      chrome = "${browsers}/${chromiumDir}/chrome-linux64/chrome";
+    };
 in
 {
   home.packages = [
-    (inputs.llm-agents.packages.${pkgs.stdenv.hostPlatform.system}.omp.overrideAttrs (old: {
-      patches = (old.patches or [ ]) ++ [ ];
-    }))
+    inputs.llm-agents.packages.${pkgs.stdenv.hostPlatform.system}.omp
   ];
+  home.file = androidSkillFiles // {
     # main settings: ~/.omp/agent/config.yml (JSON is valid YAML)
-  home.file.".omp/agent/config.yml".text = builtins.toJSON ompSettings;
+    ".omp/agent/config.yml".text = builtins.toJSON ompSettings;
     # model/provider config: ~/.omp/agent/models.yml
-  home.file.".omp/agent/models.yml".text = builtins.toJSON ompModels;
+    ".omp/agent/models.yml".text = builtins.toJSON ompModels;
     # global instructions loaded at startup
-  home.file.".omp/agent/AGENTS.md".text = ''
+    ".omp/agent/AGENTS.md".text = ''
       You are an intelligent and observant agent.
      If instructed to commit, disable gpg signing.
      You are on nixOS, if you don't have access to a tool, you can access it via the `nix-shell` command.
@@ -69,9 +109,12 @@ in
      ## Nix
      For using `nix build` append `-L` to get better visibility into the logs.
      If you get an error that a file can't be found, always try to `git add` the file before trying other troubleshooting steps.
+      ## Implementation
+      When sketching out an implementation of something, always look for tools that already exist in the space first before implementing something custom. This is also the case when it comes to submodules and sections of code, I don't want you to implement things in-house when it isn't needed.
    '';
-  home.file.".omp/agent/skills/android-ui/SKILL.md".text = ''
+    ".omp/agent/skills/android-ui/SKILL.md".text = ''
      ---
      name: android-ui
      description: Android UI automation via ADB. Use for any Android device interaction, UI testing, screenshot analysis, element coordinate lookup, and gesture automation.
@@ -140,17 +183,7 @@ in
    # omp has a built-in browser tool with NixOS auto-detection,
    # but this skill provides playwright MCP as a supplementary option
-  home.file.".omp/agent/skills/playwright/SKILL.md".text =
-    let
-      browsers = pkgs.playwright-driver.browsers;
-      chromiumDir = builtins.head (
-        builtins.filter (n: builtins.match "chromium-[0-9]+" n != null) (
-          builtins.attrNames browsers.passthru.entries
-        )
-      );
-      chromiumPath = "${browsers}/${chromiumDir}/chrome-linux64/chrome";
-    in
-    ''
+    ".omp/agent/skills/playwright/SKILL.md".text = ''
      ---
      name: playwright
      description: Browser automation via Playwright MCP. Use as an alternative to the built-in browser tool for Playwright-specific workflows, testing, and web scraping. Chromium is provided by NixOS.
@@ -161,19 +194,20 @@ in
      ## Browser Setup
      Chromium is provided by NixOS. Do NOT attempt to download browsers.
-      - Chromium path: `${chromiumPath}`
-      - Browsers path: `${browsers}`
+      - Chromium path: `${playwrightChromium.chrome}`
+      - Browsers path: `${playwrightChromium.browsers}`
      ## Usage
      Launch the Playwright MCP server for browser automation:
      ```bash
-      npx @playwright/mcp@latest --executable-path "${chromiumPath}" --user-data-dir "${config.home.homeDirectory}/.cache/playwright-mcp"
+      npx @playwright/mcp@latest --executable-path "${playwrightChromium.chrome}" --user-data-dir "${config.home.homeDirectory}/.cache/playwright-mcp"
      ```
      Set these environment variables if not already set:
      ```bash
-      export PLAYWRIGHT_BROWSERS_PATH="${browsers}"
+      export PLAYWRIGHT_BROWSERS_PATH="${playwrightChromium.browsers}"
      export PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD=1
      ```
    '';
+  };
 }
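The `findSkillDirs` walk added in this file (a directory containing `SKILL.md` is a leaf; otherwise recurse into subdirectories, then flatten by basename) translates directly out of Nix. A small Python equivalent, useful for checking the flattening assumption against a checkout of the skills repo — `find_skill_dirs` is my name for the sketch, not part of the repo:

```python
import os

def find_skill_dirs(root):
    """Return every directory under root that contains SKILL.md.
    A directory with SKILL.md is treated as a leaf (no further descent),
    mirroring the Nix findSkillDirs above."""
    if os.path.exists(os.path.join(root, "SKILL.md")):
        return [root]
    found = []
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            found.extend(find_skill_dirs(path))
    return found

def flatten_by_basename(dirs):
    """The lossless-flattening claim: map each skill dir to its basename.
    Raises if two leaves share a basename (which would make flattening lossy)."""
    mapping = {}
    for d in dirs:
        name = os.path.basename(d)
        if name in mapping:
            raise ValueError(f"basename collision: {name}")
        mapping[name] = d
    return mapping
```

Running `flatten_by_basename(find_skill_dirs(checkout))` against the upstream tree would surface any future basename collision before it silently shadowed a skill.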


@@ -0,0 +1,29 @@
# Declarative non-Steam game shortcuts for the Steam library.
# Add entries to the `shortcuts` list to have them appear in Steam's UI.
{
pkgs,
inputs,
lib,
...
}:
{
imports = [
inputs.json2steamshortcut.homeModules.default
];
services.steam-shortcuts = {
enable = true;
overwriteExisting = true;
steamUserId = lib.strings.toInt (
lib.strings.trim (builtins.readFile ../../secrets/home/steam-user-id)
);
shortcuts = [
{
AppName = "Prism Launcher";
Exe = "${pkgs.prismlauncher}/bin/prismlauncher";
Icon = "${pkgs.prismlauncher}/share/icons/hicolor/scalable/apps/org.prismlauncher.PrismLauncher.svg";
Tags = [ "Game" ];
}
];
};
}


@@ -68,19 +68,19 @@ in
       "element.envs.net"
       "mail.proton.me"
       "mail.google.com"
-      "www.gardling.com"
+      "www.sigkill.computer"
       "projects.fivethirtyeight.com"
       "secure.bankofamerica.com"
       "billpay-ui.bankofamerica.com"
       "plus.pearson.com"
-      "immich.gardling.com"
+      "immich.sigkill.computer"
       "huggingface.co"
       "session.masteringphysics.com"
       "brainly.com"
       "www.270towin.com"
       "phet.colorado.edu"
       "8042-1.portal.athenahealth.com"
-      "torrent.gardling.com"
+      "torrent.sigkill.computer"
       "nssb-p.adm.fit.edu"
       "mail.openbenchmarking.org"
       "moneroocean.stream"
@@ -89,11 +89,11 @@ in
       "chat.deepseek.com"
       "n21.ultipro.com"
       "www.egaroucid.nyanyan.dev"
-      "bitmagnet.gardling.com"
+      "bitmagnet.sigkill.computer"
       "frame.work"
       "www.altcancer.net"
       "jenkins.jpenilla.xyz"
-      "soulseek.gardling.com"
+      "soulseek.sigkill.computer"
       "discord.com"
       "www.lufthansa.com"
       "surveys.hyundaicx.com"


@@ -5,6 +5,7 @@
   hostname,
   username,
   eth_interface,
+  site_config,
   service_configs,
   options,
   ...
@@ -18,13 +19,14 @@
     ../../modules/zfs.nix
     ../../modules/server-impermanence.nix
     ../../modules/usb-secrets.nix
-    ../../modules/age-secrets.nix
+    ../../modules/server-age-secrets.nix
     ../../modules/server-lanzaboote-agenix.nix
     ../../modules/no-rgb.nix
     ../../modules/server-security.nix
     ../../modules/ntfy-alerts.nix
     ../../modules/server-power.nix
     ../../modules/server-deploy-guard.nix
+    ../../modules/server-deploy-finalize.nix
     ../../services/postgresql.nix
     ../../services/jellyfin
@@ -79,19 +81,32 @@
   ];
   # Hosts entries for CI/CD deploy targets
-  networking.hosts."192.168.1.50" = [ "server-public" ];
-  networking.hosts."192.168.1.223" = [ "desktop" ];
-  # SSH known_hosts for CI runner (pinned host keys)
-  environment.etc."ci-known-hosts".text = ''
-    server-public ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMjgaMnE+zS7tL+m5E7gh9Q9U1zurLdmU0qcmEmaucu
-    192.168.1.50 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMjgaMnE+zS7tL+m5E7gh9Q9U1zurLdmU0qcmEmaucu
-    git.sigkill.computer ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMjgaMnE+zS7tL+m5E7gh9Q9U1zurLdmU0qcmEmaucu
-    git.gardling.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMjgaMnE+zS7tL+m5E7gh9Q9U1zurLdmU0qcmEmaucu
-  '';
+  networking.hosts.${site_config.hosts.muffin.ip} = [ site_config.hosts.muffin.alias ];
+  networking.hosts.${site_config.hosts.yarn.ip} = [ site_config.hosts.yarn.alias ];
+  # SSH known_hosts for CI runner (pinned host keys). All four names resolve to
+  # the same muffin host and therefore serve the same host key.
+  environment.etc."ci-known-hosts".text =
+    let
+      key = site_config.hosts.muffin.ssh_host_key;
+      names = [
+        site_config.hosts.muffin.alias
+        site_config.hosts.muffin.ip
+        "git.${site_config.domain}"
+        "git.${site_config.old_domain}"
+      ];
+    in
+    lib.concatMapStrings (n: "${n} ${key}\n") names;
   services.deployGuard.enable = true;
+  # Detached deploy finalize: see modules/server-deploy-finalize.nix. deploy-rs
+  # activates in `boot` mode and invokes deploy-finalize to schedule the real
+  # `switch` (or reboot, when kernel/initrd/kernel-modules changed) 60s later
+  # as a pid1-owned transient unit. Prevents the self-hosted gitea runner from
+  # being restarted mid-CI-deploy.
+  services.deployFinalize.enable = true;
   # Disable serial getty on ttyS0 to prevent dmesg warnings
   systemd.services."serial-getty@ttyS0".enable = false;
@@ -149,9 +164,6 @@
   };
   };
-  # Set your time zone.
-  time.timeZone = "America/New_York";
   hardware.graphics = {
     enable = true;
     extraPackages = with pkgs; [
@@ -161,35 +173,21 @@
     ];
   };
+  # Root-facing admin tools only. User-facing CLI (fish, helix, htop, bottom,
+  # tmux, ripgrep, lsof, wget, pfetch-rs, …) is provided via home-manager in
+  # home/profiles/terminal.nix — shared with mreow and yarn.
   environment.systemPackages = with pkgs; [
-    helix
     lm_sensors
-    bottom
-    htop
-    neofetch
     borgbackup
     smartmontools
-    ripgrep
     intel-gpu-tools
     iotop
     iftop
-    tmux
-    wget
     powertop
-    lsof
     reflac
-    pfetch-rs
     sbctl
     # add `skdump`
@@ -197,10 +195,7 @@
   ];
   networking = {
-    nameservers = [
-      "1.1.1.1"
-      "9.9.9.9"
-    ];
+    nameservers = site_config.dns_servers;
     hostName = hostname;
     hostId = "0f712d56";
@@ -214,8 +209,7 @@
     interfaces.${eth_interface} = {
       ipv4.addresses = [
         {
-          address = "192.168.1.50";
-          # address = "10.1.1.102";
+          address = site_config.hosts.muffin.ip;
           prefixLength = 24;
         }
       ];
@@ -227,8 +221,7 @@
     ];
     };
     defaultGateway = {
-      #address = "10.1.1.1";
-      address = "192.168.1.1";
+      address = site_config.lan.gateway;
       interface = eth_interface;
     };
     # TODO! fix this
@@ -240,14 +233,6 @@
   users.groups.${service_configs.media_group} = { };
-  users.users.gitea-runner = {
-    isSystemUser = true;
-    group = "gitea-runner";
-    home = "/var/lib/gitea-runner";
-    description = "Gitea Actions CI runner";
-  };
-  users.groups.gitea-runner = { };
   users.users.${username} = {
     isNormalUser = true;
     extraGroups = [
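The rewritten `ci-known-hosts` entry expands four names over one pinned host key via `lib.concatMapStrings (n: "${n} ${key}\n") names`. A quick Python model of that expansion (the key string here is a placeholder, not the real pinned key):

```python
# Model of the Nix expression: lib.concatMapStrings (n: "${n} ${key}\n") names
# The host key below is a placeholder, not the real pinned key.
def known_hosts(names: list[str], key: str) -> str:
    """One 'name key' known_hosts line per alias, all pinned to the same key."""
    return "".join(f"{name} {key}\n" for name in names)

# The four names from the config: alias, IP, new-domain git, old-domain git.
names = ["server-public", "192.168.1.50", "git.sigkill.computer", "git.gardling.com"]
text = known_hosts(names, "ssh-ed25519 AAAA...placeholder")
```

Same output shape as the old hand-written heredoc, but adding or renaming a host alias in `site_config` now updates every line at once.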


@@ -1,31 +1,12 @@
-{
-  pkgs,
-  lib,
-  ...
-}:
+{ ... }:
 {
+  imports = [
+    ../../home/profiles/terminal.nix
+  ];
   home.stateVersion = "24.11";
-  programs.fish = {
-    enable = true;
-    interactiveShellInit = ''
-      # disable greeting
-      set fish_greeting
-      # pfetch on shell start (disable pkgs because of execution time)
-      PF_INFO="ascii title os host kernel uptime memory editor wm" ${lib.getExe pkgs.pfetch-rs}
-    '';
-    shellAliases =
-      let
-        eza = "${lib.getExe pkgs.eza} --color=always --group-directories-first";
-      in
-      {
-        # from DistroTube's dot files: Changing "ls" to "eza"
-        ls = "${eza} -al";
-        la = "${eza} -a";
-        ll = "${eza} -l";
-        lt = "${eza} -aT";
-      };
-  };
+  # Muffin typically doesn't have the GPG key loaded (no agent forwarded,
+  # no key in the keyring). Unsigned commits here rather than failing silently.
+  programs.git.signing.signByDefault = false;
 }


@@ -1,3 +1,4 @@
+{ site_config }:
 rec {
   zpool_ssds = "tank";
   zpool_hdds = "hdds";
@@ -195,6 +196,10 @@ rec {
     port = 9563;
     proto = "tcp";
   };
+  minecraft_exporter = {
+    port = 9567;
+    proto = "tcp";
+  };
   prometheus_zfs = {
     port = 9134;
     proto = "tcp";
@@ -206,15 +211,9 @@ rec {
   };
   };
-  https = {
-    certs = services_dir + "/http_certs";
-    domain = "sigkill.computer";
-    old_domain = "gardling.com"; # Redirect traffic from old domain
-  };
   gitea = {
     dir = services_dir + "/gitea";
-    domain = "git.${https.domain}";
+    domain = "git.${site_config.domain}";
   };
   postgres = {
@@ -278,19 +277,19 @@ rec {
   matrix = {
     dataDir = "/var/lib/continuwuity";
-    domain = "matrix.${https.domain}";
+    domain = "matrix.${site_config.domain}";
   };
   ntfy = {
-    domain = "ntfy.${https.domain}";
+    domain = "ntfy.${site_config.domain}";
   };
   mollysocket = {
-    domain = "mollysocket.${https.domain}";
+    domain = "mollysocket.${site_config.domain}";
   };
   livekit = {
-    domain = "livekit.${https.domain}";
+    domain = "livekit.${site_config.domain}";
   };
   syncthing = {
@@ -324,12 +323,12 @@ rec {
   };
   firefox_syncserver = {
-    domain = "firefox-sync.${https.domain}";
+    domain = "firefox-sync.${site_config.domain}";
   };
   grafana = {
     dir = services_dir + "/grafana";
-    domain = "grafana.${https.domain}";
+    domain = "grafana.${site_config.domain}";
   };
   trilium = {


@@ -0,0 +1,44 @@
{
username,
inputs,
site_config,
...
}:
{
imports = [
../../modules/desktop-common.nix
../../modules/desktop-jovian.nix
./disk.nix
./impermanence.nix
inputs.impermanence.nixosModules.impermanence
];
networking.hostId = "a1b2c3d4";
# SSH for remote management from laptop
services.openssh = {
enable = true;
ports = [ 22 ];
settings = {
PasswordAuthentication = false;
PermitRootLogin = "yes";
};
};
users.users.${username}.openssh.authorizedKeys.keys = [
site_config.ssh_keys.laptop
];
users.users.root.openssh.authorizedKeys.keys = [
site_config.ssh_keys.laptop
];
jovian.devices.steamdeck.enable = true;
# opt back into the SteamOS kernel cmdline (amd_iommu=off, lockup_timeout,
# sched_hw_submission=4, dcdebugmask=0x20000, ttm.pages_min, audit=0).
# desktop-jovian.nix defaults this to false to keep the Deck-tuned amdgpu
# params off RDNA3 desktops (yarn); patiodeck IS a Deck so it wants them.
jovian.steamos.enableDefaultCmdlineConfig = true;
}

hosts/patiodeck/disk.nix (new file)

@@ -0,0 +1,52 @@
{
disko.devices = {
disk = {
main = {
type = "disk";
content = {
type = "gpt";
partitions = {
ESP = {
type = "EF00";
size = "500M";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
};
};
nix = {
size = "200G";
content = {
type = "filesystem";
format = "f2fs";
mountpoint = "/nix";
};
};
persistent = {
size = "100%";
content = {
type = "filesystem";
format = "f2fs";
mountpoint = "/persistent";
};
};
};
};
};
};
nodev = {
"/" = {
fsType = "tmpfs";
mountOptions = [
"defaults"
"size=2G"
"mode=755"
];
};
};
};
fileSystems."/persistent".neededForBoot = true;
fileSystems."/nix".neededForBoot = true;
}

hosts/patiodeck/home.nix (new file)

@@ -0,0 +1,8 @@
{ ... }:
{
imports = [
../../home/profiles/gui.nix
../../home/profiles/desktop.nix
../../home/progs/steam-shortcuts.nix
];
}


@@ -0,0 +1,48 @@
{
username,
...
}:
{
environment.persistence."/persistent" = {
hideMounts = true;
directories = [
"/var/log"
"/var/lib/systemd/coredump"
"/var/lib/nixos"
"/var/lib/systemd/timers"
# agenix identity sealed by the TPM
{
directory = "/var/lib/agenix";
mode = "0700";
user = "root";
group = "root";
}
];
files = [
"/etc/ssh/ssh_host_ed25519_key"
"/etc/ssh/ssh_host_ed25519_key.pub"
"/etc/ssh/ssh_host_rsa_key"
"/etc/ssh/ssh_host_rsa_key.pub"
"/etc/machine-id"
];
users.root = {
files = [
".local/share/fish/fish_history"
];
};
};
# bind mount home directory from persistent storage
fileSystems."/home/${username}" = {
device = "/persistent/home/${username}";
fsType = "none";
options = [ "bind" ];
neededForBoot = true;
};
systemd.tmpfiles.rules = [
"d /etc 755 root"
];
}


@@ -1,21 +1,23 @@
{ {
config,
pkgs, pkgs,
lib, lib,
username, username,
inputs, inputs,
site_config,
... ...
}: }:
{ {
imports = [ imports = [
../../modules/desktop-common.nix ../../modules/desktop-common.nix
../../modules/desktop-jovian.nix
../../modules/no-rgb.nix ../../modules/no-rgb.nix
./disk.nix ./disk.nix
./impermanence.nix ./impermanence.nix
./lact.nix
./vr.nix ./vr.nix
./forza-trigger
inputs.impermanence.nixosModules.impermanence inputs.impermanence.nixosModules.impermanence
inputs.jovian-nixos.nixosModules.default
]; ];
fileSystems."/media/games" = { fileSystems."/media/games" = {
@@ -43,8 +45,8 @@
}; };
ipv4 = { ipv4 = {
method = "manual"; method = "manual";
address1 = "192.168.1.223/24,192.168.1.1"; address1 = "${site_config.hosts.yarn.ip}/24,${site_config.lan.gateway}";
dns = "1.1.1.1;9.9.9.9;"; dns = lib.concatMapStrings (n: "${n};") site_config.dns_servers;
}; };
ipv6.method = "disabled"; ipv6.method = "disabled";
}; };
@@ -59,12 +61,12 @@
}; };
users.users.${username}.openssh.authorizedKeys.keys = [ users.users.${username}.openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO4jL6gYOunUlUtPvGdML0cpbKSsPNqQ1jit4E7U1RyH" # laptop site_config.ssh_keys.laptop
]; ];
users.users.root.openssh.authorizedKeys.keys = [ users.users.root.openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO4jL6gYOunUlUtPvGdML0cpbKSsPNqQ1jit4E7U1RyH" # laptop site_config.ssh_keys.laptop
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC5ZYN6idL/w/mUIfPOH1i+Q/SQXuzAMQUEuWpipx1Pc ci-deploy@muffin" site_config.ssh_keys.ci_deploy
]; ];
programs.steam = { programs.steam = {
@@ -72,205 +74,106 @@
localNetworkGameTransfers.openFirewall = true; localNetworkGameTransfers.openFirewall = true;
}; };
# LACT (Linux AMDGPU Configuration Tool): https://github.com/ilya-zlobintsev/LACT environment.systemPackages = [ pkgs.jovian-stubs ];
environment.systemPackages = with pkgs; [
lact
jovian-stubs
];
systemd.packages = with pkgs; [ lact ];
systemd.services.lactd.wantedBy = [ "multi-user.target" ];
systemd.services.lactd.serviceConfig.ExecStartPre = "${lib.getExe pkgs.bash} -c \"sleep 3s\""; # yarn is not a Steam Deck
jovian.devices.steamdeck.enable = false;
# root-level service that applies a pending update. Triggered by # PS5 DualSense adaptive triggers in Forza Horizon 4 / 5.
# steamos-update (via systemctl start) when the user accepts an update. services.forzaTrigger.enable = true;
# Runs as root so it can write the system profile and boot entry.
systemd.services.pull-update-apply = {
description = "Apply pending NixOS update pulled from binary cache";
serviceConfig = {
Type = "oneshot";
ExecStart = pkgs.writeShellScript "pull-update-apply" ''
set -uo pipefail
export PATH=${
pkgs.lib.makeBinPath [
pkgs.curl
pkgs.coreutils
pkgs.nix
]
}
STORE_PATH=$(curl -sf --max-time 30 "https://nix-cache.sigkill.computer/deploy/yarn" || true)
if [ -z "$STORE_PATH" ]; then
echo "server unreachable"
exit 1
fi
echo "applying $STORE_PATH"
nix-store -r "$STORE_PATH" || { echo "fetch failed"; exit 1; }
nix-env -p /nix/var/nix/profiles/system --set "$STORE_PATH" || { echo "profile set failed"; exit 1; }
"$STORE_PATH/bin/switch-to-configuration" boot || { echo "boot entry failed"; exit 1; }
echo "update applied; reboot required"
'';
};
};
# Allow primary user to start pull-update-apply.service without a password # Steam per-app declarative config via steam-config-nix:
security.polkit.extraConfig = '' # - launch-option env vars (PROTON_FSR4_UPGRADE et al)
polkit.addRule(function(action, subject) { # - file drops into the install dir (FH5 intro stubs, OptiScaler DLLs +
if (action.id == "org.freedesktop.systemd1.manage-units" && # hand-written FH5/RDNA3 INI). Backups land next to replaced files with
action.lookup("unit") == "pull-update-apply.service" && # a `.steam-config-nix-backup` suffix on first apply.
subject.user == "${username}") { #
return polkit.Result.YES; # The patcher runs as a system oneshot at activation; closeSteam = true
} # ensures Steam is shut down before the localconfig.vdf write so Steam
}); # can't clobber it on next exit. File drops happen in the same run but
''; # don't share that concern \u2014 they only touch steamapps/common/<dir>
# and steamapps/compatdata/<id>/pfx.
nixpkgs.config.allowUnfreePredicate = #
pkg: # OptiScaler intercepts FH5's DLSS/XeSS calls and reroutes them through
builtins.elem (lib.getName pkg) [ # the bundled FFX SDK. Override values + sources for the FH5/RDNA3 path
"steamdeck-hw-theme" # live in ./optiscaler-fh5-rdna3.ini; keys not listed there fall through
"steam-jupiter-unwrapped" # to OptiScaler's "auto" defaults.
"steam" #
"steam-original" # Required one-time per-game setup the user has to do in Steam (no API):
"steam-unwrapped" # - Properties > Compatibility: pick the GE-Proton tool by hand. The
"steam-run" # `compatTool` option is intentionally unset \u2014 nixpkgs registers
]; # proton-ge-bin under its versioned id (e.g. GE-Proton10-34), and
# writing the generic "GE-Proton" string silently falls back to
# Override jovian-stubs to disable steamos-update kernel check # bundled Proton.
# This prevents Steam from requesting reboots for "system updates" # - In-game: switch the Upscaling option from FSR 2.2 to DLSS or XeSS
# Steam client updates will still work normally # (FSR 2 inputs aren't intercepted). Press Insert to open the Opti
nixpkgs.overlays = [ # overlay and set the FFX upscaler to FSR 4.
( #
final: prev: # OptiScaler.ini is dropped with mode = "init" so in-game overlay edits
# persist; the hand-written template is only written on first apply (or
# after manual deletion). To push a new default into an existing install:
# `rm OptiScaler.ini` in the FH5 dir, then trigger a redeploy. The DLLs
# and other static assets stay mode = "create" so they're updated on
# every OptiScaler version bump.
#
# OptiScaler's installation page warns against use with online games.
# FH5 has no kernel-mode anti-cheat but Playground does server-side
# telemetry. Use at your own risk.
programs.steam.config =
let let
deploy-url = "https://nix-cache.sigkill.computer/deploy/yarn"; fromOpti = relpath: {
source = "${pkgs.optiscaler}/${relpath}";
steamos-update-script = final.writeShellScript "steamos-update" '' mode = "create";
export PATH=${ };
final.lib.makeBinPath [
final.curl
final.coreutils
final.systemd
]
}
STORE_PATH=$(curl -sf --max-time 30 "${deploy-url}" || true)
if [ -z "$STORE_PATH" ]; then
>&2 echo "[steamos-update] server unreachable"
exit 7
fi
CURRENT=$(readlink -f /nix/var/nix/profiles/system)
if [ "$CURRENT" = "$STORE_PATH" ]; then
>&2 echo "[steamos-update] no update available"
exit 0
fi
# check-only mode: just report that an update exists
if [ "''${1:-}" = "check" ] || [ "''${1:-}" = "--check-only" ]; then
>&2 echo "[steamos-update] update available"
exit 0
fi
# apply: trigger the root-running systemd service to install the update
>&2 echo "[steamos-update] applying update..."
if systemctl start --wait pull-update-apply.service; then
>&2 echo "[steamos-update] update installed, reboot to apply"
exit 0
else
>&2 echo "[steamos-update] apply failed; see 'journalctl -u pull-update-apply'"
exit 1
fi
'';
in in
{ {
jovian-stubs = prev.stdenv.mkDerivation {
name = "jovian-stubs";
dontUnpack = true;
installPhase = ''
mkdir -p $out/bin
ln -s ${steamos-update-script} $out/bin/steamos-update
# ln -s ${steamos-update-script} $out/bin/steamos-mandatory-update
# jupiter-initial-firmware-update: no-op (not a real steam deck)
cat > $out/bin/jupiter-initial-firmware-update << 'STUB'
#!/bin/sh
exit 0
STUB
# jupiter-biosupdate: no-op (not a real steam deck)
cat > $out/bin/jupiter-biosupdate << 'STUB'
#!/bin/sh
exit 0
STUB
# steamos-reboot: reboot the system
cat > $out/bin/steamos-reboot << 'STUB'
#!/bin/sh
>&2 echo "[JOVIAN] $0: stub called with: $*"
systemctl reboot
STUB
# steamos-select-branch: no-op stub
cat > $out/bin/steamos-select-branch << 'STUB'
#!/bin/sh
>&2 echo "[JOVIAN] $0: stub called with: $*"
exit 0
STUB
# steamos-factory-reset-config: no-op stub
cat > $out/bin/steamos-factory-reset-config << 'STUB'
#!/bin/sh
>&2 echo "[JOVIAN] $0: stub called with: $*"
exit 0
STUB
# steamos-firmware-update: no-op stub
cat > $out/bin/steamos-firmware-update << 'STUB'
#!/bin/sh
>&2 echo "[JOVIAN] $0: stub called with: $*"
exit 0
STUB
# pkexec: pass through to real pkexec
cat > $out/bin/pkexec << 'STUB'
#!/bin/sh
exec /run/wrappers/bin/pkexec "$@"
STUB
# sudo: strip flags and run the command directly (no escalation).
# privileged ops are delegated to root systemd services via systemctl.
cat > $out/bin/sudo << 'STUB'
#!/bin/sh
while [ $# -gt 0 ]; do
case "$1" in
-*) shift ;;
*) break ;;
esac
done
exec "$@"
STUB
find $out/bin -type f -exec chmod 755 {} +
'';
};
}
)
];
jovian = {
devices.steamdeck.enable = false;
steam = {
enable = true; enable = true;
autoStart = true; closeSteam = true;
desktopSession = "niri"; apps."fh5" = {
user = username; id = 1551360;
launchOptions.env = {
# OptiScaler FSR 4 INT8 path on this RDNA 3 (Navi 32) box.
# PROTON_FSR4_UPGRADE opts FH5 into Proton's FSR 4 DLL upgrade;
# DXIL_SPIRV_CONFIG fixes the broken visuals the wmma RDNA3
# emulation path otherwise produces. Source: OptiScaler FSR4 wiki
# Linux Setup.
PROTON_FSR4_UPGRADE = "1";
DXIL_SPIRV_CONFIG = "wmma_rdna3_workaround";
# vkd3d-proton stutter/crash workaround on this box; remove when a
# future Proton release fixes the upload-hvv path upstream.
VKD3D_CONFIG = "no_upload_hvv";
};
files = {
# FH5 cold-start splash. Two copies live in the install (SD +
# hires); the engine picks one based on the installed asset
# profile, so stub both. PCGamingWiki documents both paths under
# "Skip intro video".
"media/UI/Videos/T10_MS_Combined.bk2".empty = true;
"media/UI/Videos/hires/T10_MS_Combined.bk2".empty = true;
# OptiScaler.dll is renamed to dxgi.dll so FH5's DLL search order
# picks it up as the dxgi shim per the OptiScaler FH5 wiki page.
"dxgi.dll" = fromOpti "OptiScaler.dll";
"OptiScaler.ini" = {
source = ./optiscaler-fh5-rdna3.ini;
mode = "init";
};
}
// lib.genAttrs [
"D3D12_Optiscaler/D3D12Core.dll"
"amd_fidelityfx_dx12.dll"
"amd_fidelityfx_framegeneration_dx12.dll"
"amd_fidelityfx_upscaler_dx12.dll"
"amd_fidelityfx_vk.dll"
"dlssg_to_fsr3_amd_is_better.dll"
"fakenvapi.dll"
"fakenvapi.ini"
"libxell.dll"
"libxess.dll"
"libxess_dx11.dll"
"libxess_fg.dll"
] fromOpti;
}; };
}; };
# Jovian-NixOS requires sddm
# https://github.com/Jovian-Experiments/Jovian-NixOS/commit/52f140c07493f8bb6cd0773c7e1afe3e1fd1d1fa
services.displayManager.sddm.wayland.enable = true;
# Disable gamescope from common.nix to avoid conflict with jovian-nixos
programs.gamescope.enable = lib.mkForce false;
} }
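The `mode = "init"` vs `mode = "create"` split described in the comments above can be modeled in a few lines of Python. This is a sketch of the described semantics, not steam-config-nix's actual implementation; the backup-suffix name comes from the comment, everything else is illustrative:

```python
from pathlib import Path
import tempfile

def drop_file(dest: Path, src_text: str, mode: str) -> None:
    """Sketch of the described file-drop semantics.

    - mode == "init":   write only if absent, so in-game overlay edits persist
    - mode == "create": always overwrite; back up the original on first apply
    """
    backup = dest.with_name(dest.name + ".steam-config-nix-backup")
    if dest.exists():
        if mode == "init":
            return  # keep the user's edited OptiScaler.ini
        if not backup.exists():
            backup.write_text(dest.read_text())  # first apply: preserve original
    dest.write_text(src_text)

fh5 = Path(tempfile.mkdtemp())       # stands in for steamapps/common/<dir>
dll = fh5 / "dxgi.dll"
ini = fh5 / "OptiScaler.ini"
dll.write_text("game-original")
drop_file(dll, "optiscaler-vN", "create")  # overwrites, backs up original
drop_file(ini, "template", "init")         # first apply writes the template
ini.write_text("user-edited")
drop_file(ini, "template", "init")         # redeploy leaves the edit alone
```

This is why `rm OptiScaler.ini` plus a redeploy is the documented way to push a new INI default: deleting the file resets the "init" state.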


@@ -0,0 +1,106 @@
{
config,
lib,
pkgs,
username,
...
}:
# Forza Horizon 4 / 5 → DualSense adaptive trigger bridge.
#
# Forza emits a fixed-format UDP telemetry stream ("Data Out") at 60 Hz on a
# user-configured port. We listen on that port, parse each packet via fdp
# (nettrom/forza_motorsport, MIT), and drive the PS5 DualSense's adaptive
# triggers via pydualsense (PyPI, MIT) which talks HID over hidraw.
#
# Setup on the user side, once enabled here:
# - plug the DualSense in over USB and disable Steam Input for the
# controller (Settings → Controller → "PlayStation Configuration Support":
# OFF). Bluetooth works too but the udev/hidraw path is more reliable
# over USB.
# - in Forza, HUD options → set Data Out: ON, Data Out IP: 127.0.0.1,
# Data Out IP Port: 5300, and (FM7 only) Data Out Packet Format: CAR DASH.
#
# System-interaction notes:
# - With multiple DualSense controllers connected, pydualsense picks one
# non-deterministically (`# TODO: implement multiple controllers working`
# in pydualsense's source). Forza Horizon is single-player so this is
# usually fine. If you need to pin a specific controller, the cleanest
# route is monkey-patching `pydualsense.__find_device`.
# - `pkgs.dualsensectl` is intentionally NOT installed by default
# (single-shot writes from it get overwritten by our BG thread within
# ~4 ms). Bring it in ad-hoc with `nix-shell -p dualsensectl` and stop
# this service first via `systemctl --user stop forza-trigger`.
# - Hot-plug recovery happens in-process: the daemon polls pydualsense's BG
# thread liveness and re-runs `pydualsense.init()` on disconnect. systemd's
# `Restart=on-failure` exists only as a crash-recovery safety net.
let
cfg = config.services.forzaTrigger;
pythonPackages = import ./python-packages.nix { inherit lib pkgs; };
inherit (pythonPackages) pydualsense fdp;
forzaTrigger = pkgs.writers.writePython3Bin "forza-trigger" {
libraries = [
pydualsense
fdp
];
# The wrapped binary doesn't need style enforcement — readability of
# the source file is what matters, and that lives in forza_trigger.py.
doCheck = false;
} (builtins.readFile ./forza_trigger.py);
in
{
options.services.forzaTrigger = {
enable = lib.mkEnableOption "Forza Horizon DualSense adaptive trigger bridge";
user = lib.mkOption {
type = lib.types.str;
default = username;
description = ''
User the trigger daemon runs as. Must be the user playing Forza so
the DualSense's hidraw uaccess ACL applies.
'';
};
port = lib.mkOption {
type = lib.types.port;
default = 5300;
description = ''
UDP port the daemon listens on for Forza Data Out packets. Must
match the value configured in Forza's HUD options.
'';
};
};
config = lib.mkIf cfg.enable {
# uaccess hands /dev/hidraw* of the connected PS5 DualSense to the
# active-seat user via ACL. Steam ships near-identical rules; declaring
# them here keeps the module self-contained (and works even if Steam
# isn't running).
services.udev.extraRules = ''
# PS5 DualSense (USB)
KERNEL=="hidraw*", ATTRS{idVendor}=="054c", ATTRS{idProduct}=="0ce6", MODE="0660", TAG+="uaccess"
# PS5 DualSense Edge (USB)
KERNEL=="hidraw*", ATTRS{idVendor}=="054c", ATTRS{idProduct}=="0df2", MODE="0660", TAG+="uaccess"
# PS5 DualSense (Bluetooth)
KERNEL=="hidraw*", KERNELS=="*054C:0CE6*", MODE="0660", TAG+="uaccess"
# PS5 DualSense Edge (Bluetooth)
KERNEL=="hidraw*", KERNELS=="*054C:0DF2*", MODE="0660", TAG+="uaccess"
'';
environment.systemPackages = [ forzaTrigger ];
# User-level service so it inherits the seat-bound uaccess ACL on
# /dev/hidraw* and dies cleanly when the user logs out.
systemd.user.services.forza-trigger = {
description = "Forza Horizon DualSense adaptive trigger bridge";
wantedBy = [ "default.target" ];
after = [ "graphical-session.target" ];
serviceConfig = {
ExecStart = "${forzaTrigger}/bin/forza-trigger --host 127.0.0.1 --port ${toString cfg.port}";
Restart = "on-failure";
RestartSec = 3;
};
};
};
}
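The in-process hot-plug recovery described in the module comments (poll BG-thread liveness, re-run `init()` with backoff) reduces to a small supervisor loop. In this sketch `connect`, `is_alive`, and `work` are injected stand-ins for `pydualsense.init()`, the report-thread liveness check, and one telemetry/trigger iteration; it is not the daemon's actual code:

```python
import time

def supervise(connect, is_alive, work, backoff_s=0.0, max_cycles=10):
    """Reconnecting main loop: (re)connect, do work while the transport
    thread lives, drop the connection and retry when it dies."""
    connected = False
    for _ in range(max_cycles):
        if not connected:
            connected = connect()      # stands in for pydualsense.init()
            if not connected:
                time.sleep(backoff_s)  # RECONNECT_BACKOFF_S in the daemon
                continue
        if not is_alive():             # BG sendReport thread died (unplug)
            connected = False
            continue
        work()                         # one parse/drive-triggers iteration
    return connected

# Scripted liveness: alive twice, one unplug, then alive again.
alive = iter([True, True, False, True, True])
connects, iterations = [], []
still_up = supervise(lambda: connects.append(1) or True,
                     lambda: next(alive, True),
                     lambda: iterations.append(1),
                     max_cycles=6)
```

The unplug cycle does no work and immediately retries the connect on the next iteration, which is why `Restart=on-failure` in the unit is only a crash safety net, not the recovery mechanism.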


@@ -0,0 +1,898 @@
"""Bridge Forza Horizon 4/5 telemetry to DualSense adaptive triggers.
This is a faithful Linux port of the RacingDSX -> DSX -> DualSense pipeline.
Every numeric value, every threshold, every map() / EWMA() coefficient,
and every output byte sequence has been verified against published
sources or against decompiled DSX 1.4.9 itself.
Sources, in priority order:
1. DSX 1.4.9 binary (Paliverse/DualSenseX, GitHub release, archived 2021-12-31)
decompiled with ILSpy. The decompilation revealed:
a. DSX bundles ExtendInput.DataTools.DualSense (Nielk1 Rev6, MIT) as
its trigger-effect encoder.
b. The UDP/JSON dispatcher in DualSenseX/Main.cs maps RacingDSX's
high-level CustomTriggerValueMode names to mode bytes:
VibrateResistance -> 6 (Simple_Vibration / 0x06)
VibrateResistanceA / AB -> 38 (Vibration / 0x26)
VibrateResistanceB -> 6
When the dispatcher hits the `else` branch in
DualSense_USB_Updated.cs (any CustomTriggerValueIndex other than
9/11/13/15/17/19) it writes the eight TriggerValue bytes RAW into
the trigger param region — no bit-packing, no scale conversion.
This is why RacingDSX's 0-255-scale stiffness values ARE the
actual amplitude bytes that reach the controller's firmware.
2. Nielk1's reverse-engineering gist Rev 6 (MIT,
https://gist.github.com/Nielk1/6d54cc2c00d2201ccb8c2720ad7538db).
Source for the canonical Sony bit-packed Feedback (0x21) encoder used
for the non-slip path. Identical to the implementation shipped inside
DSX 1.4.9.
3. RacingDSX (cosmii02/RacingDSX, GPLv3) — community-tuned defaults for
Forza Horizon 4 / 5 since 2022. Specifically:
Config/ThrottleSettings.cs, Config/BrakeSettings.cs,
GameParsers/Parser.cs.
The HID transport (BT/USB framing, CRC32, ~1 kHz sendReport thread) is
provided by pydualsense (PyPI, MIT). We drive its low-level
`triggerL/R.mode` and `triggerL/R.forces[i]` fields directly because
pydualsense's high-level setMode/setForce API does not understand any
specific mode's parameter encoding — it just shovels bytes into the
output report at fixed offsets. That is exactly what we want.
## Documented intentional divergences from RacingDSX
1. Motion gate (`_is_in_motion()`): slip detection is gated on speed or wheel
rotation. RacingDSX has no gate, so locked stationary wheels (e.g. after a
hard stop with brake held) keep the slip path active forever and the trigger
stuck in vibration mode. The gate fixes the user-reported bug.
2. Clamp-to-8 in `_apply_feedback()`: DSX's `TriggerEffectGenerator.Resistance`
silently skips when force > 8, leaving the trigger stuck in whatever mode it
was in (Simple_Vibration during slip→non-slip transitions). We clamp to 8
instead so the transition produces a smooth Feedback ramp.
3. Car performance index width: fdp parses `car_performance_index` as Int32 (the
field's actual width per Forza's spec), while RacingDSX's `FMData.cs` reads
only `GetUInt8(bytes, 220)` — the low byte. For any car with CPI > 255 (B/A/
S1/S2/X) the two implementations disagree on the pre-race lightbar's CPI
tint. We use the correct value; RacingDSX's lightbar is dimmer/inconsistent
on those cars. Mask `cpi & 0xFF` in `apply_lightbar_pre_race` to match
RacingDSX byte-for-byte if you want bug-faithful Windows-equivalent dimming.
4. Trigger release combines mode 0x05 (active) with mode 0x00 (steady-state).
RacingDSX dispatches `TriggerMode.Normal` for pre-race / between-race state,
which becomes mode byte 0x00. Per Sony's docs (Nielk1 Rev 6), mode 0x00 only
*clears* state and does not retract the trigger motor; mode 0x05 *actively*
returns the trigger stop to neutral. RacingDSX-on-Windows gets away with
0x00 because something on Windows (Steam Input or the OS) reliably resets
the motor on focus loss; on Linux nothing does, and R2 keeps residual
tension after a race ends. But re-asserting 0x05 every frame in steady-state
pre-race causes the trigger motor to audibly whine as the firmware repeatedly
snaps the (already-neutral) trigger back to neutral. So we use 0x05 as a
one-shot on the in-race → not-in-race transition (and on the telemetry-idle
timeout), then mode 0x00 for steady-state pre-race / idle frames — motor
stays released, no continuous retraction noise.
5. Throttle gated on clutch state. Forza emits a `clutch` byte (0..255). When
the clutch is disengaged (byte > 128) the engine is mechanically disconnected
from the wheels and the throttle pedal can't transmit power; the trigger has
no business resisting. RacingDSX's throttle resistance formula is
`avgAccel = sqrt(0.25*X² + 1.0*Z²)` derived from the accelerometer alone
with no clutch check, so the trigger keeps producing resistance from
cornering G-forces while the clutch is in. We bypass the throttle path
entirely when clutch > 128, releasing the trigger using the same one-shot-
then-steady pattern as divergence #4. Auto-clutch users will notice ~100 ms
trigger relaxations during shifts; that's actually physically accurate —
the engine *is* momentarily disconnected during a shift.
## Threading note
pydualsense's `sendReport` background thread reads `triggerR/L.mode` and
`forces[0..6]` independently — there's no atomic publish primitive. Our
`_apply_*` helpers write `forces[]` first and `mode` last; the BG thread reads
`mode` first, so this ordering keeps the worst-case torn frame to one ~4 ms
HID write at slip↔non-slip mode transitions. Audible as a brief click on
transitions, not stuck state. Without lock/atomic primitives in pydualsense's
API this is the cleanest mitigation available.
## System interaction notes
**Single-controller assumption.** pydualsense's `__find_device` enumerates all
DualSense devices (vid 0x054C, pid 0x0CE6 standard / 0x0DF2 Edge), keeps the
last one matched (no break in the loop), then opens via `hidapi_open(vid, pid)`
without serial/path — `hid_open` returns the first match, which is not
necessarily the one selected. With multiple DualSense controllers the picked
controller is non-deterministic. pydualsense's source explicitly notes
`# TODO: implement multiple controllers working`. RacingDSX/DSX are also
single-controller (DSX's `connectedController` is a singleton). Forza Horizon
is single-player so this is fine in practice; if multi-controller selection
matters, monkey-patch `__find_device` to filter by `serial_number`.
**Steam Input.** When Steam Input's PlayStation Configuration Support is
enabled for the game, Steam intercepts hidraw input AND writes its own HID
output reports (rumble, lightbar, sometimes triggers). Our daemon writes
competing output reports at ~1 kHz; the controller observes whichever wrote
last. Effect: trigger oscillates and feels broken. The Nix module's README
in `default.nix` instructs users to disable PlayStation Configuration Support
for Forza in Steam (Settings → Controller).
**dualsensectl.** Intentionally not installed by the Nix module (see the
comments in `default.nix`); bring it in ad-hoc with `nix-shell -p dualsensectl`
for debugging. Single-shot writes from `dualsensectl trigger left feedback ...`
get overwritten by
our BG thread's next iteration ~4 ms later. Use it only when the daemon is
stopped (`systemctl --user stop forza-trigger`).
**Hot-plug.** pydualsense's BG `sendReport` thread terminates silently on
hidraw IOError (unplug, BT disconnect, USB resuspend). The main loop polls
`ds.report_thread.is_alive()` and reconnects in-process via
`_connect_controller()`, which retries `pydualsense.init()` every
`RECONNECT_BACKOFF_S` until the controller comes back. The daemon does not
depend on systemd or any other supervisor for plug-event recovery; running it
directly from a shell handles unplug/replug exactly the same way.
"""
from __future__ import annotations
import argparse
import logging
import math
import os
import socket
import sys
import time
from fdp import ForzaDataPacket
from pydualsense import TriggerModes, pydualsense
LOG = logging.getLogger("forza-trigger")
# --- Mode bytes ---------------------------------------------------------------
# pydualsense's IntFlag aliases happen to cover the modes we need:
# TriggerModes.Off = 0x00 (no-op; clears command without retracting motor)
# TriggerModes(0x05) = 0x05 (canonical Sony Off / Reset)
# TriggerModes.Pulse_B = 0x06 (Simple_Vibration / Simple_AutomaticGun)
# TriggerModes.Rigid_A = 0x21 (Feedback, canonical)
DS_MODE_NORMAL = TriggerModes.Off # 0x00 "clear command"; motor stays in last-set physical state
DS_MODE_OFF = TriggerModes(0x05)
DS_MODE_SIMPLE_VIBRATION = TriggerModes.Pulse_B
DS_MODE_FEEDBACK = TriggerModes.Rigid_A
# --- RacingDSX defaults (Config/ThrottleSettings.cs) --------------------------
THROTTLE_GRIP_LOSS = 0.6
THROTTLE_REAR_SLIP_ACCEL_MIN = 200
THROTTLE_VIB_POSITION = 5 # VibrationModeStart
THROTTLE_MIN_VIBRATION = 5 # below this freq, fall back to Resistance
THROTTLE_MAX_VIBRATION = 55 # peak frequency at slip == 5
THROTTLE_MIN_STIFFNESS = 255 # slip-mode amplitude at avgAccel == 0
THROTTLE_MAX_STIFFNESS = 175 # slip-mode amplitude at avgAccel == AccelerationLimit
THROTTLE_MIN_RESISTANCE = 0 # non-slip canonical strength at avgAccel == 0
THROTTLE_MAX_RESISTANCE = 3 # non-slip canonical strength at avgAccel == AccelerationLimit
THROTTLE_ACCELERATION_LIMIT = 10
THROTTLE_TURN_ACCEL_SCALE = 0.25
THROTTLE_FORWARD_ACCEL_SCALE = 1.0
THROTTLE_RESISTANCE_SMOOTHING = 0.9
THROTTLE_VIBRATION_SMOOTHING = 1.0
THROTTLE_EFFECT_INTENSITY = 1.0
THROTTLE_LAST_RESISTANCE_INIT = 1 # Parser.lastThrottleResistance
# --- RacingDSX defaults (Config/BrakeSettings.cs) -----------------------------
BRAKE_GRIP_LOSS = 0.05
BRAKE_DEADZONE = 100 # Parser literal: data.Brake > 100
BRAKE_VIB_POSITION = 0 # VibrationStart
BRAKE_MIN_VIBRATION = 15
BRAKE_MAX_VIBRATION = 20
BRAKE_MIN_STIFFNESS = 150
BRAKE_MAX_STIFFNESS = 5
BRAKE_MIN_RESISTANCE = 0
BRAKE_MAX_RESISTANCE = 7
BRAKE_RESISTANCE_SMOOTHING = 0.4
BRAKE_VIBRATION_SMOOTHING = 1.0
BRAKE_EFFECT_INTENSITY = 1.0
BRAKE_LAST_RESISTANCE_INIT = 200 # Parser.lastBrakeResistance
# --- Forza UDP packet sizes -> fdp packet_format strings ----------------------
PACKET_FORMATS = {
232: "sled",
311: "dash",
324: "fh4", # FH4 and FH5 share the same layout
}
# --- ForzaParser state-machine constants (GameParsers/ForzaParser.cs) --------
# CarClass field maps as 0=D, 1=C, 2=B, 3=A, 4=S1, 5=S2, 6=X (FH) / 7=X (FM).
# Parser.cs uses an `<=` cascade, so any value > 5 is treated as X.
CAR_CLASS_COLORS = [
(107, 185, 236), # ColorClassD
(234, 202, 49), # ColorClassC
(211, 90, 37), # ColorClassB
(187, 59, 34), # ColorClassA
(128, 54, 243), # ColorClassS1
(75, 88, 229), # ColorClassS2
(105, 182, 72), # ColorClassX (no CPI tint)
]
MAX_CPI = 255 # ForzaParser.MaxCPI
RPM_REDLINE_RATIO = 0.9 # Profile.RPMRedlineRatio
GREEN_FLOOR = 50 # Math.Max(..., 50) on green channel in non-redline path
RACE_OFF_RPM_FRAMES = 200 # ForzaParser.RPMAccumulatorTriggerRaceOff
# --- Clutch gate (throttle only) ---------------------------------------------
# Forza emits `clutch` 0..255 (0 = pedal up / engaged / engine connected to
# wheels, 255 = pedal floored / fully disengaged). With the clutch disengaged
# the throttle pedal is mechanically irrelevant — pressing it just revs the
# engine without transmitting power. RacingDSX has no clutch gate, so its
# `avgAccel = sqrt(0.25*X² + 1.0*Z²)` formula keeps producing throttle
# resistance from cornering G-forces even while the clutch is in.
CLUTCH_DISENGAGE_THRESHOLD = 128
# --- Reset on idle (UDP timeout) ---------------------------------------------
# Not present in RacingDSX; an additional safety so the controller doesn't get
# stuck if Forza is killed mid-race or the network drops.
IDLE_TIMEOUT_S = 3.0
# --- Hot-plug reconnect backoff ----------------------------------------------
# pydualsense's BG sendReport thread terminates silently on hidraw IOError
# (controller unplugged, BT disconnect, USB resuspend). The main loop polls
# the thread's liveness and reconnects in-process — the script is agnostic
# of supervisors like systemd. The same backoff governs the initial-connect
# wait when the daemon starts before any controller is plugged in.
RECONNECT_BACKOFF_S = 1.0
# --- Stationary motion gate --------------------------------------------------
# Forza reports nonzero `tire_combined_slip_*` on a stationary car with locked
# wheels (e.g. after coming to a hard stop). RacingDSX/DSX have no gate for
# this and end up with the brake (and sometimes throttle) trigger stuck in
# Simple_Vibration mode forever, because the slip path keeps firing. We
# additionally require either the car or any wheel to be in real motion before
# treating slip as a haptic event.
STATIONARY_SPEED_MS = 0.1 # m/s; below this the car is considered stopped
STATIONARY_WHEEL_RAD_S = 0.1 # rad/s; below this a wheel is considered locked
def _is_in_motion(pkt: ForzaDataPacket) -> bool:
"""True iff the car is moving or any wheel is rotating meaningfully.
Used to gate slip-detection: when both car and all four wheels read as
stopped, any nonzero `tire_combined_slip` Forza emits is data noise from
locked wheels and should not drive haptic vibration.
"""
if abs(_safe(pkt, "speed")) > STATIONARY_SPEED_MS:
return True
for wheel in ("FL", "FR", "RL", "RR"):
if abs(_safe(pkt, f"wheel_rotation_speed_{wheel}")) > STATIONARY_WHEEL_RAD_S:
return True
return False
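The motion gate reduces to a tiny pure function. A condensed stand-in (a hypothetical helper taking plain floats instead of a `ForzaDataPacket`, thresholds copied from the constants above — not part of the daemon) behaves like this:

```python
STATIONARY_SPEED_MS = 0.1     # m/s, copied from the constant above
STATIONARY_WHEEL_RAD_S = 0.1  # rad/s, copied from the constant above

def in_motion(speed: float, wheel_speeds: list[float]) -> bool:
    """Condensed stand-in for _is_in_motion, taking plain floats."""
    if abs(speed) > STATIONARY_SPEED_MS:
        return True
    return any(abs(w) > STATIONARY_WHEEL_RAD_S for w in wheel_speeds)

# Hard stop with locked wheels: nonzero slip from Forza must not vibrate.
print(in_motion(0.0, [0.0, 0.0, 0.0, 0.0]))     # False
# Burnout from standstill: rear wheels spin while the car barely moves.
print(in_motion(0.05, [0.0, 0.0, 40.0, 40.0]))  # True
```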
# --- Effect encoders ----------------------------------------------------------
def _apply_off(trig) -> None:
"""Canonical Sony Off / Reset \u2014 mode byte 0x05, all params 0.
Mirrors `TriggerEffectGenerator.Reset()` in DSX. Per Sony's docs (Nielk1
Rev 6), mode 0x05 *actively* returns the trigger stop to the neutral
position; mode 0x00 (TriggerModes.Off, what RacingDSX writes for its
NormalTrigger pre-race state) only clears state without retracting the
motor, so the trigger stays in whatever Feedback/Vibration position was
last applied. We route every "release the trigger" intent here —
pre-race, idle-timeout, mid-race zero-strength fallback, shutdown.
"""
# Write forces before mode so pydualsense's BG sendReport thread, which
# reads mode then forces non-atomically, is more likely to observe a
# self-consistent (mode, forces) pair. See module-docstring threading note.
for i in range(7):
trig.forces[i] = 0
trig.mode = DS_MODE_OFF
def _apply_feedback(trig, position: int, strength: int) -> bool:
"""Sony Feedback (mode 0x21), bit-packed.
Verbatim port of `ExtendInput.DataTools.DualSense.TriggerEffectGenerator
.Resistance` from DSX 1.4.9 — with one deliberate divergence.
DSX's TriggerEffectGenerator.Resistance returns `false` without writing
when strength > 8, and RacingDSX's fall-through path routinely sends 5..255-
range slip-mode stiffness values into Feedback, hitting that branch every
transition out of slip. The result observed by the player: "ABS feedback
continues even when stationary" — the trigger remains stuck in whatever
mode (typically Simple_Vibration) was set before the failed Resistance
call, sometimes indefinitely if Forza keeps reporting nonzero slip on
locked wheels.
We clamp out-of-range strength to 8 instead. The transition out of slip
now produces a smooth Feedback ramp from full-stiffness down to the
non-slip target as the EWMA decays, rather than freezing on stale
Simple_Vibration bytes. The return value (kept for symmetry with DSX's
bool-returning Resistance) is False on invalid position, True otherwise.
"""
if position > 9:
return False
if strength > 8:
strength = 8
if strength <= 0:
# Sony's algorithm: zero force -> Reset (canonical Off, mode 0x05).
# DSX's TriggerEffectGenerator.Resistance falls through to Reset()
# here, so we do the same for byte-perfect parity.
_apply_off(trig)
return True
force_value = (strength - 1) & 0x07
force_zones = 0
active_zones = 0
for i in range(position, 10):
force_zones |= force_value << (3 * i)
active_zones |= 1 << i
trig.forces[0] = active_zones & 0xFF
trig.forces[1] = (active_zones >> 8) & 0xFF
trig.forces[2] = force_zones & 0xFF
trig.forces[3] = (force_zones >> 8) & 0xFF
trig.forces[4] = (force_zones >> 16) & 0xFF
trig.forces[5] = (force_zones >> 24) & 0xFF
trig.forces[6] = 0 # frequency byte unused for Feedback
trig.mode = DS_MODE_FEEDBACK
return True
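The zone packing above is easy to check in isolation. A standalone copy of the packing arithmetic (hypothetical helper name, not part of the daemon) produces these bytes for a full-travel, max-clamped-strength effect:

```python
def pack_feedback_zones(position: int, strength: int) -> list[int]:
    """Standalone copy of the mode-0x21 packing above; returns the 7
    force bytes (frequency byte last, always 0 for Feedback)."""
    force_value = (strength - 1) & 0x07
    force_zones = 0
    active_zones = 0
    for i in range(position, 10):
        force_zones |= force_value << (3 * i)
        active_zones |= 1 << i
    return [
        active_zones & 0xFF,         # zone-active bits 0..7
        (active_zones >> 8) & 0xFF,  # zone-active bits 8..9
        force_zones & 0xFF,
        (force_zones >> 8) & 0xFF,
        (force_zones >> 16) & 0xFF,
        (force_zones >> 24) & 0xFF,
        0,
    ]

# All 10 zones active at force value 7 (strength 8 after the clamp):
print(pack_feedback_zones(0, 8))  # [255, 3, 255, 255, 255, 63, 0]
```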
def _apply_simple_vibration(trig, position: int, amplitude: int, frequency: int) -> None:
"""Legacy Simple_Vibration (mode 0x06), raw byte passthrough.
Mirrors DSX's `else` branch in `DualSense_USB_Updated.cs::CustomTriggerValues`:
array[11] = TriggerValue1 (= 6 for VibrateResistance)
array[12] = TriggerValue2 (= frequency)
array[13] = TriggerValue3 (= amplitude)
array[14] = TriggerValue4 (= position)
array[15..17,20] = 0
Per Nielk1, Simple_Vibration was Sony's pre-firmware-update vibration
mode — same effect as canonical Vibration (0x26) on every shipped
DualSense, but takes raw 0-255 amplitude bytes instead of the bit-
packed 0-8 zone format. RacingDSX/DSX have used it since v1.0; the
entire Forza-on-DualSense community ships these byte values.
"""
if amplitude <= 0 or frequency <= 0:
_apply_off(trig)
return
trig.forces[0] = frequency & 0xFF
trig.forces[1] = amplitude & 0xFF
trig.forces[2] = position & 0xFF
trig.forces[3] = 0
trig.forces[4] = 0
trig.forces[5] = 0
trig.forces[6] = 0
trig.mode = DS_MODE_SIMPLE_VIBRATION
def _apply_normal(trig) -> None:
"""Mode 0x00 (TriggerModes.Off) + zero forces.
Per Sony's docs (Nielk1 Rev 6) mode 0x00 is a *clear/no-op* command — the
firmware's last-set physical effect persists. We use this for steady-state
pre-race / idle frames after `_apply_off` has already retracted the motor
via mode 0x05. Re-asserting 0x05 every frame causes the motor to audibly
whine as the firmware repeatedly snaps the (already-neutral) trigger back
to neutral.
"""
for i in range(7):
trig.forces[i] = 0
trig.mode = DS_MODE_NORMAL
def reset_triggers(ds: pydualsense) -> None:
"""Both triggers to canonical Off (mode 0x05). Actively retracts the motor."""
_apply_off(ds.triggerL)
_apply_off(ds.triggerR)
def reset_lightbar(ds: pydualsense) -> None:
"""Lightbar to off (RGB 0,0,0).
Used when telemetry has been idle long enough that we should stop asserting
a race color — e.g. Forza exited or hasn't started a session yet. Without
this, pydualsense's BG sendReport thread keeps re-publishing whatever
`TouchpadColor` we last set, so the controller stays lit indefinitely.
"""
ds.light.setColorI(0, 0, 0)
# --- RacingDSX math primitives ------------------------------------------------
def _map(x: float, in_min: float, in_max: float, out_min: float, out_max: float) -> float:
"""Mirrors Parser.Map() in RacingDSX, including endpoint clamping."""
if x > in_max:
x = in_max
elif x < in_min:
x = in_min
return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min
def _ewma(value: float, last: float, alpha: float) -> float:
"""Mirrors `Parser.EWMA(float, float, float)`. alpha=1.0 disables smoothing."""
return alpha * value + (1.0 - alpha) * last
def _ewma_int(value: int, last: int, alpha: float) -> int:
"""Mirrors `Parser.EWMA(int, int, float)` \u2014 floor of float-EWMA."""
return math.floor(alpha * value + (1.0 - alpha) * last)
def _safe(pkt: ForzaDataPacket, name: str, default: float = 0.0) -> float:
return float(getattr(pkt, name, default))
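As a worked example of the two primitives together (values assumed for illustration; `map_clamped` is a hypothetical copy of `_map`, constants from the throttle slip path above):

```python
import math

def map_clamped(x, in_min, in_max, out_min, out_max):
    """Same endpoint-clamping linear map as _map above."""
    x = min(max(x, in_min), in_max)
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min

# Moderate four-wheel slip of 1.7 on the throttle path
# (THROTTLE_GRIP_LOSS=0.6, THROTTLE_MAX_VIBRATION=55):
freq = math.floor(map_clamped(1.7, 0.6, 5.0, 0.0, 55.0))
print(freq)  # 13

# EWMA step with alpha=0.9 (THROTTLE_RESISTANCE_SMOOTHING), floored
# like _ewma_int — new target 200, previous value 100:
res = math.floor(0.9 * 200 + 0.1 * 100)
print(res)  # 190
```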
# --- Per-trigger persistent state for EWMA ------------------------------------
class _TriggerState:
__slots__ = ("last_resistance", "last_freq", "prev_clutched")
def __init__(self, init_resistance: int) -> None:
# Mirrors RacingDSX's `int lastThrottleResistance` / `int lastBrakeResistance`.
self.last_resistance: int = int(init_resistance)
self.last_freq: int = 0
# Throttle only: tracks last frame's clutch state so the throttle path
# can fire one-shot 0x05 (active retract) on the engaged→disengaged
# transition and 0x00 (no-op) for steady-state held-in. See
# CLUTCH_DISENGAGE_THRESHOLD and divergence #5 in the module docstring.
self.prev_clutched: bool = False
# --- Forza game-level persistent state (ForzaParser.cs fields) ----------------
class _ForzaState:
"""Persistent across packets. Mirrors ForzaParser's instance fields:
LastEngineRPM, LastRPMAccumulator, LastValidCarClass, LastValidCarCPI."""
__slots__ = (
"last_engine_rpm",
"rpm_accumulator",
"last_valid_car_class",
"last_valid_car_cpi",
)
def __init__(self) -> None:
self.last_engine_rpm = 0.0
self.rpm_accumulator = 0
self.last_valid_car_class = 0
self.last_valid_car_cpi = 0
def _clamp_byte(v: float) -> int:
"""Clamp to [0, 255] before writing to a uint8 RGB channel."""
return max(0, min(255, int(v)))
def forza_is_race_on(pkt: ForzaDataPacket, state: _ForzaState) -> bool:
"""Mirrors `ForzaParser.IsRaceOn()` verbatim.
FH4/FH5's `is_race_on` field is unreliable: it sometimes stays True after
the player exits a race or pauses. ForzaParser detects the off state by
watching for unchanged engine RPM combined with non-positive Power across
`RPMAccumulatorTriggerRaceOff` (200) consecutive frames. Power is dash-only,
so for sled-format packets it reads as 0; that matches RacingDSX exactly.
"""
in_race = bool(int(getattr(pkt, "is_race_on", 0)))
current_rpm = _safe(pkt, "current_engine_rpm")
power = _safe(pkt, "power")
if current_rpm == state.last_engine_rpm and power <= 0:
state.rpm_accumulator += 1
if state.rpm_accumulator > RACE_OFF_RPM_FRAMES:
in_race = False
else:
state.rpm_accumulator = 0
state.last_engine_rpm = current_rpm
return in_race
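A condensed copy of that state machine (hypothetical, using a plain dict instead of `_ForzaState`, for illustration) shows the 200-frame override firing:

```python
RACE_OFF_RPM_FRAMES = 200  # ForzaParser.RPMAccumulatorTriggerRaceOff

def is_race_on(raw_flag: int, rpm: float, power: float, state: dict) -> bool:
    """Condensed copy of forza_is_race_on's accumulator logic."""
    in_race = bool(raw_flag)
    if rpm == state["last_rpm"] and power <= 0:
        state["acc"] += 1
        if state["acc"] > RACE_OFF_RPM_FRAMES:
            in_race = False
    else:
        state["acc"] = 0
    state["last_rpm"] = rpm
    return in_race

state = {"last_rpm": 0.0, "acc": 0}
# Engine pinned at one RPM while the packet still claims is_race_on=1:
flags = [is_race_on(1, 850.0, 0.0, state) for _ in range(202)]
print(flags[0], flags[-1])  # True False
```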
# --- Lightbar (touchpad LED ring) ---------------------------------------------
def apply_lightbar_pre_race(ds: pydualsense, pkt: ForzaDataPacket, state: _ForzaState) -> None:
"""Mirrors `ForzaParser.GetPreRaceInstructions()` lightbar logic.
Sets the lightbar to the car's class color, dimmed by `cpi/MAX_CPI`.
X-class cars use the fixed ColorClassX without a CPI tint. Car class and
CPI fields can briefly read 0 during loading screens, so we cache the
last valid value seen — also matching ForzaParser's behavior."""
car_class = int(_safe(pkt, "car_class"))
if car_class > 0:
state.last_valid_car_class = car_class
car_class = state.last_valid_car_class
cpi = int(_safe(pkt, "car_performance_index"))
if cpi > 0:
state.last_valid_car_cpi = min(cpi, 255)
cpi = state.last_valid_car_cpi
cpi_ratio = cpi / MAX_CPI
if car_class <= 5:
cr, cg, cb = CAR_CLASS_COLORS[car_class]
r = math.floor(cpi_ratio * cr)
g = math.floor(cpi_ratio * cg)
b = math.floor(cpi_ratio * cb)
else:
r, g, b = CAR_CLASS_COLORS[6]
ds.light.setColorI(_clamp_byte(r), _clamp_byte(g), _clamp_byte(b))
def apply_lightbar_in_race(ds: pydualsense, pkt: ForzaDataPacket) -> None:
"""Mirrors `Parser.GetInRaceLightbarInstruction()` RPM-gradient logic.
Below the redline ratio (Profile.RPMRedlineRatio = 0.9), red rises and
green falls linearly with rpm_ratio, with green floored at 50. At or
above redline the lightbar goes pure red (255, 0, 0)."""
max_rpm = _safe(pkt, "engine_max_rpm")
idle_rpm = _safe(pkt, "engine_idle_rpm")
current_rpm = _safe(pkt, "current_engine_rpm")
engine_range = max_rpm - idle_rpm
if engine_range <= 0:
rpm_ratio = 0.0
else:
rpm_ratio = (current_rpm - idle_rpm) / engine_range
if rpm_ratio >= RPM_REDLINE_RATIO:
r, g, b = 255, 0, 0
else:
r = math.floor(rpm_ratio * 255)
g = max(math.floor((1.0 - rpm_ratio) * 255), GREEN_FLOOR)
b = 0
ds.light.setColorI(_clamp_byte(r), _clamp_byte(g), _clamp_byte(b))
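The gradient's endpoints are easy to sanity-check with a condensed copy (hypothetical helper on plain floats, constants from above):

```python
import math

RPM_REDLINE_RATIO = 0.9
GREEN_FLOOR = 50

def rpm_color(rpm: float, idle: float, max_rpm: float) -> tuple[int, int, int]:
    """Condensed copy of the in-race lightbar gradient."""
    rng = max_rpm - idle
    ratio = 0.0 if rng <= 0 else (rpm - idle) / rng
    if ratio >= RPM_REDLINE_RATIO:
        return (255, 0, 0)
    return (
        math.floor(ratio * 255),
        max(math.floor((1.0 - ratio) * 255), GREEN_FLOOR),
        0,
    )

print(rpm_color(1000.0, 1000.0, 8000.0))  # idle: (0, 255, 0)
print(rpm_color(7500.0, 1000.0, 8000.0))  # past redline: (255, 0, 0)
```

Note the green floor: just below redline, green clamps at 50 rather than fading all the way to black.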
# --- Throttle (right trigger) -------------------------------------------------
def apply_right_trigger(ds: pydualsense, pkt: ForzaDataPacket, st: _TriggerState) -> None:
"""Mirrors `Parser.GetInRaceRightTriggerInstruction()` line for line, with
one divergence: the throttle is released when the clutch is disengaged.
See divergence #5 in the module docstring."""
# Clutch gate: 0..255, byte > 128 means "clutch fully or mostly pressed";
# engine is mechanically disconnected, so the throttle pedal can't transmit
# power and shouldn't have any feel. One-shot 0x05 on transition into the
# clutched state, then steady-state 0x00 to avoid the trigger-motor whine
# described in divergence #4.
if int(_safe(pkt, "clutch", 0.0)) > CLUTCH_DISENGAGE_THRESHOLD:
if not st.prev_clutched:
_apply_off(ds.triggerR)
else:
_apply_normal(ds.triggerR)
st.prev_clutched = True
return
st.prev_clutched = False
accel_x = _safe(pkt, "acceleration_x")
accel_z = _safe(pkt, "acceleration_z")
avg_accel = math.sqrt(
THROTTLE_TURN_ACCEL_SCALE * (accel_x * accel_x)
+ THROTTLE_FORWARD_ACCEL_SCALE * (accel_z * accel_z)
)
fl = abs(_safe(pkt, "tire_combined_slip_FL"))
fr = abs(_safe(pkt, "tire_combined_slip_FR"))
rl = abs(_safe(pkt, "tire_combined_slip_RL"))
rr = abs(_safe(pkt, "tire_combined_slip_RR"))
front_slip = (fl + fr) * 0.5
rear_slip = (rl + rr) * 0.5
four_wheel_slip = (fl + fr + rl + rr) * 0.25
accelerator = int(_safe(pkt, "accel"))
losing_grip = (
front_slip > THROTTLE_GRIP_LOSS
or (rear_slip > THROTTLE_GRIP_LOSS and accelerator > THROTTLE_REAR_SLIP_ACCEL_MIN)
) and _is_in_motion(pkt)
if losing_grip:
# Floor after Map (matches `(int)Math.Floor(Map(...))` in Parser.cs).
target_freq = math.floor(
_map(four_wheel_slip, THROTTLE_GRIP_LOSS, 5.0, 0.0, THROTTLE_MAX_VIBRATION)
)
target_resistance = math.floor(
_map(
avg_accel,
0.0,
THROTTLE_ACCELERATION_LIMIT,
THROTTLE_MIN_STIFFNESS,
THROTTLE_MAX_STIFFNESS,
)
)
# Floor after EWMA (matches `(int)EWMA(int, int, float)` overload).
freq = _ewma_int(target_freq, st.last_freq, THROTTLE_VIBRATION_SMOOTHING)
resistance = _ewma_int(
target_resistance, st.last_resistance, THROTTLE_RESISTANCE_SMOOTHING
)
st.last_freq = freq
st.last_resistance = resistance
if freq <= THROTTLE_MIN_VIBRATION or accelerator <= THROTTLE_VIB_POSITION:
# RacingDSX throttle fall-through: sends `Resistance(0, filteredResistance)`
# where filteredResistance is in slip-mode range (175..255). DSX's
# TriggerEffectGenerator.Resistance returns false for force > 8 without
# writing, leaving the trigger stuck in whatever mode (typically
# Simple_Vibration) was set previously. Our `_apply_feedback` clamps
# strength to 8 instead, producing a smooth Feedback ramp — a
# documented divergence that fixes the user-visible "vibration
# continues briefly after slip ends" symptom.
_apply_feedback(
ds.triggerR,
0,
int(resistance * THROTTLE_EFFECT_INTENSITY),
)
else:
_apply_simple_vibration(
ds.triggerR,
THROTTLE_VIB_POSITION,
int(resistance * THROTTLE_EFFECT_INTENSITY),
int(freq * THROTTLE_EFFECT_INTENSITY),
)
return
target_resistance = math.floor(
_map(
avg_accel,
0.0,
THROTTLE_ACCELERATION_LIMIT,
THROTTLE_MIN_RESISTANCE,
THROTTLE_MAX_RESISTANCE,
)
)
resistance = _ewma_int(target_resistance, st.last_resistance, THROTTLE_RESISTANCE_SMOOTHING)
st.last_resistance = resistance
_apply_feedback(ds.triggerR, 0, int(resistance * THROTTLE_EFFECT_INTENSITY))
# --- Brake (left trigger) -----------------------------------------------------
def apply_left_trigger(ds: pydualsense, pkt: ForzaDataPacket, st: _TriggerState) -> None:
"""Mirrors `Parser.GetInRaceLeftTriggerInstruction()` line for line."""
fl = abs(_safe(pkt, "tire_combined_slip_FL"))
fr = abs(_safe(pkt, "tire_combined_slip_FR"))
rl = abs(_safe(pkt, "tire_combined_slip_RL"))
rr = abs(_safe(pkt, "tire_combined_slip_RR"))
four_wheel_slip = (fl + fr + rl + rr) * 0.25
brake = int(_safe(pkt, "brake"))
losing_grip = (
four_wheel_slip > BRAKE_GRIP_LOSS
and brake > BRAKE_DEADZONE
and _is_in_motion(pkt)
)
if losing_grip:
target_freq = math.floor(
_map(four_wheel_slip, BRAKE_GRIP_LOSS, 5.0, 0.0, BRAKE_MAX_VIBRATION)
)
target_resistance = math.floor(
_map(
brake,
0,
255,
BRAKE_MAX_STIFFNESS,
BRAKE_MIN_STIFFNESS,
)
)
freq = _ewma_int(target_freq, st.last_freq, BRAKE_VIBRATION_SMOOTHING)
resistance = _ewma_int(target_resistance, st.last_resistance, BRAKE_RESISTANCE_SMOOTHING)
st.last_freq = freq
st.last_resistance = resistance
if freq <= BRAKE_MIN_VIBRATION:
# RacingDSX brake fall-through (Parser.cs:128) sends Resistance(0, 0)
# explicitly — strength=0 routes to canonical Off (mode 0x05).
# Subtle slip while braking should leave the trigger neutral.
_apply_feedback(ds.triggerL, 0, 0)
else:
_apply_simple_vibration(
ds.triggerL,
BRAKE_VIB_POSITION,
int(resistance * BRAKE_EFFECT_INTENSITY),
int(freq * BRAKE_EFFECT_INTENSITY),
)
return
target_resistance = math.floor(_map(brake, 0, 255, BRAKE_MIN_RESISTANCE, BRAKE_MAX_RESISTANCE))
resistance = _ewma_int(target_resistance, st.last_resistance, BRAKE_RESISTANCE_SMOOTHING)
st.last_resistance = resistance
_apply_feedback(ds.triggerL, 0, int(resistance * BRAKE_EFFECT_INTENSITY))
# --- UDP main loop ------------------------------------------------------------
def parse_packet(data: bytes) -> ForzaDataPacket | None:
fmt = PACKET_FORMATS.get(len(data))
if fmt is None:
LOG.debug("ignoring packet of unexpected length %d", len(data))
return None
try:
return ForzaDataPacket(data, packet_format=fmt)
except Exception:
LOG.exception("failed to parse forza packet (len=%d)", len(data))
return None
def _close_controller(ds: pydualsense | None) -> None:
"""Best-effort close. The HID device may already be gone (unplug, BT drop)
in which case `device.close()` raises; we don't care."""
if ds is None:
return
try:
ds.close()
except Exception:
pass
def _connect_controller() -> pydualsense:
"""Open the DualSense, blocking until one is reachable.
`pydualsense.init()` raises when no DualSense is plugged in. That's a
normal startup-or-replug condition for us, not a fatal error — the
daemon is meant to live for the whole user session and self-heal across
plug events without external supervision. We log the first failure once,
then retry quietly every `RECONNECT_BACKOFF_S` seconds.
"""
LOG.info("opening dualsense controller")
first_failure_logged = False
while True:
ds = pydualsense()
try:
ds.init()
except Exception as e:
_close_controller(ds)
if not first_failure_logged:
LOG.warning(
"dualsense not available (%s); retrying every %.1fs",
e,
RECONNECT_BACKOFF_S,
)
first_failure_logged = True
time.sleep(RECONNECT_BACKOFF_S)
continue
LOG.info("dualsense controller connected")
return ds
def run(host: str, port: int, debug: bool) -> int:
ds = _connect_controller()
LOG.info("listening for forza udp on %s:%d", host, port)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((host, port))
sock.settimeout(1.0)
throttle_state = _TriggerState(init_resistance=THROTTLE_LAST_RESISTANCE_INIT)
brake_state = _TriggerState(init_resistance=BRAKE_LAST_RESISTANCE_INIT)
forza_state = _ForzaState()
last_seen = 0.0
in_race = False
prev_in_race = False  # for transition detection — see _apply_normal docstring
have_telemetry = False # True between the first packet and the IDLE_TIMEOUT_S reset
try:
while True:
# Hot-plug detection: pydualsense's BG sendReport thread terminates
# silently on hidraw IOError (unplug, BT disconnect, USB resuspend).
# When it dies our triggerL/R writes go nowhere. Reconnect in-process
# so the daemon doesn't depend on a supervisor for plug-event recovery.
if not ds.report_thread.is_alive():
LOG.warning("dualsense disconnected; reconnecting")
_close_controller(ds)
ds = _connect_controller()
now = time.monotonic()
try:
data, _ = sock.recvfrom(2048)
last_seen = now
have_telemetry = True
except socket.timeout:
# Reset on telemetry-idle regardless of in_race state. After Forza
# exits with the user in its main menu (is_race_on=0 packets just
# before exit, so in_race was already False), the old check would
# leave the last lightbar/trigger state asserted forever.
if have_telemetry and (now - last_seen) > IDLE_TIMEOUT_S:
LOG.info("forza idle for %.1fs \u2014 resetting controller", IDLE_TIMEOUT_S)
# One-shot 0x05 to actively retract the trigger motor; the BG
# thread will publish it ~12 times in the next 50ms before main
# thread loops back here. Subsequent idle iterations don't
# re-enter this branch (have_telemetry is now False).
reset_triggers(ds)
reset_lightbar(ds)
have_telemetry = False
in_race = False
prev_in_race = False
continue
pkt = parse_packet(data)
if pkt is None:
continue
# ForzaParser.IsRaceOn() override: combines packet field with the
# FH-specific RPM-accumulator workaround. Must be called once per
# packet so the accumulator state stays accurate.
in_race = forza_is_race_on(pkt, forza_state)
if not in_race:
# Transition into pre-race: one-shot mode 0x05 to actively
# retract the trigger motor. Subsequent steady-state frames
# send mode 0x00 (no command); re-asserting 0x05 every frame
# makes the firmware audibly whine retracting an already-
# neutral trigger. Divergence #4 in the module docstring.
if prev_in_race:
_apply_off(ds.triggerL)
_apply_off(ds.triggerR)
else:
_apply_normal(ds.triggerL)
_apply_normal(ds.triggerR)
apply_lightbar_pre_race(ds, pkt, forza_state)
prev_in_race = False
continue
if debug:
LOG.debug(
"rpm=%.0f/%.0f accel=%d brake=%d "
"slip[FL,FR,RL,RR]=%.2f,%.2f,%.2f,%.2f "
"throttle[freq=%d res=%d] brake[freq=%d res=%d]",
_safe(pkt, "current_engine_rpm"),
_safe(pkt, "engine_max_rpm"),
int(_safe(pkt, "accel")),
int(_safe(pkt, "brake")),
_safe(pkt, "tire_combined_slip_FL"),
_safe(pkt, "tire_combined_slip_FR"),
_safe(pkt, "tire_combined_slip_RL"),
_safe(pkt, "tire_combined_slip_RR"),
throttle_state.last_freq,
throttle_state.last_resistance,
brake_state.last_freq,
brake_state.last_resistance,
)
apply_lightbar_in_race(ds, pkt)
apply_left_trigger(ds, pkt, brake_state)
apply_right_trigger(ds, pkt, throttle_state)
prev_in_race = True
except KeyboardInterrupt:
LOG.info("shutting down")
finally:
try:
reset_triggers(ds)
except Exception:
pass
_close_controller(ds)
return 0
def main() -> int:
parser = argparse.ArgumentParser(
prog="forza-trigger",
description="Bridge Forza Horizon UDP telemetry to DualSense adaptive triggers.",
)
parser.add_argument("--host", default="127.0.0.1", help="UDP bind address")
parser.add_argument("--port", type=int, default=5300, help="UDP bind port")
parser.add_argument(
"--debug",
action="store_true",
help="log per-packet telemetry at DEBUG level",
)
args = parser.parse_args()
level = os.environ.get("FORZA_TRIGGER_LOG", "DEBUG" if args.debug else "INFO")
logging.basicConfig(
level=level,
format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
return run(args.host, args.port, args.debug)
if __name__ == "__main__":
sys.exit(main())


@@ -0,0 +1,107 @@
# Python packages forza_trigger.py imports that aren't in nixpkgs. Returns
# an attrset consumed by ./default.nix.
#
# Bumping a version: change `version` and `hash`, then `nix build` — Nix
# fails with the new sha256 in the error message, paste it back in.
{
lib,
pkgs,
python ? pkgs.python3,
}:
let
py = python.pkgs;
in
rec {
# CFFI bindings to libhidapi (flok/hidapi-cffi on PyPI). pydualsense's
# `import hidapi` resolves to this — nixpkgs' python3Packages.hidapi is the
# Cython wrapper from trezor/cython-hidapi which exposes a different
# `import hid` API and can't satisfy pydualsense.
hidapi-usb = py.buildPythonPackage rec {
pname = "hidapi-usb";
version = "0.3.2";
format = "setuptools";
# PyPI's project URL slug uses a hyphen (`hidapi-usb`) but the sdist file
# itself is PEP-625-normalized to an underscore (`hidapi_usb-…`). Stock
# fetchPypi assumes they match — they don't here, so fetch by direct URL.
src = pkgs.fetchurl {
url = "https://files.pythonhosted.org/packages/55/80/960ae94b615e26a7d1aeebe8e9fefda2f25608bf1016f9aec268b328c35e/hidapi_usb-${version}.tar.gz";
hash = "sha256-oxp+2i+qqYd1uwiS2Dh8/PzO62iYQQXpR936MnDIFk0=";
};
propagatedBuildInputs = [ py.cffi ];
# Upstream's hidapi.py walks a tuple of soname strings via ffi.dlopen()
# until one resolves. Pin the two Linux hidraw entries to absolute store
# paths so the wrapped Python in our writePython3Bin closure finds them
# without LD_LIBRARY_PATH wrapping. The libusb / iohidmanager / dylib /
# dll entries are dead code on Linux. --replace-fail makes a rename in
# upstream's tuple a loud build error rather than a silent ImportError
# at runtime.
postPatch = ''
substituteInPlace hidapi.py \
--replace-fail "'libhidapi-hidraw.so'," "'${pkgs.hidapi}/lib/libhidapi-hidraw.so'," \
--replace-fail "'libhidapi-hidraw.so.0'," "'${pkgs.hidapi}/lib/libhidapi-hidraw.so.0',"
'';
pythonImportsCheck = [ "hidapi" ];
meta = {
description = "CFFI wrapper for hidapi (used by pydualsense)";
homepage = "https://github.com/flok/hidapi-cffi";
license = lib.licenses.bsd3;
};
};
pydualsense = py.buildPythonPackage rec {
pname = "pydualsense";
version = "0.7.5";
format = "pyproject";
src = py.fetchPypi {
inherit pname version;
hash = "sha256-YgX8AJE4f8p7geKT3xlCD0Mlh1GcyHpBz4rEIqdwKgs=";
};
nativeBuildInputs = [ py.poetry-core ];
propagatedBuildInputs = [ hidapi-usb ];
pythonImportsCheck = [ "pydualsense" ];
meta = {
description = "Control your PS5 DualSense controller from Python";
homepage = "https://github.com/flok/pydualsense";
license = lib.licenses.mit;
};
};
# Single-file Forza UDP packet parser (nettrom/forza_motorsport). Pinned to
# a known-good commit; the repo is dormant (last commit 2021) but the FH4
# packet layout is frozen and FH5 reuses it byte-for-byte.
fdp = py.buildPythonPackage {
pname = "fdp";
version = "0-unstable-2021-05-28";
format = "other";
src = pkgs.fetchurl {
url = "https://raw.githubusercontent.com/nettrom/forza_motorsport/61845cb7ff4082211292a51ce3c49edbfd2d6503/fdp.py";
hash = "sha256-osFaVF9VaEzU4dp3x6KN6OF7SXsd9ZBwvilU+xTT7mM=";
};
dontUnpack = true;
installPhase = ''
runHook preInstall
install -Dm644 $src $out/${python.sitePackages}/fdp.py
runHook postInstall
'';
pythonImportsCheck = [ "fdp" ];
meta = {
description = "ForzaDataPacket Forza Motorsport / Horizon UDP packet parser";
homepage = "https://github.com/nettrom/forza_motorsport";
license = lib.licenses.mit;
};
};
}


@@ -1,15 +1,12 @@
 {
   pkgs,
-  inputs,
-  lib,
-  config,
   ...
 }:
 {
   imports = [
     ../../home/profiles/gui.nix
     ../../home/profiles/desktop.nix
-    inputs.json2steamshortcut.homeModules.default
+    ../../home/progs/steam-shortcuts.nix
   ];
   home.packages = with pkgs; [
@@ -27,20 +24,4 @@
     obs-pipewire-audio-capture
   ];
 };
-  services.steam-shortcuts = {
-    enable = true;
-    overwriteExisting = true;
-    steamUserId = lib.strings.toInt (
-      lib.strings.trim (builtins.readFile ../../secrets/home/steam-user-id)
-    );
-    shortcuts = [
-      {
-        AppName = "Prism Launcher";
-        Exe = "${pkgs.prismlauncher}/bin/prismlauncher";
-        Icon = "${pkgs.prismlauncher}/share/icons/hicolor/scalable/apps/org.prismlauncher.PrismLauncher.svg";
-        Tags = [ "Game" ];
-      }
-    ];
-  };
 }


@@ -12,6 +12,7 @@
"/var/lib/systemd/coredump" "/var/lib/systemd/coredump"
"/var/lib/nixos" "/var/lib/nixos"
"/var/lib/systemd/timers" "/var/lib/systemd/timers"
"/var/lib/bluetooth"
]; ];
files = [ files = [
@@ -21,6 +22,12 @@
"/etc/ssh/ssh_host_rsa_key.pub" "/etc/ssh/ssh_host_rsa_key.pub"
"/etc/machine-id" "/etc/machine-id"
]; ];
users.root = {
files = [
".local/share/fish/fish_history"
];
};
}; };
# Bind mount entire home directory from persistent storage # Bind mount entire home directory from persistent storage
@@ -31,6 +38,17 @@
options = [ "bind" ]; options = [ "bind" ];
neededForBoot = true; neededForBoot = true;
}; };
# /var/lib/agenix holds the TPM-sealed age identity. agenix decrypts secrets
# from initrd-nixos-activation-start, which runs *before* impermanence's
# stage-2 bind mounts. Mount it explicitly with neededForBoot so the
# identity is in place when activation reads it. (NixOS auto-marks /var/log
# and /var/lib/nixos as neededForBoot; /var/lib/agenix is not in that set.)
fileSystems."/var/lib/agenix" = {
device = "/persistent/var/lib/agenix";
fsType = "none";
options = [ "bind" ];
neededForBoot = true;
};
systemd.tmpfiles.rules = [ systemd.tmpfiles.rules = [
"d /etc 755 root" "d /etc 755 root"

hosts/yarn/lact.nix Normal file

@@ -0,0 +1,45 @@
{
pkgs,
lib,
...
}:
let
# Composite GPU id used by lactd: <pci_id>-<subsys_id>-<slot>.
# ASRock RX 7800 XT (Navi 32). Verify with `lact cli list-gpus` if the
# PCI slot ever changes (e.g. after adding/removing PCIe devices).
gpuId = "1002:747E-1849:5326-0000:0a:00.0";
in
{
# LACT (Linux AMDGPU Configuration Tool): https://github.com/ilya-zlobintsev/LACT
# Config schema reference: <lact src>/docs/CONFIG.md.
# /etc/lact/config.yaml is a read-only symlink to /nix/store, so GUI "Apply"
# fails — settings are Nix-only. lactd auto-restarts on config change via the
# upstream module's restartTriggers.
services.lact = {
enable = true;
settings = {
# Pin to current Config schema; bump in lockstep with lact upgrades to
# avoid a v(N)→v(N+1) migration save() failing against the RO symlink.
version = 5;
daemon = {
log_level = "info";
admin_group = "wheel";
};
apply_settings_timer = 5;
gpus.${gpuId} = {
# ASRock RX 7800 XT (Navi 32). Voltage offset tuned via sweep with
# clpeak under perf=high (sustained DPM7); visible artifacting on the
# display engine appeared at -300 mV, so this sits 75 mV below that
# cliff for a real-world safety margin (cool clpeak < hot gaming).
voltage_offset = -120; # mV; kernel range on this card: -450..0
max_core_clock = 2400; # MHz; default boost ~2520
power_cap = 200.0; # W; default 220, max 280
performance_level = "auto";
};
};
};
# Keep the existing pre-start sleep that lets amdgpu sysfs settle before
# lactd probes it. Purely additive — the upstream module sets no ExecStartPre.
systemd.services.lactd.serviceConfig.ExecStartPre = "${lib.getExe pkgs.bash} -c \"sleep 3s\"";
}


@@ -0,0 +1,38 @@
; OptiScaler.ini overrides for Forza Horizon 5 on RDNA 3 (Navi 32) under
; Linux/Proton with FSR 4 INT8 enabled. Tested on OptiScaler 0.9.1.
;
; Keys not listed here fall through to OptiScaler's "auto" defaults
; (Config.cpp: missing keys silently resolve to std::nullopt -> _defaultValue).
;
; Sources:
; - https://github.com/optiscaler/OptiScaler/wiki/Forza-Horizon-5
; - https://github.com/optiscaler/OptiScaler/wiki/FSR4-Compatibility-List
;
; Companion launch-option env vars (set via programs.steam.config in
; hosts/yarn/default.nix):
; PROTON_FSR4_UPGRADE=1
; DXIL_SPIRV_CONFIG=wmma_rdna3_workaround (RDNA 3 INT8 visuals fix)
[Inputs]
; FSR2 inputs are buggy in FH5; OptiScaler 0.7.8+ auto-disables them per game
; but explicit is safer across version bumps. Source: FH5 wiki ("Known Issues").
EnableFsr2Inputs=false
[FSR]
; DirectX 12 Agility SDK upgrade. Required for the FSR 4 path on RDNA 3 INT8
; under Proton (the FFX SDK shipped with the game predates FSR 4). Source:
; FSR4 Compatibility wiki, "Linux Setup" section.
Fsr4Update=true
; FH5 wiki: 0.65 fixes flickering lights and reduces car ghosting with FSR-FG.
DlssReactiveMaskBias=0.65
; FSR 4 on RDNA 3 IQ recommendation; mitigates white flashes and artifacting
; from the INT8 emulation path. Source: FSR4 Compatibility wiki, "Image Quality".
FsrNonLinearColorSpace=true
[Spoofing]
; FH5 wiki recommendation. The auto default on AMD spoofs to NVIDIA, which
; OptiScaler 0.9 deliberately drops for FH5; pinning false here makes the
; behavior version-proof.
Dxgi=false
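For context, the companion env vars named at the top of this ini are declared host-side through steam-config-nix. A minimal sketch, assuming steam-config-nix's `programs.steam.config.apps.<appid>.launchOptions.env` schema and using FH5's public Steam appId for illustration (the exact attribute names and option set in this repo may differ):

```nix
{
  # Hypothetical fragment of hosts/yarn/default.nix — not copied from the repo.
  programs.steam.config.apps."1551360" = {
    # Forza Horizon 5
    launchOptions.env = {
      PROTON_FSR4_UPGRADE = "1"; # enable the FSR 4 DLL swap in GE-Proton
      DXIL_SPIRV_CONFIG = "wmma_rdna3_workaround"; # RDNA 3 INT8 visuals fix
    };
  };
}
```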


@@ -2,6 +2,7 @@
inputs,
pkgs,
service_configs,
site_config,
lib ? inputs.nixpkgs-stable.lib,
...
}:
@@ -195,7 +196,7 @@ lib.extend (
assert (subdomain != null) != (domain != null);
{ config, ... }:
let
-vhostDomain = if domain != null then domain else "${subdomain}.${service_configs.https.domain}";
vhostDomain = if domain != null then domain else "${subdomain}.${site_config.domain}";
upstream =
if vpn then
"${config.vpnNamespaces.wg.namespaceAddress}:${builtins.toString port}"


@@ -75,4 +75,82 @@ final: prev: {
'';
meta.mainProgram = "igpu-exporter";
};
mc-monitor = prev.buildGoModule rec {
pname = "mc-monitor";
version = "0.16.1";
src = prev.fetchFromGitHub {
owner = "itzg";
repo = "mc-monitor";
rev = version;
hash = "sha256-/94+Z9FTFOzQHynHiJuaGFiidkOxmM0g/FIpHn+xvJM=";
};
vendorHash = "sha256-qq7rIpvGRi3AMnBbi8uAhiPcfSF4McIuqozdtxB5CeQ=";
# upstream tests probe live Minecraft servers
doCheck = false;
meta.mainProgram = "mc-monitor";
};
# OptiScaler: universal upscaler/frame-gen middleware. Bridges DLSS, XeSS,
# and FSR backends so games shipping older upscalers can be retargeted at,
# e.g., FSR 4 (RDNA 4 native, RDNA 3 INT8 via Mesa+Proton).
#
# Bundled binaries are unaudited Windows DLLs intended to be loaded under
# Wine/Proton. The release tarball ships its own license tree which we
# carry through verbatim under $out/Licenses for compliance.
#
# Bumping: drop the new asset URL + recompute hash. Verify $out still
# contains OptiScaler.dll and OptiScaler.ini (consumers reference both by
# name).
optiscaler = prev.stdenvNoCC.mkDerivation rec {
pname = "optiscaler";
version = "0.9.1";
src = prev.fetchurl {
url = "https://github.com/optiscaler/OptiScaler/releases/download/v${version}/Optiscaler_${version}-final.20260427._DSB.7z";
hash = "sha256-VtGhjjoy2XjAE0hE6AO6jPBBNCJs/NuCg/aNGAg2+rA=";
};
nativeBuildInputs = [ prev.p7zip ];
unpackPhase = ''
runHook preUnpack
mkdir -p source
7z x -bd -o"source" $src >/dev/null
runHook postUnpack
'';
sourceRoot = "source";
dontConfigure = true;
dontBuild = true;
# Windows DLLs; nothing to patchelf or strip.
dontFixup = true;
installPhase = ''
runHook preInstall
mkdir -p $out
cp -r ./* $out/
# The installer scripts and "extract me" README aren't useful at runtime.
rm -f $out/setup_linux.sh $out/setup_windows.bat
rm -f "$out/!! README_EXTRACT ALL FILES TO GAME FOLDER !!.txt"
runHook postInstall
'';
meta = with prev.lib; {
description = "Upscaler/frame-gen middleware bridging DLSS, XeSS, and FSR backends";
homepage = "https://github.com/optiscaler/OptiScaler";
# OptiScaler proper is GPL-3.0-or-later. The bundled FidelityFX SDK is
# MIT-0; XeSS ships its own Intel SLA. License texts are preserved at
# $out/Licenses/.
license = with licenses; [
gpl3Plus
mit0
unfreeRedistributable
];
sourceProvenance = [ sourceTypes.binaryNativeCode ];
platforms = platforms.linux;
maintainers = [ ];
};
};
}


@@ -2,10 +2,15 @@
config,
lib,
pkgs,
site_config,
username,
...
}:
{
# Shared timezone. Plain priority so it wins against srvos's mkDefault "UTC";
# mreow overrides via lib.mkForce when travelling.
time.timeZone = site_config.timezone;
# Common Nix daemon settings. Host-specific overrides (binary cache substituters,
# gc retention) live in the host's default.nix.
nix = {
@@ -53,8 +58,6 @@
];
};
-services.kmscon.enable = true;
environment.systemPackages = with pkgs; [
doas-sudo-shim
];


@@ -0,0 +1,91 @@
{
pkgs,
inputs,
...
}:
let
# age-plugin-tpm 1.0+ defaults to the new age1tag1… (p256tag) recipient
# encoding and refuses to encrypt to legacy age1tpm1… recipients. rage's
# plugin dispatch maps recipient prefixes to binaries (`age1tag1…` →
# `age-plugin-tag`), but nixpkgs only ships `age-plugin-tpm`. Provide a
# symlink so both prefixes resolve to the same binary.
age-plugin-tpm-with-tag = pkgs.symlinkJoin {
name = "age-plugin-tpm-with-tag";
paths = [ pkgs.age-plugin-tpm ];
postBuild = ''
ln -s age-plugin-tpm $out/bin/age-plugin-tag
'';
};
# Wrap rage so the plugin (under both names) is on PATH at activation time.
rageWithTpm = pkgs.writeShellScriptBin "rage" ''
export PATH="${age-plugin-tpm-with-tag}/bin:$PATH"
exec ${pkgs.rage}/bin/rage "$@"
'';
in
{
imports = [
inputs.agenix.nixosModules.default
];
# Expose the plugin + agenix CLI for interactive edits (`agenix -e …`).
environment.systemPackages = [
inputs.agenix.packages.${pkgs.system}.default
pkgs.age-plugin-tpm
];
age.ageBin = "${rageWithTpm}/bin/rage";
# Primary identity: TPM-sealed key, generated by scripts/bootstrap-desktop-tpm.sh.
# Fallback identity: admin SSH key. age tries paths in order, so if the TPM
# is wiped or the board is replaced the SSH key keeps secrets accessible until
# the TPM is re-bootstrapped. Both are encrypted recipients on every .age file.
age.identityPaths = [
"/var/lib/agenix/tpm-identity"
"/home/primary/.ssh/id_ed25519"
];
# Ensure the identity directory exists before agenix activation so a fresh
# bootstrap doesn't race the directory creation.
systemd.tmpfiles.rules = [
"d /var/lib/agenix 0700 root root -"
];
age.secrets = {
# Secureboot PKI bundle (db/KEK/PK keys + certs) consumed by lanzaboote
# via desktop-lanzaboote-agenix.nix at activation time.
secureboot-tar = {
file = ../secrets/desktop/secureboot.tar.age;
mode = "0400";
owner = "root";
group = "root";
};
# netrc for the private nix binary cache.
nix-cache-netrc = {
file = ../secrets/desktop/nix-cache-netrc.age;
mode = "0400";
owner = "root";
group = "root";
};
# yescrypt hash for the primary user.
password-hash = {
file = ../secrets/desktop/password-hash.age;
mode = "0400";
owner = "root";
group = "root";
};
# Master password for oo7-daemon's 'Login' keyring; the unit consumes it
# via systemd's ImportCredential machinery (see desktop-oo7-daemon.nix).
# Owner is `primary` so the user-scope systemd unit can LoadCredential it.
oo7-keyring-password = {
file = ../secrets/desktop/oo7-keyring-password.age;
mode = "0400";
owner = "primary";
group = "users";
};
};
}
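The dual-recipient scheme described above (TPM-sealed key plus admin SSH fallback) only works if every `.age` file is actually encrypted to both recipients. A sketch of the matching agenix `secrets.nix`, with placeholder recipient strings (the real keys are not shown in this diff):

```nix
let
  # Placeholders — the real recipients live in the repo's secrets.nix.
  tpm = "age1tag1..."; # TPM-sealed recipient (age-plugin-tpm, p256tag encoding)
  admin = "ssh-ed25519 AAAA..."; # fallback admin SSH key
in
{
  "desktop/secureboot.tar.age".publicKeys = [ tpm admin ];
  "desktop/nix-cache-netrc.age".publicKeys = [ tpm admin ];
  "desktop/password-hash.age".publicKeys = [ tpm admin ];
  "desktop/oo7-keyring-password.age".publicKeys = [ tpm admin ];
}
```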


@@ -5,6 +5,7 @@
lib,
username,
inputs,
site_config,
niri-package,
...
}:
@@ -16,17 +17,20 @@
./desktop-vm.nix
./desktop-steam.nix
./desktop-networkmanager.nix
./desktop-age-secrets.nix
./desktop-lanzaboote-agenix.nix
./desktop-oo7-daemon.nix
inputs.disko.nixosModules.disko
-inputs.lanzaboote.nixosModules.lanzaboote
inputs.nixos-hardware.nixosModules.common-cpu-amd-pstate
inputs.nixos-hardware.nixosModules.common-cpu-amd-zenpower
inputs.nixos-hardware.nixosModules.common-pc-ssd
];
-# allow overclocking (I actually underclock but lol)
-hardware.amdgpu.overdrive.ppfeaturemask = "0xFFFFFFFF";
# expose amdgpu overdrive sysfs (pp_od_clk_voltage, fan curves, ...) for LACT.
# nixpkgs default ppfeaturemask (0xfffd7fff) already has the overdrive bit set.
hardware.amdgpu.overdrive.enable = true;
# Add niri to display manager session packages
services.displayManager.sessionPackages = [ niri-package ];
@@ -49,32 +53,41 @@
mkdir -p /nix/var/nix/profiles/per-user/root/channels
'';
-# extract all my secureboot keys
-# TODO! proper secrets management
-"secureboot-keys".text = ''
-#!/usr/bin/env sh
-rm -fr ${config.boot.lanzaboote.pkiBundle} || true
-mkdir -p ${config.boot.lanzaboote.pkiBundle}
-${lib.getExe pkgs.gnutar} xf ${../secrets/desktop/secureboot.tar} -C ${config.boot.lanzaboote.pkiBundle}
-chown -R root:wheel ${config.boot.lanzaboote.pkiBundle}
-chmod -R 500 ${config.boot.lanzaboote.pkiBundle}
-'';
};
swapDevices = [ ];
# Desktop-specific Nix cache — muffin serves it, desktops consume.
-# Base nix settings (optimise, gc, experimental-features) come from common-nix.nix.
# Base nix settings (optimise, gc, experimental-features) come from common.nix.
nix.settings = {
-substituters = [ "https://nix-cache.sigkill.computer" ];
substituters = [ site_config.binary_cache.url ];
trusted-public-keys = [
-"nix-cache.sigkill.computer-1:ONtQC9gUjL+2yNgMWB68NudPySXhyzJ7I3ra56/NPgk="
site_config.binary_cache.public_key
];
-netrc-file = "${../secrets/desktop/nix-cache-netrc}";
netrc-file = config.age.secrets.nix-cache-netrc.path;
};
# cachyos kernel overlay
-nixpkgs.overlays = [ inputs.nix-cachyos-kernel.overlays.default ];
nixpkgs.overlays = [
inputs.nix-cachyos-kernel.overlays.default
# bluez 5.86 reversed the profile-connect order (regression of cdcd845f87ee).
# Dual-role devices that advertise both AudioSource (UUID 0x110a) and
# AudioSink (UUID 0x110b) -- e.g. any A2DP headphone with an HFP mic, like
# the Bose QC 45 -- now negotiate audio-gateway first and never expose
# a2dp-sink, with bluetoothd reporting:
# src/service.c:btd_service_connect() a2dp-sink profile connect failed
# for <addr>: Device or resource busy
# Cherry-picks bluez/bluez@066a164 "a2dp: connect source profile after
# sink" (slated for 5.87). FIX: drop overlay when nixpkgs ships >= 5.87.
# see https://github.com/bluez/bluez/issues/1922
(_final: prev: {
bluez = prev.bluez.overrideAttrs (old: {
patches = (old.patches or [ ]) ++ [
../patches/bluez/0001-a2dp-connect-source-after-sink.patch
];
});
})
];
# kernel options
boot = {
@@ -181,9 +194,14 @@
DRM_HISI_HIBMC = lib.mkForce no;
DRM_APPLETBDRM = lib.mkForce no;
-# intel gpu
-DRM_I915 = lib.mkForce no;
-DRM_XE = lib.mkForce no;
# legacy AMD IP blocks. hosts are Navi 32 RDNA3 dGPU (7800 XT, yarn,
# 2023, gfx1101, DCN 3.2) and Krackan Point RDNA 3.5 iGPU (mreow,
# 2024, gfx1150, DCN 3.5). everything below pre-dates those by a
# decade. upstream only exposes per-generation toggles for SI and
# CIK — no switch for VI/Polaris/Vega/Navi1x, those stay in amdgpu.
DRM_AMDGPU_SI = lib.mkForce no; # Southern Islands / GCN 1 (2012): HD 7950/7970, R9 280/280X, R7 260X
DRM_AMDGPU_CIK = lib.mkForce no; # Sea Islands / GCN 2 (2013): R9 290/290X/390, Kaveri APUs (A10-7850K), Steam Machine Bonaire
DRM_AMD_SECURE_DISPLAY = lib.mkForce no; # HDCP region-CRC debugfs helper, needs custom DMCU firmware
# early-boot framebuffer chain: drop every alternative to amdgpu so
# the console never transitions simpledrm -> dummy -> amdgpu (visible
@@ -285,6 +303,486 @@
XZ_DEC_ARM64 = lib.mkForce no;
XZ_DEC_SPARC = lib.mkForce no;
XZ_DEC_RISCV = lib.mkForce no;
# ==== no hardware for any of these on either host ====
# laptop vendor platform drivers (only FRAMEWORK_LAPTOP is used)
ACER_WMI = lib.mkForce no;
ACER_WIRELESS = lib.mkForce no;
ACERHDF = lib.mkForce no;
APPLE_GMUX = lib.mkForce no;
ASUS_LAPTOP = lib.mkForce no;
ASUS_WMI = lib.mkForce no;
ASUS_NB_WMI = lib.mkForce no;
ASUS_ARMOURY = lib.mkForce no;
ASUS_TF103C_DOCK = lib.mkForce no;
ASUS_WIRELESS = lib.mkForce no;
COMPAL_LAPTOP = lib.mkForce no;
DELL_LAPTOP = lib.mkForce no;
DELL_RBTN = lib.mkForce no;
DELL_PC = lib.mkForce no;
DELL_SMBIOS = lib.mkForce no;
DELL_SMO8800 = lib.mkForce no;
DELL_UART_BACKLIGHT = lib.mkForce no;
DELL_WMI = lib.mkForce no;
DELL_WMI_AIO = lib.mkForce no;
DELL_WMI_DDV = lib.mkForce no;
DELL_WMI_DESCRIPTOR = lib.mkForce no;
DELL_WMI_LED = lib.mkForce no;
DELL_WMI_SYSMAN = lib.mkForce no;
EEEPC_LAPTOP = lib.mkForce no;
EEEPC_WMI = lib.mkForce no;
FUJITSU_LAPTOP = lib.mkForce no;
FUJITSU_ES = lib.mkForce no;
FUJITSU_TABLET = lib.mkForce no;
HUAWEI_WMI = lib.mkForce no;
IBM_ASM = lib.mkForce no;
IBM_RTL = lib.mkForce no;
IDEAPAD_LAPTOP = lib.mkForce no;
LG_LAPTOP = lib.mkForce no;
MSI_LAPTOP = lib.mkForce no;
MSI_WMI = lib.mkForce no;
MSI_EC = lib.mkForce no;
PANASONIC_LAPTOP = lib.mkForce no;
SONY_LAPTOP = lib.mkForce no;
SAMSUNG_LAPTOP = lib.mkForce no;
TOPSTAR_LAPTOP = lib.mkForce no;
THINKPAD_ACPI = lib.mkForce no;
THINKPAD_LMI = lib.mkForce no;
LENOVO_SE10_WDT = lib.mkForce no;
LENOVO_SE30_WDT = lib.mkForce no;
LENOVO_WMI_HOTKEY_UTILITIES = lib.mkForce no;
LENOVO_WMI_CAMERA = lib.mkForce no;
LENOVO_YMC = lib.mkForce no;
LENOVO_WMI_CAPDATA = lib.mkForce no;
LENOVO_WMI_EVENTS = lib.mkForce no;
LENOVO_WMI_HELPERS = lib.mkForce no;
LENOVO_WMI_GAMEZONE = lib.mkForce no;
LENOVO_WMI_TUNING = lib.mkForce no;
YOGABOOK = lib.mkForce no;
YT2_1380 = lib.mkForce no;
XIAOMI_WMI = lib.mkForce no;
BARCO_P50_GPIO = lib.mkForce no;
PC_ENGINES_APU = lib.mkForce no;
SILICOM_PLATFORM = lib.mkForce no;
SIEMENS_SIMATIC_IPC_WDT = lib.mkForce no;
SYSTEM76_ACPI = lib.mkForce no;
INSPUR_PLATFORM_PROFILE = lib.mkForce no;
NVIDIA_WMI_EC_BACKLIGHT = lib.mkForce no;
# legacy filesystems (hosts use vfat/f2fs/tmpfs/fuse; exfat/ntfs3 kept for externals)
JFS_FS = lib.mkForce no;
GFS2_FS = lib.mkForce no;
OCFS2_FS = lib.mkForce no;
NILFS2_FS = lib.mkForce no;
AFFS_FS = lib.mkForce no;
HFS_FS = lib.mkForce no;
HFSPLUS_FS = lib.mkForce no;
BEFS_FS = lib.mkForce no;
JFFS2_FS = lib.mkForce no;
UBIFS_FS = lib.mkForce no;
MINIX_FS = lib.mkForce no;
OMFS_FS = lib.mkForce no;
ROMFS_FS = lib.mkForce no;
UFS_FS = lib.mkForce no;
EROFS_FS = lib.mkForce no;
ORANGEFS_FS = lib.mkForce no;
CODA_FS = lib.mkForce no;
AFS_FS = lib.mkForce no;
CEPH_FS = lib.mkForce no;
ZONEFS_FS = lib.mkForce no;
BCACHE = lib.mkForce no;
BCACHEFS_FS = lib.mkForce no;
ECRYPT_FS = lib.mkForce no;
NFSD = lib.mkForce no;
# legacy partition tables (only GPT+MBR in use)
AIX_PARTITION = lib.mkForce no;
MAC_PARTITION = lib.mkForce no;
LDM_PARTITION = lib.mkForce no;
KARMA_PARTITION = lib.mkForce no;
MINIX_SUBPARTITION = lib.mkForce no;
SOLARIS_X86_PARTITION = lib.mkForce no;
BSD_DISKLABEL = lib.mkForce no;
UNIXWARE_DISKLABEL = lib.mkForce no;
SYSV68_PARTITION = lib.mkForce no;
ULTRIX_PARTITION = lib.mkForce no;
OSF_PARTITION = lib.mkForce no;
SGI_PARTITION = lib.mkForce no;
SUN_PARTITION = lib.mkForce no;
ATARI_PARTITION = lib.mkForce no;
AMIGA_PARTITION = lib.mkForce no;
ACORN_PARTITION = lib.mkForce no;
# legacy net protocols (nothing uses SCTP/RDS/TIPC/SMC or GRE tunnels)
IP_SCTP = lib.mkForce no;
RDS = lib.mkForce no;
TIPC = lib.mkForce no;
SMC = lib.mkForce no;
NET_IPIP = lib.mkForce no;
NET_IPGRE = lib.mkForce no;
NET_IPGRE_DEMUX = lib.mkForce no;
NET_IPVTI = lib.mkForce no;
# legacy PCI sound cards (kept: SND_HDA_* for AMD HDA, SND_SOC_SOF_AMD for ACP)
SND_ALI5451 = lib.mkForce no;
SND_ATIIXP = lib.mkForce no;
SND_ATIIXP_MODEM = lib.mkForce no;
SND_AU8810 = lib.mkForce no;
SND_AU8820 = lib.mkForce no;
SND_AU8830 = lib.mkForce no;
SND_AW2 = lib.mkForce no;
SND_AZT3328 = lib.mkForce no;
SND_BT87X = lib.mkForce no;
SND_CA0106 = lib.mkForce no;
SND_CMIPCI = lib.mkForce no;
SND_OXYGEN = lib.mkForce no;
SND_CS46XX = lib.mkForce no;
SND_CTXFI = lib.mkForce no;
SND_DARLA20 = lib.mkForce no;
SND_GINA20 = lib.mkForce no;
SND_LAYLA20 = lib.mkForce no;
SND_DARLA24 = lib.mkForce no;
SND_GINA24 = lib.mkForce no;
SND_LAYLA24 = lib.mkForce no;
SND_MONA = lib.mkForce no;
SND_MIA = lib.mkForce no;
SND_ECHO3G = lib.mkForce no;
SND_INDIGO = lib.mkForce no;
SND_INDIGOIO = lib.mkForce no;
SND_INDIGODJ = lib.mkForce no;
SND_INDIGOIOX = lib.mkForce no;
SND_INDIGODJX = lib.mkForce no;
SND_EMU10K1 = lib.mkForce no;
SND_EMU10K1X = lib.mkForce no;
SND_ENS1370 = lib.mkForce no;
SND_ENS1371 = lib.mkForce no;
SND_ES1938 = lib.mkForce no;
SND_ES1968 = lib.mkForce no;
SND_FM801 = lib.mkForce no;
SND_HDSP = lib.mkForce no;
SND_HDSPM = lib.mkForce no;
SND_ICE1712 = lib.mkForce no;
SND_ICE1724 = lib.mkForce no;
SND_INTEL8X0 = lib.mkForce no;
SND_INTEL8X0M = lib.mkForce no;
SND_KORG1212 = lib.mkForce no;
SND_LOLA = lib.mkForce no;
SND_LX6464ES = lib.mkForce no;
SND_MAESTRO3 = lib.mkForce no;
SND_MIXART = lib.mkForce no;
SND_MPU401 = lib.mkForce no;
SND_MTS64 = lib.mkForce no;
SND_NM256 = lib.mkForce no;
SND_PCXHR = lib.mkForce no;
SND_PORTMAN2X4 = lib.mkForce no;
SND_RIPTIDE = lib.mkForce no;
SND_RME32 = lib.mkForce no;
SND_RME96 = lib.mkForce no;
SND_RME9652 = lib.mkForce no;
SND_SE6X = lib.mkForce no;
SND_TRIDENT = lib.mkForce no;
SND_VIA82XX = lib.mkForce no;
SND_VIRTUOSO = lib.mkForce no;
SND_VX222 = lib.mkForce no;
SND_YMFPCI = lib.mkForce no;
# legacy HDA codecs (kept: REALTEK for ALC269 on Framework + HDMI for amdhdmi)
SND_HDA_CODEC_ANALOG = lib.mkForce no;
SND_HDA_CODEC_SIGMATEL = lib.mkForce no;
SND_HDA_CODEC_VIA = lib.mkForce no;
SND_HDA_CODEC_CONEXANT = lib.mkForce no;
SND_HDA_CODEC_CA0110 = lib.mkForce no;
SND_HDA_CODEC_CA0132 = lib.mkForce no;
SND_HDA_CODEC_SI3054 = lib.mkForce no;
SND_HDA_CODEC_CIRRUS = lib.mkForce no;
SND_HDA_CODEC_CS420X = lib.mkForce no;
SND_HDA_CODEC_CS421X = lib.mkForce no;
SND_HDA_CODEC_CS8409 = lib.mkForce no;
# OSS compat (deprecated)
SOUND_OSS_CORE = lib.mkForce no;
# legacy USB HCDs (Zen APUs only have xHCI)
USB_OHCI_HCD = lib.mkForce no;
USB_UHCI_HCD = lib.mkForce no;
USB_C67X00_HCD = lib.mkForce no;
USB_OXU210HP_HCD = lib.mkForce no;
USB_ISP116X_HCD = lib.mkForce no;
USB_ISP1760 = lib.mkForce no;
USB_MAX3421_HCD = lib.mkForce no;
USB_SL811_HCD = lib.mkForce no;
USB_R8A66597 = lib.mkForce no;
USB_XEN_HCD = lib.mkForce no;
# USB gadget + exotic device drivers
USB_GADGET = lib.mkForce no;
USB_MICROTEK = lib.mkForce no;
USB_USS720 = lib.mkForce no;
USB_EMI26 = lib.mkForce no;
USB_EMI62 = lib.mkForce no;
USB_ADUTUX = lib.mkForce no;
USB_SEVSEG = lib.mkForce no;
USB_LEGOTOWER = lib.mkForce no;
USB_CYPRESS_CY7C63 = lib.mkForce no;
USB_CYTHERM = lib.mkForce no;
USB_IDMOUSE = lib.mkForce no;
USB_APPLEDISPLAY = lib.mkForce no;
USB_TRANCEVIBRATOR = lib.mkForce no;
USB_CHAOSKEY = lib.mkForce no;
USB_TEST = lib.mkForce no;
# USB mass-storage sub-drivers for legacy flash/camera readers
USB_STORAGE_REALTEK = lib.mkForce no;
USB_STORAGE_DATAFAB = lib.mkForce no;
USB_STORAGE_FREECOM = lib.mkForce no;
USB_STORAGE_ISD200 = lib.mkForce no;
USB_STORAGE_USBAT = lib.mkForce no;
USB_STORAGE_SDDR09 = lib.mkForce no;
USB_STORAGE_SDDR55 = lib.mkForce no;
USB_STORAGE_JUMPSHOT = lib.mkForce no;
USB_STORAGE_ALAUDA = lib.mkForce no;
USB_STORAGE_ONETOUCH = lib.mkForce no;
USB_STORAGE_KARMA = lib.mkForce no;
USB_STORAGE_CYPRESS_ATACB = lib.mkForce no;
USB_STORAGE_ENE_UB6250 = lib.mkForce no;
# wlan vendors (kept: MEDIATEK/INTEL/REALTEK/BROADCOM for mreow+yarn)
WLAN_VENDOR_ADMTEK = lib.mkForce no;
WLAN_VENDOR_ATMEL = lib.mkForce no;
WLAN_VENDOR_CISCO = lib.mkForce no;
WLAN_VENDOR_INTERSIL = lib.mkForce no;
WLAN_VENDOR_MARVELL = lib.mkForce no;
WLAN_VENDOR_MICROCHIP = lib.mkForce no;
WLAN_VENDOR_PURELIFI = lib.mkForce no;
WLAN_VENDOR_QUANTENNA = lib.mkForce no;
WLAN_VENDOR_RALINK = lib.mkForce no;
WLAN_VENDOR_RSI = lib.mkForce no;
WLAN_VENDOR_SILABS = lib.mkForce no;
WLAN_VENDOR_ST = lib.mkForce no;
WLAN_VENDOR_TI = lib.mkForce no;
WLAN_VENDOR_ZYDAS = lib.mkForce no;
# ethernet vendors (kept: AMD/INTEL/REALTEK/AQUANTIA/ATHEROS)
NET_VENDOR_3COM = lib.mkForce no;
NET_VENDOR_ADAPTEC = lib.mkForce no;
NET_VENDOR_AGERE = lib.mkForce no;
NET_VENDOR_ALACRITECH = lib.mkForce no;
NET_VENDOR_ALTEON = lib.mkForce no;
NET_VENDOR_AMAZON = lib.mkForce no;
NET_VENDOR_ARC = lib.mkForce no;
NET_VENDOR_BROADCOM = lib.mkForce no;
NET_VENDOR_BROCADE = lib.mkForce no;
NET_VENDOR_CADENCE = lib.mkForce no;
NET_VENDOR_CAVIUM = lib.mkForce no;
NET_VENDOR_CHELSIO = lib.mkForce no;
NET_VENDOR_CISCO = lib.mkForce no;
NET_VENDOR_CORTINA = lib.mkForce no;
NET_VENDOR_DAVICOM = lib.mkForce no;
NET_VENDOR_DEC = lib.mkForce no;
NET_VENDOR_DLINK = lib.mkForce no;
NET_VENDOR_EMULEX = lib.mkForce no;
NET_VENDOR_ENGLEDER = lib.mkForce no;
NET_VENDOR_EZCHIP = lib.mkForce no;
NET_VENDOR_FUJITSU = lib.mkForce no;
NET_VENDOR_FUNGIBLE = lib.mkForce no;
NET_VENDOR_GOOGLE = lib.mkForce no;
NET_VENDOR_HISILICON = lib.mkForce no;
NET_VENDOR_HUAWEI = lib.mkForce no;
NET_VENDOR_I825XX = lib.mkForce no;
NET_VENDOR_ADI = lib.mkForce no;
NET_VENDOR_LITEX = lib.mkForce no;
NET_VENDOR_MARVELL = lib.mkForce no;
NET_VENDOR_META = lib.mkForce no;
NET_VENDOR_MICREL = lib.mkForce no;
NET_VENDOR_MICROCHIP = lib.mkForce no;
NET_VENDOR_MICROSEMI = lib.mkForce no;
NET_VENDOR_MICROSOFT = lib.mkForce no;
NET_VENDOR_MUCSE = lib.mkForce no;
NET_VENDOR_MYRI = lib.mkForce no;
NET_VENDOR_NI = lib.mkForce no;
NET_VENDOR_NATSEMI = lib.mkForce no;
NET_VENDOR_NETRONOME = lib.mkForce no;
NET_VENDOR_8390 = lib.mkForce no;
NET_VENDOR_NVIDIA = lib.mkForce no;
NET_VENDOR_OKI = lib.mkForce no;
NET_VENDOR_PACKET_ENGINES = lib.mkForce no;
NET_VENDOR_PENSANDO = lib.mkForce no;
NET_VENDOR_QLOGIC = lib.mkForce no;
NET_VENDOR_QUALCOMM = lib.mkForce no;
NET_VENDOR_RDC = lib.mkForce no;
NET_VENDOR_RENESAS = lib.mkForce no;
NET_VENDOR_ROCKER = lib.mkForce no;
NET_VENDOR_SAMSUNG = lib.mkForce no;
NET_VENDOR_SEEQ = lib.mkForce no;
NET_VENDOR_SILAN = lib.mkForce no;
NET_VENDOR_SIS = lib.mkForce no;
NET_VENDOR_SOLARFLARE = lib.mkForce no;
NET_VENDOR_SMSC = lib.mkForce no;
NET_VENDOR_SOCIONEXT = lib.mkForce no;
NET_VENDOR_STMICRO = lib.mkForce no;
NET_VENDOR_SUN = lib.mkForce no;
NET_VENDOR_SYNOPSYS = lib.mkForce no;
NET_VENDOR_TEHUTI = lib.mkForce no;
NET_VENDOR_TI = lib.mkForce no;
NET_VENDOR_VERTEXCOM = lib.mkForce no;
NET_VENDOR_VIA = lib.mkForce no;
NET_VENDOR_WANGXUN = lib.mkForce no;
NET_VENDOR_WIZNET = lib.mkForce no;
NET_VENDOR_XILINX = lib.mkForce no;
NET_VENDOR_XIRCOM = lib.mkForce no;
# watchdogs (kept: SP5100_TCO for AMD chipset, WDAT_WDT for ACPI)
ACQUIRE_WDT = lib.mkForce no;
ADVANTECH_WDT = lib.mkForce no;
ADVANTECH_EC_WDT = lib.mkForce no;
ALIM1535_WDT = lib.mkForce no;
ALIM7101_WDT = lib.mkForce no;
CGBC_WDT = lib.mkForce no;
EBC_C384_WDT = lib.mkForce no;
EXAR_WDT = lib.mkForce no;
F71808E_WDT = lib.mkForce no;
EUROTECH_WDT = lib.mkForce no;
IB700_WDT = lib.mkForce no;
WAFER_WDT = lib.mkForce no;
I6300ESB_WDT = lib.mkForce no;
IE6XX_WDT = lib.mkForce no;
ITCO_WDT = lib.mkForce no;
IT8712F_WDT = lib.mkForce no;
IT87_WDT = lib.mkForce no;
HP_WATCHDOG = lib.mkForce no;
HPWDT_NMI_DECODE = lib.mkForce no;
KEMPLD_WDT = lib.mkForce no;
MLX_WDT = lib.mkForce no;
NI903X_WDT = lib.mkForce no;
NIC7018_WDT = lib.mkForce no;
SMSC37B787_WDT = lib.mkForce no;
TQMX86_WDT = lib.mkForce no;
VIA_WDT = lib.mkForce no;
W83627HF_WDT = lib.mkForce no;
W83877F_WDT = lib.mkForce no;
W83977F_WDT = lib.mkForce no;
MACHZ_WDT = lib.mkForce no;
SBC_EPX_C3_WATCHDOG = lib.mkForce no;
MEN_A21_WDT = lib.mkForce no;
DW_WATCHDOG = lib.mkForce no;
SOFT_WATCHDOG = lib.mkForce no;
XILINX_WATCHDOG = lib.mkForce no;
# misc dead weight
BLK_DEV_DRBD = lib.mkForce no;
GREYBUS = lib.mkForce no;
SOUNDWIRE_QCOM = lib.mkForce no;
SOUNDWIRE_INTEL = lib.mkForce no;
MEDIA_RADIO_SUPPORT = lib.mkForce no;
# net queue disciplines not used on desktop (kept: htb/prio/fifo/fq/fq_codel/cake/bpf/ingress/netem/tbf/mqprio for basic shaping + testing)
NET_SCH_CBS = lib.mkForce no;
NET_SCH_CHOKE = lib.mkForce no;
NET_SCH_CODEL = lib.mkForce no;
NET_SCH_DRR = lib.mkForce no;
NET_SCH_DUALPI2 = lib.mkForce no;
NET_SCH_ETF = lib.mkForce no;
NET_SCH_ETS = lib.mkForce no;
NET_SCH_FQ_PIE = lib.mkForce no;
NET_SCH_GRED = lib.mkForce no;
NET_SCH_HFSC = lib.mkForce no;
NET_SCH_HHF = lib.mkForce no;
NET_SCH_MULTIQ = lib.mkForce no;
NET_SCH_PIE = lib.mkForce no;
NET_SCH_PLUG = lib.mkForce no;
NET_SCH_QFQ = lib.mkForce no;
NET_SCH_RED = lib.mkForce no;
NET_SCH_SFB = lib.mkForce no;
NET_SCH_SFQ = lib.mkForce no;
NET_SCH_SKBPRIO = lib.mkForce no;
NET_SCH_TAPRIO = lib.mkForce no;
NET_SCH_TEQL = lib.mkForce no;
# battery charger PMIC drivers — all mobile/embedded SoCs, none of these
# exist on x86 laptops/desktops (which use ACPI battery + USB-PD via ucsi).
# CROS_* are Chromebook-specific; Framework has CrOS EC but not CrOS charging.
CHARGER_88PM860X = lib.mkForce no;
CHARGER_ADP5061 = lib.mkForce no;
CHARGER_AXP20X = lib.mkForce no;
CHARGER_BD71828 = lib.mkForce no;
CHARGER_BD99954 = lib.mkForce no;
CHARGER_BQ2415X = lib.mkForce no;
CHARGER_BQ24190 = lib.mkForce no;
CHARGER_BQ24257 = lib.mkForce no;
CHARGER_BQ24735 = lib.mkForce no;
CHARGER_BQ2515X = lib.mkForce no;
CHARGER_BQ256XX = lib.mkForce no;
CHARGER_BQ257XX = lib.mkForce no;
CHARGER_BQ25890 = lib.mkForce no;
CHARGER_BQ25980 = lib.mkForce no;
CHARGER_CROS_CONTROL = lib.mkForce no;
CHARGER_CROS_PCHG = lib.mkForce no;
CHARGER_CROS_USBPD = lib.mkForce no;
CHARGER_DA9150 = lib.mkForce no;
CHARGER_DETECTOR_MAX14656 = lib.mkForce no;
CHARGER_GPIO = lib.mkForce no;
CHARGER_ISP1704 = lib.mkForce no;
CHARGER_LP8727 = lib.mkForce no;
CHARGER_LP8788 = lib.mkForce no;
CHARGER_LT3651 = lib.mkForce no;
CHARGER_LTC4162L = lib.mkForce no;
CHARGER_MANAGER = lib.mkForce no;
CHARGER_MAX14577 = lib.mkForce no;
CHARGER_MAX77650 = lib.mkForce no;
CHARGER_MAX77693 = lib.mkForce no;
CHARGER_MAX77705 = lib.mkForce no;
CHARGER_MAX77976 = lib.mkForce no;
CHARGER_MAX8903 = lib.mkForce no;
CHARGER_MAX8971 = lib.mkForce no;
CHARGER_MAX8997 = lib.mkForce no;
CHARGER_MAX8998 = lib.mkForce no;
CHARGER_MP2629 = lib.mkForce no;
CHARGER_MT6360 = lib.mkForce no;
CHARGER_MT6370 = lib.mkForce no;
CHARGER_PF1550 = lib.mkForce no;
CHARGER_RK817 = lib.mkForce no;
CHARGER_RT5033 = lib.mkForce no;
CHARGER_RT9455 = lib.mkForce no;
CHARGER_RT9467 = lib.mkForce no;
CHARGER_RT9471 = lib.mkForce no;
CHARGER_RT9756 = lib.mkForce no;
CHARGER_SBS = lib.mkForce no;
CHARGER_SMB347 = lib.mkForce no;
CHARGER_TPS65090 = lib.mkForce no;
CHARGER_TPS65217 = lib.mkForce no;
CHARGER_TWL4030 = lib.mkForce no;
CHARGER_TWL6030 = lib.mkForce no;
CHARGER_UCS1002 = lib.mkForce no;
CHARGER_WILCO = lib.mkForce no;
# enterprise storage stack (kept: DM_CRYPT for LUKS, DM_SNAPSHOT/INTEGRITY/VERITY, MD_RAID0/1/10/456 in case)
DM_MULTIPATH = lib.mkForce no;
DM_MULTIPATH_QL = lib.mkForce no;
DM_MULTIPATH_ST = lib.mkForce no;
DM_MULTIPATH_HST = lib.mkForce no;
DM_MULTIPATH_IOA = lib.mkForce no;
DM_VDO = lib.mkForce no;
DM_PCACHE = lib.mkForce no;
DM_ZONED = lib.mkForce no;
DM_LOG_USERSPACE = lib.mkForce no;
DM_EBS = lib.mkForce no;
DM_ERA = lib.mkForce no;
DM_DUST = lib.mkForce no;
DM_DELAY = lib.mkForce no;
DM_FLAKEY = lib.mkForce no;
DM_SWITCH = lib.mkForce no;
DM_LOG_WRITES = lib.mkForce no;
DM_CLONE = lib.mkForce no;
DM_UNSTRIPED = lib.mkForce no;
DM_CACHE = lib.mkForce no;
DM_WRITECACHE = lib.mkForce no;
DM_THIN_PROVISIONING = lib.mkForce no;
MD_CLUSTER = lib.mkForce no;
MD_LINEAR = lib.mkForce no;
SCSI_DH_RDAC = lib.mkForce no;
SCSI_DH_HP_SW = lib.mkForce no;
SCSI_ENCLOSURE = lib.mkForce no;
};
}
];
@@ -337,12 +835,6 @@
"msr" "msr"
"btusb" "btusb"
]; ];
-kernelParams = [
-# 1gb huge pages
-"hugepagesz=1G"
-"hugepages=3"
-];
};
services = {
@@ -381,9 +873,6 @@
};
};
-# EST
-time.timeZone = "America/New_York";
# Select internationalisation properties.
i18n.defaultLocale = "en_US.UTF-8";
@@ -419,8 +908,7 @@
"camera" "camera"
"adbusers" "adbusers"
]; ];
-# TODO! this is really bad :( I should really figure out how to do proper secrets management
-hashedPasswordFile = "${../secrets/desktop/password-hash}";
hashedPasswordFile = config.age.secrets.password-hash.path;
}; };
services.gvfs.enable = true;


@@ -0,0 +1,52 @@
# Jovian-NixOS deck-mode configuration shared by all hosts running Steam
# in gamescope (yarn, patiodeck). Host-specific settings (like
# jovian.devices.steamdeck.enable) stay in the host's default.nix.
{
lib,
username,
inputs,
...
}:
{
imports = [
./desktop-steam-update.nix
inputs.jovian-nixos.nixosModules.default
];
nixpkgs.config.allowUnfreePredicate =
pkg:
builtins.elem (lib.getName pkg) [
"steamdeck-hw-theme"
"steam-jupiter-unwrapped"
"steam"
"steam-original"
"steam-unwrapped"
"steam-run"
# OptiScaler bundles XeSS (Intel SLA) and FidelityFX SDK alongside its own GPL-3.0
# source. The package's meta.license is unfreeRedistributable to reflect the bundle.
"optiscaler"
];
jovian.steam = {
enable = true;
autoStart = true;
desktopSession = "niri";
user = username;
};
# default off: jovian's SteamOS kernel cmdline (Valve's Deck-tuned amdgpu
# params: amd_iommu=off, amdgpu.sched_hw_submission=4, dcdebugmask=0x20000,
# lockup_timeout=5000,...,5000, ttm.pages_min=2097152, audit=0) is tuned for
# Van Gogh / Sephiroth APUs at 1280x800 and corrupts host memory pages on
# RDNA3 desktop GPUs (yarn: Navi 32 1440p, FH5 wine-log showed >1M
# illegal-instruction faults across .text after extended play).
# mkDefault so actual Deck hosts can opt back in (see hosts/patiodeck).
jovian.steamos.enableDefaultCmdlineConfig = lib.mkDefault false;
# jovian overrides the display manager; sddm is required
services.displayManager.sddm.wayland.enable = true;
# desktop-common.nix enables programs.gamescope which conflicts with
# jovian's own gamescope wrapper
programs.gamescope.enable = lib.mkForce false;
}
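Because the cmdline config above is set with `lib.mkDefault`, a real Deck host can opt back in without `mkForce`. A hypothetical `hosts/patiodeck` fragment (illustrative, not from this diff):

```nix
{
  # Actual Deck hardware wants Valve's tuned amdgpu cmdline back.
  jovian.devices.steamdeck.enable = true;
  # Plain option priority beats the mkDefault false in desktop-steam-deckmode.nix.
  jovian.steamos.enableDefaultCmdlineConfig = true;
}
```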


@@ -0,0 +1,49 @@
{
config,
lib,
pkgs,
inputs,
...
}:
{
imports = [
inputs.lanzaboote.nixosModules.lanzaboote
];
boot = {
loader.systemd-boot.enable = lib.mkForce false;
lanzaboote = {
enable = true;
# sbctl expects the bundle at /var/lib/sbctl; muffin uses /etc/secureboot
# because it is wiped on every activation there (impermanence) — desktops
# extract to the historical sbctl path so existing tooling keeps working.
pkiBundle = "/var/lib/sbctl";
};
};
system.activationScripts = {
# Extract the secureboot PKI bundle from the agenix-decrypted tar. Mirrors
# modules/server-lanzaboote-agenix.nix; skip when keys are already present
# (e.g., disko-install staged them via --extra-files).
"secureboot-keys" = {
deps = [ "agenix" ];
text = ''
(
umask 077
if [[ -d ${config.boot.lanzaboote.pkiBundle} && -f ${config.boot.lanzaboote.pkiBundle}/db.key ]]; then
echo "secureboot keys already present, skipping extraction"
else
echo "extracting secureboot keys from agenix"
rm -fr ${config.boot.lanzaboote.pkiBundle} || true
install -d -o root -g wheel -m 0500 ${config.boot.lanzaboote.pkiBundle}
${pkgs.gnutar}/bin/tar xf ${config.age.secrets.secureboot-tar.path} -C ${config.boot.lanzaboote.pkiBundle}
fi
chown -R root:wheel ${config.boot.lanzaboote.pkiBundle}
chmod -R 500 ${config.boot.lanzaboote.pkiBundle}
)
'';
};
};
}


@@ -0,0 +1,58 @@
# oo7-daemon — the pure-Rust implementation of the org.freedesktop.secrets
# (libsecret) D-Bus interface, written by the same project that ships the
# `oo7` Rust crate that flare uses internally.
#
# Without a secret-service provider on the bus, flare's `oo7::Keyring::new()`
# call fails immediately at startup ("The communication with libsecret
# failed"). Most NixOS desktops solve this by enabling
# `services.gnome.gnome-keyring.enable`, but that drags in GNOME plumbing
# we don't otherwise want; oo7-daemon is the lightweight match for niri
# desktops.
#
# The `oo7-server` package ships:
# - libexec/oo7-daemon (the binary)
# - share/dbus-1/services/org.freedesktop.secrets.service
# - share/systemd/user/oo7-daemon.service
#
# We register both with NixOS and start the daemon at user login so
# libsecret clients can find the bus name without depending on D-Bus
# auto-activation. We also alias the unit as
# `dbus-org.freedesktop.secrets.service` so D-Bus activation falls back
# to it cleanly when the daemon has not been started yet (e.g. inside a
# fresh `systemd-run --user` scope).
{ pkgs, ... }:
let
# 0.6.0 stops at LockedKeyring::open(login) when no keyring file exists,
# so on first run the auto-created default collection is locked and a
# client's Unlock() call routes to a prompt that never resolves (no
# gnome-shell / kwallet / gcr-prompter on a niri desktop). Cherry-pick
# upstream cf7b9a9 (PR #443) which uses the systemd credential / PAM
# secret to unlock the new keyring directly. Drop the override when
# nixpkgs ships an oo7-server release that includes the fix.
oo7-server = pkgs.oo7-server.overrideAttrs (old: {
patches = (old.patches or [ ]) ++ [
../patches/oo7-server/0001-server-Use-provided-secret-to-unlock-auto-created-de.patch
];
});
in
{
environment.systemPackages = [ oo7-server ];
services.dbus.packages = [ oo7-server ];
systemd.packages = [ oo7-server ];
systemd.user.services.oo7-daemon = {
wantedBy = [ "default.target" ];
aliases = [ "dbus-org.freedesktop.secrets.service" ];
# Feed the keyring master password through systemd's credential
# machinery. The upstream unit declares
# `ImportCredential=oo7.keyring-encryption-password`, which picks up
# whatever LoadCredential leaves under $CREDENTIALS_DIRECTORY. agenix
# decrypts the secret to /run/agenix/oo7-keyring-password as the
# `primary` user, who is also the user this user-scope unit runs as.
serviceConfig.LoadCredential = [
"oo7.keyring-encryption-password:/run/agenix/oo7-keyring-password"
];
};
}
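The credential hand-off above can be pictured as the pair of unit directives that end up in effect, the first from the upstream oo7-daemon.service, the second from this module's drop-in. This is an illustrative sketch of the merged unit text, not a file this module writes verbatim:

```ini
[Service]
# Upstream oo7-daemon.service: import any credential staged under this
# name from $CREDENTIALS_DIRECTORY (or from system-wide credential stores).
ImportCredential=oo7.keyring-encryption-password
# This module's drop-in: stage the agenix-decrypted secret under that
# exact name so the import above finds it.
LoadCredential=oo7.keyring-encryption-password:/run/agenix/oo7-keyring-password
```

At runtime the daemon reads the plaintext from `$CREDENTIALS_DIRECTORY/oo7.keyring-encryption-password`, never from the agenix path directly.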


@@ -0,0 +1,122 @@
# Binary-cache update mechanism for Jovian-NixOS desktops.
#
# Replaces the upstream holo-update/steamos-update stubs with a script that
# checks the private binary cache for a newer system closure, and provides a
# root-level systemd service to apply it. Steam's deck UI calls
# `steamos-update check` periodically; exit 7 = no update, exit 0 = update
# applied or available.
#
# The deploy endpoint is ${site_config.binary_cache.url}/deploy/${hostname} — a plain
# text file containing the /nix/store path of the latest closure, published
# by CI after a successful build.
{
pkgs,
lib,
hostname,
username,
site_config,
...
}:
let
deploy-url = "${site_config.binary_cache.url}/deploy/${hostname}";
steamos-update-script = pkgs.writeShellScript "steamos-update" ''
export PATH=${
lib.makeBinPath [
pkgs.curl
pkgs.coreutils
pkgs.systemd
]
}
STORE_PATH=$(curl -sf --max-time 30 "${deploy-url}" || true)
if [ -z "$STORE_PATH" ]; then
>&2 echo "[steamos-update] server unreachable"
exit 7
fi
CURRENT=$(readlink -f /nix/var/nix/profiles/system)
if [ "$CURRENT" = "$STORE_PATH" ]; then
>&2 echo "[steamos-update] no update available"
exit 7
fi
# check-only mode: just report that an update exists
if [ "''${1:-}" = "check" ] || [ "''${1:-}" = "--check-only" ]; then
>&2 echo "[steamos-update] update available"
exit 0
fi
# apply: trigger the root-running systemd service to install the update
>&2 echo "[steamos-update] applying update..."
if systemctl start --wait pull-update-apply.service; then
>&2 echo "[steamos-update] update installed, reboot to apply"
exit 0
else
>&2 echo "[steamos-update] apply failed; see 'journalctl -u pull-update-apply'"
exit 1
fi
'';
in
{
nixpkgs.overlays = [
(_final: prev: {
jovian-stubs = prev.jovian-stubs.overrideAttrs (old: {
buildCommand = (old.buildCommand or "") + ''
install -D -m 755 ${steamos-update-script} $out/bin/holo-update
install -D -m 755 ${steamos-update-script} $out/bin/steamos-update
'';
});
})
];
systemd.services.pull-update-apply = {
description = "Apply pending NixOS update pulled from binary cache";
serviceConfig = {
Type = "oneshot";
ExecStart = pkgs.writeShellScript "pull-update-apply" ''
set -uo pipefail
export PATH=${
lib.makeBinPath [
pkgs.curl
pkgs.coreutils
pkgs.nix
]
}
STORE_PATH=$(curl -sf --max-time 30 "${deploy-url}" || true)
if [ -z "$STORE_PATH" ]; then
echo "server unreachable"
exit 1
fi
CURRENT=$(readlink -f /nix/var/nix/profiles/system)
if [ "$CURRENT" = "$STORE_PATH" ]; then
echo "already up to date: $STORE_PATH"
exit 0
fi
echo "applying $STORE_PATH (was $CURRENT)"
nix-store -r --add-root /nix/var/nix/gcroots/pull-update-apply-latest --indirect "$STORE_PATH" \
|| { echo "fetch failed"; exit 1; }
nix-env -p /nix/var/nix/profiles/system --set "$STORE_PATH" \
|| { echo "profile set failed"; exit 1; }
"$STORE_PATH/bin/switch-to-configuration" boot \
|| { echo "boot entry failed"; exit 1; }
echo "update applied; reboot required"
'';
};
};
# allow the primary user to trigger pull-update-apply without a password
security.polkit.extraConfig = ''
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "pull-update-apply.service" &&
subject.user == "${username}") {
return polkit.Result.YES;
}
});
'';
}
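The exit-code contract Steam relies on (7 = nothing to do, 0 = update available or applied) can be exercised in isolation by swapping the curl fetch and the system profile for local files. A sketch only — `deploy_file`, `current_link`, and the `/tmp/closure-*` paths are invented stand-ins, not names from the module:

```shell
#!/usr/bin/env sh
# check_update: stand-in for the module's `steamos-update check` path.
# deploy_file mimics the HTTP deploy endpoint; current_link mimics
# /nix/var/nix/profiles/system.
deploy_file=$(mktemp)
current_link=$(mktemp -d)/system

check_update() {
  store_path=$(cat "$deploy_file" 2>/dev/null || true)
  if [ -z "$store_path" ]; then
    echo "server unreachable" >&2
    return 7
  fi
  current=$(readlink -f "$current_link" 2>/dev/null || true)
  if [ "$current" = "$store_path" ]; then
    echo "no update available" >&2
    return 7
  fi
  echo "update available" >&2
  return 0
}

mkdir -p /tmp/closure-a /tmp/closure-b
ln -sfn /tmp/closure-a "$current_link"

echo /tmp/closure-a > "$deploy_file"
rc=0; check_update || rc=$?
echo "in sync -> exit $rc"        # 7: Steam's deck UI shows "up to date"

echo /tmp/closure-b > "$deploy_file"
rc=0; check_update || rc=$?
echo "update ready -> exit $rc"   # 0: Steam offers the update
```

Treating "server unreachable" as exit 7 rather than an error keeps the deck UI quiet when the host is offline, which matches the module's choice above.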


@@ -0,0 +1,187 @@
# Deferred deploy finalize for deploy-rs-driven hosts.
#
# When deploy-rs activates via `switch-to-configuration switch` and the gitea-
# actions runner driving the deploy lives on the same host, the runner unit
# gets restarted mid-activation — its definition changes between builds. That
# restart kills the SSH session, the CI job, and deploy-rs's magic-rollback
# handshake, so CI reports failure even when the deploy itself completed.
# This is deploy-rs#153, open since 2022.
#
# This module breaks the dependency: activation does `switch-to-configuration
# boot` (bootloader only, no service restarts), then invokes deploy-finalize
# which schedules a detached systemd transient unit that fires `delay` seconds
# later with the real `switch` (or `systemctl reboot` when the kernel, initrd,
# or kernel-modules changed since boot). The transient unit is owned by pid1,
# so it survives the runner's eventual restart — by which time the CI job has
# finished reporting.
#
# Prior art (reboot-or-switch logic, not the self-deploy detachment):
# - nixpkgs `system.autoUpgrade` (allowReboot = true branch) is the canonical
# source of the 3-path {initrd,kernel,kernel-modules} comparison.
# - obsidiansystems/obelisk#957 merged the same snippet into `ob deploy` for
# push-based remote deploys — but doesn't need detachment since its deployer
# lives on a different machine from the target.
# - nixpkgs#185030 tracks lifting this into switch-to-configuration proper.
# Stale since 2025-07; until it lands, every downstream reimplements it.
#
# Bootstrap note: the activation snippet resolves deploy-finalize via
# lib.getExe (store path), not via `/run/current-system/sw/bin` — `boot` mode
# does not update `/run/current-system`, so the old binary would be resolved.
{
config,
lib,
pkgs,
...
}:
let
cfg = config.services.deployFinalize;
finalize = pkgs.writeShellApplication {
name = "deploy-finalize";
runtimeInputs = [
pkgs.coreutils
pkgs.systemd
];
text = ''
delay=${toString cfg.delay}
profile=/nix/var/nix/profiles/system
dry_run=0
usage() {
cat <<EOF
Usage: deploy-finalize [--dry-run] [--delay N] [--profile PATH]
Compares /run/booted-system against PATH (default /nix/var/nix/profiles/system)
and schedules either \`systemctl reboot\` (kernel or initrd changed) or
\`switch-to-configuration switch\` (services only) via a detached systemd-run
timer firing N seconds later.
Options:
--dry-run Print the decision and would-be command without scheduling.
--delay N Override the delay in seconds. Default: ${toString cfg.delay}.
--profile PATH Override the profile path used for comparison.
EOF
}
while [[ $# -gt 0 ]]; do
case "$1" in
--dry-run) dry_run=1; shift ;;
--delay) delay="$2"; shift 2 ;;
--profile) profile="$2"; shift 2 ;;
-h|--help) usage; exit 0 ;;
*)
echo "deploy-finalize: unknown option $1" >&2
usage >&2
exit 2
;;
esac
done
# Comparing {kernel,initrd,kernel-modules} matches nixpkgs's canonical
# `system.autoUpgrade` allowReboot logic. -e (not -f) so a dangling
# symlink counts as missing: on a real NixOS profile all three exist,
# but defensive: if a profile has bad symlinks we refuse to schedule
# rather than scheduling against ghost paths.
booted_kernel="$(readlink -e /run/booted-system/kernel 2>/dev/null || true)"
booted_initrd="$(readlink -e /run/booted-system/initrd 2>/dev/null || true)"
booted_modules="$(readlink -e /run/booted-system/kernel-modules 2>/dev/null || true)"
new_kernel="$(readlink -e "$profile/kernel" 2>/dev/null || true)"
new_initrd="$(readlink -e "$profile/initrd" 2>/dev/null || true)"
new_modules="$(readlink -e "$profile/kernel-modules" 2>/dev/null || true)"
if [[ -z "$new_kernel" || -z "$new_initrd" || -z "$new_modules" ]]; then
echo "deploy-finalize: refusing to schedule $profile is missing kernel, initrd, or kernel-modules" >&2
exit 1
fi
changed=()
if [[ -z "$booted_kernel" || -z "$booted_initrd" || -z "$booted_modules" ]]; then
# Unreachable on a booted NixOS; if it somehow happens, fail closed by rebooting.
changed+=("/run/booted-system incomplete")
fi
[[ "$booted_kernel" != "$new_kernel" ]] && changed+=("kernel")
[[ "$booted_initrd" != "$new_initrd" ]] && changed+=("initrd")
[[ "$booted_modules" != "$new_modules" ]] && changed+=("kernel-modules")
reboot_needed=0
reason=""
if [[ ''${#changed[@]} -gt 0 ]]; then
reboot_needed=1
# Join with commas so the reason reads as e.g. `kernel,initrd changed`.
reason="$(IFS=, ; echo "''${changed[*]}") changed"
fi
if [[ "$reboot_needed" == 1 ]]; then
action=reboot
cmd="systemctl reboot"
else
action=switch
reason="services only"
cmd="$profile/bin/switch-to-configuration switch"
fi
# Nanosecond suffix so back-to-back deploys don't collide on unit names.
unit="deploy-finalize-$(date +%s%N)"
printf 'deploy-finalize: booted_kernel=%s\n' "$booted_kernel"
printf 'deploy-finalize: new_kernel=%s\n' "$new_kernel"
printf 'deploy-finalize: booted_initrd=%s\n' "$booted_initrd"
printf 'deploy-finalize: new_initrd=%s\n' "$new_initrd"
printf 'deploy-finalize: booted_kernel-modules=%s\n' "$booted_modules"
printf 'deploy-finalize: new_kernel-modules=%s\n' "$new_modules"
printf 'deploy-finalize: action=%s reason=%s delay=%ss unit=%s\n' \
"$action" "$reason" "$delay" "$unit"
if [[ "$dry_run" == 1 ]]; then
printf 'deploy-finalize: dry-run, not scheduling\n'
printf 'deploy-finalize: would run: %s\n' "$cmd"
printf 'deploy-finalize: would schedule: systemd-run --collect --unit=%s --on-active=%s\n' \
"$unit" "$delay"
exit 0
fi
# Cancel any still-pending finalize timers from an earlier deploy so this
# invocation is authoritative. Without this a stale timer could fire with
# the old profile's action (reboot/switch) against the new profile and
# briefly run new userspace under the old kernel.
systemctl stop 'deploy-finalize-*.timer' 2>/dev/null || true
# --on-active arms a transient timer owned by pid1. systemd-run returns
# once the timer is armed; the SSH session that called us can exit and
# the gitea-runner can be restarted (by the switch the timer fires)
# without affecting whether the finalize runs.
systemd-run \
--collect \
--unit="$unit" \
--description="Finalize NixOS deploy ($action after boot-mode activation)" \
--on-active="$delay" \
/bin/sh -c "$cmd"
'';
};
in
{
options.services.deployFinalize = {
enable = lib.mkEnableOption "deferred deploy finalize (switch or reboot) after boot-mode activation";
delay = lib.mkOption {
type = lib.types.ints.positive;
default = 60;
description = ''
Seconds between the deploy-rs activation completing and the scheduled
finalize firing. Tuned so the CI job (or manual SSH session) has time
to complete status reporting before the runner is restarted by the
eventual switch-to-configuration.
'';
};
};
config = lib.mkIf cfg.enable {
environment.systemPackages = [ finalize ];
# Exposed for the deploy-rs activation snippet to reference by /nix/store
# path via lib.getExe — `boot` mode does not update /run/current-system,
# so reading through /run/current-system/sw/bin would resolve to the OLD
# binary on a new-feature rollout or immediately after a rollback.
system.build.deployFinalize = finalize;
};
}
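The three-path `{kernel,initrd,kernel-modules}` comparison that drives the reboot-or-switch decision is small enough to isolate. A sketch against two stand-in directory trees — the paths and the `decide_action` name are invented for the demo, this is not the module's script:

```shell
#!/usr/bin/env sh
# decide_action BOOTED PROFILE: print "reboot" if kernel, initrd, or
# kernel-modules resolve to different store paths, "switch" otherwise.
decide_action() {
  booted=$1
  profile=$2
  for part in kernel initrd kernel-modules; do
    if [ "$(readlink -f "$booted/$part")" != "$(readlink -f "$profile/$part")" ]; then
      echo reboot
      return
    fi
  done
  echo switch
}

# Build stand-in trees: both point at the same v1 artifacts initially.
base=$(mktemp -d)
mkdir -p "$base/booted" "$base/next"
for part in kernel initrd kernel-modules; do
  echo x > "$base/$part-v1"
  ln -s "$base/$part-v1" "$base/booted/$part"
  ln -s "$base/$part-v1" "$base/next/$part"
done

decide_action "$base/booted" "$base/next"   # prints "switch": services only

rm "$base/next/kernel"
echo y > "$base/kernel-v2"
ln -s "$base/kernel-v2" "$base/next/kernel"
decide_action "$base/booted" "$base/next"   # prints "reboot": kernel changed
```

Comparing resolved symlink targets (not file contents) mirrors how the module treats two closures as identical exactly when they share the same store paths.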


@@ -0,0 +1,32 @@
From 066a164a524e4983b850f5659b921cb42f84a0e0 Mon Sep 17 00:00:00 2001
From: Pauli Virtanen <pav@iki.fi>
Date: Mon, 16 Feb 2026 18:17:08 +0200
Subject: [PATCH] a2dp: connect source profile after sink
Since cdcd845f87ee the order in which profiles with the same priority
are connected is the same order as btd_profile_register() is called,
instead of being the opposite order. When initiating connections, we
want to prefer a2dp-sink profile over a2dp-source, as connecting both at
the same time does not work currently.
Add .after_services to specify the order.
Fixes: https://github.com/bluez/bluez/issues/1898
---
profiles/audio/a2dp.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/profiles/audio/a2dp.c b/profiles/audio/a2dp.c
index 7a37003a2b..c7e0fc75c0 100644
--- a/profiles/audio/a2dp.c
+++ b/profiles/audio/a2dp.c
@@ -3769,6 +3769,9 @@ static struct btd_profile a2dp_source_profile = {
.adapter_probe = a2dp_sink_server_probe,
.adapter_remove = a2dp_sink_server_remove,
+
+ /* Connect source after sink, to prefer sink when conflicting */
+ .after_services = BTD_PROFILE_UUID_CB(NULL, A2DP_SINK_UUID),
};
static struct btd_profile a2dp_sink_profile = {


@@ -0,0 +1,730 @@
From 9ec9203bd47b7369e2a97fee2d6896576da23da0 Mon Sep 17 00:00:00 2001
From: Simon Gardling <titaniumtown@proton.me>
Date: Wed, 29 Apr 2026 19:00:12 -0400
Subject: [PATCH 1/6] feat(typing): Implement typing indicators
- Send TypingMessage Started/Stopped events as the user composes a
message, including a periodic refresh and an idle-stop timer so the
indicator follows actual composition activity.
- Display a typing indicator strip above the message input, gated on
the active channel's is-typing state.
- Add the show-typing-indicators and send-typing-indicators settings,
exposed through a new preferences group, and honour them both for
display and outbound events.
- Generalise Channel-level send_message_to_group to accept any
ContentBody so the new TypingMessage path can reuse it.
---
CHANGELOG.md | 5 +
data/de.schmidhuberj.Flare.gschema.xml | 9 +
data/resources/style.css | 6 +
data/resources/ui/channel_messages.blp | 33 +++
data/resources/ui/preferences_window.blp | 15 ++
src/backend/channel.rs | 59 +++++-
src/backend/manager.rs | 43 +++-
src/backend/manager_thread.rs | 8 +-
src/backend/message/mod.rs | 12 +-
src/gui/channel_messages.rs | 249 ++++++++++++++++++++++-
src/gui/preferences_window.rs | 23 +++
11 files changed, 441 insertions(+), 21 deletions(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 20dc578..2bde927 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
+### Added
+
+- Send typing indicators while composing a message and display them above the message input.
+- Settings to enable or disable sending and showing typing indicators.
+
## [0.20.4] - 2026-04-22
### Fixed
diff --git a/data/de.schmidhuberj.Flare.gschema.xml b/data/de.schmidhuberj.Flare.gschema.xml
index 8a58415..0705a73 100644
--- a/data/de.schmidhuberj.Flare.gschema.xml
+++ b/data/de.schmidhuberj.Flare.gschema.xml
@@ -58,6 +58,15 @@
<summary>Send a message when the Enter-key is pressed</summary>
</key>
+ <key name="show-typing-indicators" type="b">
+ <default>true</default>
+ <summary>Show typing indicators of other users</summary>
+ </key>
+ <key name="send-typing-indicators" type="b">
+ <default>true</default>
+ <summary>Send typing indicators while composing</summary>
+ </key>
+
<key name="sort-contacts-by" type="s">
<default>"firstname"</default>
<summary>How to sort contacts, e.g with "firstname" or "surname"</summary>
diff --git a/data/resources/style.css b/data/resources/style.css
index dcd0569..00e4783 100644
--- a/data/resources/style.css
+++ b/data/resources/style.css
@@ -13,6 +13,12 @@
border-top: 1px solid @borders;
}
+.typing-indicator {
+ background-color: @window_bg_color;
+ border-top: 1px solid @borders;
+ min-height: 18px;
+}
+
.message-list row {
padding:0;
}
diff --git a/data/resources/ui/channel_messages.blp b/data/resources/ui/channel_messages.blp
index 53be7ab..7f438e4 100644
--- a/data/resources/ui/channel_messages.blp
+++ b/data/resources/ui/channel_messages.blp
@@ -102,6 +102,39 @@ template $FlChannelMessages: Box {
}
}
+ // Typing indicator
+ Box typing_indicator {
+ styles [
+ "typing-indicator",
+ ]
+
+ orientation: horizontal;
+ hexpand: true;
+ visible: bind template.show-typing as <bool>;
+
+ Adw.Clamp {
+ maximum-size: 800;
+ tightening-threshold: 600;
+ hexpand: true;
+
+ Label {
+ styles [
+ "caption",
+ "dim-label",
+ ]
+
+ halign: start;
+ ellipsize: end;
+ xalign: 0;
+ margin-start: 12;
+ margin-end: 12;
+ margin-top: 2;
+ margin-bottom: 2;
+ label: bind template.active-channel as <$FlChannel>.typing-label;
+ }
+ }
+ }
+
Box {
styles [
"toolbar",
diff --git a/data/resources/ui/preferences_window.blp b/data/resources/ui/preferences_window.blp
index dd84f74..2068cab 100644
--- a/data/resources/ui/preferences_window.blp
+++ b/data/resources/ui/preferences_window.blp
@@ -66,6 +66,21 @@ template $FlPreferencesWindow: Adw.PreferencesDialog {
);
}
}
+
+ Adw.PreferencesGroup {
+ title: _("Typing Indicators");
+ description: _("Inform other users when you are composing a message and show indicators when they are");
+
+ Adw.SwitchRow row_send_typing_indicators {
+ title: _("Send Typing Indicators");
+ subtitle: _("Notify others while you are composing a message");
+ }
+
+ Adw.SwitchRow row_show_typing_indicators {
+ title: _("Show Typing Indicators");
+ subtitle: _("Display when other users are composing a message");
+ }
+ }
}
}
diff --git a/src/backend/channel.rs b/src/backend/channel.rs
index 73e82f3..4bb1d38 100644
--- a/src/backend/channel.rs
+++ b/src/backend/channel.rs
@@ -15,8 +15,9 @@ use glib::Bytes;
use glib::{Object, prelude::Cast};
use libsignal_service::{
- proto::{DataMessage, GroupContextV2},
+ proto::{DataMessage, GroupContextV2, TypingMessage, typing_message::Action as TypingAction},
protocol::ServiceId,
+ zkgroup::groups::{GroupMasterKey, GroupSecretParams},
};
use presage::model::groups::Group;
use presage::store::Thread;
@@ -230,6 +231,62 @@ impl Channel {
self.manager().send_session_reset(uuid, ts).await
}
+ /// Send a typing indicator (started/stopped) to the channel.
+ ///
+ /// Returns `Ok(())` without sending if the user has disabled the
+ /// `send-typing-indicators` setting or the channel has no resolvable peer.
+ pub async fn send_typing(&self, started: bool) -> Result<(), ApplicationError> {
+ // Note-to-self has no useful peer to inform, and routing the
+ // event through `send_message(self_uuid, …)` would fan it out to
+ // every other linked device on the account where flare's own
+ // receive path lights up an "is typing" indicator on its copy of
+ // Note-to-self.
+ if self.is_self() {
+ return Ok(());
+ }
+ let manager = self.manager();
+ if !manager.settings().boolean("send-typing-indicators") {
+ return Ok(());
+ }
+
+ let timestamp = std::time::SystemTime::now()
+ .duration_since(std::time::UNIX_EPOCH)
+ .expect("Time went backwards")
+ .as_millis() as u64;
+
+ let action = if started {
+ TypingAction::Started
+ } else {
+ TypingAction::Stopped
+ };
+
+ let group_id = self
+ .group_context()
+ .and_then(|c| c.master_key)
+ .and_then(|k| <[u8; 32]>::try_from(k).ok())
+ .map(|master_key| {
+ GroupSecretParams::derive_from_master_key(GroupMasterKey::new(master_key))
+ .get_group_identifier()
+ .to_vec()
+ });
+
+ let typing = TypingMessage {
+ timestamp: Some(timestamp),
+ action: Some(action as i32),
+ group_id: group_id.clone(),
+ };
+
+ if let Some(group_master_key) = self.group_context().and_then(|c| c.master_key) {
+ manager
+ .send_message_to_group(group_master_key, typing, timestamp)
+ .await?;
+ } else if let Some(uuid) = self.uuid() {
+ manager.send_message(uuid, typing, timestamp).await?;
+ }
+
+ Ok(())
+ }
+
/// Register a new message with the channel.
/// This does the following (based on the type of message):
/// - Add a quote to the message if needed.
diff --git a/src/backend/manager.rs b/src/backend/manager.rs
index c25fba0..eaa41e0 100644
--- a/src/backend/manager.rs
+++ b/src/backend/manager.rs
@@ -8,7 +8,7 @@ use libsignal_service::protocol::DeviceId;
use libsignal_service::{
Profile,
content::ContentBody,
- proto::{AttachmentPointer, DataMessage, GroupContextV2},
+ proto::{AttachmentPointer, GroupContextV2},
protocol::ServiceId,
sender::{AttachmentSpec, AttachmentUploadError},
websocket::account::DeviceInfo,
@@ -490,20 +490,42 @@ impl Manager {
Thread::Contact(uuid)
};
+ // Fast path: return the cached channel if we already know it.
+ // Without this, callers that arrive after initial channel discovery
+ // (incoming TypingMessage routing, in particular) would receive a
+ // freshly-built Channel object whose property notifications never
+ // reach widgets bound to the cached one in the UI — typing
+ // indicators on both the header bar and the channel-messages view
+ // would silently never light up.
+ if let Some(cached) = self.imp().channels.borrow().get(&thread).cloned() {
+ return cached;
+ }
+
let contact = Contact::from_service_address(&uuid, self).await;
let channel = Channel::from_contact_or_group(contact, group, self).await;
channel.initialize_avatar().await;
- let mut known_channels = self.imp().channels.borrow_mut();
- known_channels.entry(thread).or_insert_with(|| {
- log::trace!("Got a contact from the storage");
+ // Another task may have inserted the same thread while we were
+ // awaiting; pick whichever is already there or insert ours.
+ let stored = {
+ let mut known = self.imp().channels.borrow_mut();
+ known
+ .entry(thread)
+ .or_insert_with(|| {
+ log::trace!("Got a contact from the storage");
+ channel.clone()
+ })
+ .clone()
+ };
+
+ if stored == channel {
self.emit_by_name::<()>("channel", &[&channel]);
- channel.clone()
- });
+ }
- // No need to initialize avatar or last messages in here, will be done when initializing contacts.
+ // No need to initialize avatar or last messages in here, will be
+ // done when initializing contacts.
- channel
+ stored
}
pub fn channel_from_thread(&self, thread: Thread) -> Option<Channel> {
@@ -737,14 +759,15 @@ impl Manager {
pub(super) async fn send_message_to_group(
&self,
group_key: Vec<u8>,
- message: DataMessage,
+ message: impl Into<ContentBody>,
timestamp: u64,
) -> Result<(), ApplicationError> {
log::trace!("`Manager::send_message_to_group` start");
+ let body = message.into();
let internal = self.internal();
let r = tspawn!(async move {
internal
- .send_message_to_group(group_key, message.clone(), timestamp)
+ .send_message_to_group(group_key, body.clone(), timestamp)
.await
})
.await
diff --git a/src/backend/manager_thread.rs b/src/backend/manager_thread.rs
index 1f6a885..cba62ae 100644
--- a/src/backend/manager_thread.rs
+++ b/src/backend/manager_thread.rs
@@ -21,7 +21,7 @@ use libsignal_service::{
configuration::SignalServers,
content::ContentBody,
prelude::{ProfileKey, Uuid, phonenumber},
- proto::{AttachmentPointer, DataMessage, GroupContextV2},
+ proto::{AttachmentPointer, GroupContextV2},
protocol::ServiceId,
sender::{AttachmentSpec, AttachmentUploadError},
websocket::account::DeviceInfo,
@@ -65,7 +65,7 @@ enum Command {
),
SendMessageToGroup(
Vec<u8>,
- Box<DataMessage>,
+ Box<ContentBody>,
u64,
oneshot::Sender<Result<(), Error>>,
),
@@ -353,7 +353,7 @@ impl ManagerThread {
pub async fn send_message_to_group(
&self,
group_key: Vec<u8>,
- message: DataMessage,
+ message: impl Into<ContentBody>,
timestamp: u64,
) -> Result<(), Error> {
let (sender, receiver) = oneshot::channel();
@@ -361,7 +361,7 @@ impl ManagerThread {
.clone()
.send(Command::SendMessageToGroup(
group_key,
- Box::new(message),
+ Box::new(message.into()),
timestamp,
sender,
))
diff --git a/src/backend/message/mod.rs b/src/backend/message/mod.rs
index 11ccd7c..74952ac 100644
--- a/src/backend/message/mod.rs
+++ b/src/backend/message/mod.rs
@@ -270,14 +270,16 @@ impl Message {
// Typing messages.
// Note that they are currently only implemented for contacts, this requires upstream updates to fix.
ContentBody::TypingMessage(t) => {
+ // Both group and contact branches stay cache-only: we only
+ // surface typing for conversations the user already knows
+ // about. Going through `channel_from_uuid_or_group` here
+ // would mint a new Channel object on the first typing
+ // event from a stranger and add them to the sidebar with
+ // no actual messages.
let channel = if let Some(id) = &t.group_id {
manager.channel_from_group_id(id)
} else {
- Some(
- manager
- .channel_from_uuid_or_group(metadata.sender, &None)
- .await,
- )
+ manager.channel_from_thread(presage::store::Thread::Contact(metadata.sender))
};
let Some(channel) = channel else {
diff --git a/src/gui/channel_messages.rs b/src/gui/channel_messages.rs
index 0e8ae4e..831fc25 100644
--- a/src/gui/channel_messages.rs
+++ b/src/gui/channel_messages.rs
@@ -5,6 +5,16 @@ use crate::ApplicationError;
const MESSAGES_REQUEST_LOAD: usize = 10;
+/// Re-send the `Started` typing event at this interval so the receiver
+/// does not let the indicator expire while the user keeps composing.
+/// Must stay strictly below `TYPING_NOTIFICATION_DURATION_SECONDS` in
+/// `crate::backend::channel`.
+const TYPING_REFRESH_SECONDS: u32 = 8;
+
+/// Send `Stopped` if no buffer change has happened in this many seconds.
+/// Mirrors how Signal apps treat composition pauses as the end of typing.
+const TYPING_IDLE_SECONDS: u32 = 5;
+
glib::wrapper! {
/// [ChannelMessages] is the right pane displaying the list of messages and the entry-bar.
pub struct ChannelMessages(ObjectSubclass<imp::ChannelMessages>)
@@ -103,6 +113,198 @@ impl ChannelMessages {
));
}
+ /// Connect the `show-typing-indicators` setting so the typing indicator
+ /// updates immediately when the user toggles the preference.
+ fn setup_typing_settings(&self) {
+ self.manager().settings().connect_changed(
+ Some("show-typing-indicators"),
+ clone!(
+ #[weak(rename_to = s)]
+ self,
+ move |_, _| s.refresh_show_typing()
+ ),
+ );
+ }
+
+ /// Re-evaluate `show-typing` for the current channel based on the channel's
+ /// `is-typing` state and the user's `show-typing-indicators` setting.
+ fn refresh_show_typing(&self) {
+ // The active-channel bind runs during template init before the
+ // manager bind, so `self.manager()` (typed as Manager, not
+ // Option<Manager>) would panic here. Read the manager directly so
+ // a null intermediate state is harmless: with no manager we don't
+ // know the user's preference, so default to showing the indicator.
+ let allowed = self
+ .imp()
+ .manager
+ .borrow()
+ .as_ref()
+ .is_none_or(|m| m.settings().boolean("show-typing-indicators"));
+ let typing = self
+ .active_channel()
+ .map(|c| c.is_typing())
+ .unwrap_or(false);
+ self.set_show_typing(allowed && typing);
+ }
+
+ /// Wire the `show-typing` property to the active channel's `is-typing`.
+ /// Called whenever the active channel changes.
+ fn setup_typing_indicator(&self) {
+ self.refresh_show_typing();
+
+ // Disconnect the handler we attached on the previous active
+ // channel so we don't accumulate one per channel switch.
+ if let Some((prev_channel, handler)) = self.imp().typing_handler.take() {
+ prev_channel.disconnect(handler);
+ }
+
+ if let Some(channel) = self.active_channel() {
+ let handler = channel.connect_notify_local(
+ Some("is-typing"),
+ clone!(
+ #[weak(rename_to = s)]
+ self,
+ move |_, _| s.refresh_show_typing()
+ ),
+ );
+ self.imp().typing_handler.replace(Some((channel, handler)));
+ }
+ }
+
+ /// Send a `Started` typing event for the active channel.
+ ///
+ /// Schedules a periodic refresh so the receiver does not let the
+ /// indicator expire while the user is still composing.
+ fn send_typing_started(&self) {
+ let imp = self.imp();
+ let Some(channel) = self.active_channel() else {
+ return;
+ };
+ if !self.manager().settings().boolean("send-typing-indicators") {
+ return;
+ }
+
+ // Mark this channel as the current typing target so a later channel
+ // switch can still emit a matching `Stopped` event.
+ imp.typing_target.replace(Some(channel.clone()));
+
+ // Refresh `Started` periodically while the user keeps composing.
+ let needs_initial = !imp.sending_typing.replace(true);
+
+ if let Some(source) = imp.typing_refresh.borrow_mut().take() {
+ source.remove();
+ }
+ let refresh = glib::timeout_add_seconds_local(
+ TYPING_REFRESH_SECONDS,
+ clone!(
+ #[weak(rename_to = s)]
+ self,
+ #[upgrade_or]
+ glib::ControlFlow::Break,
+ move || {
+ if !s.imp().sending_typing.get() {
+ return glib::ControlFlow::Break;
+ }
+ s.dispatch_send_typing(true);
+ glib::ControlFlow::Continue
+ }
+ ),
+ );
+ imp.typing_refresh.replace(Some(refresh));
+
+ if needs_initial {
+ self.dispatch_send_typing(true);
+ }
+ }
+
+ /// Send a `Stopped` typing event for the channel that was last targeted.
+ fn send_typing_stopped(&self) {
+ let imp = self.imp();
+ if let Some(source) = imp.typing_refresh.borrow_mut().take() {
+ source.remove();
+ }
+ if let Some(source) = imp.typing_idle.borrow_mut().take() {
+ source.remove();
+ }
+ if !imp.sending_typing.replace(false) {
+ // Nothing to do — we never told anyone we were typing.
+ imp.typing_target.replace(None);
+ return;
+ }
+ let Some(channel) = imp.typing_target.replace(None) else {
+ return;
+ };
+ gspawn!(async move {
+ if let Err(e) = channel.send_typing(false).await {
+ log::warn!("Failed to send `Stopped` typing event: {e}");
+ }
+ });
+ }
+
+ /// Dispatch the actual `Started` typing event to whichever channel is
+ /// currently considered the typing target.
+ fn dispatch_send_typing(&self, started: bool) {
+ let Some(channel) = self.imp().typing_target.borrow().clone() else {
+ return;
+ };
+ gspawn!(async move {
+ if let Err(e) = channel.send_typing(started).await {
+ log::warn!("Failed to send typing event (started={started}): {e}");
+ }
+ });
+ }
+
+ /// Connect the text entry's buffer so we can emit `Started`/`Stopped`
+ /// typing events as the user composes a message.
+ fn setup_typing_send(&self) {
+ let buffer = self.imp().text_entry.buffer();
+ let handler = buffer.connect_changed(clone!(
+ #[weak(rename_to = s)]
+ self,
+ move |buf| {
+ let (start, end) = buf.bounds();
+ if start == end {
+ s.send_typing_stopped();
+ return;
+ }
+ // Both the Started event and the idle-stop timer are
+ // outbound-typing-only behaviours; if the user has
+ // disabled outgoing typing, do nothing and don't churn
+ // a timer per keystroke.
+ let allowed = s
+ .imp()
+ .manager
+ .borrow()
+ .as_ref()
+ .is_none_or(|m| m.settings().boolean("send-typing-indicators"));
+ if !allowed {
+ return;
+ }
+ s.send_typing_started();
+ s.reset_typing_idle_timer();
+ }
+ ));
+ self.imp().typing_buffer_handler.replace(Some(handler));
+ }
+
+ /// Schedule a one-shot timer that sends `Stopped` if the user lets the
+ /// composition idle for more than `TYPING_IDLE_SECONDS` seconds.
+ fn reset_typing_idle_timer(&self) {
+ let imp = self.imp();
+ if let Some(source) = imp.typing_idle.borrow_mut().take() {
+ source.remove();
+ }
+ let source = glib::timeout_add_seconds_local_once(
+ TYPING_IDLE_SECONDS,
+ clone!(
+ #[weak(rename_to = s)]
+ self,
+ move || s.send_typing_stopped()
+ ),
+ );
+ imp.typing_idle.replace(Some(source));
+ }
+
pub async fn clear_messages(&self) -> Result<(), ApplicationError> {
if let Some(channel) = self.active_channel() {
channel.clear_messages().await?;
@@ -165,9 +367,36 @@ pub mod imp {
filling_screen: Cell<bool>,
#[property(get = Self::has_attachments)]
has_attachments: PhantomData<bool>,
+ #[property(get, set)]
+ show_typing: Cell<bool>,
+
+ /// Whether we currently believe the user is composing a message in the
+ /// active channel and have informed the peer with a `Started` event.
+ pub(super) sending_typing: Cell<bool>,
+ /// Channel to which we last sent a `Started` typing event, kept so we
+ /// can send a matching `Stopped` even after the active channel changes.
+ pub(super) typing_target: RefCell<Option<Channel>>,
+ /// Periodic refresh of the `Started` typing event so it does not
+ /// expire on the receiver side while the user is still composing.
+ pub(super) typing_refresh: RefCell<Option<glib::SourceId>>,
+ /// One-shot timer that emits `Stopped` after a stretch of no
+ /// further buffer changes.
+ pub(super) typing_idle: RefCell<Option<glib::SourceId>>,
+ /// Notify handler installed on the active channel's `is-typing`
+ /// property so we can disconnect it before re-attaching when the
+ /// active channel changes.
+ pub(super) typing_handler: RefCell<Option<(Channel, glib::SignalHandlerId)>>,
+ /// Notify + selection-changed handlers installed on the active
+ /// channel by `setup_selection_listener`, kept so we can disconnect
+ /// them before re-attaching on the next channel change.
+ pub(super) selection_handlers: RefCell<Vec<(Channel, glib::SignalHandlerId)>>,
+ /// Buffer change handler that drives the typing-send logic; we
+ /// block it while restoring a draft so loading a draft does not
+ /// transmit a Started typing event.
+ pub(super) typing_buffer_handler: RefCell<Option<glib::SignalHandlerId>>,
#[property(get, set = Self::set_manager, type = Manager)]
- manager: RefCell<Option<Manager>>,
+ pub(super) manager: RefCell<Option<Manager>>,
}
#[gtk::template_callbacks]
@@ -181,10 +410,16 @@ pub mod imp {
self.manager.replace(man);
if initialized {
self.obj().setup_send_on_enter();
+ self.obj().setup_typing_settings();
+ self.obj().setup_typing_send();
}
}
fn set_active_channel(&self, chan: Option<Channel>) {
+ // Inform the previous channel we have stopped typing before we
+ // forget about it.
+ self.obj().send_typing_stopped();
+
if let Some(active_chan) = self.active_channel.borrow().as_ref() {
active_chan.set_property("draft", self.text_entry.text());
}
@@ -195,6 +430,7 @@ pub mod imp {
}
self.obj().focus_input();
+ self.obj().setup_typing_indicator();
}
#[template_callback(function)]
@@ -501,7 +737,18 @@ pub mod imp {
s.obj().set_reply_message(None::<TextMessage>);
if let Some(channel) = s.active_channel.borrow().as_ref() {
let draft = channel.property("draft");
+ // Block the typing buffer-changed handler so
+ // restoring a stored draft does not transmit
+ // a Started typing event to the peer.
+ let buffer = s.text_entry.buffer();
+ let handler_guard = s.typing_buffer_handler.borrow();
+ if let Some(handler) = handler_guard.as_ref() {
+ buffer.block_signal(handler);
+ }
s.text_entry.set_text(draft);
+ if let Some(handler) = handler_guard.as_ref() {
+ buffer.unblock_signal(handler);
+ }
};
}
),
diff --git a/src/gui/preferences_window.rs b/src/gui/preferences_window.rs
index 8137af7..b2b6405 100644
--- a/src/gui/preferences_window.rs
+++ b/src/gui/preferences_window.rs
@@ -78,6 +78,11 @@ pub mod imp {
#[template_child]
row_send_on_enter: TemplateChild<adw::SwitchRow>,
+ #[template_child]
+ row_send_typing_indicators: TemplateChild<adw::SwitchRow>,
+ #[template_child]
+ row_show_typing_indicators: TemplateChild<adw::SwitchRow>,
+
settings: Settings,
}
@@ -173,6 +178,22 @@ pub mod imp {
.bind("send-on-enter", &self.row_send_on_enter.get(), "active")
.flags(SettingsBindFlags::DEFAULT)
.build();
+ self.settings
+ .bind(
+ "send-typing-indicators",
+ &self.row_send_typing_indicators.get(),
+ "active",
+ )
+ .flags(SettingsBindFlags::DEFAULT)
+ .build();
+ self.settings
+ .bind(
+ "show-typing-indicators",
+ &self.row_show_typing_indicators.get(),
+ "active",
+ )
+ .flags(SettingsBindFlags::DEFAULT)
+ .build();
}
}
@@ -194,6 +215,8 @@ pub mod imp {
row_background: TemplateChild::default(),
row_messages_selectable: TemplateChild::default(),
row_send_on_enter: TemplateChild::default(),
+ row_send_typing_indicators: TemplateChild::default(),
+ row_show_typing_indicators: TemplateChild::default(),
}
}
--
2.53.0

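The `Started`/`Stopped` bookkeeping in the patch above hinges on the swap-and-compare trick `imp.sending_typing.replace(...)`: `Started` is dispatched only on the first keystroke of a composition, and `Stopped` only if a `Started` was actually announced. A minimal sketch of that state machine, with the glib timers elided (the `TypingState` name is hypothetical, not from the patch):

```rust
// Illustrative sketch of the sending_typing Cell<bool> logic from
// send_typing_started / send_typing_stopped; timers are left out.
#[derive(Default)]
struct TypingState {
    sending: bool,
}

impl TypingState {
    /// Returns true only on the first keystroke of a composition,
    /// mirroring `!imp.sending_typing.replace(true)` in the patch.
    fn on_keystroke(&mut self) -> bool {
        let needs_initial = !self.sending;
        self.sending = true;
        needs_initial
    }

    /// Returns true only if a `Started` event was previously announced,
    /// mirroring `imp.sending_typing.replace(false)`.
    fn on_stop(&mut self) -> bool {
        let was_sending = self.sending;
        self.sending = false;
        was_sending
    }
}
```

This is why repeated keystrokes do not flood the peer with `Started` events, and why cancelling an empty composition sends nothing.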

@@ -0,0 +1,622 @@
From 200461c0abe0399ae4b5b0bfd3848fcc226ba308 Mon Sep 17 00:00:00 2001
From: Simon Gardling <titaniumtown@proton.me>
Date: Wed, 29 Apr 2026 19:13:52 -0400
Subject: [PATCH 2/6] feat(messages): Implement formatted messages
- Display Signal BodyRange styles (bold, italic, strikethrough,
spoiler, monospace) on incoming messages by translating them into
pango attributes alongside the existing mention rendering, making
the offset accounting work for mention substitutions and
surrogate-pair text alike.
- Parse a markdown-style formatting syntax on outbound messages and
send the resulting BodyRanges with the cleaned body text. The
parser lives in its own module with unit tests covering the
supported markers, nesting, unmatched markers, and non-BMP UTF-16
offsets.
- Update the message-input tooltip to surface the supported markers.
---
CHANGELOG.md | 2 +
data/resources/ui/channel_messages.blp | 2 +-
src/backend/message/formatting.rs | 287 +++++++++++++++++++++++++
src/backend/message/mod.rs | 2 +
src/backend/message/text_message.rs | 200 +++++++++++++----
5 files changed, 447 insertions(+), 46 deletions(-)
create mode 100644 src/backend/message/formatting.rs
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 2bde927..50cd5f5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Send typing indicators while composing a message and display them above the message input.
- Settings to enable or disable sending and showing typing indicators.
+- Render formatted message styles (bold, italic, strikethrough, spoiler, monospace) on incoming messages.
+- Send formatted messages with markdown-style markers (`**bold**`, `*italic*`, `~~strike~~`, `||spoiler||`, `` `monospace` ``).
## [0.20.4] - 2026-04-22
diff --git a/data/resources/ui/channel_messages.blp b/data/resources/ui/channel_messages.blp
index 7f438e4..6c3948f 100644
--- a/data/resources/ui/channel_messages.blp
+++ b/data/resources/ui/channel_messages.blp
@@ -301,7 +301,7 @@ template $FlChannelMessages: Box {
activate => $send_message() swapped;
paste-file => $paste_file() swapped;
paste-texture => $paste_texture() swapped;
- tooltip-text: C_("tooltip", "Message input");
+ tooltip-text: C_("tooltip", "Message input. Use **bold**, *italic*, ~~strike~~, ||spoiler|| or `monospace` to format text.");
}
Button button_send {
diff --git a/src/backend/message/formatting.rs b/src/backend/message/formatting.rs
new file mode 100644
index 0000000..5a1d596
--- /dev/null
+++ b/src/backend/message/formatting.rs
@@ -0,0 +1,287 @@
+//! Lightweight markdown-style formatting parser for outgoing messages.
+//!
+//! Supported syntax (mirroring the way Signal Desktop and iOS render
+//! formatted messages):
+//!
+//! - `**text**` for bold
+//! - `*text*` or `_text_` for italic
+//! - `~~text~~` for strikethrough
+//! - `||text||` for spoiler
+//! - `` `text` `` for monospace
+//!
+//! Parsing is forgiving: any marker without a matching counterpart is left
+//! verbatim in the resulting text. Markers may nest as long as the inner
+//! marker is a different kind from the outer one.
+//!
+//! The function returns the cleaned message body plus the corresponding
+//! `BodyRange`s with offsets in UTF-16 code units, as required by the
+//! Signal protocol.
+
+use std::collections::HashMap;
+
+use libsignal_service::proto::BodyRange;
+use libsignal_service::proto::body_range::{AssociatedValue, Style as BodyRangeStyle};
+
+#[derive(Clone, Copy, Debug, Hash, Eq, PartialEq)]
+enum Marker {
+ Bold,
+ Italic,
+ Strikethrough,
+ Spoiler,
+ Monospace,
+}
+
+impl Marker {
+ fn style(self) -> BodyRangeStyle {
+ match self {
+ Marker::Bold => BodyRangeStyle::Bold,
+ Marker::Italic => BodyRangeStyle::Italic,
+ Marker::Strikethrough => BodyRangeStyle::Strikethrough,
+ Marker::Spoiler => BodyRangeStyle::Spoiler,
+ Marker::Monospace => BodyRangeStyle::Monospace,
+ }
+ }
+}
+
+/// Try to consume a marker starting at `chars[i]` and return its kind plus
+/// the number of characters that make up the marker token.
+fn detect_marker(chars: &[char], i: usize) -> Option<(Marker, usize)> {
+ let cur = *chars.get(i)?;
+ let next = chars.get(i + 1).copied();
+ match (cur, next) {
+ ('*', Some('*')) => Some((Marker::Bold, 2)),
+ ('~', Some('~')) => Some((Marker::Strikethrough, 2)),
+ ('|', Some('|')) => Some((Marker::Spoiler, 2)),
+ ('*', _) | ('_', _) => Some((Marker::Italic, 1)),
+ ('`', _) => Some((Marker::Monospace, 1)),
+ _ => None,
+ }
+}
+
+#[derive(Debug, Clone, Copy)]
+struct MatchedSpan {
+ marker: Marker,
+ open_pos: usize,
+ close_pos: usize,
+ marker_len: usize,
+}
+
+/// Walk the character stream left-to-right and pair markers of the same
+/// kind. The first occurrence opens a span, the next occurrence of the same
+/// kind closes it; markers without a partner are simply ignored.
+fn detect_matched_markers(chars: &[char]) -> Vec<MatchedSpan> {
+ let mut open: HashMap<Marker, (usize, usize)> = HashMap::new();
+ let mut spans: Vec<MatchedSpan> = Vec::new();
+ let mut i = 0;
+ while i < chars.len() {
+ if let Some((marker, len)) = detect_marker(chars, i) {
+ if let Some((open_pos, marker_len)) = open.remove(&marker) {
+ spans.push(MatchedSpan {
+ marker,
+ open_pos,
+ close_pos: i,
+ marker_len,
+ });
+ } else {
+ open.insert(marker, (i, len));
+ }
+ i += len;
+ } else {
+ i += 1;
+ }
+ }
+ spans
+}
+
+/// Parse markdown-style formatting markers in `input` and produce the cleaned
+/// text plus the corresponding Signal [BodyRange]s with UTF-16 offsets.
+///
+/// Empty matched spans (e.g. `**` followed immediately by `**`) are dropped.
+pub fn parse_formatting(input: &str) -> (String, Vec<BodyRange>) {
+ let chars: Vec<char> = input.chars().collect();
+ let spans = detect_matched_markers(&chars);
+
+ if spans.is_empty() {
+ return (input.to_owned(), Vec::new());
+ }
+
+ // Mark which character positions are part of a matched marker token and
+ // therefore must be removed from the cleaned output.
+ let mut skip = vec![false; chars.len()];
+ for sp in &spans {
+ for k in sp.open_pos..(sp.open_pos + sp.marker_len).min(chars.len()) {
+ skip[k] = true;
+ }
+ for k in sp.close_pos..(sp.close_pos + sp.marker_len).min(chars.len()) {
+ skip[k] = true;
+ }
+ }
+
+ // Build the cleaned output and a per-input-char map into the output's
+ // UTF-16 code-unit offset.
+ let mut output = String::with_capacity(input.len());
+ let mut input_to_output_utf16 = vec![0u32; chars.len() + 1];
+ let mut utf16_count: u32 = 0;
+ for (i, c) in chars.iter().enumerate() {
+ input_to_output_utf16[i] = utf16_count;
+ if !skip[i] {
+ output.push(*c);
+ utf16_count += c.len_utf16() as u32;
+ }
+ }
+ input_to_output_utf16[chars.len()] = utf16_count;
+
+ let mut ranges: Vec<BodyRange> = Vec::with_capacity(spans.len());
+ for sp in spans {
+ let start = input_to_output_utf16[sp.open_pos + sp.marker_len];
+ let end = input_to_output_utf16[sp.close_pos];
+ if end <= start {
+ continue;
+ }
+ ranges.push(BodyRange {
+ start: Some(start),
+ length: Some(end - start),
+ associated_value: Some(AssociatedValue::Style(sp.marker.style() as i32)),
+ });
+ }
+
+ // Sort by start so the final ranges are stable for tests and for
+ // downstream consumers that expect ordered ranges.
+ ranges.sort_by_key(|r| r.start);
+
+ (output, ranges)
+}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ fn ranges_summary(ranges: &[BodyRange]) -> Vec<(u32, u32, BodyRangeStyle)> {
+ ranges
+ .iter()
+ .map(|r| {
+ let style = match r.associated_value {
+ Some(AssociatedValue::Style(s)) => {
+ BodyRangeStyle::try_from(s).unwrap_or(BodyRangeStyle::None)
+ }
+ _ => BodyRangeStyle::None,
+ };
+ (r.start.unwrap_or(0), r.length.unwrap_or(0), style)
+ })
+ .collect()
+ }
+
+ #[test]
+ fn no_markers() {
+ let (text, ranges) = parse_formatting("hello world");
+ assert_eq!(text, "hello world");
+ assert!(ranges.is_empty());
+ }
+
+ #[test]
+ fn bold() {
+ let (text, ranges) = parse_formatting("**bold**");
+ assert_eq!(text, "bold");
+ assert_eq!(ranges_summary(&ranges), vec![(0, 4, BodyRangeStyle::Bold)]);
+ }
+
+ #[test]
+ fn italic_asterisk() {
+ let (text, ranges) = parse_formatting("*italic*");
+ assert_eq!(text, "italic");
+ assert_eq!(
+ ranges_summary(&ranges),
+ vec![(0, 6, BodyRangeStyle::Italic)]
+ );
+ }
+
+ #[test]
+ fn italic_underscore() {
+ let (text, ranges) = parse_formatting("_italic_");
+ assert_eq!(text, "italic");
+ assert_eq!(
+ ranges_summary(&ranges),
+ vec![(0, 6, BodyRangeStyle::Italic)]
+ );
+ }
+
+ #[test]
+ fn strikethrough() {
+ let (text, ranges) = parse_formatting("~~strike~~");
+ assert_eq!(text, "strike");
+ assert_eq!(
+ ranges_summary(&ranges),
+ vec![(0, 6, BodyRangeStyle::Strikethrough)]
+ );
+ }
+
+ #[test]
+ fn spoiler() {
+ let (text, ranges) = parse_formatting("||hidden||");
+ assert_eq!(text, "hidden");
+ assert_eq!(
+ ranges_summary(&ranges),
+ vec![(0, 6, BodyRangeStyle::Spoiler)]
+ );
+ }
+
+ #[test]
+ fn monospace() {
+ let (text, ranges) = parse_formatting("`code`");
+ assert_eq!(text, "code");
+ assert_eq!(
+ ranges_summary(&ranges),
+ vec![(0, 4, BodyRangeStyle::Monospace)]
+ );
+ }
+
+ #[test]
+ fn bold_and_italic_nested() {
+ let (text, ranges) = parse_formatting("**bold *italic***");
+ assert_eq!(text, "bold italic");
+ let summary = ranges_summary(&ranges);
+ assert!(summary.contains(&(0, 11, BodyRangeStyle::Bold)));
+ assert!(summary.contains(&(5, 6, BodyRangeStyle::Italic)));
+ }
+
+ #[test]
+ fn unmatched_open_left_literal() {
+ let (text, ranges) = parse_formatting("**only one start");
+ assert_eq!(text, "**only one start");
+ assert!(ranges.is_empty());
+ }
+
+ #[test]
+ fn surrounding_text_preserved() {
+ let (text, ranges) = parse_formatting("hello **world**!");
+ assert_eq!(text, "hello world!");
+ assert_eq!(ranges_summary(&ranges), vec![(6, 5, BodyRangeStyle::Bold)]);
+ }
+
+ #[test]
+ fn multiple_pairs() {
+ let (text, ranges) = parse_formatting("**a**b**c**");
+ assert_eq!(text, "abc");
+ let summary = ranges_summary(&ranges);
+ assert_eq!(summary.len(), 2);
+ assert_eq!(summary[0], (0, 1, BodyRangeStyle::Bold));
+ assert_eq!(summary[1], (2, 1, BodyRangeStyle::Bold));
+ }
+
+ #[test]
+ fn empty_pair_dropped() {
+ let (text, ranges) = parse_formatting("****");
+ assert_eq!(text, "");
+ assert!(ranges.is_empty());
+ }
+
+ #[test]
+ fn utf16_offsets_for_non_bmp() {
+ // Character "𝟚" (U+1D7DA) is a non-BMP codepoint occupying two
+ // UTF-16 code units, so a Bold range over a string containing it
+ // must reflect that in its `length`.
+ let (text, ranges) = parse_formatting("**𝟚**");
+ assert_eq!(text, "𝟚");
+ assert_eq!(ranges_summary(&ranges), vec![(0, 2, BodyRangeStyle::Bold)]);
+ }
+}
diff --git a/src/backend/message/mod.rs b/src/backend/message/mod.rs
index 74952ac..4e0f584 100644
--- a/src/backend/message/mod.rs
+++ b/src/backend/message/mod.rs
@@ -1,12 +1,14 @@
mod call_message;
mod deletion_message;
mod display_message;
+mod formatting;
mod reaction_message;
mod text_message;
pub use call_message::{CallMessage, CallMessageType};
pub use deletion_message::DeletionMessage;
pub use display_message::{DisplayMessage, DisplayMessageExt};
+pub use formatting::parse_formatting;
pub use reaction_message::ReactionMessage;
pub use text_message::TextMessage;
diff --git a/src/backend/message/text_message.rs b/src/backend/message/text_message.rs
index a9adb04..c06bcfa 100644
--- a/src/backend/message/text_message.rs
+++ b/src/backend/message/text_message.rs
@@ -2,9 +2,9 @@ use crate::prelude::*;
use libsignal_service::content::Reaction;
use libsignal_service::proto::DataMessage;
-use libsignal_service::proto::body_range::AssociatedValue;
+use libsignal_service::proto::body_range::{AssociatedValue, Style as BodyRangeStyle};
use libsignal_service::proto::data_message::Delete;
-use pango::{AttrColor, AttrList};
+use pango::{AttrColor, AttrInt, AttrList, AttrString, Style as PangoStyle, Weight};
use crate::backend::timeline::{TimelineItem, TimelineItemExt};
use crate::backend::{Attachment, Channel, Contact};
@@ -19,6 +19,48 @@ gtk::glib::wrapper! {
const MENTION_CHAR: char = '@';
const MENTION_COLOR: (u16, u16, u16) = (0, 0, u16::MAX);
+/// Convert a Signal [BodyRangeStyle] into the pango attributes that render
+/// the same visual style. Spoilers are approximated as a black-on-black
+/// span as pango has no native spoiler primitive.
+fn style_to_pango_attrs(
+ style: BodyRangeStyle,
+ start_byte: u32,
+ end_byte: u32,
+) -> Vec<pango::Attribute> {
+ fn span<A: Into<pango::Attribute>>(attr: A, start: u32, end: u32) -> pango::Attribute {
+ let mut attr: pango::Attribute = attr.into();
+ attr.set_start_index(start);
+ attr.set_end_index(end);
+ attr
+ }
+
+ match style {
+ BodyRangeStyle::Bold => vec![span(
+ AttrInt::new_weight(Weight::Bold),
+ start_byte,
+ end_byte,
+ )],
+ BodyRangeStyle::Italic => vec![span(
+ AttrInt::new_style(PangoStyle::Italic),
+ start_byte,
+ end_byte,
+ )],
+ BodyRangeStyle::Strikethrough => {
+ vec![span(AttrInt::new_strikethrough(true), start_byte, end_byte)]
+ }
+ BodyRangeStyle::Monospace => vec![span(
+ AttrString::new_family("monospace"),
+ start_byte,
+ end_byte,
+ )],
+ BodyRangeStyle::Spoiler => vec![
+ span(AttrColor::new_foreground(0, 0, 0), start_byte, end_byte),
+ span(AttrColor::new_background(0, 0, 0), start_byte, end_byte),
+ ],
+ BodyRangeStyle::None => Vec::new(),
+ }
+}
+
impl TextMessage {
pub fn from_text_channel_sender<S: AsRef<str>>(
text: S,
@@ -65,14 +107,16 @@ impl TextMessage {
.build();
let text_owned = text.as_ref().to_owned();
- let body = if text_owned.is_empty() {
- None
+ let (body, body_ranges) = if text_owned.is_empty() {
+ (None, Vec::new())
} else {
- Some(text_owned)
+ let (cleaned, ranges) = super::parse_formatting(&text_owned);
+ (Some(cleaned), ranges)
};
let message = DataMessage {
body,
+ body_ranges,
timestamp: Some(timestamp),
..Default::default()
};
@@ -245,10 +289,17 @@ impl TextMessage {
self.notify_body();
}
- /// Formats the message body based on its ranges, e.g. to insert mention names.
+ /// Format the message body based on its body ranges.
+ ///
+ /// This both substitutes mentions with the resolved participant name and
+ /// applies styling (bold, italic, monospace, strikethrough, spoiler) as
+ /// pango attributes on the resulting text.
///
- /// Returns the resulting strings and an [AttrList] that can be used in labels to highlight areas.
- /// Be carefull when editing this function and note that Signal uses UTF-16 byte offsets, while Rust uses UTF-8 byte offsets.
+ /// Note that Signal uses UTF-16 byte offsets, while Rust strings use
+ /// UTF-8. The implementation maintains an explicit per-utf16-index
+ /// mapping into the resulting UTF-8 string so that styles applied to a
+ /// range that survives a mention substitution still land on the right
+ /// bytes.
async fn format_body(&self) -> (Option<String>, AttrList) {
let Some(body) = self.internal_data().and_then(|m| m.body) else {
return (None, AttrList::new());
@@ -264,53 +315,112 @@ impl TextMessage {
let channel = self.channel();
- // Sort by growing start index
+ // Sort by growing start index so mention substitutions happen left-to-right.
ranges.sort_unstable_by_key(|r| r.start());
- let attrs = AttrList::new();
-
- // Signal (Java) uses UTF-16 body and therefore also UTF-16 offsets, while Flare (Rust) uses UTF-8. Need to convert.
- let body_utf16: Vec<u16> = body.encode_utf16().collect();
-
- let mut result_utf8 = String::new();
- let mut index_utf16 = 0;
- let mut index_utf8 = 0;
- for r in ranges {
- let start = r.start() as usize;
- let end = start + r.length() as usize;
- let uuid = match r.associated_value {
+ // Resolve mention names asynchronously up front so the rest of the
+ // formatting can be a synchronous walk.
+ let mut mentions: Vec<(usize, usize, String)> = Vec::new();
+ for r in &ranges {
+ let uuid = match &r.associated_value {
Some(AssociatedValue::MentionAci(u)) => u.parse().ok(),
Some(AssociatedValue::MentionAciBinary(u)) => {
- u.try_into().ok().map(Uuid::from_bytes)
+ u.clone().try_into().ok().map(Uuid::from_bytes)
}
_ => None,
};
- let Some(uuid) = uuid else {
+ if let Some(uuid) = uuid {
+ let start = r.start() as usize;
+ let end = (r.start() + r.length()) as usize;
+ let name = format!(
+ "{}{}",
+ MENTION_CHAR,
+ channel.participant_by_uuid(uuid).await.title()
+ );
+ mentions.push((start, end, name));
+ }
+ }
+ // Mentions cannot overlap each other; ensure the iterator order is stable.
+ mentions.sort_unstable_by_key(|(s, _, _)| *s);
+
+ let body_utf16: Vec<u16> = body.encode_utf16().collect();
+ let attrs = AttrList::new();
+
+ // Build the result string while constructing a per-utf16-index map
+ // into the resulting UTF-8 byte offsets.
+ let mut byte_at: Vec<usize> = Vec::with_capacity(body_utf16.len() + 1);
+ let mut result_utf8 = String::new();
+ let mut mention_iter = mentions.into_iter().peekable();
+
+ let mut i = 0;
+ while i < body_utf16.len() {
+ // Inject mention substitutions at their start position.
+ if mention_iter
+ .peek()
+ .is_some_and(|(m_start, _, _)| *m_start == i)
+ {
+ let (m_start, m_end, name) = mention_iter.next().expect("peeked entry to exist");
+ let mention_byte_start = result_utf8.len();
+ // Mark every UTF-16 index inside the mention span as the start
+ // of the substituted text. Indices >= m_end will be filled by
+ // subsequent iterations.
+ for _ in m_start..m_end {
+ byte_at.push(mention_byte_start);
+ }
+ result_utf8.push_str(&name);
+
+ let mut highlight =
+ AttrColor::new_foreground(MENTION_COLOR.0, MENTION_COLOR.1, MENTION_COLOR.2);
+ highlight.set_start_index(mention_byte_start as u32);
+ highlight.set_end_index(result_utf8.len() as u32);
+ attrs.insert(highlight);
+
+ i = m_end.min(body_utf16.len());
continue;
- };
- let name = format!(
- "{}{}",
- MENTION_CHAR,
- channel.participant_by_uuid(uuid).await.title()
- );
- let to_add_body = String::from_utf16_lossy(&body_utf16[index_utf16..start]);
- result_utf8.push_str(&to_add_body);
- result_utf8.push_str(&name);
- index_utf16 = end;
-
- let index_start_highlight = index_utf8 + to_add_body.len();
- index_utf8 += to_add_body.len() + name.len();
- let index_end_highlight = index_utf8;
-
- let (red, green, blue) = MENTION_COLOR;
- let mut highlight = AttrColor::new_foreground(red, green, blue);
- highlight.set_start_index(index_start_highlight as u32);
- highlight.set_end_index(index_end_highlight as u32);
- attrs.insert(highlight);
+ }
+
+ byte_at.push(result_utf8.len());
+ let unit = body_utf16[i];
+ if (0xD800..=0xDBFF).contains(&unit) && i + 1 < body_utf16.len() {
+ // High surrogate: consume the pair as one codepoint.
+ let pair = [unit, body_utf16[i + 1]];
+ let decoded = char::decode_utf16(pair.iter().copied())
+ .next()
+ .and_then(|r| r.ok())
+ .unwrap_or('\u{FFFD}');
+ result_utf8.push(decoded);
+ byte_at.push(result_utf8.len());
+ i += 2;
+ } else {
+ let decoded = char::decode_utf16([unit].iter().copied())
+ .next()
+ .and_then(|r| r.ok())
+ .unwrap_or('\u{FFFD}');
+ result_utf8.push(decoded);
+ i += 1;
+ }
}
+ byte_at.push(result_utf8.len());
- if index_utf16 < body_utf16.len() {
- result_utf8.push_str(&String::from_utf16_lossy(&body_utf16[index_utf16..]))
+ // Apply style ranges using the byte-offset map.
+ for r in ranges {
+ let Some(AssociatedValue::Style(s)) = r.associated_value else {
+ continue;
+ };
+ let style = match BodyRangeStyle::try_from(s) {
+ Ok(BodyRangeStyle::None) | Err(_) => continue,
+ Ok(other) => other,
+ };
+ let start_utf16 = (r.start() as usize).min(byte_at.len() - 1);
+ let end_utf16 = ((r.start() + r.length()) as usize).min(byte_at.len() - 1);
+ if start_utf16 >= end_utf16 {
+ continue;
+ }
+ let start_byte = byte_at[start_utf16] as u32;
+ let end_byte = byte_at[end_utf16] as u32;
+ for attr in style_to_pango_attrs(style, start_byte, end_byte) {
+ attrs.insert(attr);
+ }
}
(Some(result_utf8), attrs)
--
2.53.0

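The formatting patch above repeatedly converts between Rust's UTF-8 strings and the UTF-16 code-unit offsets that Signal's `BodyRange` protocol requires. The core invariant — that a non-BMP character counts as two units — can be sketched in isolation (illustrative only; `utf16_len` is a hypothetical helper, not part of the patch):

```rust
// Illustrative: BodyRange start/length count UTF-16 code units, so a
// non-BMP codepoint like '𝟚' (U+1D7DA) contributes two units even
// though Rust iterates it as a single char.
fn utf16_len(s: &str) -> u32 {
    s.chars().map(|c| c.len_utf16() as u32).sum()
}
```

This is the same accounting the `utf16_offsets_for_non_bmp` test in `formatting.rs` verifies end-to-end.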

@@ -0,0 +1,917 @@
From 96cabe9e786b4ca8ba89064dfd90e71222e919af Mon Sep 17 00:00:00 2001
From: Simon Gardling <titaniumtown@proton.me>
Date: Wed, 29 Apr 2026 19:33:06 -0400
Subject: [PATCH 3/6] feat(messages): Implement edited messages
- Receive incoming EditMessage (1-1 and sync) and replace the body,
body_ranges, and attachments of the targeted message in place. The
receive path uses an EditMessageItem wrapper that mirrors the
DeletionMessage flow.
- Send EditMessage from the message context menu: an Edit action loads
the original body into the input bar and a dedicated indicator
takes the place of the reply hint while editing. Submitting the
edited text dispatches an EditMessage to the channel via a new
Channel::send_internal_content helper that forwards any ContentBody
to the right send path.
- Display an 'edited' label in the message indicators when the local
copy of a message has been replaced by an edit.
---
CHANGELOG.md | 1 +
data/resources/ui/channel_messages.blp | 89 ++++++++++++++++++++
data/resources/ui/components/indicators.blp | 10 +++
data/resources/ui/message_item.blp | 10 +++
src/backend/channel.rs | 92 ++++++++++++++++++++-
src/backend/message/edit_message_item.rs | 66 +++++++++++++++
src/backend/message/formatting.rs | 11 ++-
src/backend/message/mod.rs | 71 ++++++++++++++++
src/backend/message/text_message.rs | 62 ++++++++++++++
src/gui/channel_messages.rs | 67 +++++++++++++++
src/gui/components/indicators.rs | 2 +
src/gui/components/item_row.rs | 58 ++++++-------
src/gui/message_item.rs | 27 ++++++
13 files changed, 528 insertions(+), 38 deletions(-)
create mode 100644 src/backend/message/edit_message_item.rs
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 50cd5f5..0338ed8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -12,6 +12,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Settings to enable or disable sending and showing typing indicators.
- Render formatted message styles (bold, italic, strikethrough, spoiler, monospace) on incoming messages.
- Send formatted messages with markdown-style markers (`**bold**`, `*italic*`, `~~strike~~`, `||spoiler||`, `` `monospace` ``).
+- Display incoming edited messages with an `edited` indicator and edit your own sent messages from their context menu.
## [0.20.4] - 2026-04-22
diff --git a/data/resources/ui/channel_messages.blp b/data/resources/ui/channel_messages.blp
index 6c3948f..f3d2348 100644
--- a/data/resources/ui/channel_messages.blp
+++ b/data/resources/ui/channel_messages.blp
@@ -238,6 +238,95 @@ template $FlChannelMessages: Box {
}
}
+ // Editing indicator
+ Box {
+ vexpand-set: true;
+
+ styles [
+ "currently-replied-box",
+ ]
+
+ visible: bind $is_some(template.editing-message) as <bool>;
+
+ Image {
+ styles [
+ "accent",
+ ]
+
+ icon-name: "document-edit-symbolic";
+ width-request: 34;
+ }
+
+ Separator {
+ styles [
+ "accent",
+ ]
+
+ margin-start: 6;
+ width-request: 2;
+ }
+
+ Grid {
+ hexpand: true;
+ margin-start: 12;
+ margin-end: 12;
+
+ Label {
+ styles [
+ "heading",
+ ]
+
+ halign: start;
+ label: _("Editing message");
+ wrap: true;
+ wrap-mode: word_char;
+ lines: 1;
+ ellipsize: end;
+
+ layout {
+ row: 0;
+ column: 0;
+ }
+ }
+
+ Label {
+ styles [
+ "message-text",
+ ]
+
+ halign: fill;
+ label: bind template.editing-message as <$FlTextMessage>.body;
+ wrap: true;
+ wrap-mode: word_char;
+ lines: 2;
+ ellipsize: end;
+ xalign: 0;
+
+ layout {
+ row: 1;
+ column: 0;
+ }
+ }
+ }
+
+ Button {
+ accessibility {
+ label: C_("accessibility", "Cancel editing");
+ }
+
+ tooltip-text: C_("tooltip", "Cancel editing");
+
+ styles [
+ "flat",
+ "circular",
+ ]
+
+ valign: center;
+ clicked => $cancel_edit() swapped;
+ icon-name: "window-close-symbolic";
+ }
+ }
+
// Box for attachments
Box {
vexpand-set: true;
diff --git a/data/resources/ui/components/indicators.blp b/data/resources/ui/components/indicators.blp
index f6c51f6..977f1c4 100644
--- a/data/resources/ui/components/indicators.blp
+++ b/data/resources/ui/components/indicators.blp
@@ -8,6 +8,16 @@ template $FlMessageIndicators {
halign: end;
valign: end;
+ Label edited_label {
+ styles [
+ "dim-label",
+ "caption",
+ ]
+
+ visible: bind template.edited;
+ label: _("edited");
+ }
+
Label message_info_label {
styles [
"dim-label",
diff --git a/data/resources/ui/message_item.blp b/data/resources/ui/message_item.blp
index 82c018b..2c21b8b 100644
--- a/data/resources/ui/message_item.blp
+++ b/data/resources/ui/message_item.blp
@@ -16,6 +16,14 @@ menu message-menu {
icon: "mail-reply-sender-symbolic";
}
+ item {
+ label: _("Edit");
+ action: "msg.edit";
+ verb-icon: "document-edit-symbolic";
+ icon: "document-edit-symbolic";
+ hidden-when: "action-disabled";
+ }
+
item {
label: _("Delete");
action: "msg.delete";
@@ -243,6 +251,7 @@ template $FlMessageItem: $ContextMenuBin {
valign: end;
halign: end;
timestamp: bind $format_time_human(template.message as <$FlTextMessage>.datetime) as <string>;
+ edited: bind template.message as <$FlTextMessage>.is-edited;
}
}
@@ -253,6 +262,7 @@ template $FlMessageItem: $ContextMenuBin {
indicators: $FlMessageIndicators timestamp {
timestamp: bind $format_time_human(template.message as <$FlTextMessage>.datetime) as <string>;
visible: bind $not_empty(template.message as <$FlTextMessage>.body) as <bool>;
+ edited: bind template.message as <$FlTextMessage>.is-edited;
};
}
diff --git a/src/backend/channel.rs b/src/backend/channel.rs
index 4bb1d38..711c92c 100644
--- a/src/backend/channel.rs
+++ b/src/backend/channel.rs
@@ -1,8 +1,8 @@
use crate::backend::{
Contact, Manager, Message,
message::{
- DeletionMessage, DisplayMessage, DisplayMessageExt, MessageExt, ReactionMessage,
- TextMessage,
+ DeletionMessage, DisplayMessage, DisplayMessageExt, EditMessageItem, MessageExt,
+ ReactionMessage, TextMessage,
},
timeline::{TimelineItem, TimelineItemExt},
};
@@ -336,6 +336,17 @@ impl Channel {
message.react(reaction);
}
}
+
+ // Apply pending edits queued while the original was unloaded.
+ // Edits are stored ordered by ascending edit-timestamp so
+ // applying them in sequence converges on the latest content.
+ let pending = self.imp().pending_edits.borrow_mut().remove(&id);
+ if let Some(edits) = pending {
+ log::trace!("Applying {} pending edit(s) to message {id}", edits.len());
+ for edit in edits {
+ message.apply_edit(edit).await;
+ }
+ }
}
// Apply reactions or store them.
@@ -420,6 +431,44 @@ impl Channel {
log::trace!("Deletion message aimed at an unloaded message. Will be ignored");
}
}
+
+ // Edit messages: replace the original message's body with the new
+ // content and remember that the message was edited.
+ if let Some(edit_item) = message.dynamic_cast_ref::<EditMessageItem>() {
+ let Some(target_ts) = edit_item.target_timestamp() else {
+ log::warn!("Got an EditMessage without a target timestamp; ignoring.");
+ return Ok(());
+ };
+ let Some(new_data) = edit_item.edit().and_then(|e| e.data_message) else {
+ log::warn!("Got an EditMessage without a data_message; ignoring.");
+ return Ok(());
+ };
+ crate::trace!(
+ "Channel {} got an edit message targeting timestamp: {}",
+ self.title(),
+ target_ts
+ );
+ let edited_msg = self
+ .imp()
+ .timeline
+ .borrow()
+ .get_by_timestamp(target_ts)
+ .and_then(|o| o.dynamic_cast::<TextMessage>().ok());
+ if let Some(edited_msg) = edited_msg {
+ edited_msg.apply_edit(new_data).await;
+ self.notify("last-message");
+ } else {
+ log::trace!("Edit target {target_ts} not loaded yet; queueing for when it lands.");
+ let edit_ts = new_data.timestamp.unwrap_or(0);
+ let mut pending = self.imp().pending_edits.borrow_mut();
+ let entry = pending.entry(target_ts).or_default();
+ let to_insert = entry
+ .binary_search_by_key(&edit_ts, |d| d.timestamp.unwrap_or(0))
+ .unwrap_or_else(|e| e);
+ entry.insert(to_insert, new_data);
+ }
+ }
+
Ok(())
}
@@ -482,6 +531,38 @@ impl Channel {
Ok(())
}
+ /// Send an arbitrary [ContentBody] to this channel, dispatching to the
+ /// appropriate single-contact or group send path.
+ pub(super) async fn send_internal_content(
+ &self,
+ body: impl Into<libsignal_service::content::ContentBody>,
+ timestamp: u64,
+ ) -> Result<(), crate::ApplicationError> {
+ let manager = self.manager();
+ let body = body.into();
+ let receiver_contact = self
+ .imp()
+ .contact
+ .borrow()
+ .as_ref()
+ .and_then(|c| c.address());
+
+ if let Some(contact) = receiver_contact {
+ manager.send_message(contact, body, timestamp).await?;
+ } else {
+ let group_master_key = self
+ .imp()
+ .group_context
+ .borrow()
+ .as_ref()
+ .and_then(|c| c.master_key.clone());
+ if let Some(key) = group_master_key {
+ manager.send_message_to_group(key, body, timestamp).await?;
+ }
+ }
+ Ok(())
+ }
+
/// Send a message to the channel and add it to the channel.
pub async fn send_message(&self, msg: Message) -> Result<(), crate::ApplicationError> {
msg.mark_as_read();
@@ -749,7 +830,7 @@ mod imp {
use gdk::Paintable;
- use libsignal_service::proto::GroupContextV2;
+ use libsignal_service::proto::{DataMessage, GroupContextV2};
use presage::model::groups::Group;
#[derive(Default, glib::Properties)]
@@ -762,6 +843,11 @@ mod imp {
pub(super) participants: RefCell<Vec<Contact>>,
pub(super) pending_reactions: RefCell<HashMap<u64, Vec<ReactionMessage>>>,
+ /// Edits whose target message is not yet in the timeline. Cold
+ /// scrollback walks newest-first, so an EditMessage may arrive in
+ /// `do_new_message` before its original TextMessage has been
+ /// loaded; the original picks up any queued edits when it lands.
+ pub(super) pending_edits: RefCell<HashMap<u64, Vec<DataMessage>>>,
pub(super) typing: RefCell<HashMap<Uuid, TypingNotification>>,
#[property(name = "avatar", get = Self::avatar)]
diff --git a/src/backend/message/edit_message_item.rs b/src/backend/message/edit_message_item.rs
new file mode 100644
index 0000000..9655f50
--- /dev/null
+++ b/src/backend/message/edit_message_item.rs
@@ -0,0 +1,66 @@
+use crate::backend::timeline::TimelineItem;
+use crate::backend::{Channel, Contact};
+use crate::prelude::*;
+
+use libsignal_service::proto::EditMessage;
+
+use super::{Manager, Message};
+
+gtk::glib::wrapper! {
+ /// An incoming edit-message wrapper carrying the new content for a
+ /// previously-sent message identified by its sent-timestamp.
+ pub struct EditMessageItem(ObjectSubclass<imp::EditMessageItem>) @extends Message, TimelineItem;
+}
+
+impl EditMessageItem {
+ pub fn from_edit(
+ sender: &Contact,
+ channel: &Channel,
+ timestamp: u64,
+ manager: &Manager,
+ edit: EditMessage,
+ ) -> Self {
+ let s: Self = Object::builder::<Self>()
+ .property("sender", sender)
+ .property("channel", channel)
+ .property("timestamp", timestamp)
+ .property("manager", manager)
+ .build();
+ s.imp().edit.replace(Some(edit));
+ s
+ }
+
+ /// Sent-timestamp of the message this edit replaces.
+ pub fn target_timestamp(&self) -> Option<u64> {
+ self.edit().and_then(|e| e.target_sent_timestamp)
+ }
+
+ /// The full [EditMessage], carrying the [DataMessage] payload that
+ /// replaces the targeted message's content.
+ pub fn edit(&self) -> Option<EditMessage> {
+ self.imp().edit.borrow().clone()
+ }
+}
+
+mod imp {
+ use crate::backend::{Message, message::MessageImpl, timeline::TimelineItemImpl};
+ use crate::prelude::*;
+
+ use libsignal_service::proto::EditMessage;
+
+ #[derive(Default)]
+ pub struct EditMessageItem {
+ pub(super) edit: RefCell<Option<EditMessage>>,
+ }
+
+ #[glib::object_subclass]
+ impl ObjectSubclass for EditMessageItem {
+ const NAME: &'static str = "FlEditMessageItem";
+ type Type = super::EditMessageItem;
+ type ParentType = Message;
+ }
+
+ impl TimelineItemImpl for EditMessageItem {}
+ impl MessageImpl for EditMessageItem {}
+ impl ObjectImpl for EditMessageItem {}
+}
diff --git a/src/backend/message/formatting.rs b/src/backend/message/formatting.rs
index 5a1d596..ed12a85 100644
--- a/src/backend/message/formatting.rs
+++ b/src/backend/message/formatting.rs
@@ -108,13 +108,12 @@ pub fn parse_formatting(input: &str) -> (String, Vec<BodyRange>) {
// Mark which character positions are part of a matched marker token and
// therefore must be removed from the cleaned output.
let mut skip = vec![false; chars.len()];
+ let total = chars.len();
for sp in &spans {
- for k in sp.open_pos..(sp.open_pos + sp.marker_len).min(chars.len()) {
- skip[k] = true;
- }
- for k in sp.close_pos..(sp.close_pos + sp.marker_len).min(chars.len()) {
- skip[k] = true;
- }
+ let open_end = (sp.open_pos + sp.marker_len).min(total);
+ skip[sp.open_pos..open_end].fill(true);
+ let close_end = (sp.close_pos + sp.marker_len).min(total);
+ skip[sp.close_pos..close_end].fill(true);
}
// Build the cleaned output and a per-input-char map into the output's
diff --git a/src/backend/message/mod.rs b/src/backend/message/mod.rs
index 4e0f584..f3a0537 100644
--- a/src/backend/message/mod.rs
+++ b/src/backend/message/mod.rs
@@ -1,6 +1,7 @@
mod call_message;
mod deletion_message;
mod display_message;
+mod edit_message_item;
mod formatting;
mod reaction_message;
mod text_message;
@@ -8,6 +9,7 @@ mod text_message;
pub use call_message::{CallMessage, CallMessageType};
pub use deletion_message::DeletionMessage;
pub use display_message::{DisplayMessage, DisplayMessageExt};
+pub use edit_message_item::EditMessageItem;
pub use formatting::parse_formatting;
pub use reaction_message::ReactionMessage;
pub use text_message::TextMessage;
@@ -253,6 +255,75 @@ impl Message {
.upcast(),
)
}
+ // An edit-message replacing the body of an earlier message.
+ ContentBody::EditMessage(edit) => {
+ let Some(data_message) = edit.data_message.as_ref() else {
+ log::warn!("Got an EditMessage without a data_message; ignoring.");
+ return None;
+ };
+ let channel = manager
+ .channel_from_uuid_or_group(metadata.sender, &data_message.group_v2)
+ .await;
+ let contact = channel
+ .participant_by_uuid(metadata.sender.raw_uuid())
+ .await;
+ if contact.is_blocked() {
+ log::debug!("Got message from a blocked contact. Ignoring");
+ return None;
+ }
+ log::trace!("Got an edit message");
+ Some(
+ EditMessageItem::from_edit(
+ &contact,
+ &channel,
+ timestamp,
+ manager,
+ edit.clone(),
+ )
+ .upcast(),
+ )
+ }
+ // An edit-message sent from another device of the same account.
+ ContentBody::SynchronizeMessage(SyncMessage {
+ sent:
+ Some(
+ sent @ Sent {
+ edit_message: Some(edit),
+ ..
+ },
+ ),
+ ..
+ }) => {
+ let Some(data_message) = edit.data_message.as_ref() else {
+ log::warn!("Got a sync EditMessage without a data_message; ignoring.");
+ return None;
+ };
+ let channel = manager
+ .channel_from_uuid_or_group(
+ sent.parse_destination_service_id()
+ .unwrap_or(metadata.sender),
+ &data_message.group_v2,
+ )
+ .await;
+ let contact = channel
+ .participant_by_uuid(metadata.sender.raw_uuid())
+ .await;
+ if contact.is_blocked() {
+ log::debug!("Got message from a blocked contact. Ignoring");
+ return None;
+ }
+ log::trace!("Got an edit message (sync)");
+ Some(
+ EditMessageItem::from_edit(
+ &contact,
+ &channel,
+ timestamp,
+ manager,
+ edit.clone(),
+ )
+ .upcast(),
+ )
+ }
// Call message.
ContentBody::CallMessage(c) => {
// TODO: Group calls?
diff --git a/src/backend/message/text_message.rs b/src/backend/message/text_message.rs
index c06bcfa..ff7aaaa 100644
--- a/src/backend/message/text_message.rs
+++ b/src/backend/message/text_message.rs
@@ -199,6 +199,66 @@ impl TextMessage {
self.set_property("is-deleted", true);
}
+ /// Replace the message's body with `new_data` and mark the message as
+ /// edited so the UI can surface this to the user.
+ pub async fn apply_edit(&self, new_data: DataMessage) {
+ let new_body = new_data.body.clone();
+ let new_body_ranges = new_data.body_ranges.clone();
+ let edit_attachments = new_data.attachments.clone();
+ if let Some(data) = self.internal_data_mut().as_mut() {
+ data.body = new_body;
+ data.body_ranges = new_body_ranges;
+ // Replacing attachments matches Signal Desktop's behaviour;
+ // a typical edit only changes the body but the protocol allows
+ // updating the attachments as well.
+ if !edit_attachments.is_empty() {
+ data.attachments = edit_attachments;
+ }
+ }
+ self.set_property("is-edited", true);
+ self.prepare_format_body().await;
+ }
+
+ /// Send an edit for this message, replacing its body with `text`.
+ pub async fn send_edit<S: AsRef<str>>(&self, text: S) -> Result<(), crate::ApplicationError> {
+ let target_sent_timestamp = Some(self.timestamp());
+ let send_timestamp = std::time::SystemTime::now()
+ .duration_since(std::time::UNIX_EPOCH)
+ .expect("Time went backwards")
+ .as_millis() as u64;
+
+ let cleaned = text.as_ref().to_owned();
+ let (body, body_ranges) = if cleaned.is_empty() {
+ (None, Vec::new())
+ } else {
+ let (body, ranges) = super::parse_formatting(&cleaned);
+ (Some(body), ranges)
+ };
+
+ // Carry forward the original message's structural fields (quote,
+ // attachments, expire_timer, sticker, group_v2, etc.) so peers do
+ // not see them cleared when applying the edit. Only body,
+ // body_ranges, and timestamp differ.
+ let mut inner = self.internal_data().unwrap_or_default();
+ inner.body = body;
+ inner.body_ranges = body_ranges;
+ inner.timestamp = Some(send_timestamp);
+
+ let edit = libsignal_service::proto::EditMessage {
+ target_sent_timestamp,
+ data_message: Some(inner.clone()),
+ };
+
+ self.channel()
+ .send_internal_content(edit, send_timestamp)
+ .await?;
+
+ // Mirror the change locally.
+ self.apply_edit(inner).await;
+ self.channel().notify("last-message");
+ Ok(())
+ }
+
/// Send a reaction for a message and apply it.
pub async fn send_reaction<S: AsRef<str>>(
&self,
@@ -462,6 +522,8 @@ mod imp {
pub(super) message_attributes: RefCell<AttrList>,
#[property(get, set)]
pub(super) is_deleted: RefCell<bool>,
+ #[property(get, set)]
+ pub(super) is_edited: RefCell<bool>,
}
impl TextMessage {
diff --git a/src/gui/channel_messages.rs b/src/gui/channel_messages.rs
index 831fc25..9187bee 100644
--- a/src/gui/channel_messages.rs
+++ b/src/gui/channel_messages.rs
@@ -2,6 +2,7 @@ use crate::prelude::*;
use gio::SettingsBindFlags;
use crate::ApplicationError;
+use crate::backend::message::TextMessage;
const MESSAGES_REQUEST_LOAD: usize = 10;
@@ -24,6 +25,19 @@ glib::wrapper! {
}
impl ChannelMessages {
+ /// Begin editing `msg`: load its body into the text entry, mark it as
+ /// the editing target, and clear any pending reply.
+ pub fn start_editing(&self, msg: Option<TextMessage>) {
+ if let Some(msg) = msg.as_ref() {
+ self.set_reply_message(None::<TextMessage>);
+ self.imp()
+ .text_entry
+ .set_text(msg.body().unwrap_or_default());
+ }
+ self.set_editing_message(msg);
+ self.imp().text_entry.grab_focus();
+ }
+
pub fn focus_input(&self) {
self.imp().text_entry.grab_focus();
}
@@ -358,6 +372,8 @@ pub mod imp {
active_channel: RefCell<Option<Channel>>,
#[property(get, set, nullable)]
reply_message: RefCell<Option<TextMessage>>,
+ #[property(get, set, nullable)]
+ editing_message: RefCell<Option<TextMessage>>,
#[property(get, set, default = true)]
sticky: Cell<bool>,
@@ -480,6 +496,13 @@ pub mod imp {
self.obj().set_reply_message(None::<TextMessage>);
}
+ #[template_callback]
+ fn cancel_edit(&self) {
+ log::trace!("Unsetting editing message");
+ self.obj().set_editing_message(None::<TextMessage>);
+ self.text_entry.clear();
+ }
+
#[template_callback]
fn remove_attachments(&self) {
log::trace!("Unsetting attachments");
@@ -585,6 +608,33 @@ pub mod imp {
};
self.obj().notify("has-attachments");
+ // If we are editing an existing message, send an EditMessage
+ // instead of constructing a new one.
+ if let Some(target) = self.obj().editing_message() {
+ self.obj().set_editing_message(None::<TextMessage>);
+ if text.is_empty() {
+ log::warn!("Refusing to send an empty edit; dropping the change.");
+ return;
+ }
+ let obj = self.obj();
+ gspawn!(clone!(
+ #[strong]
+ obj,
+ async move {
+ if let Err(e) = target.send_edit(text).await {
+ let root = obj
+ .root()
+ .expect("`ChannelMessages` to have a root")
+ .dynamic_cast::<crate::gui::Window>()
+ .expect("Root of `ChannelMessages` to be a `Window`.");
+ let dialog = ErrorDialog::new(&e, &root);
+ dialog.present(Some(&root));
+ }
+ }
+ ));
+ return;
+ }
+
if text.is_empty() && attachments.is_empty() {
log::warn!("Got requested to send empty message, skipping");
}
@@ -683,6 +733,22 @@ pub mod imp {
}
),
);
+ widget.connect_local(
+ "edit",
+ false,
+ clone!(
+ #[weak]
+ obj,
+ #[upgrade_or_default]
+ move |args| {
+ let msg = args[1].get::<Option<TextMessage>>().expect(
+ "Type of signal `edit` of `ItemRow` to be `TextMessage`.",
+ );
+ obj.start_editing(msg);
+ None
+ }
+ ),
+ );
let list_item = object.downcast_ref::<gtk::ListItem>().unwrap();
list_item.set_activatable(false);
list_item.set_selectable(false);
@@ -735,6 +801,7 @@ pub mod imp {
self,
move |_, _| {
s.obj().set_reply_message(None::<TextMessage>);
+ s.obj().set_editing_message(None::<TextMessage>);
if let Some(channel) = s.active_channel.borrow().as_ref() {
let draft = channel.property("draft");
// Block the typing buffer-changed handler so
diff --git a/src/gui/components/indicators.rs b/src/gui/components/indicators.rs
index ce38221..4356607 100644
--- a/src/gui/components/indicators.rs
+++ b/src/gui/components/indicators.rs
@@ -26,6 +26,8 @@ mod imp {
pub struct MessageIndicators {
#[property(get, set)]
pub(super) timestamp: RefCell<String>,
+ #[property(get, set)]
+ pub(super) edited: Cell<bool>,
//TODO: Implement sending state
//#[template_child]
//pub(super) sending_state_icon: TemplateChild<gtk::Image>,
diff --git a/src/gui/components/item_row.rs b/src/gui/components/item_row.rs
index b2c20d3..538b1bb 100644
--- a/src/gui/components/item_row.rs
+++ b/src/gui/components/item_row.rs
@@ -1,5 +1,3 @@
-use glib::SignalHandlerId;
-
use crate::prelude::*;
use crate::{
@@ -27,23 +25,25 @@ impl ItemRow {
fn timeline_item_to_widget(&self, item: &TimelineItem) -> Option<gtk::Widget> {
if let Some(message) = item.dynamic_cast_ref::<TextMessage>() {
let widget = MessageItem::new(message);
- let handler = widget.connect_local(
- "reply",
- false,
- clone!(
- #[weak(rename_to = s)]
- self,
- #[upgrade_or_default]
- move |args| {
- let msg = args[1]
- .get::<TextMessage>()
- .expect("Type of signal `reply` of `MessageItem` to be `TextMessage`.");
- s.emit_by_name::<()>("reply", &[&msg]);
- None
- }
- ),
- );
- self.set_handler(handler);
+ for signal in ["reply", "edit"] {
+ let handler = widget.connect_local(
+ signal,
+ false,
+ clone!(
+ #[weak(rename_to = s)]
+ self,
+ #[upgrade_or_default]
+ move |args| {
+ let msg = args[1]
+ .get::<TextMessage>()
+                                .expect("Parameter of `reply`/`edit` signal of `MessageItem` to be `TextMessage`.");
+ s.emit_by_name::<()>(signal, &[&msg]);
+ None
+ }
+ ),
+ );
+ self.imp().handlers.borrow_mut().push(handler);
+ }
Some(widget.dynamic_cast().unwrap())
} else if let Some(message) = item.dynamic_cast_ref::<CallMessage>() {
let widget = CallMessageItem::new(message);
@@ -53,12 +53,6 @@ impl ItemRow {
None
}
}
-
- /// Set the pending handler of the ItemRow.
- /// At most one handler may be pending.
- fn set_handler(&self, handler: SignalHandlerId) {
- self.imp().handler.replace(Some(handler));
- }
}
mod imp {
@@ -77,7 +71,7 @@ mod imp {
#[derive(Debug, Default, CompositeTemplate)]
#[template(resource = "/ui/components/item_row.ui")]
pub struct ItemRow {
- pub(super) handler: RefCell<Option<SignalHandlerId>>,
+ pub(super) handlers: RefCell<Vec<SignalHandlerId>>,
}
#[glib::object_subclass]
@@ -116,12 +110,15 @@ mod imp {
.get::<Option<TimelineItem>>()
.expect("ItemRow to only get TimelineItem");
- if let Some(handler) = self.handler.take() {
+ let handlers = self.handlers.take();
+ if !handlers.is_empty() {
if let Some(child) = obj.child() {
- child.disconnect(handler);
+ for handler in handlers {
+ child.disconnect(handler);
+ }
} else {
log::warn!(
- "A handler was set for an item row, but no child registered. This should not happen."
+ "Handlers were set for an item row, but no child registered. This should not happen."
);
}
}
@@ -143,6 +140,9 @@ mod imp {
Signal::builder("reply")
.param_types([TextMessage::static_type()])
.build(),
+ Signal::builder("edit")
+ .param_types([TextMessage::static_type()])
+ .build(),
]
});
SIGNALS.as_ref()
diff --git a/src/gui/message_item.rs b/src/gui/message_item.rs
index 21f504a..59d2778 100644
--- a/src/gui/message_item.rs
+++ b/src/gui/message_item.rs
@@ -94,6 +94,14 @@ impl MessageItem {
s.get_pressed_attachment().imp().open();
}
));
+ let action_edit = SimpleAction::new("edit", None);
+ action_edit.connect_activate(clone!(
+ #[weak(rename_to = s)]
+ self,
+ move |_, _| {
+ s.imp().handle_edit();
+ }
+ ));
let actions = SimpleActionGroup::new();
self.insert_action_group("msg", Some(&actions));
@@ -101,6 +109,7 @@ impl MessageItem {
actions.add_action(&action_delete);
actions.add_action(&action_copy);
actions.add_action(&action_download);
+ actions.add_action(&action_edit);
actions.add_action(&action_open);
self.bind_property("message", &action_delete, "enabled")
@@ -108,6 +117,13 @@ impl MessageItem {
.sync_create()
.build();
+ self.bind_property("message", &action_edit, "enabled")
+ .transform_to(|_, msg: Option<TextMessage>| {
+ msg.map(|m| m.sender().is_self() && m.body().is_some())
+ })
+ .sync_create()
+ .build();
+
self.bind_property("pressed-attachment", &action_download, "enabled")
.transform_to(|_, att: Option<Attachment>| Some(att.is_some()))
.sync_create()
@@ -550,6 +566,14 @@ pub mod imp {
gspawn!(async move { msg.delete().await });
}
+ #[template_callback]
+ pub(super) fn handle_edit(&self) {
+ let obj = self.obj();
+ let msg = obj.message();
+ crate::trace!("Editing a message: {}", msg.body().unwrap_or_default());
+ obj.emit_by_name::<()>("edit", &[&msg]);
+ }
+
// Signal uses an old Unicode codepoint for the heart emoji, which GTK renders as a black heart. This function converts it to the standard red heart.
#[template_callback(function)]
pub(super) fn fix_emoji(emoji: Option<String>) -> Option<String> {
@@ -637,6 +661,9 @@ pub mod imp {
Signal::builder("reply")
.param_types([TextMessage::static_type()])
.build(),
+ Signal::builder("edit")
+ .param_types([TextMessage::static_type()])
+ .build(),
]
});
SIGNALS.as_ref()
--
2.53.0
@@ -0,0 +1,673 @@
From 86088503e4acb398aff50c4bfdc603c2518370f2 Mon Sep 17 00:00:00 2001
From: Simon Gardling <titaniumtown@proton.me>
Date: Wed, 29 Apr 2026 19:53:22 -0400
Subject: [PATCH 4/6] feat(messages): Multi-select messages and delete for me
- Add a 'Select' action to the message context menu that puts the
channel into selection mode and pre-selects the message. While in
selection mode, every message item shows a check button and a
toolbar appears below the message list with a destructive
'Delete for me' action plus a cancel button.
- Track selection state with a transient `selected` property on
`Message` and a `selection-mode` property on `Channel`. A new
`selection-changed` signal lets the channel-messages view update
the selection summary without polling.
- Add `Channel::delete_messages_locally` plus the matching
`Manager::delete_messages_locally` and a new
`Timeline::remove_by_timestamp` helper. The action only purges the
local copy and never sends a remote deletion.
---
CHANGELOG.md | 1 +
data/resources/style.css | 14 +++
data/resources/ui/channel_messages.blp | 60 ++++++++++++
data/resources/ui/message_item.blp | 20 ++++
src/backend/channel.rs | 25 +++++
src/backend/manager.rs | 32 ++++++
src/backend/message/mod.rs | 2 +
src/backend/timeline/mod.rs | 16 +++
src/gui/channel_messages.rs | 129 ++++++++++++++++++++++++-
src/gui/message_item.rs | 118 ++++++++++++++++++++++
10 files changed, 416 insertions(+), 1 deletion(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0338ed8..47ec77a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -13,6 +13,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Render formatted message styles (bold, italic, strikethrough, spoiler, monospace) on incoming messages.
- Send formatted messages with markdown-style markers (`**bold**`, `*italic*`, `~~strike~~`, `||spoiler||`, `` `monospace` ``).
- Display incoming edited messages with an `edited` indicator and edit your own sent messages from their context menu.
+- Multi-select messages from their context menu and delete the selection locally with a single action.
## [0.20.4] - 2026-04-22
diff --git a/data/resources/style.css b/data/resources/style.css
index 00e4783..1c0cdfd 100644
--- a/data/resources/style.css
+++ b/data/resources/style.css
@@ -19,6 +19,20 @@
min-height: 18px;
}
+.message-item.in-selection-mode .avatar-other {
+ opacity: 0;
+}
+
+.message-item.in-selection-mode.selected .message-bubble {
+ outline: 2px solid @accent_color;
+}
+
+.selection-toolbar {
+ padding: 6px 12px;
+ background-color: @window_bg_color;
+ border-top: 1px solid @borders;
+}
+
.message-list row {
padding:0;
}
diff --git a/data/resources/ui/channel_messages.blp b/data/resources/ui/channel_messages.blp
index f3d2348..eb927f8 100644
--- a/data/resources/ui/channel_messages.blp
+++ b/data/resources/ui/channel_messages.blp
@@ -135,6 +135,66 @@ template $FlChannelMessages: Box {
}
}
+ // Selection toolbar (shown when in multi-select mode).
+ Box selection_toolbar {
+ styles [
+ "selection-toolbar",
+ ]
+
+ orientation: horizontal;
+ spacing: 12;
+ hexpand: true;
+ visible: bind template.active-channel as <$FlChannel>.selection-mode;
+
+ Button {
+ accessibility {
+ label: C_("accessibility", "Cancel selection");
+ }
+
+ tooltip-text: C_("tooltip", "Cancel selection");
+
+ styles [
+ "flat",
+ "circular",
+ ]
+
+ valign: center;
+ clicked => $cancel_selection() swapped;
+ icon-name: "window-close-symbolic";
+ }
+
+ Label {
+ hexpand: true;
+ halign: start;
+ label: bind template.selection-summary;
+ }
+
+ Button {
+ styles [
+ "destructive-action",
+ "pill",
+ ]
+
+ valign: center;
+ clicked => $delete_selection() swapped;
+ sensitive: bind template.has-selection;
+
+ Box {
+ orientation: horizontal;
+ spacing: 6;
+
+ Image {
+ icon-name: "user-trash-symbolic";
+ }
+
+ Label {
+ label: _("Delete for me");
+ }
+ }
+ }
+ }
+
Box {
styles [
"toolbar",
diff --git a/data/resources/ui/message_item.blp b/data/resources/ui/message_item.blp
index 2c21b8b..ba3fd23 100644
--- a/data/resources/ui/message_item.blp
+++ b/data/resources/ui/message_item.blp
@@ -24,6 +24,13 @@ menu message-menu {
hidden-when: "action-disabled";
}
+ item {
+ label: _("Select");
+ action: "msg.select";
+ verb-icon: "checkbox-checked-symbolic";
+ icon: "checkbox-checked-symbolic";
+ }
+
item {
label: _("Delete");
action: "msg.delete";
@@ -87,6 +94,19 @@ template $FlMessageItem: $ContextMenuBin {
}
}
+ CheckButton selection_check {
+ visible: bind template.message as <$FlTextMessage>.channel as <$FlChannel>.selection-mode;
+ active: bind template.message as <$FlTextMessage>.selected;
+ valign: center;
+ can-target: false;
+ can-focus: false;
+
+ layout {
+ row: 0;
+ column: 0;
+ }
+ }
+
Adw.Spinner {
visible: bind template.message as <$FlTextMessage>.pending;
tooltip-text: _("This message is currently being sent");
diff --git a/src/backend/channel.rs b/src/backend/channel.rs
index 711c92c..f07ce96 100644
--- a/src/backend/channel.rs
+++ b/src/backend/channel.rs
@@ -199,6 +199,28 @@ impl Channel {
Ok(())
}
+ /// Delete a set of messages locally only ("Delete for me"). Removes them
+ /// from the encrypted local store and from the in-memory timeline.
+ pub async fn delete_messages_locally(
+ &self,
+ timestamps: Vec<u64>,
+ ) -> Result<(), ApplicationError> {
+ if timestamps.is_empty() {
+ return Ok(());
+ }
+ let purged = self
+ .manager()
+ .delete_messages_locally(self, timestamps)
+ .await?;
+ let timeline = self.imp().timeline.borrow();
+ for ts in &purged {
+ timeline.remove_by_timestamp(*ts);
+ }
+ drop(timeline);
+ self.notify("last-message");
+ Ok(())
+ }
+
pub(super) fn group_context(&self) -> Option<GroupContextV2> {
self.imp().group_context.borrow().clone()
}
@@ -859,6 +881,8 @@ mod imp {
pub(super) draft: RefCell<String>,
#[property(get, set)]
pub(super) is_active: RefCell<bool>,
+ #[property(get, set)]
+ pub(super) selection_mode: RefCell<bool>,
#[property(get = Self::last_message)]
pub(super) last_message: PhantomData<Option<DisplayMessage>>,
@@ -998,6 +1022,7 @@ mod imp {
Signal::builder("message")
.param_types([DisplayMessage::static_type()])
.build(),
+ Signal::builder("selection-changed").build(),
]
});
SIGNALS.as_ref()
diff --git a/src/backend/manager.rs b/src/backend/manager.rs
index eaa41e0..0964681 100644
--- a/src/backend/manager.rs
+++ b/src/backend/manager.rs
@@ -210,6 +210,38 @@ impl Manager {
Ok(())
}
+ /// Delete a set of messages locally only ("Delete for me"). The remote
+ /// peer is not informed; this only purges the local copy from storage.
+ ///
+ /// Returns the timestamps that were actually purged from the store; the
+ /// caller should mirror only those into the in-memory timeline so a
+ /// per-message store failure does not leave the on-disk state and the UI
+ /// permanently out of sync.
+ pub async fn delete_messages_locally(
+ &self,
+ channel: &Channel,
+ timestamps: Vec<u64>,
+ ) -> Result<Vec<u64>, ApplicationError> {
+ let thread = channel.thread();
+ let mut store = self.store();
+ let purged = tspawn!(async move {
+ let mut purged = Vec::with_capacity(timestamps.len());
+ for ts in timestamps {
+ match store.delete_message(&thread, ts).await {
+ // Both "row deleted" and "row was already absent" mean
+ // the store no longer holds this message, so it is safe
+ // for the timeline to drop it.
+ Ok(_) => purged.push(ts),
+ Err(e) => log::warn!("Failed to locally delete message {ts}: {e}"),
+ }
+ }
+ Ok::<Vec<u64>, ApplicationError>(purged)
+ })
+ .await
+ .expect("Failed to spawn tokio")?;
+ Ok(purged)
+ }
+
pub async fn submit_recaptcha_challenge<S: AsRef<str>>(
&self,
token: S,
diff --git a/src/backend/message/mod.rs b/src/backend/message/mod.rs
index f3a0537..eba08ec 100644
--- a/src/backend/message/mod.rs
+++ b/src/backend/message/mod.rs
@@ -518,6 +518,8 @@ mod imp {
pub(super) pending: RefCell<bool>,
#[property(get, set)]
pub(super) error: RefCell<bool>,
+ #[property(get, set)]
+ pub(super) selected: RefCell<bool>,
pub(super) data: RefCell<Option<DataMessage>>,
diff --git a/src/backend/timeline/mod.rs b/src/backend/timeline/mod.rs
index 1ce6a24..18dd436 100644
--- a/src/backend/timeline/mod.rs
+++ b/src/backend/timeline/mod.rs
@@ -44,6 +44,22 @@ impl Timeline {
self.items_changed(0, len as u32, 0);
}
+ /// Remove the item with the given timestamp from the timeline, if any.
+ /// Returns whether an item was actually removed.
+ pub fn remove_by_timestamp(&self, timestamp: u64) -> bool {
+ let mut list = self.imp().list.borrow_mut();
+ let position = list.binary_search_by_key(&timestamp, |i| i.timestamp());
+ match position {
+ Ok(idx) => {
+ list.remove(idx);
+ drop(list);
+ self.items_changed(idx as u32, 1, 0);
+ true
+ }
+ Err(_) => false,
+ }
+ }
+
pub fn get_by_timestamp(&self, timestamp: u64) -> Option<TimelineItem> {
let current_items = self.imp().list.borrow();
let index = current_items.binary_search_by_key(&timestamp, |i| i.timestamp());
diff --git a/src/gui/channel_messages.rs b/src/gui/channel_messages.rs
index 9187bee..d6d1826 100644
--- a/src/gui/channel_messages.rs
+++ b/src/gui/channel_messages.rs
@@ -2,7 +2,8 @@ use crate::prelude::*;
use gio::SettingsBindFlags;
use crate::ApplicationError;
-use crate::backend::message::TextMessage;
+use crate::backend::message::{DisplayMessage, TextMessage};
+use crate::backend::timeline::TimelineItemExt;
const MESSAGES_REQUEST_LOAD: usize = 10;
@@ -38,6 +39,52 @@ impl ChannelMessages {
self.imp().text_entry.grab_focus();
}
+ /// Collect timestamps of every currently-selected message in the active
+ /// channel.
+ pub fn collect_selected_timestamps(&self) -> Vec<u64> {
+ let Some(channel) = self.active_channel() else {
+ return Vec::new();
+ };
+ channel
+ .timeline()
+ .iter_forwards()
+ .filter(|i| i.is::<DisplayMessage>())
+ .filter_map(|i| i.dynamic_cast::<DisplayMessage>().ok())
+ .filter(|m| m.property::<bool>("selected"))
+ .map(|m| m.timestamp())
+ .collect()
+ }
+
+ /// Exit selection mode, clearing all per-message selection state.
+ pub fn exit_selection_mode(&self) {
+ let Some(channel) = self.active_channel() else {
+ return;
+ };
+ for item in channel.timeline().iter_forwards() {
+ if let Some(msg) = item.dynamic_cast_ref::<DisplayMessage>()
+ && msg.property::<bool>("selected")
+ {
+ msg.set_property("selected", false);
+ }
+ }
+ channel.set_selection_mode(false);
+ self.refresh_selection_summary();
+ }
+
+ /// Walk the timeline, count how many messages are selected, and update
+ /// the displayed selection summary plus the `has-selection` flag.
+ pub fn refresh_selection_summary(&self) {
+ let count = self.collect_selected_timestamps().len() as u32;
+ let summary = if count == 0 {
+ gettextrs::gettext("Select messages to delete for yourself")
+ } else {
+ gettextrs::ngettext("{} message selected", "{} messages selected", count)
+ .replace("{}", &count.to_string())
+ };
+ self.set_selection_summary(summary);
+ self.set_has_selection(count > 0);
+ }
+
pub fn focus_input(&self) {
self.imp().text_entry.grab_focus();
}
@@ -185,6 +232,45 @@ impl ChannelMessages {
}
}
+ /// Wire the selection summary so the toolbar reflects how many messages
+ /// are selected. Called whenever the active channel changes.
+ fn setup_selection_listener(&self) {
+ self.refresh_selection_summary();
+
+ // Disconnect handlers attached on the previous active channel so we
+ // don't accumulate one per channel switch.
+ for (prev_channel, handler) in self.imp().selection_handlers.take() {
+ prev_channel.disconnect(handler);
+ }
+
+ if let Some(channel) = self.active_channel() {
+ let mut handlers = self.imp().selection_handlers.borrow_mut();
+ let h = channel.connect_local(
+ "selection-changed",
+ false,
+ clone!(
+ #[weak(rename_to = s)]
+ self,
+ #[upgrade_or_default]
+ move |_| {
+ s.refresh_selection_summary();
+ None
+ }
+ ),
+ );
+ handlers.push((channel.clone(), h));
+ let h = channel.connect_notify_local(
+ Some("selection-mode"),
+ clone!(
+ #[weak(rename_to = s)]
+ self,
+ move |_, _| s.refresh_selection_summary()
+ ),
+ );
+ handlers.push((channel, h));
+ }
+ }
+
/// Send a `Started` typing event for the active channel.
///
/// Schedules a periodic refresh so the receiver does not let the
@@ -385,6 +471,10 @@ pub mod imp {
has_attachments: PhantomData<bool>,
#[property(get, set)]
show_typing: Cell<bool>,
+ #[property(get, set)]
+ selection_summary: RefCell<String>,
+ #[property(get, set)]
+ has_selection: Cell<bool>,
/// Whether we currently believe the user is composing a message in the
/// active channel and have informed the peer with a `Started` event.
@@ -435,6 +525,7 @@ pub mod imp {
// Inform the previous channel we have stopped typing before we
// forget about it.
self.obj().send_typing_stopped();
+ self.obj().exit_selection_mode();
if let Some(active_chan) = self.active_channel.borrow().as_ref() {
active_chan.set_property("draft", self.text_entry.text());
@@ -447,6 +538,7 @@ pub mod imp {
self.obj().focus_input();
self.obj().setup_typing_indicator();
+ self.obj().setup_selection_listener();
}
#[template_callback(function)]
@@ -503,6 +595,41 @@ pub mod imp {
self.text_entry.clear();
}
+ #[template_callback]
+ fn cancel_selection(&self) {
+ self.obj().exit_selection_mode();
+ }
+
+ #[template_callback]
+ fn delete_selection(&self) {
+ let obj = self.obj();
+ let Some(channel) = obj.active_channel() else {
+ return;
+ };
+ let timestamps = obj.collect_selected_timestamps();
+ obj.exit_selection_mode();
+ if timestamps.is_empty() {
+ return;
+ }
+ gspawn!(clone!(
+ #[strong]
+ channel,
+ #[strong]
+ obj,
+ async move {
+ if let Err(e) = channel.delete_messages_locally(timestamps).await {
+ let root = obj
+ .root()
+ .expect("`ChannelMessages` to have a root")
+ .dynamic_cast::<crate::gui::Window>()
+ .expect("Root of `ChannelMessages` to be a `Window`.");
+ let dialog = ErrorDialog::new(&e, &root);
+ dialog.present(Some(&root));
+ }
+ }
+ ));
+ }
+
#[template_callback]
fn remove_attachments(&self) {
log::trace!("Unsetting attachments");
diff --git a/src/gui/message_item.rs b/src/gui/message_item.rs
index 59d2778..f2d98e2 100644
--- a/src/gui/message_item.rs
+++ b/src/gui/message_item.rs
@@ -34,6 +34,7 @@ impl MessageItem {
s.setup_text();
s.setup_requires_attention();
s.setup_pending_and_error();
+ s.setup_selection();
s
}
@@ -102,6 +103,16 @@ impl MessageItem {
s.imp().handle_edit();
}
));
+ let action_select = SimpleAction::new("select", None);
+ action_select.connect_activate(clone!(
+ #[weak(rename_to = s)]
+ self,
+ move |_, _| {
+ let msg = s.message();
+ msg.channel().set_selection_mode(true);
+ msg.set_property("selected", true);
+ }
+ ));
let actions = SimpleActionGroup::new();
self.insert_action_group("msg", Some(&actions));
@@ -111,6 +122,7 @@ impl MessageItem {
actions.add_action(&action_download);
actions.add_action(&action_edit);
actions.add_action(&action_open);
+ actions.add_action(&action_select);
self.bind_property("message", &action_delete, "enabled")
.transform_to(|_, msg: Option<TextMessage>| msg.map(|m| m.sender().is_self()))
@@ -236,6 +248,22 @@ impl MessageItem {
self.imp().msg_menu.popup();
}
+ /// Connect a notify handler on `target` and remember its handler id so
+ /// we can disconnect it when the MessageItem is disposed; without this,
+ /// closures keep accumulating on long-lived Channel/Message objects as
+ /// the ListView recycles widgets across the timeline.
+ fn track_notify_local<F>(&self, target: &impl IsA<glib::Object>, name: &str, f: F)
+ where
+ F: Fn(&glib::Object, &glib::ParamSpec) + 'static,
+ {
+ let target_obj = target.clone().upcast::<glib::Object>();
+ let handler = target_obj.connect_notify_local(Some(name), f);
+ self.imp()
+ .tracked_handlers
+ .borrow_mut()
+ .push((target_obj, handler));
+ }
+
/// Set whether this item should show its header.
pub fn set_show_header(&self) {
let visible = self.message().show_header() || self.property("force-show-header");
@@ -330,6 +358,81 @@ impl MessageItem {
message.notify("pending");
message.notify("error");
}
+
+ /// Wire the message item's selection-mode CSS class so it visually
+ /// reflects the channel's `selection-mode` and the message's `selected`
+ /// state.
+ pub fn setup_selection(&self) {
+ let message = self.message();
+ let channel = message.channel();
+ // The closure only updates visual state. Resetting `selected` on
+ // exit lives in `ChannelMessages::exit_selection_mode` (the only
+ // path that flips `selection-mode` back off), which guards the
+ // write so it does not bounce off glib's autogen notify-always
+ // setter. Doing the write here unconditionally would re-enter the
+ // notify::selected handler and recurse via the selection-changed
+ // signal we emit from it — visible as a cpu-bound spin on first
+ // load of a long timeline like Note to self.
+ let update = clone!(
+ #[weak(rename_to = s)]
+ self,
+ move || {
+ let chan = s.message().channel();
+ let in_mode = chan.selection_mode();
+ if in_mode {
+ s.add_css_class("in-selection-mode");
+ } else {
+ s.remove_css_class("in-selection-mode");
+ }
+ if s.message().property::<bool>("selected") && in_mode {
+ s.add_css_class("selected");
+ } else {
+ s.remove_css_class("selected");
+ }
+ }
+ );
+ self.track_notify_local(&channel, "selection-mode", {
+ let update = update.clone();
+ move |_, _| update()
+ });
+ self.track_notify_local(&message, "selected", {
+ let update = update.clone();
+ let weak = self.downgrade();
+ move |_, _| {
+ update();
+ if let Some(s) = weak.upgrade() {
+ s.message()
+ .channel()
+ .emit_by_name::<()>("selection-changed", &[]);
+ }
+ }
+ });
+
+ // While the channel is in selection mode, primary-button clicks
+ // anywhere on the row toggle the message's `selected` flag and the
+ // gesture claims the event sequence so child widgets (label links,
+ // attachments, the popover trigger, the check button) do not also
+ // act on the click. The check button itself is `can-target: false`
+ // in the template so its visual state is driven purely by the bind
+ // to `message.selected` rather than its own toggled signal.
+ let click = gtk::GestureClick::builder()
+ .button(gdk::BUTTON_PRIMARY)
+ .propagation_phase(gtk::PropagationPhase::Capture)
+ .build();
+ click.connect_pressed(clone!(
+ #[weak(rename_to = s)]
+ self,
+ move |gesture, _, _, _| {
+ if s.message().channel().selection_mode() {
+ let cur: bool = s.message().property("selected");
+ s.message().set_property("selected", !cur);
+ gesture.set_state(gtk::EventSequenceState::Claimed);
+ }
+ }
+ ));
+ self.add_controller(click);
+ update();
+ }
}
pub mod imp {
@@ -360,6 +463,8 @@ pub mod imp {
#[template_child]
pub(super) avatar: TemplateChild<adw::Avatar>,
#[template_child]
+ pub(super) selection_check: TemplateChild<gtk::CheckButton>,
+ #[template_child]
pub(super) header: TemplateChild<gtk::Label>,
#[template_child]
pub(super) reactions: TemplateChild<gtk::Label>,
@@ -399,6 +504,13 @@ pub mod imp {
shows_media_loading: PhantomData<bool>,
#[property(get = Self::has_reaction)]
has_reaction: PhantomData<bool>,
+
+ /// Handlers we attached on long-lived objects (the message and the
+ /// channel). Channel outlives every MessageItem and the timeline
+ /// holds messages across list-view widget recycling, so without an
+ /// explicit disconnect each MessageItem we ever build leaves a
+ /// no-op closure attached to its message and channel forever.
+ pub(super) tracked_handlers: RefCell<Vec<(glib::Object, glib::SignalHandlerId)>>,
}
#[glib::object_subclass]
@@ -668,6 +780,12 @@ pub mod imp {
});
SIGNALS.as_ref()
}
+
+ fn dispose(&self) {
+ for (target, handler) in self.tracked_handlers.take() {
+ target.disconnect(handler);
+ }
+ }
}
impl WidgetImpl for MessageItem {}
--
2.53.0
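The ngettext-plus-replace pattern that `refresh_selection_summary` uses in the patch above can be exercised standalone. This is a sketch with `gettextrs::ngettext` stubbed out as a plain function (no translation catalog is loaded), not the app's actual code path:

```rust
// Stub of gettextrs::ngettext: pick the singular or plural template.
// In the real app this also consults the loaded translation catalog.
fn ngettext(singular: &str, plural: &str, n: u32) -> String {
    if n == 1 { singular } else { plural }.to_string()
}

// Mirrors the summary logic from refresh_selection_summary: a prompt when
// nothing is selected, otherwise a pluralized count. The count is spliced
// in with replace("{}") because a translated template may place the number
// anywhere in the string.
fn summary(count: u32) -> String {
    if count == 0 {
        "Select messages to delete for yourself".to_string()
    } else {
        ngettext("{} message selected", "{} messages selected", count)
            .replace("{}", &count.to_string())
    }
}

fn main() {
    assert_eq!(summary(0), "Select messages to delete for yourself");
    assert_eq!(summary(1), "1 message selected");
    assert_eq!(summary(5), "5 messages selected");
    println!("ok");
}
```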

@@ -0,0 +1,580 @@
From 00d9d9f0b8770453eb3f124db0c79d0bc4bacb39 Mon Sep 17 00:00:00 2001
From: Simon Gardling <titaniumtown@proton.me>
Date: Wed, 29 Apr 2026 19:58:54 -0400
Subject: [PATCH 5/6] feat(messages): In-channel message search
- Add a SearchBar above the message list that searches the
currently-loaded timeline using a case-insensitive substring match
against the message body. Bind it to a new
channel-messages.toggle-search action wired to Ctrl+Shift+F.
- Surface a match counter (current/total) and previous/next buttons
next to the entry, plus reuse the existing flash_requires_attention
helper to scroll to and briefly highlight the focused match.
- Reset matches when the bar closes or the active channel changes.
---
CHANGELOG.md | 1 +
data/resources/style.css | 10 +
data/resources/ui/channel_messages.blp | 55 ++++++
data/resources/ui/shortcuts.blp | 5 +
src/backend/message/display_message.rs | 8 +
src/backend/timeline/mod.rs | 10 +
src/gui/channel_messages.rs | 250 ++++++++++++++++++++++++-
src/gui/message_item.rs | 25 +++
src/gui/window.rs | 11 ++
9 files changed, 374 insertions(+), 1 deletion(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 47ec77a..16880cd 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -14,6 +14,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Send formatted messages with markdown-style markers (`**bold**`, `*italic*`, `~~strike~~`, `||spoiler||`, `` `monospace` ``).
- Display incoming edited messages with an `edited` indicator and edit your own sent messages from their context menu.
- Multi-select messages from their context menu and delete the selection locally with a single action.
+- In-channel message search (Ctrl+Shift+F) over the loaded timeline with prev/next navigation and a match counter.
## [0.20.4] - 2026-04-22
diff --git a/data/resources/style.css b/data/resources/style.css
index 1c0cdfd..e00e789 100644
--- a/data/resources/style.css
+++ b/data/resources/style.css
@@ -294,3 +294,13 @@ unread-indicator {
background: alpha(@accent_bg_color, 0.7);
}
}
+
+
+/* Persistent highlight for the message currently focused by the in-channel
+ search bar. Stays applied while the user is parked on this match;
+ cleared when navigating to a different match or closing the search. */
+.search-match .message-bubble {
+ background: alpha(@accent_bg_color, 0.35);
+ outline: 2px solid alpha(@accent_color, 0.7);
+ outline-offset: -2px;
+}
\ No newline at end of file
diff --git a/data/resources/ui/channel_messages.blp b/data/resources/ui/channel_messages.blp
index eb927f8..a166f7b 100644
--- a/data/resources/ui/channel_messages.blp
+++ b/data/resources/ui/channel_messages.blp
@@ -43,6 +43,61 @@ template $FlChannelMessages: Box {
hexpand: true;
orientation: vertical;
+ // In-channel message search.
+ SearchBar search_bar {
+ key-capture-widget: scrolled_window;
+ search-mode-enabled: bind template.search-active bidirectional;
+
+ Adw.Clamp {
+ maximum-size: 600;
+
+ Box {
+ orientation: horizontal;
+ spacing: 6;
+
+ SearchEntry search_entry {
+ hexpand: true;
+ placeholder-text: _("Search loaded messages");
+ search-changed => $on_search_query_changed() swapped;
+ previous-match => $on_search_previous() swapped;
+ next-match => $on_search_next() swapped;
+ stop-search => $on_search_stop() swapped;
+ }
+
+ Label {
+ styles [
+ "caption",
+ "dim-label",
+ ]
+
+ label: bind template.search-summary;
+ }
+
+ Button {
+ icon-name: "go-up-symbolic";
+ tooltip-text: C_("tooltip", "Previous match");
+ sensitive: bind template.has-matches;
+ clicked => $on_search_previous() swapped;
+
+ styles [
+ "flat",
+ ]
+ }
+
+ Button {
+ icon-name: "go-down-symbolic";
+ tooltip-text: C_("tooltip", "Next match");
+ sensitive: bind template.has-matches;
+ clicked => $on_search_next() swapped;
+
+ styles [
+ "flat",
+ ]
+ }
+ }
+ }
+ }
+
Overlay {
[overlay]
Adw.Spinner {
diff --git a/data/resources/ui/shortcuts.blp b/data/resources/ui/shortcuts.blp
index ed2a959..79339cc 100644
--- a/data/resources/ui/shortcuts.blp
+++ b/data/resources/ui/shortcuts.blp
@@ -58,5 +58,10 @@ Adw.ShortcutsDialog help_overlay {
title: C_("shortcut window", "Load more messages");
accelerator: "<Ctrl>l";
}
+
+ Adw.ShortcutsItem {
+ title: C_("shortcut window", "Search messages in current channel");
+ accelerator: "<Ctrl><Shift>f";
+ }
}
}
diff --git a/src/backend/message/display_message.rs b/src/backend/message/display_message.rs
index 4cd5e7f..4ccb763 100644
--- a/src/backend/message/display_message.rs
+++ b/src/backend/message/display_message.rs
@@ -137,6 +137,7 @@ mod imp {
#[derive(Debug, Default)]
pub struct DisplayMessage {
pub(super) requires_attention: Cell<bool>,
+ pub(super) is_search_match: Cell<bool>,
}
#[glib::object_subclass]
@@ -155,6 +156,7 @@ mod imp {
.read_only()
.build(),
ParamSpecBoolean::builder("requires-attention").build(),
+ ParamSpecBoolean::builder("is-search-match").build(),
]
});
@@ -168,6 +170,11 @@ mod imp {
.get()
.expect("requires-attention parameter to be boolean"),
),
+ "is-search-match" => self.is_search_match.set(
+ value
+ .get()
+ .expect("is-search-match parameter to be boolean"),
+ ),
_ => unimplemented!(),
}
}
@@ -176,6 +183,7 @@ mod imp {
match pspec.name() {
"textual-description" => self.obj().textual_description().to_value(),
"requires-attention" => self.requires_attention.get().to_value(),
+ "is-search-match" => self.is_search_match.get().to_value(),
_ => unimplemented!(),
}
}
diff --git a/src/backend/timeline/mod.rs b/src/backend/timeline/mod.rs
index 18dd436..40fe324 100644
--- a/src/backend/timeline/mod.rs
+++ b/src/backend/timeline/mod.rs
@@ -69,6 +69,16 @@ impl Timeline {
}
}
+ /// Returns the index of the item with the given timestamp, if any.
+ /// Used by the in-channel search to scroll the list view to the
+ /// matched row even when it has not yet been materialized.
+ pub fn position_of(&self, timestamp: u64) -> Option<u32> {
+ let list = self.imp().list.borrow();
+ list.binary_search_by_key(&timestamp, |i| i.timestamp())
+ .ok()
+ .map(|i| i as u32)
+ }
+
pub fn iter_forwards(&self) -> impl Iterator<Item = TimelineItem> + 'static {
let current_items = self.imp().list.borrow();
current_items.clone().into_iter()
diff --git a/src/gui/channel_messages.rs b/src/gui/channel_messages.rs
index d6d1826..53e341f 100644
--- a/src/gui/channel_messages.rs
+++ b/src/gui/channel_messages.rs
@@ -85,6 +85,204 @@ impl ChannelMessages {
self.set_has_selection(count > 0);
}
+ /// Connect the `search-active` property so the entry is focused when
+ /// the bar opens and the matches are cleared when it closes.
+ fn setup_search(&self) {
+ self.connect_notify_local(
+ Some("search-active"),
+ clone!(
+ #[weak(rename_to = s)]
+ self,
+ move |_, _| {
+ if s.search_active() {
+ s.imp().search_entry.grab_focus();
+ s.attach_search_message_listener();
+ } else {
+ s.detach_search_message_listener();
+ s.clear_current_search_match();
+ s.imp().search_entry.set_text("");
+ s.imp().search_matches.replace(Vec::new());
+ s.imp().search_index.set(0);
+ s.set_has_matches(false);
+ s.set_search_summary(String::new());
+ }
+ }
+ ),
+ );
+ }
+
+ /// While the search bar is open, watch the active channel for new
+ /// messages so the result set stays in sync with the timeline. Only
+ /// one handler is alive at a time; `detach_search_message_listener` or
+ /// the next `attach` call clears the previous one.
+ fn attach_search_message_listener(&self) {
+ self.detach_search_message_listener();
+ let Some(channel) = self.active_channel() else {
+ return;
+ };
+ let handler = channel.connect_local(
+ "message",
+ false,
+ clone!(
+ #[weak(rename_to = s)]
+ self,
+ #[upgrade_or_default]
+ move |_| {
+ let query = s.imp().search_entry.text().to_string();
+ s.refresh_search(&query);
+ None
+ }
+ ),
+ );
+ self.imp()
+ .search_message_handler
+ .replace(Some((channel, handler)));
+ }
+
+ fn detach_search_message_listener(&self) {
+ if let Some((channel, handler)) = self.imp().search_message_handler.take() {
+ channel.disconnect(handler);
+ }
+ }
+
+ /// Re-run the loaded-message search using `query`. The search is
+ /// case-insensitive substring match against the message body.
+ pub fn refresh_search(&self, query: &str) {
+ let imp = self.imp();
+ let Some(channel) = self.active_channel() else {
+ self.clear_current_search_match();
+ imp.search_matches.replace(Vec::new());
+ imp.search_index.set(0);
+ self.set_has_matches(false);
+ self.set_search_summary(String::new());
+ return;
+ };
+
+ let trimmed = query.trim();
+ if trimmed.is_empty() {
+ imp.search_matches.replace(Vec::new());
+ self.clear_current_search_match();
+ imp.search_index.set(0);
+ self.set_has_matches(false);
+ self.set_search_summary(String::new());
+ return;
+ }
+
+ let needle = trimmed.to_lowercase();
+ let matches: Vec<u64> = channel
+ .timeline()
+ .iter_forwards()
+ .filter_map(|i| i.dynamic_cast::<TextMessage>().ok())
+ .filter(|m| {
+ m.body()
+ .map(|b| b.to_lowercase().contains(&needle))
+ .unwrap_or(false)
+ })
+ .map(|m| m.timestamp())
+ .collect();
+
+ let total = matches.len();
+ imp.search_matches.replace(matches);
+ // Snap to the latest match (most recent in time) by default so the
+ // user lands at the bottom of the conversation, matching how the
+ // existing scroll-to-unread heuristic works.
+ imp.search_index.set(total.saturating_sub(1));
+ self.set_has_matches(total > 0);
+ self.update_search_summary();
+ self.focus_current_search_match();
+ }
+
+ /// Move the search cursor to the next or previous match and flash that
+ /// message.
+ pub fn goto_search_match(&self, forwards: bool) {
+ let imp = self.imp();
+ let total = imp.search_matches.borrow().len();
+ if total == 0 {
+ return;
+ }
+ let current = imp.search_index.get();
+ let next = if forwards {
+ (current + 1) % total
+ } else if current == 0 {
+ total - 1
+ } else {
+ current - 1
+ };
+ imp.search_index.set(next);
+ self.update_search_summary();
+ self.focus_current_search_match();
+ }
+
+ fn update_search_summary(&self) {
+ let imp = self.imp();
+ let total = imp.search_matches.borrow().len();
+ let summary = if total == 0 {
+ String::new()
+ } else {
+ // Translators: e.g. "3 of 12" indicating the focused match
+ // index out of total search matches.
+ gettextrs::gettext("{current} of {total}")
+ .replace("{current}", &(imp.search_index.get() + 1).to_string())
+ .replace("{total}", &total.to_string())
+ };
+ self.set_search_summary(summary);
+ }
+
+ /// Drop the persistent highlight from the previously-focused match,
+ /// if any. Used both when navigating to a new match and when the
+ /// search bar closes.
+ fn clear_current_search_match(&self) {
+ if let Some(prev) = self.imp().current_search_match.take() {
+ prev.set_property("is-search-match", false);
+ }
+ }
+
+ /// Mark the currently-indexed match with the persistent `search-match`
+ /// highlight and scroll the list view so the row is visible (and
+ /// materialized as a `MessageItem` that can react to the property).
+ fn focus_current_search_match(&self) {
+ let imp = self.imp();
+ let Some(timestamp) = imp
+ .search_matches
+ .borrow()
+ .get(imp.search_index.get())
+ .copied()
+ else {
+ self.clear_current_search_match();
+ return;
+ };
+ let Some(channel) = self.active_channel() else {
+ self.clear_current_search_match();
+ return;
+ };
+ let timeline = channel.timeline();
+ let Some(item) = timeline.get_by_timestamp(timestamp) else {
+ self.clear_current_search_match();
+ return;
+ };
+ let Some(msg) = item.dynamic_cast::<DisplayMessage>().ok() else {
+ self.clear_current_search_match();
+ return;
+ };
+ // Skip the property churn if the focused match has not changed
+ // (e.g. typing more characters while the latest match remains the
+ // best hit).
+ let already_focused = imp
+ .current_search_match
+ .borrow()
+ .as_ref()
+ .is_some_and(|prev| prev == &msg);
+ if !already_focused {
+ self.clear_current_search_match();
+ msg.set_property("is-search-match", true);
+ imp.current_search_match.replace(Some(msg));
+ }
+ if let Some(position) = timeline.position_of(timestamp) {
+ imp.list_view
+ .scroll_to(position, gtk::ListScrollFlags::NONE, None);
+ }
+ }
+
pub fn focus_input(&self) {
self.imp().text_entry.grab_focus();
}
@@ -429,7 +627,10 @@ pub mod imp {
use crate::gui::components::ItemRow;
use crate::gui::components::time_divider::TimeDivider;
use crate::{
- backend::{Channel, Manager, message::TextMessage},
+ backend::{
+ Channel, Manager,
+ message::{DisplayMessage, TextMessage},
+ },
gui::{error_dialog::ErrorDialog, message_item::MessageItem, text_entry::TextEntry},
};
@@ -444,6 +645,8 @@ pub mod imp {
#[template_child]
pub(super) text_entry: TemplateChild<TextEntry>,
#[template_child]
+ pub(super) search_entry: TemplateChild<gtk::SearchEntry>,
+ #[template_child]
pub(super) list_view: TemplateChild<gtk::ListView>,
#[template_child]
no_channels_page: TemplateChild<adw::StatusPage>,
@@ -476,6 +679,28 @@ pub mod imp {
#[property(get, set)]
has_selection: Cell<bool>,
+ #[property(get, set)]
+ search_active: Cell<bool>,
+ #[property(get, set)]
+ search_summary: RefCell<String>,
+ #[property(get, set)]
+ has_matches: Cell<bool>,
+
+ /// Cached timestamps of every message currently matching the search
+ /// query, in chronological order.
+ pub(super) search_matches: RefCell<Vec<u64>>,
+ /// Index into `search_matches` of the currently focused match.
+ pub(super) search_index: Cell<usize>,
+ /// Handler installed on the active channel's `message` signal
+ /// while the search bar is open, so newly-arrived messages are
+ /// folded into the match set without the user re-running the
+ /// search by hand.
+ pub(super) search_message_handler: RefCell<Option<(Channel, glib::SignalHandlerId)>>,
+ /// The match the user is currently parked on. Held so we can clear
+ /// its `is-search-match` flag when navigating to a different match
+ /// or when the search bar closes.
+ pub(super) current_search_match: RefCell<Option<DisplayMessage>>,
+
/// Whether we currently believe the user is composing a message in the
/// active channel and have informed the peer with a `Started` event.
pub(super) sending_typing: Cell<bool>,
@@ -518,6 +743,7 @@ pub mod imp {
self.obj().setup_send_on_enter();
self.obj().setup_typing_settings();
self.obj().setup_typing_send();
+ self.obj().setup_search();
}
}
@@ -526,6 +752,7 @@ pub mod imp {
// forget about it.
self.obj().send_typing_stopped();
self.obj().exit_selection_mode();
+ self.obj().set_search_active(false);
if let Some(active_chan) = self.active_channel.borrow().as_ref() {
active_chan.set_property("draft", self.text_entry.text());
@@ -818,6 +1045,27 @@ pub mod imp {
}
}
+ #[template_callback]
+ fn on_search_query_changed(&self) {
+ let query = self.search_entry.text().to_string();
+ self.obj().refresh_search(&query);
+ }
+
+ #[template_callback]
+ fn on_search_previous(&self) {
+ self.obj().goto_search_match(false);
+ }
+
+ #[template_callback]
+ fn on_search_next(&self) {
+ self.obj().goto_search_match(true);
+ }
+
+ #[template_callback]
+ fn on_search_stop(&self) {
+ self.obj().set_search_active(false);
+ }
+
#[template_callback]
fn handle_row_activated(&self, row: gtk::ListBoxRow) {
if let Ok(msg) = row
diff --git a/src/gui/message_item.rs b/src/gui/message_item.rs
index f2d98e2..1f62d50 100644
--- a/src/gui/message_item.rs
+++ b/src/gui/message_item.rs
@@ -33,6 +33,7 @@ impl MessageItem {
s.setup_loaded();
s.setup_text();
s.setup_requires_attention();
+ s.setup_search_match();
s.setup_pending_and_error();
s.setup_selection();
s
@@ -325,6 +326,30 @@ impl MessageItem {
message.notify("requires-attention");
}
+ /// Reflect the message's `is-search-match` state by toggling the
+ /// `search-match` CSS class. Unlike `requires-attention`, this state
+ /// is not auto-cleared after a delay; it stays set until the channel
+ /// search navigates to a different match or closes.
+ pub fn setup_search_match(&self) {
+ let message = self.message();
+ self.track_notify_local(&message, "is-search-match", {
+ let s = self.downgrade();
+ move |m, _| {
+ let Some(s) = s.upgrade() else {
+ return;
+ };
+ if m.property("is-search-match") {
+ s.add_css_class("search-match");
+ } else {
+ s.remove_css_class("search-match");
+ }
+ }
+ });
+ if message.property("is-search-match") {
+ self.add_css_class("search-match");
+ }
+ }
+
pub fn setup_pending_and_error(&self) {
let message = self.message();
message.connect_notify_local(
diff --git a/src/gui/window.rs b/src/gui/window.rs
index 6335f3d..ce097ce 100644
--- a/src/gui/window.rs
+++ b/src/gui/window.rs
@@ -20,6 +20,7 @@ impl Window {
app.set_accels_for_action("window.close", &["<Control>q"]);
app.set_accels_for_action("channel-messages.activate-input", &["<Control>i"]);
app.set_accels_for_action("channel-messages.load-more", &["<Control>l"]);
+ app.set_accels_for_action("channel-messages.toggle-search", &["<Control><Shift>f"]);
for i in 1..=9 {
app.set_accels_for_action(
&format!("channel-list.activate-channel({i})"),
@@ -531,10 +532,20 @@ pub mod imp {
channel_messages.load_more();
}
));
+ let action_toggle_search = SimpleAction::new("toggle-search", None);
+ action_toggle_search.connect_activate(clone!(
+ #[strong(rename_to = channel_messages)]
+ self.channel_messages,
+ move |_, _| {
+ let active = !channel_messages.search_active();
+ channel_messages.set_search_active(active);
+ }
+ ));
let actions = SimpleActionGroup::new();
obj.insert_action_group("channel-messages", Some(&actions));
actions.add_action(&action_activate_input);
actions.add_action(&action_load_more);
+ actions.add_action(&action_toggle_search);
// Channel list actions.
--
2.53.0
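Patch 5's `refresh_search` filters the loaded timeline with a case-insensitive substring match, and `goto_search_match` wraps navigation around both ends of the result list. A self-contained sketch of both pieces, with `(timestamp, optional body)` tuples standing in for `TextMessage` objects (the names and types here are illustrative, not the app's API):

```rust
// Sketch of refresh_search's filter: lowercase both sides and keep the
// timestamps of bodies containing the trimmed needle.
fn find_matches(timeline: &[(u64, Option<&str>)], query: &str) -> Vec<u64> {
    let needle = query.trim().to_lowercase();
    if needle.is_empty() {
        return Vec::new(); // an empty query clears the match set
    }
    timeline
        .iter()
        .filter(|(_, body)| {
            // Bodyless rows (e.g. attachment-only messages) never match.
            body.map(|b| b.to_lowercase().contains(&needle))
                .unwrap_or(false)
        })
        .map(|(ts, _)| *ts)
        .collect()
}

// Sketch of goto_search_match's wrap-around cursor step; the caller
// guarantees total > 0.
fn step(current: usize, total: usize, forwards: bool) -> usize {
    if forwards {
        (current + 1) % total
    } else if current == 0 {
        total - 1
    } else {
        current - 1
    }
}

fn main() {
    let timeline = [(1, Some("Hello there")), (2, None), (3, Some("hello again"))];
    assert_eq!(find_matches(&timeline, "HELLO"), vec![1, 3]);
    assert_eq!(find_matches(&timeline, "   "), Vec::<u64>::new());
    // Navigation wraps at both ends, like the patch's prev/next buttons.
    assert_eq!(step(1, 2, true), 0);
    assert_eq!(step(0, 2, false), 1);
    println!("ok");
}
```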

@@ -0,0 +1,175 @@
From 68d9dee5a3345c35197968e158d20cbc3e85e1b3 Mon Sep 17 00:00:00 2001
From: Simon Gardling <titaniumtown@proton.me>
Date: Thu, 30 Apr 2026 04:25:07 -0400
Subject: [PATCH 6/6] feat(messages): Show 'This message was deleted.'
placeholder
Upstream hides the whole MessageItem when is-deleted is true via a
top-level visible bind on the template root. Replace that with a
Signal-Desktop-style behaviour: the row stays in the timeline, but the
bubble's regular content (header, quote, attachments, label, popover
trigger) and any reactions are hidden, and a single italic-dim
placeholder label takes their place.
The two pieces of imperative state set in code (media_overlay and the
floating timestamp_img indicator over media-only messages) are reset
in a small setup_deleted helper that subscribes to is-deleted, since
they are not reachable through bindings.
---
data/resources/style.css | 6 ++++++
data/resources/ui/message_item.blp | 29 +++++++++++++++++++++----
src/gui/message_item.rs | 34 ++++++++++++++++++++++++++++++
3 files changed, 65 insertions(+), 4 deletions(-)
diff --git a/data/resources/style.css b/data/resources/style.css
index e00e789..b3a517f 100644
--- a/data/resources/style.css
+++ b/data/resources/style.css
@@ -9,6 +9,12 @@
background-color: @bubble_bg_color;
}
+/* Deletion placeholder shown in place of remotely-deleted messages. */
+.deleted-message {
+ font-style: italic;
+ opacity: 0.6;
+}
+
.message-input-bar {
border-top: 1px solid @borders;
}
diff --git a/data/resources/ui/message_item.blp b/data/resources/ui/message_item.blp
index ba3fd23..49002a5 100644
--- a/data/resources/ui/message_item.blp
+++ b/data/resources/ui/message_item.blp
@@ -71,8 +71,6 @@ template $FlMessageItem: $ContextMenuBin {
"message-item",
]
- visible: bind $not(template.message as <$FlTextMessage>.is-deleted) as <bool>;
-
Grid {
column-spacing: 12;
row-spacing: 12;
@@ -149,11 +147,32 @@ template $FlMessageItem: $ContextMenuBin {
"message-bubble",
]
+ // Deletion placeholder "This message was deleted." — visible only when
+ // the message has been remotely deleted; hides the rest of the bubble's
+ // content and any reactions/attachments via the .deleted CSS class on
+ // the message-item.
+ Label deleted_label {
+ styles [
+ "deleted-message",
+ ]
+
+ label: _("This message was deleted.");
+ visible: bind template.message as <$FlTextMessage>.is-deleted;
+ halign: start;
+ xalign: 0;
+
+ layout {
+ row: 0;
+ column: 0;
+ }
+ }
+
Label header {
styles [
"heading",
]
+ visible: bind $not(template.message as <$FlTextMessage>.is-deleted) as <bool>;
label: bind template.message as <$FlTextMessage>.sender as <$FlContact>.title;
hexpand: true;
halign: start;
@@ -177,7 +196,7 @@ template $FlMessageItem: $ContextMenuBin {
"quote",
]
- visible: bind $is_some(template.message as <$FlTextMessage>.quote) as <bool>;
+ visible: bind $and($is_some(template.message as <$FlTextMessage>.quote) as <bool>, $not(template.message as <$FlTextMessage>.is-deleted) as <bool>) as <bool>;
Label {
styles [
@@ -219,6 +238,7 @@ template $FlMessageItem: $ContextMenuBin {
}
Box box_attachments {
+ visible: bind $not(template.message as <$FlTextMessage>.is-deleted) as <bool>;
layout {
row: 2;
column: 0;
@@ -276,6 +296,7 @@ template $FlMessageItem: $ContextMenuBin {
}
$FlMessageLabel label_message {
+ visible: bind $not(template.message as <$FlTextMessage>.is-deleted) as <bool>;
label: bind $markup_urls(template.message as <$FlTextMessage>.body) as <string>;
attributes: bind template.message as <$FlTextMessage>.message-attributes;
@@ -306,7 +327,7 @@ template $FlMessageItem: $ContextMenuBin {
]
label: bind $fix_emoji(template.message as <$FlTextMessage>.reactions) as <string>;
- visible: bind template.has-reaction;
+ visible: bind $and(template.has-reaction, $not(template.message as <$FlTextMessage>.is-deleted) as <bool>) as <bool>;
wrap-mode: word;
justify: left;
vexpand: false;
diff --git a/src/gui/message_item.rs b/src/gui/message_item.rs
index 1f62d50..9936680 100644
--- a/src/gui/message_item.rs
+++ b/src/gui/message_item.rs
@@ -36,6 +36,7 @@ impl MessageItem {
s.setup_search_match();
s.setup_pending_and_error();
s.setup_selection();
+ s.setup_deleted();
s
}
@@ -350,6 +351,39 @@ impl MessageItem {
}
}
+ /// Reflect the message's `is-deleted` state in the UI.
+ ///
+ /// Upstream simply hid the row entirely; we instead keep the row but
+ /// show a `"This message was deleted."` placeholder (handled in the
+ /// blueprint) and clean up the bits that the deletion pseudo-message
+ /// can't reach via the bind layer: the media overlay (whose visibility
+ /// is set imperatively in `set_message`) and the standalone
+ /// `timestamp_img` indicator that floats over media-only messages.
+ pub fn setup_deleted(&self) {
+ let message = self.message();
+ let apply = clone!(
+ #[weak(rename_to = s)]
+ self,
+ move || {
+ if s.message().is_deleted() {
+ s.add_css_class("deleted");
+ s.imp().media_overlay.set_visible(false);
+ s.imp().timestamp_img.set_visible(false);
+ } else {
+ // Symmetric reset for any future code path that flips
+ // is-deleted back off (e.g. an unsend/restore flow).
+ // Today nothing does, but the asymmetry is fragile.
+ s.remove_css_class("deleted");
+ }
+ }
+ );
+ self.track_notify_local(&message, "is-deleted", {
+ let apply = apply.clone();
+ move |_, _| apply()
+ });
+ apply();
+ }
+
pub fn setup_pending_and_error(&self) {
let message = self.message();
message.connect_notify_local(
--
2.53.0

@@ -0,0 +1,115 @@
From cf7b9a9fc53023cbaca5a128ece32d76cafe95d5 Mon Sep 17 00:00:00 2001
From: Oscar Cowdery Lack <oscar.cowderylack@gmail.com>
Date: Mon, 30 Mar 2026 00:05:49 +1100
Subject: [PATCH] server: Use provided secret to unlock auto-created default
keyring (#443)
If a secret is provided by PAM or systemd credentials, then it should be
used to unlock the default keyring when creating it for the first time,
not just when discovering existing keyrings.
---
src/service/mod.rs | 36 +++++++++++++++++++++++++-----------
src/tests.rs | 4 +++-
2 files changed, 28 insertions(+), 12 deletions(-)
diff --git a/src/service/mod.rs b/src/service/mod.rs
index bfbe16d..44e55c2 100644
--- a/src/service/mod.rs
+++ b/src/service/mod.rs
@@ -415,10 +415,10 @@ impl Service {
.await?;
// Discover existing keyrings
- let discovered_keyrings = service.discover_keyrings(secret).await?;
+ let discovered_keyrings = service.discover_keyrings(secret.clone()).await?;
service
- .initialize(connection, discovered_keyrings, true)
+ .initialize(connection, discovered_keyrings, secret, true)
.await?;
// Start PAM listener
@@ -458,7 +458,7 @@ impl Service {
)
.await?;
- let default_keyring = if let Some(secret) = secret {
+ let default_keyring = if let Some(secret) = secret.clone() {
vec![(
"Login".to_owned(),
oo7::dbus::Service::DEFAULT_COLLECTION.to_owned(),
@@ -469,7 +469,7 @@ impl Service {
};
service
- .initialize(connection, default_keyring, false)
+ .initialize(connection, default_keyring, secret, false)
.await?;
Ok(service)
}
@@ -686,6 +686,7 @@ impl Service {
&self,
connection: zbus::Connection,
mut discovered_keyrings: Vec<(String, String, Keyring)>, // (name, alias, keyring)
+ secret: Option<Secret>,
auto_create_default: bool,
) -> Result<(), Error> {
self.connection.set(connection.clone()).unwrap();
@@ -701,19 +702,32 @@ impl Service {
if !has_default && auto_create_default {
tracing::info!("No default collection found, creating 'Login' keyring");
- let locked_keyring = LockedKeyring::open(Self::LOGIN_ALIAS)
- .await
- .inspect_err(|e| {
- tracing::error!("Failed to create default Login keyring: {}", e);
- })?;
+ let keyring = if let Some(secret) = secret {
+ UnlockedKeyring::open(Self::LOGIN_ALIAS, secret)
+ .await
+ .map(Keyring::Unlocked)
+ } else {
+ LockedKeyring::open(Self::LOGIN_ALIAS)
+ .await
+ .map(Keyring::Locked)
+ };
+
+ let keyring = keyring.inspect_err(|e| {
+ tracing::error!("Failed to create default Login keyring: {}", e);
+ })?;
+ let is_locked = if keyring.is_locked() {
+ "locked"
+ } else {
+ "unlocked"
+ };
discovered_keyrings.push((
"Login".to_owned(),
oo7::dbus::Service::DEFAULT_COLLECTION.to_owned(),
- Keyring::Locked(locked_keyring),
+ keyring,
));
- tracing::info!("Created default 'Login' collection (locked)");
+ tracing::info!("Created default 'Login' collection ({})", is_locked);
}
// Set up discovered collections
diff --git a/src/tests.rs b/src/tests.rs
index 16aa0bb..07fb27c 100644
--- a/src/tests.rs
+++ b/src/tests.rs
@@ -254,7 +254,9 @@ impl TestServiceSetup {
.await?;
let discovered = service.discover_keyrings(secret.clone()).await?;
- service.initialize(server_conn, discovered, false).await?;
+ service
+ .initialize(server_conn, discovered, secret.clone(), false)
+ .await?;
#[cfg(any(feature = "gnome_native_crypto", feature = "gnome_openssl_crypto"))]
let mock_prompter = {
--
2.53.0


@@ -0,0 +1,54 @@
#!/usr/bin/env bash
# Bootstrap the age-plugin-tpm identity for a desktop host (mreow / yarn).
#
# Produces a TPM-sealed age identity at /var/lib/agenix/tpm-identity and
# prints the recipient string to add to secrets/secrets.nix.
#
# Usage:
# doas scripts/bootstrap-desktop-tpm.sh
#
# After running:
# 1. Append the printed recipient to the `tpm` list in secrets/secrets.nix.
# 2. Re-encrypt: nix-shell -p age-plugin-tpm rage --run \
# 'agenix -r -i ~/.ssh/id_ed25519'
# 3. Commit + ./deploy.sh switch.
set -euo pipefail
if [[ $EUID -ne 0 ]]; then
echo "this script must run as root (access to /dev/tpmrm0 + /var/lib/agenix)" >&2
exit 1
fi
host=$(hostname -s)
id_file=/var/lib/agenix/tpm-identity
install -d -m 0700 -o root -g root /var/lib/agenix
if [[ -f "$id_file" ]]; then
echo "existing identity found at $id_file — preserving"
else
echo "generating TPM-sealed age identity..."
nix-shell -p age-plugin-tpm --run "age-plugin-tpm --generate -o $id_file"
chmod 0400 "$id_file"
chown root:root "$id_file"
fi
# Read the recipient directly from the identity file header — no TPM
# round-trip needed, no nix run, no set -e hazards.
recipient=$(grep '^# Recipient:' "$id_file" | awk '{print $3}')
if [[ -z "$recipient" ]]; then
echo "failed to read recipient from $id_file" >&2
exit 1
fi
cat <<EOF
recipient for $host:
"$recipient $host"
next steps (run on a workstation with git-crypt unlocked):
1. edit secrets/secrets.nix and add the line above to the \`tpm\` list.
2. re-encrypt: nix-shell -p age-plugin-tpm rage --run 'agenix -r -i ~/.ssh/id_ed25519'
3. git commit + ./deploy.sh switch
EOF
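The recipient extraction the script performs with `grep | awk` can be sketched in Python against a hypothetical identity-file header (the recipient string and surrounding lines below are made up for illustration):

```python
# Hypothetical age-plugin-tpm identity file content; the real file's
# "# Recipient:" line carries the actual age1tpm... recipient string.
identity = (
    "# Created by age-plugin-tpm\n"
    "# Recipient: age1tpmexamplerecipient\n"
    "AGE-PLUGIN-TPM-EXAMPLE\n"
)

# Equivalent of: grep '^# Recipient:' "$id_file" | awk '{print $3}'
# split() fields: ["#", "Recipient:", "<recipient>"] -> index 2.
recipient = next(
    (line.split()[2] for line in identity.splitlines()
     if line.startswith("# Recipient:")),
    None,
)
```

Reading the header directly is what lets the script avoid a TPM round-trip on re-runs.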

Binary file not shown.
secrets/secrets.nix Normal file

Binary file not shown.


@@ -13,6 +13,7 @@ let
curl = "${pkgs.curl}/bin/curl";
jq = "${pkgs.jq}/bin/jq";
+shuf = "${pkgs.coreutils}/bin/shuf";
# Max items to search per cycle per category (missing + cutoff) per app
maxPerCycle = 5;
@@ -54,10 +55,16 @@ let
local label="$2"
local series_ids
-series_ids=$(${curl} -sf --max-time 30 \
+# Fetch the full wanted list, dedupe to seriesIds, then randomly
+# sample maxPerCycle. Sonarr's wanted endpoint returns one record
+# per episode, so a small pageSize collapses to a single seriesId
+# whenever any one show dominates the alphabetical head of the
+# backlog -- which starves every other show indefinitely.
+series_ids=$(${curl} -sf --max-time 60 \
  -H "X-Api-Key: $SONARR_KEY" \
-  "${sonarrUrl}/api/v3/wanted/$endpoint?page=1&pageSize=${builtins.toString maxPerCycle}&monitored=true&sortKey=title&sortDirection=ascending&includeSeries=true" \
-  | ${jq} -r '[.records[].seriesId] | unique | .[] // empty')
+  "${sonarrUrl}/api/v3/wanted/$endpoint?page=1&pageSize=5000&monitored=true" \
+  | ${jq} -r '[.records[].seriesId] | unique | .[]' \
+  | ${shuf} -n ${builtins.toString maxPerCycle})
if [ -z "$series_ids" ]; then
echo "sonarr: no $label items"

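A toy model of that starvation fix, with fabricated records: one wanted record per episode means reading only the first `maxPerCycle` records can yield a single seriesId when one show dominates the head of the backlog, whereas deduping the full list first and then sampling gives every show a chance.

```python
import random

# Fabricated wanted list: show 1 has 40 missing episodes and sits at the
# alphabetical head; shows 2 and 3 each have one.
records = [{"seriesId": 1}] * 40 + [{"seriesId": 2}, {"seriesId": 3}]
max_per_cycle = 2

head_ids = {r["seriesId"] for r in records[:max_per_cycle]}  # old pageSize=maxPerCycle behavior
all_ids = sorted({r["seriesId"] for r in records})           # dedupe the full list first
sample = random.sample(all_ids, k=min(max_per_cycle, len(all_ids)))  # then shuf -n
```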

@@ -173,6 +173,19 @@ in
];
}
{ name = "HDTV-720p"; }
# SD fallback for shows that predate HD or whose only seeded
# public-tracker copies are 480p/DVD/SDTV. Sonarr will still
# upgrade to WEB/Bluray (cutoff above) when an HD release
# surfaces.
{
name = "SD";
qualities = [
"WEBDL-480p"
"WEBRip-480p"
"DVD"
"SDTV"
];
}
];
}
];


@@ -57,6 +57,19 @@ def get_qbit_torrents(qbit_client, category: str) -> dict[str, dict]:
return {t["hash"].upper(): t for t in torrents}
def is_complete(torrent: dict) -> bool:
"""True iff the torrent's payload is fully on disk.
A torrent that was once imported can later end up at progress < 1 if the
files were deleted or qBittorrent was reset and the torrent was re-added.
Those entries must NOT be reported as abandoned-safe: their reported size
is the metadata size, not what is actually on disk, so the reclaim figure
would be a fiction and a 'safe to delete' verdict could kill a re-grab in
progress.
"""
return float(torrent.get("progress", 0)) >= 1.0
def gib(size_bytes: int) -> str:
return f"{size_bytes / 1073741824:.1f}"
@@ -133,6 +146,12 @@ def find_movie_abandoned(radarr, qbit_movies):
torrent = qbit_movies.get(ahash)
if torrent is None:
continue
# Skip torrents whose payload is not fully on disk: their reported size
# is metadata, not actual on-disk bytes, so flagging them as
# abandoned-safe would lie about the reclaim and could disrupt a
# re-download in progress.
if not is_complete(torrent):
continue
mid = hash_to_movie.get(ahash)
movie = radarr_movies.get(mid) if mid else None
@@ -211,6 +230,8 @@ def find_tv_abandoned(sonarr, qbit_tvshows):
torrent = qbit_tvshows.get(ahash)
if torrent is None:
continue
if not is_complete(torrent):
continue
status = "SAFE"
notes = []

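The `is_complete` gate added above is small enough to exercise standalone; the torrent dicts below are fabricated stand-ins for qBittorrent API entries:

```python
def is_complete(torrent: dict) -> bool:
    # progress < 1.0 means the payload is not fully on disk; the entry's
    # reported size would be metadata, not reclaimable bytes.
    return float(torrent.get("progress", 0)) >= 1.0

finished = {"hash": "AA", "progress": 1.0}
regrabbed = {"hash": "BB", "progress": 0.37}  # files deleted, torrent re-added
unknown = {"hash": "CC"}                      # no progress field at all
```

The `.get(..., 0)` default makes a missing `progress` field fail safe (treated as incomplete).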

@@ -2,6 +2,7 @@
config,
lib,
pkgs,
+site_config,
service_configs,
...
}:
@@ -25,7 +26,7 @@
configurePostgres = true;
config = {
# Refer to https://github.com/dani-garcia/vaultwarden/blob/main/.env.template
-DOMAIN = "https://bitwarden.${service_configs.https.domain}";
+DOMAIN = "https://bitwarden.${site_config.domain}";
SIGNUPS_ALLOWED = false;
ROCKET_ADDRESS = "127.0.0.1";
@@ -34,7 +35,7 @@
};
};
-services.caddy.virtualHosts."bitwarden.${service_configs.https.domain}".extraConfig = ''
+services.caddy.virtualHosts."bitwarden.${site_config.domain}".extraConfig = ''
encode zstd gzip
reverse_proxy :${toString config.services.vaultwarden.config.ROCKET_PORT} {


@@ -1,5 +1,6 @@
{
config,
+site_config,
service_configs,
pkgs,
lib,
@@ -42,8 +43,8 @@ let
'';
};
-newDomain = service_configs.https.domain;
-oldDomain = service_configs.https.old_domain;
+newDomain = site_config.domain;
+oldDomain = site_config.old_domain;
in
{
imports = [
@@ -54,7 +55,7 @@ in
services.caddy = {
enable = true;
-email = "titaniumtown@proton.me";
+email = site_config.contact_email;
# Build with Njalla DNS provider for DNS-01 ACME challenges (wildcard certs)
package = pkgs.caddy.withPlugins {
@@ -146,8 +147,9 @@ in
# defaults: maxretry=5, findtime=10m, bantime=10m
# Ignore local network IPs - NAT hairpinning causes all LAN traffic to
-# appear from the router IP (192.168.1.1). Banning it blocks all internal access.
-ignoreip = "127.0.0.1/8 ::1 192.168.1.0/24";
+# appear from the router IP (site_config.lan.gateway). Banning it
+# blocks all internal access.
+ignoreip = "127.0.0.1/8 ::1 ${site_config.lan.cidr}";
};
filter.Definition = {
# Only match 401s where an Authorization header was actually sent.


@@ -2,6 +2,7 @@
config,
lib,
pkgs,
+site_config,
service_configs,
inputs,
...
@@ -32,7 +33,7 @@ let
};
in
{
-services.caddy.virtualHosts."senior-project.${service_configs.https.domain}".extraConfig = ''
+services.caddy.virtualHosts."senior-project.${site_config.domain}".extraConfig = ''
root * ${hugoWebsite}
file_server browse
'';


@@ -34,6 +34,14 @@
};
};
users.users.gitea-runner = {
isSystemUser = true;
group = "gitea-runner";
home = "/var/lib/gitea-runner";
description = "Gitea Actions CI runner";
};
users.groups.gitea-runner = { };
# Override DynamicUser to use our static gitea-runner user, and ensure
# the runner doesn't start before the co-located gitea instance is ready
# (upstream can't assume locality, so this dependency is ours to add).


@@ -49,6 +49,32 @@
};
};
# Hide repo Actions/workflow details from anonymous visitors. Gitea's own
# REQUIRE_SIGNIN_VIEW=expensive does not cover /{user}/{repo}/actions, and
# the API auth chain (routers/api/v1/api.go buildAuthGroup) deliberately
# omits `auth_service.Session`, so an /api/v1/user probe would 401 even
# for logged-in browser sessions. We gate at Caddy instead: forward_auth
# probes a lightweight *web-UI* endpoint that does accept session cookies,
# and Gitea's own reqSignIn middleware answers 303 to /user/login for
# anonymous callers which we rewrite to preserve the original URL.
# Workflow status badges stay public so README links keep rendering.
services.caddy.virtualHosts.${service_configs.gitea.domain}.extraConfig = ''
@repoActionsNotBadge {
path_regexp ^/[^/]+/[^/]+/actions(/.*)?$
not path_regexp ^/[^/]+/[^/]+/actions/workflows/[^/]+/badge\.svg$
}
handle @repoActionsNotBadge {
forward_auth :${toString service_configs.ports.private.gitea.port} {
uri /user/stopwatches
@unauthorized status 302 303
handle_response @unauthorized {
redir * /user/login?redirect_to={uri} 302
}
}
}
'';
services.postgresql = {
ensureDatabases = [ config.services.gitea.user ];
ensureUsers = [

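The two `path_regexp` matchers translate directly into RE2-style patterns, so the gating logic is easy to check outside Caddy (the repository paths below are fabricated):

```python
import re

# Mirrors the Caddy matchers: gate everything under /{user}/{repo}/actions
# except the public workflow badge SVGs.
actions = re.compile(r"^/[^/]+/[^/]+/actions(/.*)?$")
badge = re.compile(r"^/[^/]+/[^/]+/actions/workflows/[^/]+/badge\.svg$")

def gated(path: str) -> bool:
    # forward_auth applies iff the path is an actions path and not a badge
    return bool(actions.match(path)) and not badge.match(path)
```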

@@ -687,6 +687,188 @@ let
overrides = [ ];
};
}
# -- Row 6: Minecraft --
{
id = 14;
type = "stat";
title = "Minecraft Players";
gridPos = {
h = 8;
w = 6;
x = 0;
y = 40;
};
datasource = promDs;
targets = [
{
datasource = promDs;
expr = "sum(minecraft_status_players_online_count) or vector(0)";
refId = "A";
}
];
fieldConfig = {
defaults = {
thresholds = {
mode = "absolute";
steps = [
{
color = "green";
value = null;
}
{
color = "yellow";
value = 3;
}
{
color = "red";
value = 6;
}
];
};
};
overrides = [ ];
};
options = {
reduceOptions = {
calcs = [ "lastNotNull" ];
fields = "";
values = false;
};
colorMode = "value";
graphMode = "area";
};
}
{
id = 15;
type = "stat";
title = "Minecraft Server";
gridPos = {
h = 8;
w = 6;
x = 6;
y = 40;
};
datasource = promDs;
targets = [
{
datasource = promDs;
expr = "max(minecraft_status_healthy) or vector(0)";
refId = "A";
}
];
fieldConfig = {
defaults = {
mappings = [
{
type = "value";
options = {
"0" = {
text = "Offline";
color = "red";
index = 0;
};
"1" = {
text = "Online";
color = "green";
index = 1;
};
};
}
];
thresholds = {
mode = "absolute";
steps = [
{
color = "red";
value = null;
}
{
color = "green";
value = 1;
}
];
};
};
overrides = [ ];
};
options = {
reduceOptions = {
calcs = [ "lastNotNull" ];
fields = "";
values = false;
};
colorMode = "value";
graphMode = "none";
};
}
{
id = 16;
type = "timeseries";
title = "Minecraft Player Activity";
gridPos = {
h = 8;
w = 12;
x = 12;
y = 40;
};
datasource = promDs;
targets = [
{
datasource = promDs;
expr = "sum(minecraft_status_players_online_count) or vector(0)";
legendFormat = "Online players";
refId = "A";
}
{
datasource = promDs;
expr = "max(minecraft_status_players_max_count) or vector(0)";
legendFormat = "Max players";
refId = "B";
}
];
fieldConfig = {
defaults = {
unit = "short";
min = 0;
decimals = 0;
color.mode = "palette-classic";
custom = {
lineWidth = 2;
fillOpacity = 15;
spanNulls = true;
};
};
overrides = [
{
matcher = {
id = "byFrameRefID";
options = "B";
};
properties = [
{
id = "custom.lineStyle";
value = {
fill = "dash";
dash = [
8
4
];
};
}
{
id = "custom.fillOpacity";
value = 0;
}
{
id = "custom.lineWidth";
value = 1;
}
];
}
];
};
}
];
};
in


@@ -10,6 +10,9 @@ let
jellyfinExporterPort = service_configs.ports.private.jellyfin_exporter.port;
qbitExporterPort = service_configs.ports.private.qbittorrent_exporter.port;
igpuExporterPort = service_configs.ports.private.igpu_exporter.port;
minecraftExporterPort = service_configs.ports.private.minecraft_exporter.port;
minecraftServerName = service_configs.minecraft.server_name;
minecraftServerPort = service_configs.ports.public.minecraft.port;
in
{
# -- Jellyfin Prometheus Exporter --
@@ -109,4 +112,45 @@ in
REFRESH_PERIOD_MS = "30000";
};
};
# -- Minecraft Prometheus Exporter --
# itzg/mc-monitor queries the local server via SLP on each scrape and exposes
# minecraft_status_{healthy,response_time_seconds,players_online_count,players_max_count}.
# mc-monitor binds to 0.0.0.0 (no listen-address flag); the firewall keeps
# 9567 internal and IPAddressAllow pins the socket to loopback as defense-in-depth.
systemd.services.minecraft-exporter =
lib.mkIf (config.services.grafana.enable && config.services.minecraft-servers.enable)
{
description = "Prometheus exporter for Minecraft (mc-monitor SLP)";
after = [
"network.target"
"minecraft-server-${minecraftServerName}.service"
];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${lib.getExe pkgs.mc-monitor} export-for-prometheus";
Restart = "on-failure";
RestartSec = "10s";
DynamicUser = true;
NoNewPrivileges = true;
ProtectSystem = "strict";
ProtectHome = true;
PrivateTmp = true;
MemoryDenyWriteExecute = true;
RestrictAddressFamilies = [
"AF_INET"
"AF_INET6"
];
IPAddressAllow = [
"127.0.0.0/8"
"::1/128"
];
IPAddressDeny = "any";
};
environment = {
EXPORT_SERVERS = "127.0.0.1:${toString minecraftServerPort}";
EXPORT_PORT = toString minecraftExporterPort;
TIMEOUT = "5s";
};
};
}


@@ -95,6 +95,12 @@ in
{ targets = [ "127.0.0.1:${toString service_configs.ports.private.igpu_exporter.port}" ]; }
];
}
{
job_name = "minecraft";
static_configs = [
{ targets = [ "127.0.0.1:${toString service_configs.ports.private.minecraft_exporter.port}" ]; }
];
}
{
job_name = "zfs";
static_configs = [


@@ -1,4 +1,5 @@
{
+site_config,
service_configs,
inputs,
pkgs,
@@ -9,7 +10,7 @@ let
inputs.ytbn-graphing-software.packages.${pkgs.stdenv.targetPlatform.system}.web;
in
{
-services.caddy.virtualHosts."graphing.${service_configs.https.domain}".extraConfig = ''
+services.caddy.virtualHosts."graphing.${site_config.domain}".extraConfig = ''
root * ${graphing-calculator}
file_server browse
'';


@@ -1,6 +1,7 @@
{
config,
lib,
+site_config,
service_configs,
...
}:
@@ -19,7 +20,7 @@
# serve latest deploy store paths (unauthenticated — just a path string)
# CI writes to /var/lib/nix-deploy/<hostname> after building
-services.caddy.virtualHosts."nix-cache.${service_configs.https.domain}".extraConfig = ''
+services.caddy.virtualHosts."nix-cache.${site_config.domain}".extraConfig = ''
handle_path /deploy/* {
root * /var/lib/nix-deploy
file_server


@@ -38,6 +38,7 @@ class JellyfinQBittorrentMonitor:
stream_bitrate_headroom=1.1,
webhook_port=0,
webhook_bind="127.0.0.1",
+gateway_ip=None,
):
self.jellyfin_url = jellyfin_url
self.qbittorrent_url = qbittorrent_url
@@ -77,6 +78,15 @@
ipaddress.ip_network("fe80::/10"), # IPv6 link-local
]
# Hairpin marker. When a LAN client reaches Jellyfin via the public
# hostname, the router NAT-loopbacks the packet and SNATs the source
# to itself — the session arrives looking local but still costs WAN
# bandwidth. Sessions whose source equals the gateway must therefore
# NOT be skipped. None disables the check (pre-hairpin-aware behavior).
if gateway_ip is None:
gateway_ip = self._discover_default_gateway()
self.gateway_ip = gateway_ip
def is_local_ip(self, ip_address: str) -> bool:
"""Check if an IP address is from a local network"""
try:
@@ -86,6 +96,39 @@
logger.warning(f"Invalid IP address format: {ip_address}")
return True # Treat invalid IPs as local for safety
def _discover_default_gateway(self) -> str | None:
"""Read the IPv4 default gateway from /proc/net/route, or None."""
try:
with open("/proc/net/route") as f:
next(f) # skip header
for line in f:
fields = line.split()
if len(fields) < 8 or fields[1] != "00000000":
continue
flags = int(fields[3], 16)
if not flags & 0x2: # RTF_GATEWAY
continue
gw_bytes = bytes.fromhex(fields[2])[::-1] # little-endian
if len(gw_bytes) != 4:
continue
return ".".join(str(b) for b in gw_bytes)
except (OSError, ValueError) as e:
logger.warning(f"Could not autodetect default gateway: {e}")
return None
def is_skippable(self, ip_address: str) -> bool:
"""True iff this source IP can be ignored when deciding to throttle.
Truly LAN-direct sessions are skippable (no WAN cost). Hairpin-NAT'd
LAN sessions arrive with the LAN gateway as their source — those still
cost WAN bandwidth and must NOT be skipped.
"""
if not self.is_local_ip(ip_address):
return False
if self.gateway_ip and ip_address == self.gateway_ip:
return False
return True
def signal_handler(self, signum, frame):
logger.info("Received shutdown signal, cleaning up...")
self.running = False
@@ -164,7 +207,7 @@
if (
"NowPlayingItem" in session
and not session.get("PlayState", {}).get("IsPaused", True)
-and not self.is_local_ip(session.get("RemoteEndPoint", ""))
+and not self.is_skippable(session.get("RemoteEndPoint", ""))
):
item = session["NowPlayingItem"]
item_type = item.get("Type", "").lower()
@@ -354,6 +397,9 @@ class JellyfinQBittorrentMonitor:
logger.info(f"Default stream bitrate: {self.default_stream_bitrate} bps")
logger.info(f"Minimum torrent speed: {self.min_torrent_speed} KB/s")
logger.info(f"Stream bitrate headroom: {self.stream_bitrate_headroom}x")
+logger.info(
+    f"LAN gateway (hairpin marker): {self.gateway_ip or 'none / autodetect failed'}"
+)
if self.webhook_port:
logger.info(f"Webhook receiver: {self.webhook_bind}:{self.webhook_port}")
@@ -484,6 +530,7 @@ if __name__ == "__main__":
stream_bitrate_headroom = float(os.getenv("STREAM_BITRATE_HEADROOM", "1.1"))
webhook_port = int(os.getenv("WEBHOOK_PORT", "0"))
webhook_bind = os.getenv("WEBHOOK_BIND", "127.0.0.1")
+gateway_ip = os.getenv("LAN_GATEWAY_IP") or None
monitor = JellyfinQBittorrentMonitor(
jellyfin_url=jellyfin_url,
@@ -499,6 +546,7 @@ if __name__ == "__main__":
stream_bitrate_headroom=stream_bitrate_headroom,
webhook_port=webhook_port,
webhook_bind=webhook_bind,
+gateway_ip=gateway_ip,
)
monitor.run()

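The `/proc/net/route` decoding that `_discover_default_gateway` performs can be replayed against a single fabricated table row; the gateway 192.168.1.1 appears in the hex column little-endian as `0101A8C0`:

```python
# Fabricated /proc/net/route data row.
# Columns: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
row = "eth0\t00000000\t0101A8C0\t0003\t0\t0\t0\t00000000\t0\t0\t0"
fields = row.split()

is_default = fields[1] == "00000000"          # destination 0.0.0.0/0
has_gateway = bool(int(fields[3], 16) & 0x2)  # RTF_GATEWAY flag set
# Gateway bytes are little-endian: 0101A8C0 -> C0 A8 01 01 -> 192.168.1.1
gw = ".".join(str(b) for b in bytes.fromhex(fields[2])[::-1])
```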

@@ -1,6 +1,7 @@
{
pkgs,
config,
+site_config,
service_configs,
lib,
...
@@ -24,7 +25,7 @@
inherit (service_configs.jellyfin) dataDir cacheDir;
};
-services.caddy.virtualHosts."jellyfin.${service_configs.https.domain}".extraConfig = ''
+services.caddy.virtualHosts."jellyfin.${site_config.domain}".extraConfig = ''
reverse_proxy :${builtins.toString service_configs.ports.private.jellyfin.port} {
# Disable response buffering for streaming. Caddy's default partial
# buffering delays fMP4-HLS segments and direct-play responses where


@@ -1,5 +1,6 @@
{
pkgs,
+site_config,
service_configs,
config,
inputs,
@@ -24,7 +25,7 @@ in
# "Invalid API Key" warning has no client IP, and behind Caddy the
# llama-server access log only sees 127.0.0.1. Caddy's JSON log has
# the real client IP via request.remote_ip.
-services.caddy.virtualHosts."llm.${service_configs.https.domain}".extraConfig = ''
+services.caddy.virtualHosts."llm.${site_config.domain}".extraConfig = ''
log {
output file /var/log/caddy/access-llama-cpp.log
format json
@@ -52,8 +53,8 @@ in
# defaults: maxretry=5, findtime=10m, bantime=10m
# NAT hairpinning sends LAN traffic via the router IP. Don't ban
-# 192.168.1.0/24 or we lock ourselves out.
-ignoreip = "127.0.0.1/8 ::1 192.168.1.0/24";
+# our LAN or we lock ourselves out.
+ignoreip = "127.0.0.1/8 ::1 ${site_config.lan.cidr}";
};
filter.Definition = {
failregex = ''^.*"remote_ip":"<HOST>".*"status":401.*$'';


@@ -1,13 +1,14 @@
{
config,
lib,
+site_config,
service_configs,
...
}:
{
services.coturn = {
enable = true;
-realm = service_configs.https.domain;
+realm = site_config.domain;
use-auth-secret = true;
static-auth-secret-file = config.age.secrets.coturn-auth-secret.path;
listening-port = service_configs.ports.public.coturn.port;


@@ -1,5 +1,6 @@
{
config,
+site_config,
service_configs,
lib,
...
@@ -23,7 +24,7 @@
settings.global = {
port = [ service_configs.ports.private.matrix.port ];
-server_name = service_configs.https.domain;
+server_name = site_config.domain;
allow_registration = true;
registration_token_file = config.age.secrets.matrix-reg-token.path;
@@ -43,14 +44,14 @@
# TURN server config (coturn)
turn_secret_file = config.age.secrets.matrix-turn-secret.path;
turn_uris = [
-"turn:${service_configs.https.domain}?transport=udp"
-"turn:${service_configs.https.domain}?transport=tcp"
+"turn:${site_config.domain}?transport=udp"
+"turn:${site_config.domain}?transport=tcp"
];
turn_ttl = 86400;
};
};
-services.caddy.virtualHosts.${service_configs.https.domain}.extraConfig = lib.mkBefore ''
+services.caddy.virtualHosts.${site_config.domain}.extraConfig = lib.mkBefore ''
header /.well-known/matrix/* Content-Type application/json
header /.well-known/matrix/* Access-Control-Allow-Origin *
respond /.well-known/matrix/server `{"m.server": "${service_configs.matrix.domain}:${builtins.toString service_configs.ports.public.https.port}"}`


@@ -1,5 +1,6 @@
{
pkgs,
+site_config,
service_configs,
lib,
config,
@@ -177,7 +178,7 @@
};
services.caddy.virtualHosts = lib.mkIf (config.services.caddy.enable) {
-"map.${service_configs.https.domain}".extraConfig = ''
+"map.${site_config.domain}".extraConfig = ''
root * ${service_configs.minecraft.parent_dir}/${service_configs.minecraft.server_name}/squaremap/web
file_server browse
'';


@@ -2,6 +2,7 @@
config,
lib,
pkgs,
+site_config,
username,
...
}:
@@ -25,14 +26,13 @@
];
users.users.${username}.openssh.authorizedKeys.keys = [
-"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO4jL6gYOunUlUtPvGdML0cpbKSsPNqQ1jit4E7U1RyH" # laptop
-"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBJjT5QZ3zRDb+V6Em20EYpSEgPW5e/U+06uQGJdraxi" # desktop
+site_config.ssh_keys.laptop
];
# used for deploying configs to server
users.users.root.openssh.authorizedKeys.keys =
config.users.users.${username}.openssh.authorizedKeys.keys
++ [
-"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC5ZYN6idL/w/mUIfPOH1i+Q/SQXuzAMQUEuWpipx1Pc ci-deploy@muffin"
+site_config.ssh_keys.ci_deploy
];
}

site-config.nix Normal file

@@ -0,0 +1,62 @@
# Site-wide constants shared across all three hosts and home-manager profiles.
#
# This file is pure data — no package refs, no module config. Import it from
# flake.nix and pass it as the `site_config` specialArg (and extraSpecialArg for
# home-manager). Callers read values; they do not set them.
#
# Adding a value: only add if it's used by ≥2 hosts/modules. Host-specific
# single-use values stay in the host's default.nix. Muffin-only service
# infrastructure (ports, zpool names, hugepage budgets) stays in
# hosts/muffin/service-configs.nix.
rec {
# --- Identity ---
domain = "sigkill.computer";
old_domain = "gardling.com"; # served by muffin via permanent redirect (services/caddy/caddy.nix)
contact_email = "titaniumtown@proton.me";
# All three hosts run on the same timezone. Override per-host via
# lib.mkForce when travelling (see hosts/mreow/default.nix for the pattern).
timezone = "America/New_York";
# --- Binary cache (muffin serves via harmonia, desktops consume) ---
binary_cache = {
url = "https://nix-cache.${domain}";
public_key = "nix-cache.${domain}-1:ONtQC9gUjL+2yNgMWB68NudPySXhyzJ7I3ra56/NPgk=";
};
# --- LAN topology ---
dns_servers = [
"1.1.1.1"
"9.9.9.9"
];
lan = {
cidr = "192.168.1.0/24";
gateway = "192.168.1.1";
};
# Per-host network info. mreow is laptop-on-DHCP so it has no entry.
hosts = {
muffin = {
ip = "192.168.1.50";
# Canonical alias used by deploy.sh, CI workflows, and borg backup target.
# Resolves via /etc/hosts on muffin and the desktops' NetworkManager DNS.
alias = "server-public";
# SSH host key — same key is served for every alias muffin answers to
# (server-public, the IP, git.${domain}, git.${old_domain}).
ssh_host_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMjgaMnE+zS7tL+m5E7gh9Q9U1zurLdmU0qcmEmaucu";
};
yarn = {
ip = "192.168.1.223";
alias = "desktop";
};
};
# --- SSH pubkeys ---
# One line per key, referenced by name from services/ssh.nix (muffin) and
# hosts/yarn/default.nix. Rotating a key means changing it here, nowhere else.
ssh_keys = {
laptop = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO4jL6gYOunUlUtPvGdML0cpbKSsPNqQ1jit4E7U1RyH";
ci_deploy = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC5ZYN6idL/w/mUIfPOH1i+Q/SQXuzAMQUEuWpipx1Pc ci-deploy@muffin";
};
}

tests/deploy-finalize.nix Normal file

@@ -0,0 +1,196 @@
# Test for modules/server-deploy-finalize.nix.
#
# Covers the decision and scheduling logic with fabricated profile directories,
# since spawning a second booted NixOS toplevel to diff kernels is too heavy for
# a runNixOSTest. We rely on the shellcheck pass baked into writeShellApplication
# to catch syntax regressions in the script itself.
{
lib,
pkgs,
inputs,
...
}:
pkgs.testers.runNixOSTest {
name = "deploy-finalize";
node.specialArgs = {
inherit inputs lib;
username = "testuser";
};
nodes.machine =
{ ... }:
{
imports = [
../modules/server-deploy-finalize.nix
];
services.deployFinalize = {
enable = true;
# Shorter default in the test to make expected-substring assertions
# stable and reinforce that the option is wired through.
delay = 15;
};
};
testScript = ''
start_all()
machine.wait_for_unit("multi-user.target")
# Test fixtures: fabricated profile trees whose kernel/initrd/kernel-modules
# symlinks are under test control. `readlink -e` requires the targets to
# exist, so we point at real files in /tmp rather than non-existent paths.
machine.succeed(
"mkdir -p /tmp/profile-same /tmp/profile-changed-kernel "
"/tmp/profile-changed-initrd /tmp/profile-changed-modules "
"/tmp/profile-missing /tmp/fake-targets"
)
machine.succeed(
"touch /tmp/fake-targets/alt-kernel /tmp/fake-targets/alt-initrd "
"/tmp/fake-targets/alt-modules"
)
booted_kernel = machine.succeed("readlink -e /run/booted-system/kernel").strip()
booted_initrd = machine.succeed("readlink -e /run/booted-system/initrd").strip()
booted_modules = machine.succeed("readlink -e /run/booted-system/kernel-modules").strip()
def link_profile(path, kernel, initrd, modules):
machine.succeed(f"ln -sf {kernel} {path}/kernel")
machine.succeed(f"ln -sf {initrd} {path}/initrd")
machine.succeed(f"ln -sf {modules} {path}/kernel-modules")
# profile-same: matches the booted system exactly, so it should choose `switch`.
link_profile("/tmp/profile-same", booted_kernel, booted_initrd, booted_modules)
machine.succeed("mkdir -p /tmp/profile-same/bin")
machine.succeed(
"ln -sf /run/current-system/bin/switch-to-configuration "
"/tmp/profile-same/bin/switch-to-configuration"
)
# profile-changed-kernel: only the kernel differs, so it should choose `reboot`.
link_profile(
"/tmp/profile-changed-kernel",
"/tmp/fake-targets/alt-kernel",
booted_initrd,
booted_modules,
)
# profile-changed-initrd: only the initrd differs, so it should choose `reboot`.
link_profile(
"/tmp/profile-changed-initrd",
booted_kernel,
"/tmp/fake-targets/alt-initrd",
booted_modules,
)
# profile-changed-modules: only kernel-modules differs, so it should choose `reboot`.
# Catches the obelisk PR / nixpkgs auto-upgrade case where modules are rebuilt
# against the same kernel version but are ABI-incompatible.
link_profile(
"/tmp/profile-changed-modules",
booted_kernel,
booted_initrd,
"/tmp/fake-targets/alt-modules",
)
# profile-missing: no kernel/initrd/kernel-modules symlinks, so it should fail closed.
with subtest("dry-run against identical profile selects switch"):
rc, out = machine.execute(
"deploy-finalize --dry-run --profile /tmp/profile-same 2>&1"
)
assert rc == 0, f"rc={rc}\n{out}"
assert "action=switch" in out, out
assert "services only" in out, out
assert "dry-run not scheduling" in out, out
assert "would run: /tmp/profile-same/bin/switch-to-configuration switch" in out, out
assert "would schedule: systemd-run" in out, out
with subtest("dry-run against changed-kernel profile selects reboot"):
rc, out = machine.execute(
"deploy-finalize --dry-run --profile /tmp/profile-changed-kernel 2>&1"
)
assert rc == 0, f"rc={rc}\n{out}"
assert "action=reboot" in out, out
assert "reason=kernel changed" in out, out
assert "systemctl reboot" in out, out
with subtest("dry-run against changed-initrd profile selects reboot"):
rc, out = machine.execute(
"deploy-finalize --dry-run --profile /tmp/profile-changed-initrd 2>&1"
)
assert rc == 0, f"rc={rc}\n{out}"
assert "action=reboot" in out, out
assert "reason=initrd changed" in out, out
with subtest("dry-run against changed-modules profile selects reboot"):
rc, out = machine.execute(
"deploy-finalize --dry-run --profile /tmp/profile-changed-modules 2>&1"
)
assert rc == 0, f"rc={rc}\n{out}"
assert "action=reboot" in out, out
assert "reason=kernel-modules changed" in out, out
with subtest("dry-run against empty profile fails closed with rc=1"):
rc, out = machine.execute(
"deploy-finalize --dry-run --profile /tmp/profile-missing 2>&1"
)
assert rc == 1, f"rc={rc}\n{out}"
assert "missing kernel, initrd, or kernel-modules" in out, out
with subtest("--delay override is reflected in output"):
rc, out = machine.execute(
"deploy-finalize --dry-run --delay 7 --profile /tmp/profile-same 2>&1"
)
assert rc == 0, f"rc={rc}\n{out}"
assert "delay=7s" in out, out
with subtest("configured default delay from module option is used"):
rc, out = machine.execute(
"deploy-finalize --dry-run --profile /tmp/profile-same 2>&1"
)
assert rc == 0, f"rc={rc}\n{out}"
# module option delay=15 in nodes.machine above.
assert "delay=15s" in out, out
with subtest("unknown option rejected with rc=2"):
rc, out = machine.execute("deploy-finalize --bogus 2>&1")
assert rc == 2, f"rc={rc}\n{out}"
assert "unknown option --bogus" in out, out
with subtest("non-dry run arms a transient systemd timer"):
# Long delay so the timer doesn't fire during the test. We stop it
# explicitly afterwards.
rc, out = machine.execute(
"deploy-finalize --delay 3600 --profile /tmp/profile-same 2>&1"
)
assert rc == 0, f"scheduling rc={rc}\n{out}"
# Confirm exactly one transient timer is active.
timers = machine.succeed(
"systemctl list-units --type=timer --no-legend 'deploy-finalize-*.timer' "
"--state=waiting | awk 'NF{print $1}'"
).strip().splitlines()
assert len(timers) == 1, f"expected exactly one pending timer, got {timers}"
assert timers[0].startswith("deploy-finalize-"), timers
with subtest("back-to-back scheduling cancels the previous timer"):
# The previous subtest left one timer armed. Schedule again; the old
# one should be stopped before the new unit name is created.
machine.succeed("sleep 1") # ensure a distinct unit-name timestamp
rc, out = machine.execute(
"deploy-finalize --delay 3600 --profile /tmp/profile-same 2>&1"
)
assert rc == 0, f"second-schedule rc={rc}\n{out}"
timers = machine.succeed(
"systemctl list-units --type=timer --no-legend 'deploy-finalize-*.timer' "
"--state=waiting | awk 'NF{print $1}'"
).strip().splitlines()
assert len(timers) == 1, f"expected only the new timer, got {timers}"
# Clean up so the test's shutdown path is quiet.
machine.succeed(
"systemctl stop 'deploy-finalize-*.timer' 'deploy-finalize-*.service' "
"2>/dev/null || true"
)
'';
}
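The decision the script under test makes — `switch` when only services changed, `reboot` when the kernel, initrd, or kernel-modules link differs, fail closed when a link is missing — can be sketched in a few lines. This is a hypothetical Python reimplementation for illustration only; the real logic lives in the shell script that `deploy-finalize` wraps:

```python
import os
import tempfile

# Hypothetical reimplementation of the switch-vs-reboot decision.
# Compares the resolved kernel/initrd/kernel-modules links of a new
# profile against the booted system, failing closed on missing links.
def decide(profile: str, booted: str):
    for part in ("kernel", "initrd", "kernel-modules"):
        new = os.path.realpath(os.path.join(profile, part))
        old = os.path.realpath(os.path.join(booted, part))
        if not os.path.exists(new) or not os.path.exists(old):
            raise RuntimeError("missing kernel, initrd, or kernel-modules")
        if new != old:
            return ("reboot", f"{part} changed")
    return ("switch", "services only")

# Fabricate profile trees the same way the test does.
tmp = tempfile.mkdtemp()
booted, same = os.path.join(tmp, "booted"), os.path.join(tmp, "same")
os.makedirs(booted)
os.makedirs(same)
for part in ("kernel", "initrd", "kernel-modules"):
    target = os.path.join(tmp, part)
    open(target, "w").close()
    os.symlink(target, os.path.join(booted, part))
    os.symlink(target, os.path.join(same, part))
print(decide(same, booted))  # ('switch', 'services only')

# Point the kernel at a different file and the verdict flips to reboot.
alt = os.path.join(tmp, "alt-kernel")
open(alt, "w").close()
os.remove(os.path.join(same, "kernel"))
os.symlink(alt, os.path.join(same, "kernel"))
print(decide(same, booted))  # ('reboot', 'kernel changed')
```

Using `realpath` rather than a plain `readlink` comparison means a profile that reaches the same store path through a different chain of links still counts as unchanged.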


@@ -12,10 +12,10 @@
 ...
 }:
 let
-baseServiceConfigs = import ../hosts/muffin/service-configs.nix;
+baseSiteConfig = import ../site-config.nix;
+baseServiceConfigs = import ../hosts/muffin/service-configs.nix { site_config = baseSiteConfig; };
 testServiceConfigs = lib.recursiveUpdate baseServiceConfigs {
 zpool_ssds = "";
-https.domain = "test.local";
 };
 alwaysOk = pkgs.writeShellApplication {


@@ -5,7 +5,10 @@
 ...
 }:
 let
-baseServiceConfigs = import ../../hosts/muffin/service-configs.nix;
+baseSiteConfig = import ../../site-config.nix;
+baseServiceConfigs = import ../../hosts/muffin/service-configs.nix {
+  site_config = baseSiteConfig;
+};
 testServiceConfigs = lib.recursiveUpdate baseServiceConfigs {
 zpool_ssds = "";
 gitea = {


@@ -5,10 +5,12 @@
 ...
 }:
 let
-baseServiceConfigs = import ../../hosts/muffin/service-configs.nix;
+baseSiteConfig = import ../../site-config.nix;
+baseServiceConfigs = import ../../hosts/muffin/service-configs.nix {
+  site_config = baseSiteConfig;
+};
 testServiceConfigs = lib.recursiveUpdate baseServiceConfigs {
 zpool_ssds = "";
-https.domain = "test.local";
 ports.private.immich = {
 port = 2283;
 proto = "tcp";


@@ -5,10 +5,12 @@
 ...
 }:
 let
-baseServiceConfigs = import ../../hosts/muffin/service-configs.nix;
+baseSiteConfig = import ../../site-config.nix;
+baseServiceConfigs = import ../../hosts/muffin/service-configs.nix {
+  site_config = baseSiteConfig;
+};
 testServiceConfigs = lib.recursiveUpdate baseServiceConfigs {
 zpool_ssds = "";
-https.domain = "test.local";
 jellyfin = {
 dataDir = "/var/lib/jellyfin";
 cacheDir = "/var/cache/jellyfin";
@@ -33,6 +35,7 @@ let
 (import ../../services/jellyfin/jellyfin.nix {
 inherit config pkgs;
 lib = testLib;
+site_config = baseSiteConfig;
 service_configs = testServiceConfigs;
 })
 ];


@@ -5,10 +5,12 @@
 ...
 }:
 let
-baseServiceConfigs = import ../../hosts/muffin/service-configs.nix;
+baseSiteConfig = import ../../site-config.nix;
+baseServiceConfigs = import ../../hosts/muffin/service-configs.nix {
+  site_config = baseSiteConfig;
+};
 testServiceConfigs = lib.recursiveUpdate baseServiceConfigs {
 zpool_ssds = "";
-https.domain = "test.local";
 };
 testLib = lib.extend (
@@ -28,6 +30,7 @@ let
 (import ../../services/bitwarden.nix {
 inherit config pkgs;
 lib = testLib;
+site_config = baseSiteConfig;
 service_configs = testServiceConfigs;
 })
 ];


@@ -0,0 +1,220 @@
{
config,
lib,
pkgs,
...
}:
let
baseSiteConfig = import ../site-config.nix;
baseServiceConfigs = import ../hosts/muffin/service-configs.nix {
site_config = baseSiteConfig;
};
testServiceConfigs = lib.recursiveUpdate baseServiceConfigs {
zpool_ssds = "";
gitea = {
dir = "/var/lib/gitea";
# `:80` makes Caddy bind all hosts on HTTP port 80 with no Host-header
# matching — simplest path to a reachable vhost inside the test VM
# where there is no ACME / DNS and no TLS terminator.
domain = ":80";
};
ports.private.gitea = {
port = 3000;
proto = "tcp";
};
};
testLib = lib.extend (
final: prev: {
serviceMountWithZpool =
serviceName: zpool: dirs:
{ ... }:
{ };
serviceFilePerms = serviceName: tmpfilesRules: { ... }: { };
}
);
giteaModule =
{ config, pkgs, ... }:
{
imports = [
(import ../services/gitea/gitea.nix {
inherit config pkgs;
lib = testLib;
service_configs = testServiceConfigs;
})
];
};
in
pkgs.testers.runNixOSTest {
name = "gitea-hide-actions";
nodes = {
server =
{
config,
lib,
pkgs,
...
}:
{
imports = [
../modules/server-security.nix
giteaModule
];
# The shared gitea.nix module derives DOMAIN/ROOT_URL from the
# `service_configs.gitea.domain` string, which here is the full URL
# `http://server`. Override to valid bare values so Gitea doesn't
# get a malformed ROOT_URL like `https://http://server`.
services.gitea.settings = {
server = {
DOMAIN = lib.mkForce "server";
ROOT_URL = lib.mkForce "http://server/";
};
# Tests talk HTTP, so drop the Secure flag — otherwise curl's cookie
# jar holds the session cookie but never sends it back.
session.COOKIE_SECURE = lib.mkForce false;
};
services.caddy = {
enable = true;
# No DNS / ACME in the VM test network — serve plain HTTP.
globalConfig = ''
auto_https off
'';
};
services.postgresql.enable = true;
# Stub out zfs/mount ordering added by the real serviceMountWithZpool.
systemd.services."gitea-mounts".enable = lib.mkForce false;
systemd.services.gitea = {
wants = lib.mkForce [ ];
after = lib.mkForce [ "postgresql.service" ];
requires = lib.mkForce [ ];
};
networking.firewall.allowedTCPPorts = [
80
3000
];
};
client =
{ pkgs, ... }:
{
environment.systemPackages = [ pkgs.curl ];
};
};
testScript = ''
import re
start_all()
server.wait_for_unit("postgresql.service")
server.wait_for_unit("gitea.service")
server.wait_for_unit("caddy.service")
server.wait_for_open_port(3000)
server.wait_for_open_port(80)
server.succeed(
"su -l gitea -s /bin/sh -c '${pkgs.gitea}/bin/gitea admin user create "
"--username testuser --password testpassword "
"--email test@test.local --must-change-password=false "
"--work-path /var/lib/gitea'"
)
def curl(args, cookies=None):
cookie_args = f"-b {cookies} " if cookies else ""
cmd = (
"curl -4 -s -o /dev/null "
f"-w '%{{http_code}}|%{{redirect_url}}' {cookie_args}{args}"
)
return client.succeed(cmd).strip()
def login():
# Gitea's POST /user/login requires a _csrf token and expects the
# matching session cookie already set. Fetch the login form first
# to harvest both, then submit credentials with the same cookie jar.
client.succeed("rm -f /tmp/cookies.txt")
html = client.succeed(
"curl -4 -s -c /tmp/cookies.txt http://server/user/login"
)
match = re.search(r'name="_csrf"\s+value="([^"]+)"', html)
assert match, f"CSRF token not found in login form: {html[:500]!r}"
csrf = match.group(1)
# -L so we follow the post-login redirect; the session cookie is
# rewritten by Gitea on successful login to carry uid.
client.succeed(
"curl -4 -s -L -o /dev/null "
"-b /tmp/cookies.txt -c /tmp/cookies.txt "
f"--data-urlencode '_csrf={csrf}' "
"--data-urlencode 'user_name=testuser' "
"--data-urlencode 'password=testpassword' "
"http://server/user/login"
)
# Sanity-check the session by hitting the gated probe directly:
# the post-login cookie jar MUST drive /user/stopwatches to 200.
probe = client.succeed(
"curl -4 -s -o /dev/null -w '%{http_code}' "
"-b /tmp/cookies.txt http://server/user/stopwatches"
).strip()
assert probe == "200", f"session auth probe expected 200, got {probe!r}"
return "/tmp/cookies.txt"
with subtest("Anonymous /{user}/{repo}/actions redirects to login"):
result = curl("http://server/foo/bar/actions")
code, _, redir = result.partition("|")
print(f"anon /foo/bar/actions -> {result!r}")
assert code == "302", f"expected 302, got {code!r} (full: {result!r})"
assert "/user/login" in redir, f"expected login redirect, got {redir!r}"
assert "redirect_to=" in redir, f"expected redirect_to param, got {redir!r}"
assert "/foo/bar/actions" in redir, (
f"expected original URL preserved in redirect_to, got {redir!r}"
)
with subtest("Anonymous deep /actions paths also redirect"):
for path in ["/foo/bar/actions/", "/foo/bar/actions/runs/1", "/foo/bar/actions/workflows/build.yaml"]:
result = curl(f"http://server{path}")
code, _, redir = result.partition("|")
print(f"anon {path} -> {result!r}")
assert code == "302", f"{path}: expected 302, got {code!r}"
assert "/user/login" in redir, f"{path}: expected login redirect, got {redir!r}"
with subtest("Anonymous workflow badge stays public"):
result = curl("http://server/foo/bar/actions/workflows/ci.yaml/badge.svg")
code, _, redir = result.partition("|")
print(f"anon badge -> {result!r}")
assert code != "302" or "/user/login" not in redir, (
f"badge path should not redirect to login, got {result!r}"
)
cookies = login()
with subtest("Session-authenticated /{user}/{repo}/actions reaches Gitea"):
result = curl(
"http://server/testuser/nonexistent/actions", cookies=cookies
)
code, _, redir = result.partition("|")
print(f"auth /testuser/nonexistent/actions -> {result!r}")
# Gitea returns 404 for the missing repo; the key assertion is that
# Caddy's gate forwarded the request instead of redirecting to login.
assert not (code == "302" and "/user/login" in redir), (
f"session-authed actions request was intercepted by login gate: {result!r}"
)
with subtest("Anonymous /explore/repos is served without gating"):
result = curl("http://server/explore/repos")
code, _, _ = result.partition("|")
print(f"anon /explore/repos -> {result!r}")
assert code == "200", f"expected 200 for public explore page, got {result!r}"
with subtest("Anonymous /{user}/{repo} (non-actions) is not login-gated"):
result = curl("http://server/foo/bar")
code, _, redir = result.partition("|")
print(f"anon /foo/bar -> {result!r}")
assert not (code == "302" and "/user/login" in redir), (
f"non-actions repo path should not redirect to login: {result!r}"
)
'';
}
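The path rules this test pins down — `/{user}/{repo}/actions...` requires a session, badge SVGs and everything else stay public — can be expressed as a small classifier. This is a hypothetical sketch of the matching logic only; the real gate is Caddy configuration, not Python:

```python
import re

# Hypothetical sketch of the actions-visibility gate exercised above:
# repo actions pages are login-gated, but workflow badges and all
# non-actions paths pass through untouched.
ACTIONS = re.compile(r"^/[^/]+/[^/]+/actions(/|$)")
BADGE = re.compile(r"^/[^/]+/[^/]+/actions/workflows/[^/]+/badge\.svg$")

def is_gated(path: str) -> bool:
    # Gate every actions path except the public badge endpoint.
    return bool(ACTIONS.match(path)) and not BADGE.match(path)

assert is_gated("/foo/bar/actions")
assert is_gated("/foo/bar/actions/runs/1")
assert is_gated("/foo/bar/actions/workflows/build.yaml")
assert not is_gated("/foo/bar/actions/workflows/ci.yaml/badge.svg")
assert not is_gated("/foo/bar")
assert not is_gated("/explore/repos")
print("gate classification ok")
```

The `(/|$)` anchor matters: it keeps a repo literally named `actionsomething` from being gated, which mirrors why the test probes both `/foo/bar/actions` and deeper `/actions/...` paths separately.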


@@ -428,6 +428,73 @@ pkgs.testers.runNixOSTest {
 local_playback["PositionTicks"] = 50000000
 server.succeed(f"curl -sf -X POST 'http://localhost:8096/Sessions/Playing/Stopped' -d '{json.dumps(local_playback)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{local_auth}, Token={local_token}'")
with subtest("Hairpin'd LAN session (source IP = configured gateway) DOES throttle"):
# Simulates a LAN client reaching Jellyfin via the public hostname:
# the router SNATs the source to itself, so Jellyfin sees the gateway
# IP and IsInLocalNetwork=True even though WAN bandwidth is in play.
# We use 127.0.0.1 as the "gateway" in this VM because the localhost
# curl below produces source 127.0.0.1 from Jellyfin's view.
server.succeed("systemctl stop monitor-test || true")
time.sleep(1)
server.succeed(f"""
systemd-run --unit=monitor-hairpin \
--setenv=JELLYFIN_URL=http://localhost:8096 \
--setenv=JELLYFIN_API_KEY={token} \
--setenv=QBITTORRENT_URL=http://localhost:8080 \
--setenv=CHECK_INTERVAL=1 \
--setenv=STREAMING_START_DELAY=1 \
--setenv=STREAMING_STOP_DELAY=1 \
--setenv=TOTAL_BANDWIDTH_BUDGET=50000000 \
--setenv=SERVICE_BUFFER=2000000 \
--setenv=DEFAULT_STREAM_BITRATE=10000000 \
--setenv=MIN_TORRENT_SPEED=100 \
--setenv=LAN_GATEWAY_IP=127.0.0.1 \
{python} {monitor}
""")
time.sleep(2)
assert not is_throttled(), "Should start unthrottled (no streams yet)"
hairpin_auth = 'MediaBrowser Client="Hairpin Client", DeviceId="hairpin-2222", Device="HairpinDevice", Version="1.0"'
hairpin_auth_result = json.loads(server.succeed(
f"curl -sf -X POST 'http://localhost:8096/Users/AuthenticateByName' -d '@${jfLib.payloads.auth}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{hairpin_auth}'"
))
hairpin_token = hairpin_auth_result["AccessToken"]
hairpin_playback = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-hairpin",
"CanSeek": True,
"IsPaused": False,
}
server.succeed(f"curl -sf -X POST 'http://localhost:8096/Sessions/Playing' -d '{json.dumps(hairpin_playback)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{hairpin_auth}, Token={hairpin_token}'")
time.sleep(3)
assert is_throttled(), "Hairpin'd session (source=gateway) should throttle even though source is RFC1918"
# Cleanup: stop the playback and the override-monitor, restore the normal one.
hairpin_playback["PositionTicks"] = 50000000
server.succeed(f"curl -sf -X POST 'http://localhost:8096/Sessions/Playing/Stopped' -d '{json.dumps(hairpin_playback)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{hairpin_auth}, Token={hairpin_token}'")
time.sleep(2)
assert not is_throttled(), "Should unthrottle after hairpin'd playback stops"
server.succeed("systemctl stop monitor-hairpin || true")
time.sleep(1)
server.succeed(f"""
systemd-run --unit=monitor-test \
--setenv=JELLYFIN_URL=http://localhost:8096 \
--setenv=JELLYFIN_API_KEY={token} \
--setenv=QBITTORRENT_URL=http://localhost:8080 \
--setenv=CHECK_INTERVAL=1 \
--setenv=STREAMING_START_DELAY=1 \
--setenv=STREAMING_STOP_DELAY=1 \
--setenv=TOTAL_BANDWIDTH_BUDGET=50000000 \
--setenv=SERVICE_BUFFER=2000000 \
--setenv=DEFAULT_STREAM_BITRATE=10000000 \
--setenv=MIN_TORRENT_SPEED=100 \
{python} {monitor}
""")
time.sleep(2)
 # === WEBHOOK TESTS ===
 #
 # Configure the Jellyfin Webhook plugin to target the monitor, then verify
@@ -589,7 +656,7 @@ pkgs.testers.runNixOSTest {
 server.succeed("systemctl restart jellyfin.service")
 server.wait_for_unit("jellyfin.service")
 server.wait_for_open_port(8096)
-server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=60)
+server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=180)
 # During Jellyfin restart, monitor can't reach Jellyfin
 # After restart, sessions are cleared - monitor should eventually unthrottle
@@ -645,7 +712,7 @@ pkgs.testers.runNixOSTest {
 server.succeed("systemctl start jellyfin.service")
 server.wait_for_unit("jellyfin.service")
 server.wait_for_open_port(8096)
-server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=60)
+server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=180)
 # After Jellyfin comes back, sessions are gone - should unthrottle
 time.sleep(3)
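The hairpin subtest encodes one rule: a session whose source IP equals the configured gateway must count against the WAN budget even though the address is RFC1918, because the router has SNAT'd a hairpinned connection. A minimal sketch of that classification, assuming a `LAN_GATEWAY_IP`-style setting as in the test env (this is a hypothetical helper, not the monitor's actual code):

```python
import ipaddress
from typing import Optional

# Sketch of the hairpin rule: private source IPs are LAN traffic, except
# the gateway's own IP, which indicates hairpin NAT (a LAN client reaching
# the server via its public hostname, so WAN bandwidth is in play).
def counts_against_wan(remote_ip: str, gateway_ip: Optional[str]) -> bool:
    if gateway_ip is not None and remote_ip == gateway_ip:
        return True  # hairpin NAT: router SNAT'd the source to itself
    return not ipaddress.ip_address(remote_ip).is_private

assert counts_against_wan("8.8.8.8", "192.168.1.1")           # real WAN client
assert counts_against_wan("192.168.1.1", "192.168.1.1")       # hairpinned LAN client
assert not counts_against_wan("192.168.1.42", "192.168.1.1")  # genuine LAN client
print("hairpin rule ok")
```

This also explains the test's choice of `LAN_GATEWAY_IP=127.0.0.1`: a localhost curl presents source 127.0.0.1 to Jellyfin, so setting the "gateway" to that address lets the VM exercise the equality branch without real NAT.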


@@ -6,10 +6,10 @@
 ...
 }:
 let
-baseServiceConfigs = import ../hosts/muffin/service-configs.nix;
+baseSiteConfig = import ../site-config.nix;
+baseServiceConfigs = import ../hosts/muffin/service-configs.nix { site_config = baseSiteConfig; };
 testServiceConfigs = lib.recursiveUpdate baseServiceConfigs {
 zpool_ssds = "";
-https.domain = "test.local";
 minecraft.parent_dir = "/var/lib/minecraft";
 minecraft.memory = rec {
 heap_size_m = 1000;
@@ -31,6 +31,7 @@ testPkgs.testers.runNixOSTest {
 node.specialArgs = {
 inherit inputs lib;
+site_config = baseSiteConfig;
 service_configs = testServiceConfigs;
 username = "testuser";
 };


@@ -13,6 +13,7 @@ in
 minecraftTest = handleTest ./minecraft.nix;
 jellyfinQbittorrentMonitorTest = handleTest ./jellyfin-qbittorrent-monitor.nix;
 deployGuardTest = handleTest ./deploy-guard.nix;
+deployFinalizeTest = handleTest ./deploy-finalize.nix;
 filePermsTest = handleTest ./file-perms.nix;
 # fail2ban tests
@@ -40,4 +41,7 @@ in
 # gitea runner test
 giteaRunnerTest = handleTest ./gitea-runner.nix;
+# gitea actions visibility gate test
+giteaHideActionsTest = handleTest ./gitea-hide-actions.nix;
 }


@@ -52,6 +52,7 @@ let
 SINGLE_CROSS = "A" * 38 + "0C"  # movieId=7 single import AND older import for movieId=8
 SINGLE8_NEW = "A" * 38 + "0D"  # movieId=8, newer import keeper (not in qBit)
 QUEUED_MOV = "A" * 38 + "0E"  # in Radarr queue, not in history
+INPROGRESS_MOV = "A" * 38 + "0F"  # movieId=10, older import, currently re-downloading
 # TV
 UNMANAGED_TV = "B" * 38 + "01"
@@ -62,13 +63,17 @@ let
 REPACK = "B" * 38 + "06"  # episodeId=300, newer import active
 REMOVED_TV = "B" * 38 + "07"  # episodeId=400, older import (series removed)
 REMOVED_TV_NEW = "B" * 38 + "08"  # episodeId=400, newer import (not in qBit)
+INPROGRESS_TV = "B" * 38 + "09"  # episodeId=500, older import, currently re-downloading
+INPROGRESS_TV_NEW = "B" * 38 + "0A"  # episodeId=500, newer import (not in qBit)
+INPROGRESS_MOV_NEW = "A" * 38 + "10"  # movieId=10, newer import (not in qBit)
-def make_torrent(h, name, size, added_on, state="uploading"):
+def make_torrent(h, name, size, added_on, state="uploading", progress=1.0):
 return {
 "hash": h.lower(),
 "name": name,
 "size": size,
 "state": state,
+"progress": progress,
 "added_on": added_on,
 "content_path": f"/downloads/{name}",
 }
@@ -84,6 +89,9 @@ let
 make_torrent(LARGER_OLD, "Larger.Movie.2024", 10_737_418_240, 1704067206),
 make_torrent(SINGLE_CROSS, "SingleCross.Movie.2024", 4_000_000_000, 1704067207),
 make_torrent(QUEUED_MOV, "Queued.Movie.2024", 2_000_000_000, 1704067208),
+# In-progress re-download: hash matches an old import, but data is
+# not yet on disk. Must NOT be flagged as abandoned (regression).
+make_torrent(INPROGRESS_MOV, "InProgress.Movie.2024", 8_000_000_000, 1704067209, state="downloading", progress=0.05),
 ],
 "tvshows": [
 make_torrent(UNMANAGED_TV, "Unmanaged.Show.S01E01", 1_000_000_000, 1704067200),
@@ -92,6 +100,7 @@ let
 make_torrent(NEW_TV, "New.Show.S01E01", 1_200_000_000, 1704067203),
 make_torrent(SEASON_PACK, "Season.Pack.S02", 5_000_000_000, 1704067204),
 make_torrent(REMOVED_TV, "Removed.Show.S01E01", 900_000_000, 1704067205),
+make_torrent(INPROGRESS_TV, "InProgress.Show.S01E01", 1_500_000_000, 1704067209, state="downloading", progress=0.05),
 ],
 }
@@ -115,6 +124,9 @@ let
 {"movieId": 7, "downloadId": SINGLE_CROSS, "eventType": "downloadFolderImported", "date": "2024-03-01T00:00:00Z"},
 {"movieId": 8, "downloadId": SINGLE_CROSS, "eventType": "downloadFolderImported", "date": "2024-01-01T00:00:00Z"},
 {"movieId": 8, "downloadId": SINGLE8_NEW, "eventType": "downloadFolderImported", "date": "2024-06-01T00:00:00Z"},
+# In-progress re-download regression case for movies
+{"movieId": 10, "downloadId": INPROGRESS_MOV, "eventType": "downloadFolderImported", "date": "2024-01-01T00:00:00Z"},
+{"movieId": 10, "downloadId": INPROGRESS_MOV_NEW, "eventType": "downloadFolderImported", "date": "2024-06-01T00:00:00Z"},
 ]
 RADARR_MOVIES = [
@@ -126,6 +138,7 @@ let
 {"id": 6, "hasFile": True, "movieFile": {"size": 5_368_709_120, "quality": {"quality": {"name": "Bluray-720p"}}}},
 {"id": 7, "hasFile": True, "movieFile": {"size": 4_000_000_000, "quality": {"quality": {"name": "Bluray-1080p"}}}},
 {"id": 8, "hasFile": True, "movieFile": {"size": 5_000_000_000, "quality": {"quality": {"name": "Remux-1080p"}}}},
+{"id": 10, "hasFile": True, "movieFile": {"size": 8_000_000_000, "quality": {"quality": {"name": "Remux-2160p"}}}},
 ]
 # Sonarr mock data
@@ -148,6 +161,9 @@ let
 # Removed series scenario
 {"episodeId": 400, "seriesId": 99, "downloadId": REMOVED_TV, "eventType": "downloadFolderImported", "date": "2024-01-01T00:00:00Z"},
 {"episodeId": 400, "seriesId": 99, "downloadId": REMOVED_TV_NEW, "eventType": "downloadFolderImported", "date": "2024-06-01T00:00:00Z"},
+# In-progress re-download regression case for TV
+{"episodeId": 500, "seriesId": 1, "downloadId": INPROGRESS_TV, "eventType": "downloadFolderImported", "date": "2024-01-01T00:00:00Z"},
+{"episodeId": 500, "seriesId": 1, "downloadId": INPROGRESS_TV_NEW, "eventType": "downloadFolderImported", "date": "2024-06-01T00:00:00Z"},
 ]
 SONARR_HISTORY_ALL = SONARR_HISTORY_PAGE1 + SONARR_HISTORY_PAGE2
@@ -319,14 +335,14 @@ pkgs.testers.runNixOSTest {
 with subtest("Detects unmanaged movie torrent"):
 assert "Unmanaged.Movie.2024" in unmanaged_section, \
 "Should detect unmanaged movie"
-assert "1 unmanaged / 9 total" in unmanaged_section, \
-"Should show 1 unmanaged movie out of 9"
+assert "1 unmanaged / 10 total" in unmanaged_section, \
+"Should show 1 unmanaged movie out of 10"
 with subtest("Detects unmanaged TV torrent"):
 assert "Unmanaged.Show.S01E01" in unmanaged_section, \
 "Should detect unmanaged TV show"
-assert "1 unmanaged / 6 total" in unmanaged_section, \
-"Should show 1 unmanaged TV show out of 6"
+assert "1 unmanaged / 7 total" in unmanaged_section, \
+"Should show 1 unmanaged TV show out of 7"
 with subtest("Empty category shows zero counts"):
 assert "0 unmanaged / 0 total" in unmanaged_section, \
@@ -380,6 +396,16 @@ pkgs.testers.runNixOSTest {
 assert "SingleCross.Movie.2024" not in abandoned_section, \
 "Hash that is sole import for movieId=7 must be in keeper set, not abandoned"
with subtest("In-progress re-download not abandoned (incomplete payload regression)"):
# A torrent whose hash matches an old downloadFolderImported entry but
# whose data is not currently on disk (progress < 1.0) must not be
# reported as abandoned: its size is metadata, not reclaimable bytes,
# and a SAFE verdict could disrupt a re-download in progress.
assert "InProgress.Movie.2024" not in abandoned_section, \
"In-progress movie re-download must not appear as abandoned"
assert "InProgress.Show.S01E01" not in abandoned_section, \
"In-progress TV re-download must not appear as abandoned"
 with subtest("Removed movie triggers REVIEW status"):
 assert "Removed.Movie.2024" in abandoned_section, \
 "Should detect abandoned torrent for removed movie"