c74d356595
xmrig: compile with compiler optimizations
2026-04-09 16:25:30 -04:00
ae03c2f288
p2pool: don't disable on power loss
...
p2pool is very light on resources; it's xmrig that should be disabled.
2026-04-09 14:44:13 -04:00
0d87f90657
gitea: make gitea-runner wait for gitea.service
...
prevents spam on ntfy
2026-04-09 14:16:05 -04:00
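Making one unit wait for another is a one-line systemd ordering dependency; a minimal NixOS sketch (the runner's unit name is an assumption, not taken from the repo):

```nix
{
  # Hypothetical: order the runner after (and require) the Gitea server,
  # so it never starts against a half-up instance and spams ntfy with
  # connection-failure notifications.
  systemd.services.gitea-runner-default = {
    after = [ "gitea.service" ];
    requires = [ "gitea.service" ];
  };
}
```

`after` only controls start ordering; adding `requires` also stops the runner if gitea.service goes down.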
d1e9c92423
update
2026-04-09 14:03:34 -04:00
4f33b16411
llama.cpp: thing
2026-04-09 14:02:53 -04:00
0a7c24da4e
llm-agents.nix: change was upstreamed
2026-04-08 18:12:21 -04:00
27096b17be
a
2026-04-08 18:11:37 -04:00
3627cb19c6
omp: add auth-fix patch (to test)
2026-04-08 13:08:30 -04:00
0f0429b4b2
llm-agents.nix: use fork that compiles omp from source
2026-04-08 13:04:30 -04:00
8485f07c8d
zen: add consumer-rights-wiki addon
2026-04-07 23:46:50 -04:00
4f41789995
Reapply "llama-cpp: enable"
...
This reverts commit 645a532ed7.
2026-04-07 22:49:53 -04:00
c0390af1a4
llama-cpp: update
2026-04-07 22:29:02 -04:00
98310f2582
organize patches + add gemma4 patch
2026-04-07 20:57:54 -04:00
3cee862bd0
re-enable rtkit
2026-04-07 20:53:53 -04:00
645a532ed7
Revert "llama-cpp: enable"
...
This reverts commit fdc1596bce.
2026-04-07 20:23:48 -04:00
2884a39eb1
llama-cpp: patch for vulkan support instead
2026-04-07 20:07:02 -04:00
fdc1596bce
llama-cpp: enable
2026-04-07 19:15:56 -04:00
778b04a80f
Reapply "llama-cpp: maybe use vulkan?"
...
This reverts commit 9addb1569a.
2026-04-07 19:12:57 -04:00
88fc219f2d
update
2026-04-07 19:11:50 -04:00
a5c7c91e38
Power: disable a bunch of things
...
BROKE the Intel Arc A380 completely: it was forced into the L1.1/L1.2
PCIe substates, forcewaking the device would fail, and it would never come up.
So I will be more conservative with power-saving tuning.
2026-04-07 19:08:08 -04:00
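One conservative way to keep the GPU out of the deep PCIe power states is to pin the ASPM policy via a kernel parameter; a sketch of such a mitigation (whether this exact knob was the one involved is an assumption):

```nix
{
  # Hypothetical mitigation: forbid deep ASPM states globally so the
  # Arc A380 can never be forced into the L1.1/L1.2 substates.
  boot.kernelParams = [ "pcie_aspm.policy=performance" ];
}
```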
325e2720ec
borg: remove signal and zen backups (handled by other means)
2026-04-07 14:31:09 -04:00
841195425d
README.md: remove old TODO
2026-04-07 13:54:29 -04:00
628c16fe64
fix git-crypt key for dotfiles workflow
2026-04-07 13:51:19 -04:00
269a0c4d27
update
2026-04-07 13:45:43 -04:00
0df5d98770
grafana: use postgresql
...
PostgreSQL isn't used for metrics data, only annotations and other Grafana state
2026-04-07 12:44:59 -04:00
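Switching Grafana's own database (annotations, dashboards, users; not metrics) to PostgreSQL looks roughly like this in NixOS; the database/user names and socket path here are assumptions:

```nix
{
  # Hypothetical: Grafana keeps its internal state in a local
  # PostgreSQL instead of the default SQLite file.
  services.grafana.settings.database = {
    type = "postgres";
    host = "/run/postgresql";  # connect over the local UNIX socket
    name = "grafana";
    user = "grafana";
  };
  services.postgresql = {
    enable = true;
    ensureDatabases = [ "grafana" ];
    ensureUsers = [{
      name = "grafana";
      ensureDBOwnership = true;
    }];
  };
}
```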
2848c7e897
grafana: keep data forever
2026-04-07 12:44:46 -04:00
e57c9cb83b
xmrig-auto-pause: raise thresholds for server background load
2026-04-07 01:09:16 -04:00
d48f27701f
xmrig-auto-pause: add hysteresis to prevent stop/start thrashing
...
xmrig's RandomX pollutes the L3 cache, making other processes appear
~3-8% busier. With a single 5% threshold for both stopping and
resuming, the script oscillates: start xmrig -> cache pressure
inflates CPU -> stop xmrig -> CPU drops -> restart -> repeat.
Split into CPU_STOP_THRESHOLD (15%) and CPU_RESUME_THRESHOLD (5%).
The stop threshold sits above xmrig's indirect pressure, so only
genuine workloads trigger a pause. The resume threshold confirms the
system is truly idle before restarting.
2026-04-07 01:09:06 -04:00
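The hysteresis described above can be sketched as a polling loop. The threshold names and values come from the commit message; the service name, poll interval, and the CPU-sampling helper are stand-ins:

```nix
{
  # Hypothetical sketch of the two-threshold hysteresis: stop xmrig only
  # above 15% non-xmrig load, resume only below 5%, so RandomX cache
  # pressure (~3-8% apparent load) can never flip the state by itself.
  systemd.services.xmrig-auto-pause.script = ''
    CPU_STOP_THRESHOLD=15
    CPU_RESUME_THRESHOLD=5
    while sleep 30; do
      cpu=$(get_other_cpu_usage)  # stand-in: CPU% excluding xmrig itself
      if systemctl is-active --quiet xmrig.service; then
        [ "$cpu" -gt "$CPU_STOP_THRESHOLD" ] && systemctl stop xmrig.service
      else
        [ "$cpu" -lt "$CPU_RESUME_THRESHOLD" ] && systemctl start xmrig.service
      fi
    done
  '';
}
```

The gap between the two thresholds is what breaks the oscillation: a reading can sit anywhere between 5% and 15% without changing state.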
738861fd53
lanzaboote: fix was upstreamed
2026-04-06 19:21:20 -04:00
274ef40ccc
lanzaboote: pin to fork with pcrlock reinstall fix
...
Upstream PR: https://github.com/nix-community/lanzaboote/pull/566
2026-04-06 16:08:57 -04:00
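Pinning a flake input to a fork while waiting on an upstream PR can be sketched like this in flake.nix; the fork owner and branch name here are placeholders, not the actual ones used:

```nix
{
  # Hypothetical: point the lanzaboote input at a fork carrying the
  # pcrlock reinstall fix until upstream PR #566 lands.
  inputs.lanzaboote = {
    url = "github:someuser/lanzaboote/pcrlock-reinstall-fix";
    inputs.nixpkgs.follows = "nixpkgs";
  };
}
```

The follow-up commit ("lanzaboote: fix was upstreamed") is the matching cleanup: the pin is dropped once upstream merges the fix.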
08486e25e6
gitea: also build laptop
2026-04-06 14:38:21 -04:00
4c04e5b0a2
use my own nix cache
2026-04-06 14:21:43 -04:00
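Pointing machines at a self-hosted binary cache is a pair of `nix.settings` entries; a sketch with a placeholder host and key (the real values live in the repo):

```nix
{
  # Hypothetical: trust a self-hosted binary cache in addition to the
  # defaults. The URL and public key below are placeholders.
  nix.settings = {
    substituters = [ "https://cache.example.org" ];
    trusted-public-keys = [ "cache.example.org:AAAAexamplekey=" ];
  };
}
```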
a76a7969d9
nix-cache
2026-04-06 14:21:31 -04:00
4be2eaed35
Reapply "update"
...
This reverts commit 655bbda26f.
2026-04-06 13:40:52 -04:00
655bbda26f
Revert "update"
...
This reverts commit 960259b0d0.
2026-04-06 13:39:32 -04:00
3b8aedd502
fix hardened kernel with nix sandbox
2026-04-06 13:36:38 -04:00
960259b0d0
update
2026-04-06 13:12:50 -04:00
5fa6f37b28
llama-cpp: disable
2026-04-06 13:12:06 -04:00
7afd1f35d2
xmrig-auto-pause: fix
2026-04-06 13:11:54 -04:00
7e571f4986
update
2026-04-06 13:07:19 -04:00
a12dcb01ec
llama-cpp: remove folder
2026-04-06 12:48:28 -04:00
6d47f02a0f
llama-cpp: set batch size to 4096
2026-04-06 02:29:37 -04:00
9addb1569a
Revert "llama-cpp: maybe use vulkan?"
...
This reverts commit 0a927ea893.
2026-04-06 02:28:26 -04:00
df04e36b41
llama-cpp: fix vulkan cache
2026-04-06 02:23:29 -04:00
0a927ea893
llama-cpp: maybe use vulkan?
2026-04-06 02:12:46 -04:00
3e46c5bfa5
llama-cpp: use turbo3 for everything
2026-04-06 01:53:11 -04:00
06aee5af77
llama-cpp: gemma 4 E4B -> gemma 4 E2B
2026-04-06 01:24:25 -04:00
8fddd3a954
llama-cpp: context: 32768 -> 65536
2026-04-06 01:04:23 -04:00
0e4f0d3176
llama-cpp: fix model name
2026-04-06 00:59:20 -04:00
4b73e237cb
pi: specify anthropic for models
2026-04-06 00:57:36 -04:00