Commit Graph

7 Commits

SHA1 Message Date
8fddd3a954 llama-cpp: context: 32768 -> 65536 2026-04-06 01:04:23 -04:00
0e4f0d3176 llama-cpp: fix model name 2026-04-06 00:59:20 -04:00
8ea96c8b8e llama-cpp: fix model hash 2026-04-04 00:28:07 -04:00
479ec43b8f llama-cpp: integrate native prometheus /metrics endpoint 2026-04-03 15:19:11 -04:00
    llama.cpp server has a built-in /metrics endpoint exposing
    prompt_tokens_seconds, predicted_tokens_seconds, tokens_predicted_total,
    n_decode_total, and n_busy_slots_per_decode. Enable it with --metrics
    and add a Prometheus scrape target, replacing the need for any external
    metric collection for LLM inference monitoring.
47aeb58f7a llama-cpp: do logging 2026-04-03 14:39:46 -04:00
d4d01d63f1 llama-cpp: update + re-enable + gemma 4 E4B 2026-04-03 14:06:35 -04:00
124d33963e organize 2026-04-03 00:47:12 -04:00
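The /metrics commit above enables llama.cpp's built-in Prometheus exporter and adds a scrape target. A minimal sketch of such a target is below; the job name, host, and port are assumptions (llama.cpp's server defaults to port 8080), not taken from this repository's actual config:

```yaml
# prometheus.yml — hypothetical scrape target for a llama.cpp server
# started with --metrics. Adjust the target to the real host:port.
scrape_configs:
  - job_name: llama-cpp            # assumed job name
    static_configs:
      - targets: ["localhost:8080"]  # llama.cpp server default port
```

With this in place, the counters named in the commit body (e.g. tokens_predicted_total) appear under the llamacpp metric namespace on each scrape.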