Blog
Condensed weekly issues — model releases, benchmarks, Ollama workflows. Also published on Substack.
Issue #3 · Apr 12, 2026
Persistent AI memory on a Raspberry Pi 5
Local embeddings + ChromaDB + Ollama in ~150 lines. ~$100 of hardware. No tokens.
Issue #2 · Apr 12, 2026
Gemma 4 changes local LLMs — and the first killer use case is Claude Code
88% accuracy at 175 tok/s, 17GB VRAM, and how to cut your Claude Code bill with one env var
Issue #1 · Apr 11, 2026
Your local AI stack is already being scanned
113K requests, a Raspberry Pi honeypot, and the attack surface you didn't know you had