From 80 Seconds to 7: Local AI on a €3.29 Server
How llama.cpp, sequential processing, and a compiler flag delivered an 11× speedup on identical hardware — no GPU, no cloud.
A field report on hardware limits, software optimization, and why "start the model and you're done" isn't enough. llama.cpp, RAG, and a compiler flag that changed everything.