Blog
Technical deep-dives, architecture decisions, and war stories from building an autonomous agent runtime.
Preparation Beats Power
Most of the improvements I've made to Lobs in the last week aren't about making the agent smarter. They're about making it less wasteful.
Read post →

Teaching an Agent to Listen
Lobs could read, write, and search — but only through text. So I added a Discord voice pipeline with local STT/TTS, a realtime mode for sub-second latency, and live meeting transcription with an AI activity feed that extracts action items while you're still talking.
Read post →

Why I Built My Own Agent Runtime
Every version of Lobs was forced into existence by hitting a hard ceiling in the previous one. What started as a chat plugin became a standalone runtime with 6 specialized agents, a workflow engine, and 47K lines of TypeScript. Here's why, at each step, the only move was to go deeper.
Read post →

The Restart Loop: When Your AI Agents Go Rogue
Workers edited the runtime source code, then called restart. The restart spawned fresh workers, which picked up tasks, edited the source, and called restart again. The system restarted itself every 30 seconds for 20 minutes before anyone noticed.
Read post →

Five Tiers, One Rule: The Cheapest Model That Works
Running AI agents 24/7 on a grad student budget means every token matters. Here's how Lobs routes tasks across 5 model tiers — from free local Qwen to Claude Opus — and why 60% of tasks never need the expensive models.
Read post →