Blog
February 17, 2026
Information Theory Is All You Need (to Understand LLMs)
Shannon laid the foundation in 1948. Seventy-eight years later, his framework still explains why Transformers work, what cross-entropy loss actually means, and why your model can never be smarter than its training data.
February 14, 2026
Dnalyaw: Engineering an AI Quant Trading System from Scratch
A deep dive into the architecture, language choices, and hard-won lessons behind building a production quantitative trading platform with Rust, Go, and Python — from nanosecond-scale risk checks to RL-driven signal generation.
January 11, 2026
My Book on Building Production AI Agents
A practical guide to building production-grade AI agent systems. Covers single-agent design, multi-agent orchestration, the Model Context Protocol (MCP), Computer Use, cost control, and enterprise deployment patterns.
January 11, 2026
My Book on Transformers and LLM Architecture
A deep dive into every component of the Transformer architecture — from tokenization to attention mechanisms, from forward propagation to code implementation. For developers who want to truly understand how GPT and ChatGPT work.
January 6, 2026
2025 Year-In-Review and 2026 Prediction
Reflecting on how 2025 normalized agent workflows and reasoning models, and why 2026 feels less like a prediction and more like a state you can already opt into.
October 21, 2025
Tensor Logic: A Brain-Like Architecture
Bridging the gap between logic and learning — how tensor equations create AI systems that think both symbolically and intuitively.
October 7, 2025
Shannon: Designing a Production-Grade Multi-Agent Platform
An architectural deep dive into Shannon, a self-hosted multi-agent platform that addresses the three hardest problems in production AI — runaway costs, non-deterministic failures, and security vulnerabilities — through deliberate technology choices in Rust, Go, and Python.
April 29, 2025
AI Quantitative Trading: From Models to Quant Funds
Demystifying quantitative trading for AI practitioners — what quant funds actually do, why reinforcement learning fits markets better than LLMs, and where the real barriers lie.
February 25, 2025
From RNNs to LLMs: A Decade of Simplicity and Transformation
Reflecting on Andrej Karpathy's 2015 RNN post and the surprising evolution of LLMs through Transformers and fine-tuning.
February 22, 2024
Transformer Architecture Explained: A Comprehensive Review
A step-by-step walkthrough of the Transformer architecture — from input embeddings and positional encoding to self-attention and the decoder-only GPT variant.