Long-form essays, analysis, and notebook entries on AI strategy, inference economics, and the systems that will shape the next decade.
The next decade of AI is not a single global model. It is a constellation of national stacks, each tuned to its own language, its own data laws, its own definition of harm. The implications run deeper than most policymakers admit.
Training costs got the headlines. But the trillion-token inference workloads of the next decade will rewrite the unit economics of every product touching an LLM. Here is the math that keeps me up at night.
I spent six weeks building autonomous agents that mostly failed. The interesting finding wasn't the failures — it was the substrate that emerged underneath them. Here's what to actually build right now.
Llama and Mistral didn't lose because their weights weren't good enough. They lost the application layer because nobody figured out the channels. A note on what wins next.
I'm Rijul. I write essays, host a podcast, and build small things on the web — all of it in service of one question: how do we leverage AI in the next decade without giving away what mattered in the last? New work lands here when it's ready. Subscribe and I'll send it once.