Hey All. I’ve been thinking through how to publish my writing in a way that feels like play and not like another deadline I have to meet each weekend. Instead of coming up with themes for each article, I’m going to try writing a weekly digest of my thoughts broken out by categories. I’m still experimenting with the format, but I want to double down on writing more regularly. The main value driver here, at least initially, is to better express my thinking — and the only way to get better at that is to do it consistently. I think a free-flowing format helps reduce the inertia around publishing. I also recognize the audience so far is likely just me, which makes this a low-stakes place to experiment.

Technology:

It’s hard not to equate technology and AI at this moment. I’ve been trying to separate what the media and frontier labs are projecting as their narrative from what I actually think is true.

Here’s the claim I keep coming back to: AI infrastructure buildout is currently one of the primary engines of US capital investment — and yet its returns in real productivity remain almost entirely unproven. This matters because capital expenditure and productivity growth are not the same thing, and right now we are conflating them. The historical parallel is almost too on the nose: the railroads and the early internet both required staggering upfront investment before any meaningful productivity gains materialized. In both cases, there was a period where the spending was the story, before anyone could honestly say whether the underlying bet would pay off. The question I keep asking myself about AI is: are we in the investment phase, or the hype phase? I don’t think anyone knows. But I think it’s important to say that out loud rather than let the infrastructure spend get mistaken for evidence of value.

The enterprise use case argument is where I have to be careful about my own bias. There are serious, defensible applications — legal document review, medical imaging, drug discovery, financial modeling — and I don’t want to wave those away. But I also think many of those use cases are still largely in pilot or proof-of-concept stages, and the gap between a compelling demo and systemic productivity gains is historically enormous. The places where I’ve seen AI deliver unambiguous value are narrower than the discourse suggests: coding assistance and, to a lesser extent, structured customer support workflows. Everything else still feels like we’re waiting for the evidence.

At the consumer level, I think the honest framing is this: AI is shaping up to be the next iteration of social media — but not just in terms of business model. The specific mechanism matters. Social media’s damage was to attention and social comparison; it optimized for engagement at the cost of mental health and epistemic quality. AI’s risk is different and in some ways more insidious. It offers false intimacy and epistemic dependency. When a tool feels like a trusted confidant, you share more with it, rely on it more, and question it less. And that’s exactly what tech companies have always wanted — not just your behavioral data, but your actual thoughts, framed as a service to you. I found myself double-checking my own settings a few weeks ago, only to realize my chat history was being shared by default. The architecture of trust is the product.

There’s also a more cynical argument worth taking seriously: the AGI and superintelligence narrative might be, at least in part, a deliberate positioning strategy. I want to be clear that I’m speculating here — but consider the incentive structure. If the frontier is genuinely approaching general intelligence, then the infrastructure spend is justified, the urgency is real, and anyone not investing is falling behind. That framing is extraordinarily useful if you’re trying to justify hundreds of billions in capital expenditure to shareholders and governments. I’m not saying the researchers believe this cynically — many of them clearly don’t — but the narrative has a convenient shape for the people who need it most.

And yet I fundamentally believe in the underlying technology. Large language models trained on the breadth of human knowledge are genuinely useful for offloading cognitive work that is well-defined and verifiable. The problem is the anthropomorphization. These tools produce output that feels tailored, warm, and considered — and that feeling is doing a lot of cultural work that the technology itself hasn’t earned yet. It took the ChatGPT interface, not the underlying model, to create the cultural moment. The magic genie framing was a branding exercise as much as a product decision, and it worked: the illusion that everyone is using it created the need to use it.

Where I land is this: the people confidently predicting AI transforms everything are speculating. So are the people confidently predicting it won’t. The honest answer is that we’re in the middle of a very expensive experiment, and the productivity evidence hasn’t arrived yet. If AI ends up being transformative at the scale of the internet, the current investment will look prescient. If it ends up being transformative at the scale of Excel — genuinely useful, widely adopted, but nowhere near the civilizational disruption being promised — then a lot of the current narrative will look like what it is: a hype cycle that served the people running it. I’ll keep refining my view as the evidence comes in.

Books:

This week I finished Project Hail Mary by Andy Weir, a journey that took about a month via audiobook. I started with a free recording on YouTube over President’s Day weekend, got hooked, and then had to buy an Audible subscription when the free version disappeared — probably pulled as the movie promotion ramped up. No regrets. I’d give the book a 90 out of 100.

Ray Porter’s narration makes it. He has a feel for Weir’s comedic timing that elevates the prose and turns what could have been a dry science adventure into something genuinely warm. The character work on Ryland Grace is strong — he’s funny, humble, and deeply human in a way that makes the absurdity of his situation land. But the real achievement of the book is Rocky, the alien Grace befriends. Their relationship is built slowly and with real craft — two beings with almost nothing in common except curiosity and mutual respect, developing a language to understand each other. I found myself quoting Rocky to my partner at home. “Amaze. Amaze.” If you know, you know.

The reason it doesn’t grade higher is that it doesn’t aim for anything beyond the adventure. That’s a legitimate creative choice — but I found myself wanting the book to sit with its most interesting question a little longer: what does it mean to sacrifice everything for a species that will never know you did it? Grace’s situation carries the weight of a genuinely profound premise, and Weir mostly uses it as a plot engine rather than exploring it. That’s not a failure — the book succeeds completely at what it’s trying to be — but it left me with warmth rather than weight. I walked away smiling at the friendship and not much else.

For what it is — a fun, propulsive, character-driven science adventure — it’s excellent. I’ll probably revisit it.
