Not a Bubble, Just Too Early.
Why the Generative AI business model feels broken, but isn't.
Be me, 17 years old, 1994. I was on my 486 PC, face planted two inches from a 14-inch CRT, a cardboard frame holding two lenses in place. I’d bought a book on VR, and with it came a CD and a cardboard VR headset. I kid you not. That rudimentary rig rendered the world as a stuttering mess of polygons that looked like a Ferris wheel, and the keys let me move around at maybe one and a half frames a second. And yet I was captivated, immersed, and something sparked. VR was a pipe dream then. Even the most commercial systems could only push out a few frames more. VR failed for an obvious reason: the compute needed to run it was astronomically expensive.
Fast forward three decades and everyone’s talking about generative AI. They’re calling it a bubble. They say it costs too much, burns too hot, and lacks a real business model. But I don’t buy it. This isn’t a dead end. It’s just a story we’ve seen before. Technology that arrives too early is expensive. Until it isn't.
The Bubble Fallacy
We love the word "bubble." It lets us feel smart without taking risk. Dotcom. Crypto. Now, generative AI. The pattern is predictable: overhype, overspend, underdeliver. But not all hype is equal. Tulips didn’t change the world. Tokens won’t cook your dinner. But language models? They’re already writing scripts, debugging code, generating music, and redesigning workflows.
So what’s the real problem? It’s not value. It’s cost.
Compute as Constraint
Generative AI today is like trying to build a skyscraper with bricks made of gold. It’s technically possible. But economically, it’s insanity.
Training GPT-4 reportedly cost over $100 million. Inference, just running the model, costs real money every time someone hits "submit." And we’re doing it millions of times per day.
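The scale of that "real money every time" claim is easy to check with a back-of-envelope calculation. The numbers below are illustrative assumptions, not figures from any real provider:

```python
# Back-of-envelope inference cost model. All inputs are illustrative
# assumptions, not measurements from any actual AI provider.

def daily_inference_cost(requests_per_day, tokens_per_request, cost_per_million_tokens):
    """Estimate what one day of inference costs at a given price per million tokens."""
    total_tokens = requests_per_day * tokens_per_request
    return total_tokens / 1_000_000 * cost_per_million_tokens

# Assume 10 million requests/day, 1,000 tokens each, $10 per million tokens.
cost = daily_inference_cost(10_000_000, 1_000, 10.0)
print(f"${cost:,.0f} per day")  # $100,000 per day
```

Even with modest assumed prices, per-request costs compound into six figures a day. That is the brick made of gold.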
Right now, the tech is centralised, cloud dependent, and compute hungry. But compute cost isn’t static. It’s the one thing that always drops. Moore’s Law might be slowing, but architectural innovation is just getting started. The future of AI isn’t server farms; it’s edge chips that sip power and infer at the speed of thought.
Lessons from Low Poly
Anyone who thinks today’s AI is a failure has forgotten what the future used to look like. In the '90s, VR was nausea in headset form. 3D games were jagged, slow, and barely immersive. Anyone remember F-15 Strike Eagle by MicroProse? Hills were empty pyramids, and tanks just boxes. And yet, we saw the promise.
Every great technology wave starts clumsy. It moves awkwardly. It fails publicly. Then, quietly, it becomes inevitable. What mattered in 1994 wasn’t the polygons. It was the paradigm shift. Today, we’re in the same place with AI. This isn’t a crash. It’s an awkward adolescence.
The Real Business Model Isn't Here Yet
Most people misread technological revolutions by judging them too early through the lens of old business models. We saw this with mobile. The early apps were gimmicks. The real models, like Uber, Instagram, and TikTok, emerged only after the hardware, networks, and behaviour matured.
Right now, AI is stuck in per token pricing, prompt engineering packages, and SaaS wrappers. But the real shift will come when AI becomes ambient. Embedded. Local. Invisible.
Imagine a future where:
· Your lighting system infers your mood.
· Your car DJ generates tracks to regulate your cortisol.
· Your fridge predicts your emotional eating pattern.
That’s not a feature set. That’s an operating system for life. And it’s only possible when compute is cheap, inference is local, and intelligence is ambient.
We’re Not in a Bubble. We’re in the Basement.
Everyone sees a pop. I see a crawlspace. The tech isn’t at its peak. It’s barely in its infancy. The economics aren’t ready. But the trajectory? Unstoppable.
The mistake isn’t overhyping the potential. It’s underestimating the timeline. Give it ten years. Let the chips evolve. Let the costs drop. When that happens, generative AI won’t just disrupt. It’ll disappear into everything.
Just like it was always meant to.
The Teenager Test
It seems far-fetched now, but only because history’s acceleration hides in plain sight. A 90 MHz Pentium struggled to play a single MP3. Today, a sub-$100 mini PC streams 4K video with lossless audio while drawing less than five watts.
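The shape of that curve can be sketched as simple exponential decay. The halving period below is an assumption for illustration, not a law:

```python
# Illustrative compute-cost decline: if the cost of a fixed amount of
# inference halves every N years (an assumption, not a guarantee),
# how cheap does today's $1.00 of inference become?

def projected_cost(cost_today, halving_years, years_out):
    """Project a cost forward under a constant halving period."""
    return cost_today * 0.5 ** (years_out / halving_years)

# Assume a 2.5-year halving period over a decade: four halvings.
print(f"${projected_cost(1.00, 2.5, 10):.4f}")  # $0.0625
```

Under those assumptions, a dollar of inference becomes about six cents in ten years. The exact numbers matter less than the shape: any sustained halving period pushes the cost toward negligible.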
That same curve will bend for AI.
In a few years, teenagers won't just be scrolling TikTok. They'll be crafting entire music videos from scratch: choreography, beats, lyrics, visuals all generated, directed, and refined by AI models running locally on laptops. No cloud. No server bill. Just raw creativity, unthrottled by infrastructure.
The future isn’t some sci-fi dream. It’s a teenager in their bedroom, rendering the extraordinary with tools we haven't even priced yet.
Ross, signing off. Still seeing pyramids. Still seeing the future.