Yes, we’re dependent on AI. Just like we were dependent on cloud infrastructure before that, and software before that, and hardware before that.

Each layer abstracted the one below and became the new dependency. Dependent on transistors, then operating systems, then applications, then cloud services, now models. This is just the next layer. The dependency framing isn’t an objection — it’s a description of how technology always works. We don’t worry about being dependent on compilers.

The Power Gap

The human brain runs on 10 to 20 watts. Pattern recognition, language, reasoning, creativity, spatial navigation, emotional processing — all of it, simultaneously, on roughly 15 watts.

Current frontier model inference requires kilowatts. Training requires megawatts. That's a gap of roughly two orders of magnitude for inference, and closer to five for training, between what biology achieves and what silicon currently needs.
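The arithmetic is simple enough to check. A quick back-of-the-envelope sketch, using rough illustrative wattage figures (assumptions, not measurements):

```python
import math

# Rough, illustrative figures -- assumptions for the sketch, not measurements.
BRAIN_WATTS = 15              # human brain: commonly cited as ~10-20 W
INFERENCE_WATTS = 1_500       # one high-end accelerator under load: order of kW
TRAINING_WATTS = 10_000_000   # a training cluster: order of MW

def orders_of_magnitude(low: float, high: float) -> float:
    """How many powers of ten separate two quantities."""
    return math.log10(high / low)

print(f"inference gap: ~{orders_of_magnitude(BRAIN_WATTS, INFERENCE_WATTS):.1f} orders of magnitude")
print(f"training gap:  ~{orders_of_magnitude(BRAIN_WATTS, TRAINING_WATTS):.1f} orders of magnitude")
```

Even with generous figures, inference sits about two powers of ten away from biology, and training closer to five or six.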

That gap doesn’t close a little. It collapses. The history of computing is the history of exactly this kind of gap collapsing.

The Optimization Path Is Clear

The economic pressure is enormous — whoever closes this gap wins. The paths all point in the same direction:

  • Better hardware: neuromorphic chips, analog computation, purpose-built inference silicon, optical computing
  • Better algorithms: distillation, quantization, sparse computation, mixture of experts
  • Better architectures: smaller models with better training data, more efficient attention mechanisms, state space models
  • Specialization: purpose-built models that are small and fast for narrow domains

All of it pointing toward the same target: human-level capability at human-level power consumption.
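To give a feel for one of those levers, here is a toy sketch of quantization: storing weights as 8-bit integers instead of 32-bit floats, a 4x memory reduction for a small, bounded loss of precision. This is a minimal symmetric scheme for illustration only; production systems use calibrated, often per-channel, variants.

```python
def quantize_int8(weights):
    """Symmetric linear quantization: map floats to integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0  # largest weight maps to 127
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized integers."""
    return [v * scale for v in q]

weights = [0.8, -1.2, 0.05, 2.4, -0.33]   # stand-in for a tensor of model weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

print(q)         # small integers, storable in 8 bits instead of 32
print(restored)  # close to the originals; error is at most scale / 2 per weight
```

Every one of the levers in the list above has this shape: trade a little fidelity, or a little generality, for a large constant factor in power and memory.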

Or Maybe Biology Had It Right All Along

This week, Cortical Labs announced that 200,000 human neurons grown on a chip learned to play Doom in a week. Not a simulation of neurons — actual living brain cells on a microelectrode array, processing visual input and making real-time decisions.

The neurons learned faster than traditional silicon-based machine learning. An independent developer programmed them using Python. The whole thing took days, not months.

The quote from Cortical Labs’ Brett Kagan: “What it is being used as is a material that can process information in very special ways that we can’t recreate in silicon.”

That’s not a future prediction. That’s a present observation. Biology already processes information at efficiencies silicon cannot match. The question isn’t whether the gap will close — it’s whether we close it by making silicon more like neurons, or by using actual neurons.

Either way, the gap closes.

When the Gap Closes

When inference gets cheap enough — and it will — the “frontier models are expensive” objection evaporates. You won’t be dependent on a multi-GPU cluster any more than you’re dependent on the transistors in your phone. It becomes infrastructure. Infrastructure you don’t think about.

At that point the dependency question becomes moot. The model is just there, like electricity.

The Real Question

The real question was never “will we be dependent?” We’re always dependent on our tools. The question is: dependent on what, and at what cost?

The cost trajectory is clear. The optimization headroom is enormous. The gap between 15 watts and a kilowatt is not a fundamental physical limit — it’s an engineering gap, and engineering gaps close.


Post 6 in a series on the AI economic shift. Previously: Clarity and Portability Are the Same Thing. Next: The Craft Before It Was Automated.