People are the new oil
When people say “X is the new oil,” they mean X is the bottleneck: the resource everyone needs and no one has enough of.
For AI, that used to be compute. It isn’t anymore.
Era of Oil
During the industrial era, oil was the constraint, but people were abundant. You could always find another worker, but you couldn’t always find another barrel.
Oil was what you fought wars over. What determined which nations rose and which collapsed. When something is the bottleneck, you don’t politely ask for more. You take it.
Then oil commoditized. The bottleneck moved.
Era of Compute
In 2019, OpenAI trained GPT-2 for roughly $50,000, or about 45 months of average American rent. They declared it “too dangerous to release.”
That model can now be reproduced in 5 days on a single consumer GPU.
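How big a drop is that? A rough order-of-magnitude check, assuming a hypothetical consumer-GPU rental rate of around $0.50/hour (an illustrative figure, not a quoted price):

```python
# Rough cost comparison; the $0.50/hour consumer-GPU rate is an
# illustrative assumption, not a quoted price.
original_cost = 50_000              # 2019 training cost, from above
reproduction = 5 * 24 * 0.50        # 5 days * 24 h/day * $/hour -> $60
print(f"~{original_cost / reproduction:.0f}x cheaper")  # ~833x
```

Three orders of magnitude, in six years, on the same model.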
For years, compute was the constraint. Everyone had ideas. Everyone had researchers with hypotheses to test. What they didn’t have was the hardware to run experiments at scale. Labs hoarded GPUs the way nations once hoarded oil.
That’s changing. Compute is commoditizing. The “too dangerous” model is now a weekend project. The scarcity has moved elsewhere.
Era of People
OpenAI is committing $500B over four years to compute buildout: $125B per year. One thousand researchers, paid $500K each, would cost $500M a year. That’s a 250:1 ratio of compute spend to researcher payroll. Researchers are a rounding error on the balance sheet.
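The arithmetic, spelled out:

```python
# Compute commitment vs. researcher payroll, using the figures above.
compute_per_year = 500e9 / 4            # $500B over four years -> $125B/year
payroll_per_year = 1_000 * 500_000      # 1,000 researchers at $500K each
print(compute_per_year / payroll_per_year)  # -> 250.0, the 250:1 ratio
```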
Major labs now operate at 512 to 2,048 H100-equivalents per researcher. A one-year commitment to that hardware costs more than most SF startups raise in a Series A.
So startups adapt. They make focused bets. They hire more people instead of renting more nodes. Paying a researcher $175K or paying Lambda Labs $175K for one 8x H100 node? For most teams, the researcher wins. At the same total spend, three people with different ideas sharing three nodes beats two people with two nodes each.
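Run the numbers on that node price. $175K a year for 8 H100s works out to roughly $2.50 per H100-hour (assuming the node is reserved around the clock), and at that rate the per-researcher allocations above annualize well past a typical Series A:

```python
# Implied rate from the $175K/year 8x H100 node, assuming 24/7 reservation,
# then the annual cost of the per-researcher allocations mentioned above.
hours_per_year = 24 * 365                    # 8,760
rate = 175_000 / (8 * hours_per_year)        # ~$2.50 per H100-hour
for gpus in (512, 2048):
    print(f"{gpus} H100s -> ${gpus * rate * hours_per_year / 1e6:.0f}M/year")
# 512 H100s -> $11M/year; 2048 H100s -> $45M/year
```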
The NanoGPT Speedrun, a public race to train a GPT-2-scale model to a fixed validation loss, sits at 16 H100-minutes. The hardware is the same for everyone. What keeps breaking the record is someone seeing something the last person didn’t. Muon, a new optimizer, didn’t come from a lab with more GPUs. It came from a person with a better idea.
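For the curious, here is a minimal sketch of Muon’s core move: accumulate momentum, then orthogonalize the update with a Newton-Schulz iteration before applying it. This is a simplified reading of Keller Jordan’s public write-up, not the reference implementation; the coefficients come from that write-up.

```python
import torch

def newton_schulz(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Push G toward the nearest semi-orthogonal matrix (quintic iteration)."""
    a, b, c = 3.4445, -4.7750, 2.0315   # coefficients from the Muon write-up
    X = G / (G.norm() + 1e-7)           # normalize so the iteration converges
    tall = X.shape[0] > X.shape[1]
    if tall:                            # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if tall else X

def muon_step(weight, grad, buf, lr=0.02, momentum=0.95):
    """One simplified Muon update on a 2D weight matrix."""
    buf.mul_(momentum).add_(grad)       # standard momentum accumulation
    weight.add_(newton_schulz(buf), alpha=-lr)
```

The point stands on its own: the trick is a few lines of math, not a bigger cluster.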
The pattern is the same every time. When a resource is scarce, it’s the bottleneck. When it becomes abundant, the constraint flips to whatever was cheap before.
We have enough oil. We have enough compute. What we don’t have enough of is people.