Quantum’s billion-dollar mirage
- Stephen McBride
Hype meets cold reality
I’m on the road with Matt Ridley and ROS co-founder Dan Steinhart, visiting six cities and meeting 30 world-class innovators in two weeks.
Here we are at Elon Musk’s The Boring Company, his lesser-known venture aiming to solve traffic by cutting the cost of digging transport tunnels by 99%. The circle we’re standing in matches the diameter of its Prufrock boring machine:

The flight from Abu Dhabi to Los Angeles was brutal, nearly a full day door-to-door. It was worth it to meet tech billionaires. But man, I can’t wait for the return of supersonic jets.
We’ll meet Boom founder Blake Scholl in Denver on Friday, and I’ll thank him personally for shaving hours off my future jet lag.
So far, we’ve hopped from LA to Austin to San Francisco, and now Dan and I are back in LA, in the heart of America’s innovation avalanche. Here we’ll meet innovators building drone swarms, robotic arms, asteroid miners and cloud seeders, plus a startup designing autonomous ships.
More on that next week. Today, I want to discuss…
The most overrated technology today.
Quantum computing.
Say “quantum computing” three times, and a Silicon Valley venture capitalist will appear at your door with a bag of cash.
You likely saw the recent headline: Google’s Quantum Computer Makes a Big Technical Leap.
Google said its quantum chip ran an algorithm 13,000X faster than the world’s top supercomputers. Earlier this year, it announced its “Willow” chip had solved in five minutes a problem that would take a classical supercomputer 10 septillion years!

If you only read the headlines, you’d think quantum is about to cure cancer before lunch. We’ll soon be walking around with quantum chips in our phones. Can I interest you in a qPhone?
At ROS, part of our job is to separate hype from reality. Quantum is cool science, but it’s still at least a decade away from being useful.
A classical computer like your laptop contains microscopic light switches etched into silicon chips. Each switch, called a bit, is either off or on.
Quantum computers play by stranger rules. They use quantum bits, or qubits. A qubit can be on and off at the same time, which lets a quantum machine process exponentially more information.
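If you want to see the “exponentially more” part concretely, here’s a minimal Python sketch (illustrative numbers only, not any vendor’s toolkit):

```python
import numpy as np

# A classical bit is 0 or 1. A qubit's state is two complex
# amplitudes (a, b) with |a|^2 + |b|^2 = 1: "on and off at once."
qubit = np.array([1, 1]) / np.sqrt(2)   # an equal superposition

# Describing n qubits classically takes 2**n amplitudes. That
# exponential state space is where the potential speedup lives.
for n in (10, 50):
    print(f"{n} qubits -> {2**n:,} amplitudes")
# 10 qubits -> 1,024 amplitudes
# 50 qubits -> 1,125,899,906,842,624 amplitudes
```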
But there’s a catch: Quantum computers are unbelievably fragile.
They must be kept colder than outer space and completely shielded from the real world. A small vibration or even a tiny puff of air can make qubits collapse, causing the computer to make mistakes.
That’s why today’s machines look like gold chandeliers hanging inside giant freezers:

To get one reliable qubit, you need thousands of other qubits constantly checking and fixing errors. A truly useful quantum computer would need millions of qubits working in sync. Yet even the best machines have only around 500 today.
Even if we double the qubit count each year, we’re still more than a decade away from machines that can do any practical work. And even then, quantum won’t replace our computers. Quantum computers are slower and billions of times more expensive for everyday tasks.
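Here’s the napkin math behind that timeline, using the round numbers above (about 500 qubits today, a million needed):

```python
import math

qubits_today = 500          # roughly today's best machines
qubits_needed = 1_000_000   # rough threshold for useful fault-tolerant work

# Even if qubit counts doubled every single year (optimistic):
years = math.log2(qubits_needed / qubits_today)
print(f"~{years:.0f} years of annual doubling needed")   # ~11 years
```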
Quantum will shine in a few narrow, high-value jobs like designing new materials. Just as you wouldn’t buy a Ferrari F1 race car to drop your kids off at school, you won’t use a quantum computer to stream Netflix.
Be a rational optimist about quantum. Cheer the breakthroughs. But don’t plan your business, or your portfolio, around it doing anything useful this decade.
“Decryption will be quantum’s first killer app.”
That’s what Irish venture capitalist Tom McCarthy told me.
We’ve known for 30 years, ever since Peter Shor published his quantum factoring algorithm in 1994, that a powerful enough quantum computer could crack the encryption that protects everything from your bank login to your bitcoin wallet.
Once a million-qubit machine comes online, today’s encryption systems will crack like glass. There will be a scramble to change the locks on the entire internet.
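To see why, here’s a toy RSA example in Python (comically small primes, purely illustrative). RSA’s security rests on how hard it is to factor the public modulus, and Shor’s algorithm, run on a big enough fault-tolerant quantum computer, factors it efficiently:

```python
# Toy RSA: numbers this small are for illustration only.
p, q = 61, 53                 # secret primes
n = p * q                     # public modulus (3233); real keys use ~2048 bits
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)           # encrypt: msg^e mod n
assert pow(cipher, d, n) == msg   # decrypt: cipher^d mod n

# Anyone who can factor n back into p and q can recompute d and read
# every message. Classically that's intractable at real key sizes;
# a million-qubit machine running Shor's algorithm would change that.
```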
Betting market Kalshi puts the odds of a quantum computer breaking encryption by 2030 at 40%. I’ll take the under on that.
I like to bet on hungry founders with fresh ideas. Most of the big quantum players have been grinding away for 20-plus years. Their progress is real but incremental.
My money’s on the innovators building something entirely new. One that caught my eye is thermodynamic computing, a radical approach inspired by how nature processes information.
I recently caught up with Guillaume Verdon, the physicist-turned-founder of thermodynamic hardware startup Extropic. You can watch our chat here.
This chart terrifies pessimists.
Stocks and job openings used to move in tandem. But since ChatGPT launched in November 2022, the S&P 500 has ripped higher while job openings have fallen:

Cue the headlines: “AI is minting profits and killing jobs.”
Not so fast, cowboy.
Unemployment is still near historic lows. After a wild 2021, job openings are now cooling toward a more normal level. And thanks to AI, job openings in certain industries are rising.
In 2016, the godfather of AI, Geoff Hinton, said we should stop training radiologists. Machines, he said, would soon read X-rays, MRIs, and CT scans better than any human.
He was half right.
AI is now world-class at spotting patterns. One AI model, CheXNet, can detect pneumonia with greater accuracy than a panel of radiologists. It’s free, and hospitals use it to classify new scans in under a second!
But Hinton was wrong about radiologists being forced on the dole. A great new essay from our friends at Works in Progress shows that today, radiology is the second-highest-paid medical specialty in America. The average radiologist earns $520,000, almost 50% more than when Hinton predicted catastrophe.
AI doomer fatal flaw No. 1: They treat a job as one big lump of work.
Every job is a bundle of tasks. Radiologists don’t just stare at images all day. Most of their work is human-to-human: consulting with doctors, explaining results to patients, deciding next steps. AI helps take the mundane, repetitive tasks off their plates so they can do more important work.
Which brings us to fatal flaw No. 2: There isn’t a fixed amount of work.
When machines take over tasks, humans “level up.”
When technology makes something faster and cheaper, we don’t do less of it. We do more.
Over the past two decades new scanners made it quicker and cheaper to peek inside the body. The result? A 60% surge in scans since 2005. Radiologists are busier than ever.
Why don’t we all get a full-body scan as part of our annual check-up? Imagine how many cancers or heart issues we’d catch early.
The only reasons we don’t get more scans are cost and time—and technology is crushing both barriers.
What’s happening for radiologists will spread across industries. What AI pessimists don’t understand is…
AI will create a massive worker shortage.
That’s right: We won’t have enough people to fill all the new jobs AI is going to create.
Sounds crazy, but history has my back.
We’ve been introducing world-changing technologies for centuries. Every wave of innovation created more work, not less.
Talk to anyone building AI infrastructure today and they’ll tell you they can’t hire fast enough. Data centers need electricians, engineers and heavy equipment operators.
The US Department of Energy estimates we’ll need another 300,000 engineers just to build the power systems to feed the AIs.
And those are only the jobs we can see. Innovation always spawns work we can’t yet imagine.
MIT economist David Autor found that over 85% of job growth in the past 80 years came from occupations that didn’t previously exist.
If in 1990 I told you millions of people would make a living posting photos and videos on an app called Instagram, you’d have laughed me out of the room. Yet here we are.
Just like every transformative technology before it, AI will create tens of millions of jobs we can’t yet imagine. The real challenge won’t be mass unemployment. It’ll be finding enough skilled people to keep up.
Do you agree with this prediction?
“The development of machinery is increasing at such a rapid pace that within a few years the unemployed will arise in revolt to take by force the necessities of life which they are no longer able to earn by their own efforts.”
Sounds reasonable. Too bad it’s from 1933!
The accompanying poster asked: “30 million out of work in 1933, or $20,000 a year income for every family. Which?”

People always panic when a revolutionary new technology is unleashed.
Socrates warned writing would wreck our memory. Renaissance scholars feared the printing press would drown us in junk. People said radio, TV, video games and Google would rot our brains.
But the printing press didn’t lead to starving scribes. It created publishers, editors, reporters, librarians and mass literacy.
Excel didn’t kill accountants. It multiplied them by making planning and analysis profitable.
We’re spectacularly bad at predicting how technologies will play out. Today’s AI freak-out is the same movie with better special effects.
This time isn’t different.
AI is already designing new life-saving drugs…
And turning kids into straight-A students. We visited Alpha School last Monday…and wow. As a dad of three young kids, I can no longer send them to regular school with a clear conscience.
I think AI will be bigger than the internet. Unlike the web, AI can transform huge, slow-moving sectors like healthcare and education.
But if we fall for the lie that “AI is stealing all the jobs,” we risk strangling it with red tape before it delivers that progress.
We’ve seen this before. Nuclear energy was smothered by fear, and supersonic flight was grounded by politics. A few scary stories can set a world-changing technology back decades. Let’s make sure AI isn’t next.
That’s one reason we’re on this US tour. Matt Ridley’s new book, a follow-up to The Rational Optimist, argues the future is still going to be amazing, but that if we keep getting in our own way, it won’t be as amazing as it could be.
Progress isn’t inevitable. Civilizations can, and have, gone backwards.
One example Matt talked about this week is China’s Ming dynasty.
When the Ming dynasty took power in 1368, China was the most advanced society on Earth. The Chinese had invented paper money, printing, gunpowder, the compass and ocean-going ships.
But the Ming rulers shut the borders, centralized power, banned foreign trade, and handed innovation to imperial bureaucrats. Exploration stopped. Shipyards rotted. In a few generations, the empire that once built fleets to Africa was struggling to repair riverboats.
Let’s make sure that doesn’t happen to our world.
The next time someone insists AI is dangerous, tell them about AI saving Joseph Coates’s life, or Alpha School’s students learning at warp speed.
Technology has never been a job-eating monster. It’s a lever that multiplies what humans can achieve.
Remember, facts don’t change the world, stories do. Help us spread some rational optimism by forwarding this letter to a friend.
—Stephen McBride
