Quantum Entanglements

After twenty-five years of watching quantum computing promises, the engineering problems remain stubbornly real — but so does the lesson that human ingenuity can surprise you.

I have been following quantum computing since D-Wave emerged roughly twenty-five years ago, and for most of that time I have been a skeptic. Not because the physics is wrong — the physics is beautiful — but because the engineering challenges between theoretical possibility and working machine are so severe that they tend to get papered over in the hype cycle. We are told quantum computing will revolutionize drug discovery, cryptography, optimization. What we are not told, often enough, is why it has not done so already.

The problems are real, and they operate on at least two levels.

The Error Correction Wall

The first and most widely discussed obstacle is Quantum Error Correction, or QEC. Qubits are fragile. They decohere — lose their quantum state — almost immediately upon interacting with their environment. To perform useful computation, you need qubits that maintain coherence long enough to execute a meaningful algorithm. The proposed solution has been error correction: surround each “logical” qubit with a large number of redundant physical qubits that detect and correct errors as they occur.

This idea has been on the table since the mid-1990s, and the fundamental problem has not gone away. The overhead is punishing. Current estimates suggest you may need a thousand or more physical qubits to sustain a single reliable logical qubit. If a useful calculation requires a few thousand logical qubits, you are talking about millions of physical qubits — each one needing to be manufactured, controlled, and maintained at temperatures near absolute zero. The scaling math is brutal.
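That scaling math fits in a few lines. The sketch below just multiplies out the rough figures quoted above; both numbers are order-of-magnitude assumptions, not measurements:

```python
# Back-of-the-envelope qubit overhead, using the hedged estimates
# from the text (both values are assumptions, not hard data).
physical_per_logical = 1_000   # assumed error-correction overhead
logical_needed = 4_000         # assumed size of a useful computation

physical_needed = physical_per_logical * logical_needed
print(f"{physical_needed:,} physical qubits")  # prints "4,000,000 physical qubits"
```

Four million devices, each held near absolute zero, is the kind of number that makes the engineering problem concrete.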

The field has made incremental progress. Error rates have improved. New qubit architectures — superconducting circuits, trapped ions, topological approaches — each offer different tradeoff profiles. But none has cracked the fundamental problem: error correction at the scale required for computational advantage over classical machines in practically relevant tasks. The goalpost keeps moving because classical computing keeps improving too.

The Deeper Question: Entanglement vs. Superposition

The second problem is more subtle and, I think, more fundamental. It concerns the distinction between quantum superposition and quantum entanglement — two phenomena that are related but not identical, and whose conflation has muddied public understanding of what quantum computers actually do.

Superposition is relatively straightforward: a quantum system can exist in a combination of states simultaneously until measured. This is well-established physics, routinely demonstrated in laboratories. A single qubit in superposition is not controversial.

Entanglement is different. It is the correlation between two or more quantum systems such that the state of one instantaneously constrains the state of the other, regardless of distance. Einstein called it “spooky action at a distance,” and he meant it as a criticism. Entanglement is the property that gives quantum computing its theoretical advantage — the capacity of n entangled qubits to encode correlations across a state space of 2^n dimensions, something no classical register can represent compactly.
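The distinction between the two phenomena can be made concrete with a little linear algebra. Below is a minimal sketch in Python with NumPy (the names and toy states are my own, illustrative choices): a qubit in equal superposition is just a two-component vector, and two *independent* qubits form a product state. A Bell state, by contrast, cannot be factored into single-qubit pieces — that non-factorability is entanglement:

```python
import numpy as np

# Single-qubit superposition: (|0> + |1>) / sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Two independent qubits: a tensor product, separable by construction
product = np.kron(plus, plus)

# Bell state: (|00> + |11>) / sqrt(2) -- amplitude only on |00> and |11>
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def is_entangled(state):
    # A two-qubit pure state is entangled iff its 2x2 amplitude matrix
    # cannot be written as an outer product, i.e. has nonzero determinant.
    return not np.isclose(np.linalg.det(state.reshape(2, 2)), 0.0)

print(is_entangled(product))  # False -- superposition alone, no entanglement
print(is_entangled(bell))     # True  -- genuinely non-separable
```

Both states involve superposition; only one is entangled. That is the gap between what is routinely demonstrated and what large-scale quantum computation actually requires.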

Here is where my skepticism sharpens. Much of what current quantum computers demonstrate arises from wave superposition — interference patterns among qubits in close physical proximity. True entanglement that can be maintained at distance, sustained over time, and scaled across large numbers of qubits is a different animal. The constraints imposed by the Pauli exclusion principle on fermions — particles with half-integer spin, such as electrons — place hard limits on how many qubits can be simultaneously entangled in a useful way. Fermions cannot occupy identical quantum states, and this exclusion creates real boundaries on system scale.

About a decade ago, a company attempted to sidestep these constraints by building a quantum computer based on photons rather than electrons. Photons are bosons, not fermions — they do not obey the Pauli exclusion principle and can, in theory, occupy identical states without limit. The approach was elegant in concept but ran into its own set of engineering barriers: photon loss, the difficulty of creating reliable photon-photon interactions (photons, carrying no electric charge, do not naturally interact with one another), and the challenge of maintaining entanglement across a photonic circuit at scale.

The point is not that quantum computing is impossible. It is that the gap between demonstrating quantum effects in a controlled laboratory setting and building a machine that outperforms classical computers on real-world problems remains wide, and the reasons for that gap are rooted in physics, not just engineering.

The Humility Clause

And yet.

I include this caveat because I have learned the hard way that confident skepticism about technology can make a fool of you. For years, I was deeply skeptical of generalized artificial intelligence — not narrow AI, which has been useful for decades, but the prospect of machines that could reason flexibly across domains, generate coherent language, and exhibit something that looks, from the outside, very much like understanding.

Then transformer neural networks arrived. The innovations behind systems like ChatGPT and Claude did not come from the direction most AI researchers expected. They did not require solving the hard problems of symbolic reasoning or world-modeling that the field had struggled with for decades. Instead, they emerged from a combination of architectural insight (the attention mechanism), massive scale (training on essentially the entire written output of humanity), and compute resources that would have been unimaginable a generation earlier.
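For readers who have not seen it, the architectural insight mentioned above is small enough to sketch. This is a toy rendering of scaled dot-product attention — the core operation of the transformer — with made-up sizes and illustrative names, not any production implementation:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted
    mix of value rows, weighted by query-key similarity."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))  # 3 tokens, 4-dimensional vectors
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = attention(Q, K, V)
print(out.shape)  # prints "(3, 4)"
```

The striking thing, in hindsight, is how little machinery this is: a similarity score, a softmax, a weighted sum. Scale did the rest.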

I was wrong. Not about the difficulty of the problem — the problem is genuinely hard — but about the path. The solution came sideways, from a direction that was not on my map.

This experience has recalibrated my priors. Not toward optimism exactly, but toward humility about the boundaries of the possible. The specific engineering obstacles I have outlined for quantum computing are real, and I do not see a clear path through them today. But I have also watched humanity produce innovations that no one anticipated, using approaches that no one predicted, on timelines that seemed impossible until they weren’t.

The honest position, I think, is this: humanity can probably build anything it can clearly imagine, given enough time, enough resources, and enough people working on the problem. The question is never really whether, but when and how. And the “how” almost always turns out to be stranger than anyone guessed.