With potentially billions of dollars of government funding (and profit) at stake in the race to build functional quantum computers, there’s a massive hype machine to match.

Last week Senate Majority Whip Dick Durbin (D-Ill.) and Sen. Steve Daines (R-Mont.) introduced legislation that would pump $2.5 billion into the federal quantum ecosystem, which, if passed, would kick the hype machine into full gear. David Awschalom, director of the Chicago Quantum Exchange, said in a statement accompanying the bill, “Quantum technologies will revolutionize multiple industries… only if we reduce barriers to commercialization,” highlighting the private competition to take advantage of this public research infrastructure.

The humble qubit, the basic unit of information that powers a quantum computer, sits at the center of this race. Multimillion-dollar projects like the newly launched Illinois Quantum and Microelectronics Park come with promises of computers featuring millions of qubits; materials scientists salivate over new discoveries that could lead to massive numbers of qubits being encoded onto a single chip.

The thinking is that the more qubits a quantum system has, the better engineers can offset the inevitable failure of the extremely fragile quantum states on which quantum computers depend, through what scientists call “error correction.”

A growing number of quantum analysts and scientists, however, believe that bigger might not always be better when it comes to the number of qubits in a system, a shift that could change firms’ approach to the science just as they race to find practical applications that would justify the government’s massive investment.

“The race is still on” to find the smartest approach to building coherent quantum systems, said Sergio Gago Huerta, head of quantum at Moody’s and author of the Quantum Pirates newsletter. He emphasized to DFD that companies need to take a “qubit agnostic” approach to building quantum systems, and to “try to evaluate anything that research and industry bring, from a number of qubits perspective.”

A paper published in Nature in March of this year pushed back against “bigger is better” thinking about qubits, too: a team of IBM quantum scientists demonstrated a new form of error-correcting code they say could enable error correction at large scale with a smaller number of qubits.

That followed a breakthrough in December of last year, when quantum firm QuEra Computing demonstrated significant gains through a quantum error-correcting code, something leading theoretical computer scientist Scott Aaronson called “the top experimental quantum computing advance of 2023.”

Jay Gambetta, the IBM vice president who oversees its quantum program, said the company’s “road map” to a practical quantum computer is focused on getting a smaller number of qubits to work more efficiently.

“The number of qubits to achieve a fault-tolerant quantum computer can vary by a lot,” Gambetta told DFD. “So if you make a code that’s 90 percent more efficient, it uses many fewer qubits. The number of qubits is not so much the important thing as the quality of the code, and how many operations it can do.”

This push from some quantum scientists to squeeze the most out of their qubits doesn’t mean the qubit-maximalist path is wrong; it’s more a reflection of how untested the science still is, and how fervent the competition remains to find a reliable way of making quantum computers work.
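For a rough sense of the trade-off the researchers are arguing over, here is a minimal, purely classical sketch in Python, written for illustration only (it is not code from IBM, QuEra or anyone quoted here, and the error rates and qubit counts are made-up assumptions): it encodes one logical bit redundantly across several physical bits, flips each with some probability, and decodes by majority vote. More redundancy suppresses the logical error rate, which is the basic reason quantum roadmaps call for so many physical qubits per usable logical qubit.

```python
import random

def logical_error_rate(n_copies: int, p_flip: float, trials: int = 100_000) -> float:
    """Toy repetition-code analogy: encode logical 0 as n_copies physical 0s,
    flip each independently with probability p_flip, decode by majority vote,
    and return the fraction of trials where the logical bit is decoded wrong."""
    errors = 0
    for _ in range(trials):
        received = [1 if random.random() < p_flip else 0 for _ in range(n_copies)]
        if sum(received) > n_copies // 2:  # majority vote fails
            errors += 1
    return errors / trials

if __name__ == "__main__":
    p = 0.05  # assumed per-qubit error rate, chosen only for illustration
    for n in (1, 3, 5, 9):  # odd sizes so majority voting has no ties
        print(f"{n} physical bits per logical bit -> "
              f"logical error rate ~{logical_error_rate(n, p):.4f}")
```

Real quantum error correction is considerably harder than this classical toy, since it must also catch phase errors and detect faults without directly measuring (and so collapsing) the encoded state; that is why the physical-to-logical overhead is so large in standard schemes, and why codes that need fewer qubits per logical qubit, of the kind Gambetta describes, matter so much to the field.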
With recent awards like the $40.5 million federal tech hub grant given to a Colorado quantum consortium largely on the strength of its pitch for near-term commercial applications, scientists are mostly fixated on finding whatever number of qubits makes the systems work, however big or small it turns out to be.

Gambetta said it was “probably the most exciting time in error correction theory” since he entered the field in the late 1990s: “In the last year and a half, people have shown that with most of the assumptions from that early work, you can find ways around it, and the numbers are collapsing down in what is required.”