An interesting excerpt from the book
Quantum Computing, How It Works and How It Could Change the World
A hydrogen atom, for instance, has one positively charged proton and one electron, and is easy to simulate on a laptop – you could even work out its chemistry by hand. Helium, the next step along the periodic table, has two protons orbited by two negatively charged electrons – but it’s more challenging to simulate, because the electrons are entangled: the state of one is linked to the state of the other, which means both need to be calculated together.
By the time you get to thulium – which has 69 orbiting electrons, all entangled with each other – you’re far beyond the capability of classical computers. If you wrote down one of the possible states of thulium every second, it would take 20 trillion years – more than a thousand times the age of the universe. In his 2013 book Schrödinger’s Killer App, John Dowling calculates that to simulate thulium on a classical computer, you would need to buy up Intel’s entire worldwide production of chips for the next 1.5 million years, at a cost of some $600 trillion.
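The 20-trillion-year figure is easy to check with back-of-envelope arithmetic, assuming each of the 69 entangled electrons is treated as a simple two-level system, so the state space has 2^69 basis states. (That two-level model is an illustrative simplification; the real electronic structure is richer.)

```python
# Back-of-envelope check: one state written down per second.
n_electrons = 69
n_states = 2 ** n_electrons          # ~5.9e20 basis states
seconds_per_year = 365.25 * 24 * 3600
years = n_states / seconds_per_year
print(f"{n_states:.2e} states -> {years:.2e} years")   # ~1.9e13, i.e. ~19 trillion years

age_of_universe = 1.38e10            # years
print(f"about {years / age_of_universe:.0f}x the age of the universe")
```

The result lands at roughly 19 trillion years, around 1,300 times the age of the universe – consistent with the claim in the text.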
A much quicker alternative would be to simply measure the atom directly. “Classical computers seem to experience an exponential slowdown when put to simulating entangled quantum systems,” Dowling writes.
“Yet, that same entangled quantum system shows no exponential slowdown when simulating itself. The entangled quantum system acts like a computer that is exponentially more powerful than any classical computer.” Although we’ve known all the equations we need to simulate chemistry since the 1930s, we’ve never had the computing power available to do it. This means that often, when dealing with complex simulations that are intractable for classical computers, the best approach is still to simply try lots of different things in the real world and draw conclusions from observation and experiment.
“We can’t really predict how electrons are going to behave right now,” says Zapata’s Christopher Savoie. “If we can get into a world where we’re simulating it on a computer, we can be more predictive and do fewer actual laboratory experiments.” It is, he says, as if Airbus were still testing planes by building small-scale models and throwing them into the sky. “You cannot simulate chemical processes that you’re interested in,” says Google’s Sergio Boixo. “With a lot of the low-level materials science and engineering, you’re kind of blind.”
To crack these problems, and lots of others like them, chemists, biologists and physicists need to simulate nature – and, exactly as Richard Feynman predicted back in the 1980s, they need computers made from quantum components to help them. In a way, you can think of a quantum computer as a programmable molecule, says Boixo’s Google colleague Marissa Giustina. “It’s a system of many parts that behaves according to the rules of quantum mechanics, like a molecule. You see a path to connect from there to actually programming chemistry in some senses.”
IN 2010, ALÁN ASPURU-GUZIK – a professor of chemistry and computer science, and a co-founder of Zapata – teamed up with the quantum physicist Andrew White from the University of Queensland and others to run one of the first ever quantum chemistry simulations. They picked dihydrogen – a pretty easy molecule, as it goes, and certainly not something that would pose any problems to a classical computer, or even to a physicist with a pen and some paper.
Dihydrogen – which is just two hydrogen atoms joined together – was first analysed using the then-new science of quantum mechanics back in 1927. The aim, at this point, was simply to show that quantum computers could be used for this kind of calculation – a proof of concept. Their quantum simulation, which ran on a photon-based quantum device, was able to correctly calculate the strength of the bond between the hydrogen atoms, accurate to six parts in a million.
There are three ways in which quantum computers can help improve our understanding of reactions at the molecular level. The first approach involves building a specific computer to model the problem you’re trying to solve – physically recreating the molecule with the right number of qubits corresponding to its actual structure. This kind of machine would be simpler to build, but wouldn’t be a computer in the traditional sense – you wouldn’t be able to easily reprogram it to tackle different problems.
The second approach involves implementing algorithms that show how a system changes over time. You input the current state of the system, in the form of its wave function, along with an operator describing the system’s energy (known as its Hamiltonian, after the mathematician Sir William Rowan Hamilton), and watch it play out over time. These ‘Hamiltonian simulations’, as they’re generally known, have a huge array of potential uses, and could be particularly useful in understanding and predicting complex reactions involving molecules like thulium, where the electrons are highly correlated.
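The idea can be sketched in miniature for the simplest possible case: a single two-level system (one qubit) with a toy Hamiltonian proportional to the Pauli sigma-x matrix. This is an illustrative classical calculation, not the quantum algorithm itself – a quantum computer applies the evolution exp(-iHt) directly as gates, while a classical machine must track every amplitude explicitly, which is exactly what blows up exponentially for many entangled particles.

```python
import math

def evolve(psi, omega, t):
    """Apply exp(-i*H*t) with H = omega * sigma_x to the state psi = (a, b).

    For this 2x2 case the matrix exponential has a closed form:
    exp(-i*theta*sigma_x) = cos(theta)*I - i*sin(theta)*sigma_x.
    """
    theta = omega * t
    c, s = math.cos(theta), math.sin(theta)
    a, b = psi
    return (c * a - 1j * s * b, -1j * s * a + c * b)

psi0 = (1.0 + 0j, 0.0 + 0j)                 # start in |0>
psi_t = evolve(psi0, omega=1.0, t=math.pi / 2)
# After t = pi/2 the population has transferred entirely to |1>
print(abs(psi_t[0]) ** 2, abs(psi_t[1]) ** 2)
```

For one qubit this takes two complex numbers; for thulium’s 69 entangled electrons the analogous state vector would need 2^69 of them, which is the exponential wall described above.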
IN JANUARY 2020, researchers at IBM published an early glimpse of how quantum computers could be useful in the NISQ era. Working with the German car manufacturer Daimler on improving batteries for electric vehicles, they used a small-scale quantum computer to simulate the behaviour of three molecules containing lithium, which could be used in the next generation of lithium-sulphur batteries that promise to be more powerful and cheaper than today’s power cells. Instead of running a Hamiltonian simulation, which would have required many more qubits than the researchers had access to, they used variational quantum algorithms – the third way quantum computers can simulate nature, and likely to be the most useful in the short and medium term.
Variational quantum algorithms use a hybrid of quantum and classical computers to speed up calculations. In a blog post, Peter Johnson – lead research scientist and founder at Zapata – draws a comparison with the way Google Maps finds you the best route home in a reasonable amount of time. “The app does not search through all possible routes,” he writes. “Instead, it ends up searching through a well-motivated subset of routes and partial routes.” What Johnson is saying here is that, rather than going in completely blind, Google’s mapping algorithm uses shortcuts and rules of thumb to limit the size of the database it has to search through.
You might do something similar if you’re looking for a particular house number on an unfamiliar street, and you know that the odd and even numbers are on different sides of the road. Only checking one side of the road halves your search time, with minimal damage to the final result.
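The street-search analogy can be made concrete in a couple of lines, assuming a street of houses numbered 1 to 100 with odd and even numbers on opposite sides:

```python
houses = range(1, 101)
target = 57
# Check only the side whose parity matches the target's.
one_side = [h for h in houses if h % 2 == target % 2]
print(len(houses), len(one_side))   # 100 doors versus 50
```

The rule of thumb costs nothing to apply and halves the work, which is the same spirit in which variational methods restrict their search.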
Rather than trying to do an entire calculation using a quantum computer, variational quantum algorithms can use a limited number of qubits to make a best guess at the solution with the resources available, and then hand over the result to a classical computer which then decides whether to have another go. Splitting the quantum processing over smaller, independent steps means you can run calculations with fewer, noisier qubits than would otherwise be required.
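The quantum-classical loop described above can be sketched for the smallest possible case. This is a toy, assuming a single qubit with Hamiltonian H = sigma-z (whose ground-state energy is -1) and a one-parameter trial state; the ‘quantum’ step is faked here with the exact formula E(theta) = cos(theta), where real hardware would estimate it from repeated noisy measurements.

```python
import math

def energy(theta):
    # Stand-in for the quantum step: the expectation value
    # <psi(theta)| sigma_z |psi(theta)> for the trial state
    # |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>, which is cos(theta).
    return math.cos(theta)

# Classical step: a crude coordinate search that nudges theta towards
# lower energy, shrinking its step size as it goes.
theta, step = 0.3, 0.5
for _ in range(100):
    theta = min([theta - step, theta, theta + step], key=energy)
    step *= 0.95

print(theta, energy(theta))   # theta approaches pi, energy approaches -1
```

The structure is the point: the quantum device only ever answers the narrow question ‘what is the energy at these parameters?’, and the classical optimiser decides whether to have another go – which is what lets the quantum side stay small and noisy.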
In 2016, Zapata’s Alán Aspuru-Guzik collaborated with Google’s research team in Santa Barbara to simulate dihydrogen again, but this time using the search giant’s superconducting qubits, and an algorithm known as a ‘variational quantum eigensolver’.
Again, a quantum computer was able to predict the energy states and bond lengths of the molecule. The technique promises to be easier to scale up to more complex systems without a huge increase in error-correction overhead.
“With this method of the variational quantum eigensolver, one of the things you can do is find the minimum energy of your problem,” says IBM’s Heike Riel. “Typically you have an equation which describes your physical system, and one of the problems you have to solve is to find the minimum energy of this equation.” This method requires far fewer qubits than a full simulation, and has a broad range of applications, from optimisation problems like the travelling salesman, through to chemical reactions where you need to find the ground state (the lowest possible energy level of a system), and ones where an excited state (any other energy level) is of interest – as is the case with photosynthesis and solar energy.
As the number of qubits in early quantum computers increases, their creators are opening up access via the cloud. IBM has its IBM Q network, for instance, while Microsoft has integrated quantum devices into its Azure cloud-computing platform. By combining these platforms with quantum-inspired optimisation algorithms and variational quantum algorithms, researchers could start to see some early benefits of quantum computing in the fields of chemistry and biology within the next few years. In time, Google’s Sergio Boixo hopes that quantum computers will be able to tackle some of the existential crises facing our planet. “Climate change is an energy problem – energy is a physical, chemical process,” he says.
“Maybe if we build the tools that allow the simulations to be done, we can construct a new industrial revolution that will hopefully be a more efficient use of energy.” But eventually, the area where quantum computers might have the biggest impact is in quantum physics itself.
The Large Hadron Collider, the world’s largest particle accelerator, collects about 300 gigabytes of data a second as it smashes protons together to try and unlock the fundamental secrets of the universe. To analyse it requires huge amounts of computing power – right now it’s split across 170 data centres in 42 countries. Some scientists at CERN – the European Organisation for Nuclear Research – hope quantum computers could help speed up the analysis of data by enabling them to run more accurate simulations before conducting real-world tests. They’re starting to develop algorithms and models that will help them harness the power of quantum computers when the devices get good enough to help.
“These are our first steps in quantum computing, but even if we are coming relatively late into the game, we are bringing unique expertise in many fields,” Federico Carminati, a physicist at CERN, told WIRED in 2019. “We are experts in quantum mechanics, which is at the base of quantum computing.” The Large Hadron Collider’s landmark achievement so far is undoubtedly the 2012 discovery of the Higgs boson, an elementary particle whose existence helped confirm some long-held but evidence-light theories of quantum physics.
In 2018, physicists from Caltech and the University of Southern California re-analysed the data which led to that discovery using a quantum computer, and managed to replicate the results. It wasn’t quicker than a classical device, but it demonstrated that a quantum machine could be used for that type of problem. “One exciting possibility will be to perform very, very accurate simulations of quantum systems with a quantum computer – which in itself is a quantum system,” said Carminati. “Other groundbreaking opportunities will come from the blend of quantum computing and artificial intelligence to analyse big data – a very ambitious proposition at the moment, but central to our needs.”
In Computing with Quantum Cats, John Gribbin argues that this could be, if not the most important, then certainly the most profound application of quantum computers. “If we are ever to have a satisfactory ‘theory of everything’ incorporating both quantum theory and gravity,” he writes, “it is almost certain that it will only be found with the aid of quantum computers to simulate the behaviour of the universe.”