Is Quantum Computing Useful for Optimization?
Jean-François Puget
A recent paper by Cathy McGeoch and her doctoral student Cong Wang triggered a lot of interest because it compares a specialized computing device built by the D-Wave company to software solvers, including our own CPLEX. No surprise, the specialized hardware beats the software by far on the problems it is designed to solve. That in itself would not justify the hype we're observing.
What triggered interest is that D-Wave's computing device is claimed to be a quantum computer. This brings us to a quite interesting question: is quantum computing useful for optimization?
As a matter of fact, D-Wave isn't exactly a newcomer, and until recently its claims were met with skepticism by the quantum computing community.
Let's go back to the McGeoch paper. In that paper, experiments compare a D-Wave machine to CPLEX and two other software solvers. My take is that one set of experiments demonstrates some value of D-Wave: the first one, where the problems to be solved can be directly mapped to the specialized hardware (I'll come back to this later). I am not convinced by the other two experiments, as I don't see a clear speedup over heuristic approaches.
Before zooming in on the most impressive experiment, let me clear up any CPLEX-related concern. The D-Wave machine solves a special kind of optimization problem, called QUBO (Quadratic Unconstrained Binary Optimization). These are problems where all decision variables are binary, the objective function is quadratic, and there are no constraints. This is not the kind of problem CPLEX is sold for. Bottom line, I don't see D-Wave as a competitor to CPLEX. I should probably have kept this for the end of this post, given that I have just answered the title question and will start losing readers here.
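To make the problem class concrete, here is a minimal sketch (not from the paper) of what a QUBO is: pick values in {0, 1} for each variable so as to minimize a quadratic objective. The brute-force solver below is only for illustration; the instance and the coefficient encoding are my own toy choices.

```python
import itertools

def solve_qubo_brute_force(Q, n):
    """Enumerate all 2^n binary assignments and return a minimizer.

    Q maps index pairs (i, j) to coefficients of x_i * x_j; diagonal
    entries (i, i) encode the linear terms, since x_i^2 == x_i for
    binary variables. Only practical for tiny n, of course.
    """
    best_x, best_val = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        val = sum(c * bits[i] * bits[j] for (i, j), c in Q.items())
        if val < best_val:
            best_x, best_val = bits, val
    return best_x, best_val

# Toy instance: minimize -x0 - x1 + 2*x0*x1.
# The quadratic term penalizes setting both variables, so the optimum
# sets exactly one of them, with objective value -1.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
x, v = solve_qubo_brute_force(Q, 2)
```

Note there are no constraints anywhere: any requirement on the variables has to be folded into the objective as a penalty term, which is exactly why a QUBO with "just one constraint" falls outside what the D-Wave algorithm handles directly.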
Another argument makes me feel good about this comparison with D-Wave. As my colleague Mary Fenelon puts it, CPLEX is regularly beaten by special-purpose solvers. We consider it an honor that we're the one to beat. Our strength is that we can solve lots of different problems: if, say, you have a QUBO with just one constraint, well, we can handle that too, while the D-Wave algorithm would have to be extended. The paper acknowledges that CPLEX can prove optimality while D-Wave is just a heuristic. That proof of optimality is something a heuristic cannot offer.
Let me also point readers to the excellent reviews of the paper written by others.
I could stop here, given that others have already said very valuable things about D-Wave and the McGeoch paper. Let me just add something I have not read so far. I need to dive a bit into what D-Wave actually does before I can explain it.
In order to solve a problem, D-Wave needs it to be first translated into a QUBO. This QUBO in turn needs to be transformed into an Ising spin glass problem on their machine. To cut a long story short, it means that the QUBO problem must be mapped onto a special graph, called a chimera graph. The figure below shows the chimera graph for the machine used at USC, extracted from the paper cited above.
The QUBO problem must be mapped as follows: decision variables must be mapped to nodes in the chimera graph, and if the product x_i.x_j occurs in the QUBO objective, then variables x_i and x_j must be mapped to nodes linked by an edge in the chimera graph. This is called minor embedding in the McGeoch paper.
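To give a feel for the target topology, here is a sketch of the standard chimera construction as I understand it: an M x N grid of complete bipartite K_{L,L} unit cells, with each "vertical" qubit coupled to its counterpart in the cell below and each "horizontal" qubit to its counterpart in the cell to the right. The node indexing is my own convention, and a real device has some dead qubits that this idealized graph ignores.

```python
def chimera_edges(M, N, L=4):
    """Edge set of an idealized Chimera graph: an M x N grid of
    K_{L,L} bipartite unit cells plus inter-cell couplers.
    Nodes are tuples (row, col, side, k) with side 0 = vertical
    qubits, side 1 = horizontal qubits, and k in 0..L-1.
    """
    edges = set()
    for r in range(M):
        for c in range(N):
            # complete bipartite coupling inside the unit cell
            for i in range(L):
                for j in range(L):
                    edges.add(((r, c, 0, i), (r, c, 1, j)))
            # vertical qubits couple downward, horizontal qubits rightward
            for k in range(L):
                if r + 1 < M:
                    edges.add(((r, c, 0, k), (r + 1, c, 0, k)))
                if c + 1 < N:
                    edges.add(((r, c, 1, k), (r, c + 1, 1, k)))
    return edges

# The 512-qubit machine discussed below corresponds to an 8 x 8 grid
# of K_{4,4} cells: 8 * 8 * 8 = 512 nodes.
edges = chimera_edges(8, 8)
nodes = {v for e in edges for v in e}
```

The key observation for what follows is how sparse this graph is: each qubit has at most 6 neighbors, so a QUBO whose interaction graph is dense cannot be mapped onto it variable-for-variable.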
Minor embedding is what limits the scope of the D-Wave machine. A problem can be solved on it only if it can be minor embedded in its chimera graph. Granted, D-Wave has developed a hybrid software+hardware approach for cases where no minor embedding exists. As said above, I haven't been convinced so far of the value of this hybrid approach, but I will certainly have a deeper look at it. Anyway, the full speed of the machine is only available when there is a minor embedding into the chimera graph. In particular, it means that one cannot solve problems that have more variables than there are nodes in the chimera graph. Currently, the largest D-Wave machine only has 512 nodes (qubits) in its chimera graph. Note that the size limit is increasing regularly: D-Wave has been on a trend where the number of qubits (nodes in the chimera graph) doubles every year.
The requirement to have a minor embedding has been well documented. What I haven't seen so far is the acknowledgement that computing a minor embedding may be an NP-complete problem. It means that even if the hardware solved a problem in polynomial time, there is potentially an exponential preparation step.
Indeed, minor embedding is an instance of the subgraph isomorphism problem, which is known to be NP-complete.
That was the argument I found missing in all discussions so far. To be fair, the minor embedding needs to be computed only once for all problems that have the same QUBO graph. This may happen when you need to solve the same problem with different data. Note also that finding a minor embedding can be easy in some cases. The actual difficulty needs to be assessed QUBO graph by QUBO graph. When finding a minor embedding is easy, then using D-Wave may make sense.
Let me conclude here. I think we can safely say that there is promise when a problem can easily be mapped into the representation expected by the D-Wave hardware. The McGeoch paper shows evidence of it. The question remains of how large that class of problems is. Can interesting (I mean interesting for businesses) problems be mapped onto the QUBO problems that D-Wave accepts as input? This is the real question D-Wave supporters should be looking at.
Update on May 17
Cathy McGeoch pointed me to a paper by Vicky Choi that discusses minor embedding. In fact, minor embedding is not exactly subgraph isomorphism. In minor embedding, a variable can be mapped to a set of nodes, as long as these nodes form a tree. A minor embedding means that the embedded graph can be obtained from the embedding graph by collapsing some nodes that share an edge. The figure below shows a minor embedding.
The complexity of minor embedding is not known, as far as I know. V. Choi mentions algorithms that are polynomial in the size of the embedding graph, but exponential in the size of the embedded graph. It means that finding a minor embedding using these algorithms is exponential in the size of the QUBO that we want to solve. This is worst-case complexity, of course, and as discussed above, minor embedding may be easy in practice.
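The definition above (each variable maps to a connected set of nodes, a "chain", with every QUBO interaction realized by some edge between the two chains) is easy to *verify* even though *finding* an embedding may be hard. Here is a sketch of such a verifier, with my own toy example showing how a chain lets a triangle fit into a bipartite cell where no node-for-node placement exists:

```python
def is_minor_embedding(pattern_edges, host_edges, chains):
    """Check that `chains` (variable -> set of host nodes) is a valid
    minor embedding: chains are pairwise disjoint, each chain is
    connected in the host graph, and every pattern edge is realized by
    at least one host edge between the two chains. A verifier only --
    it does not search for an embedding.
    """
    host = {frozenset(e) for e in host_edges}

    def connected(nodes):
        nodes = set(nodes)
        seen, todo = set(), [next(iter(nodes))]
        while todo:
            v = todo.pop()
            if v in seen:
                continue
            seen.add(v)
            todo += [u for u in nodes if frozenset((u, v)) in host]
        return seen == nodes

    used = [v for ch in chains.values() for v in ch]
    if len(used) != len(set(used)):
        return False  # two chains overlap
    if not all(connected(ch) for ch in chains.values()):
        return False  # a chain is not connected in the host graph
    return all(
        any(frozenset((u, v)) in host for u in chains[a] for v in chains[b])
        for a, b in pattern_edges
    )

# Bipartite K_{2,2} host: a triangle has no direct placement, but
# letting variable "a" occupy the 2-node chain {0, 2} makes it fit.
cell = [(0, 2), (0, 3), (1, 2), (1, 3)]
triangle = [("a", "b"), ("b", "c"), ("a", "c")]
chains = {"a": {0, 2}, "b": {3}, "c": {1}}
```

On real hardware a chain also costs something: the nodes in a chain must be forced to agree, which consumes qubits and coupling strength, so short chains are much preferred.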
This discussion makes me think that all these experiments do not take into account the time to compute a minor embedding. Maybe they should.
Cathy McGeoch also points to a shortcut in my argument. Even if minor embedding were subgraph isomorphism, it would not mean that the problem is NP-complete for chimera graphs. I completely agree.
Update on June 17
New QAP test results are reported in the D-W
New QUBO test results are reported in the D-W
Update on July 22
New Weighted Max2SAT problems are reported in the D-Wa