This entry was posted
on Thursday, November 15th, 2007 at 7:37 am and is filed under Nerd Interest.
Matteo, what I have to say about D-Wave I’ve already said at length. Until they actually provide actual evidence for an actual speedup over classical computers, I hope to spend my time otherwise than remonstrating with them for details and responding to their latest irrelevant press release.
I share Scott’s critical opinion, but if there is really nothing behind the hype, then D-Wave is just a big scam, and I’m not inclined to believe that their investors are so gullible. Anyway, if somebody attended their presentation at SC07, I would be grateful for your informed opinion…
Uh-oh, it looks like your super-secret NSA deal to subvert foreign QC research has been leaked!
From the D-Wave blog:
10. Borgo – November 10, 2007
This Aaronson guy is a total jackass. On top of which he doesn’t seem to be a real scientist either, take a look at his CV he has maybe 5 total publications in real journals and ZERO physics papers… my theory is that he is being paid by the nsa to discredit foreign qc efforts… and that’s how he got in at mit…sure as hell not based on competence or ethics
Do I understand right that D-Wave will have the following business model: customers give them problems to solve, and they return an approximate solution claiming that it was obtained by their quantum stuff?
Report from SC: the D-Wave showing was the same as all their previous public appearances – demonstrate how some NP-hard problem has applicability to an interesting “real world” problem (in this case, image feature matching, using maximum common subgraph), and “solve” the problem using their latest device.
Of course, as usual, there were no details on the performance of the chip – most importantly, the coherence times of the qubits. However, some of the D-Wave staff revealed the following during the show:
1.) It would appear that even dwave doesn’t have any idea if their qubits are coherent or not for the length of the computations. It’s not that they won’t release their data on it, it’s that they themselves don’t seem to know for sure. (This is just the impression I got; they didn’t explicitly say this.)
2.) The technique they use to “solve” NP-hard optimization problems basically follows the Farhi quantum adiabatic computing approach, except they do not obey the adiabatic limit. That is to say, they change the system Hamiltonian faster than the adiabatic limit allows, and hence they do not end up in the ground state of the final Hamiltonian.
What is interesting is that D-Wave are guessing that this approach will still yield useful results. They have no actual theoretical models predicting that the outcomes of such experiments will be useful (i.e. that the final state will represent an answer to the optimization problem that is not the global optimum, but is very close to it). D-Wave are hoping that this approach will work, and instead of trying to build models to see whether it does, they have convinced investors to let them build a machine that they claim implements this so-called “quantum annealing” and see if their algorithmic technique works in practice.
Therefore D-Wave’s success relies on two unknowns: a.) does quantum annealing actually give results sufficiently near the global optimum for optimization problems that the results are practically useful? b.) can D-Wave actually build a machine that implements quantum annealing, or are their superconducting qubits only coherent and well-controlled for a short portion of the computation?
3.) The third interesting comment from D-Wave was that they don’t seem too scared about the possibility that what they end up building isn’t even a quantum annealer (let alone a quantum computer, which, from the above, we know is not something they’re even trying to build), but is actually a classical annealing machine. They claim that because of the low temperature and the high frequencies they can run the machine at, as a classical annealer their machine will still beat the best classical computing algorithms. I am skeptical of this: if a classical annealing machine easily outpaces the best classical optimization algorithms, then why hasn’t anyone built one yet? (I think D-Wave’s answer would be that superconducting JJ technology has only relatively recently become available in such a way that you can easily fabricate many JJs on a chip, but I’m not convinced by that argument.)
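For concreteness, the classical baseline mentioned in point 3 is easy to sketch. Below is a minimal simulated-annealing solver for a toy Ising problem – an illustration of the classical annealing D-Wave claims to beat, not their hardware or code; the graph, couplings, and cooling schedule are all made up for the example.

```python
import math
import random

def ising_energy(spins, J, h):
    """Energy of an Ising configuration: E = -sum_ij J[i,j] s_i s_j - sum_i h[i] s_i."""
    e = -sum(h[i] * s for i, s in enumerate(spins))
    for (i, j), coupling in J.items():
        e -= coupling * spins[i] * spins[j]
    return e

def anneal(J, h, n, steps=20000, t_start=2.0, t_end=0.01, seed=0):
    """Classical simulated annealing on an n-spin Ising model."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    energy = ising_energy(spins, J, h)
    for step in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        i = rng.randrange(n)
        spins[i] = -spins[i]            # propose a single-spin flip
        new_energy = ising_energy(spins, J, h)
        # Metropolis rule: always accept downhill/equal moves, sometimes uphill.
        if new_energy <= energy or rng.random() < math.exp((energy - new_energy) / t):
            energy = new_energy
        else:
            spins[i] = -spins[i]        # reject: undo the flip
    return spins, energy

# Toy instance: a ferromagnetic chain, whose ground state is all spins aligned.
n = 8
J = {(i, i + 1): 1.0 for i in range(n - 1)}
h = [0.0] * n
spins, energy = anneal(J, h, n)
print(spins, energy)
```

Quantum annealing replaces the thermal fluctuations in the acceptance step with quantum tunneling induced by a transverse-field term in the Hamiltonian; whether that helps on practically interesting instances is exactly the open question raised above.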
At the risk of asking for sincere advice, what are the social norms on STOC submissions? Do folks wait to receive a review before (e.g.) posting their submissions to the arXiv? Also, why am I asking?
Well, our UW Quantum System Engineering (QSE) group sent in a submission … we had some results on the overlap between geometry, cryptography, and simulation … we saw these results would have fit nicely into the 2007 STOC’s first-day sessions on geometry and cryptography.
Plus, Victoria BC is such a beautiful place to visit!
Sorry to bring up D-Wave again, but the news and blogs are saying that not only have they managed to make a 28-qubit QC, but they can also perform successful facial recognition with it. After solving the biggest problem in computational complexity in February, and now one of the biggest problems in pattern recognition, are they going to announce the successful completion of a positronic brain next?
No seriously, are D-wave:
1- A true group of geniuses, leagues ahead of all other computer science and physics researchers, and we should all pay attention?
2- A group of highly self-delusional computer scientists and physicists, passionate enough to convince the more adventurous or gullible investors?
3- A highly elaborate technological scam, whose perpetrators already have their escape to Mexico planned for sometime next year?
I really don’t get it…
In continuation of my previous comment, check: http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=202805164
The funnest part was this passage:
“We have been collaborating with Hartmut Neven, founder of the image-recognition company, Neven Vision, just after Google acquired it last year,” said Rose. “Neven’s original algorithms had to make many compromises on how it did things–since ordinary computers can’t do things the way the brain does. But we believe that our quantum computer algorithms are not all that different from the way the brain solves image-matching problems, so we were able to simplify Neven’s algorithms and get superior results.”
I wonder what Penrose thinks…
We don’t even know yet how the SudoQ game worked (a 16-bit register vs. 6,670,903,752,021,072,936,960 possible solutions). Did anybody attend D-Wave’s SC07 presentation who can tell us more about the image recognition application?..
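For scale, the mismatch quoted above is easy to check: a 16-bit register distinguishes only 2^16 values, while the figure cited is the known count of completed 9×9 Sudoku grids.

```python
register_states = 2 ** 16                      # distinct values a 16-bit register can hold
sudoku_grids = 6_670_903_752_021_072_936_960   # completed 9x9 Sudoku grids (the figure quoted above)

print(register_states)                         # 65536
print(sudoku_grids // register_states)         # grids per representable register value
```

So the register cannot come close to indexing the solution space; at best it encodes the answer to one specific, heavily constrained puzzle instance.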
Alex, my guess is that it’s a mix of options 2 and 3. More specifically, 2 for their technical staff and 3 for their marketing BS’ers. But I would be happy to be wrong – being open-minded (but not gullible) never hurts.
It was 16 bits of output. I said that many problems only have 6 total *useful* bits of output after the 16 bits have been read and processed (i.e. the 6 bits that represent the Max. Independent Set of a 6-node graph, as Geordie said on his blog in March).
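The encoding described here – one output bit per vertex, so a 6-node instance needs only 6 useful bits – can be checked by brute force on an ordinary computer. The graph below is hypothetical, chosen for illustration only, not the instance Geordie described.

```python
from itertools import combinations

# Hypothetical 6-node graph: a 6-cycle plus one chord (illustration only).
n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]

def is_independent(subset):
    """A vertex set is independent if no edge joins two of its members."""
    s = set(subset)
    return not any(u in s and v in s for u, v in edges)

# Brute force: only 2^6 = 64 subsets, so any candidate answer fits in 6 bits.
best = max(
    (subset for k in range(n + 1) for subset in combinations(range(n), k)
     if is_independent(subset)),
    key=len,
)
bits = sum(1 << v for v in best)  # 6-bit encoding: bit v is set iff vertex v is in the set
print(best, format(bits, "06b"))
```

The point of the exercise is only the encoding: whatever the device outputs, the answer to a 6-node Max Independent Set instance occupies 6 bits, regardless of how many bits the register physically reads out.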