Assume a ball is hidden in a cabinet with a million drawers. On average it’ll take you 500,000 peeks to find the ball. A quantum computer can perform the same search while looking into only 1,000 drawers.

Because $O(\sqrt{n})$ means “on the order of $\sqrt{n}$ operations” (not exactly $\sqrt{n}$), and $\sqrt{1{,}000{,}000} = 1{,}000$.

While quantum mechanics has been foundational to theories of physics for about a hundred years, the picture of reality it paints remains enigmatic. Google gives an example: “Assume a ball is hidden in a cabinet with a million drawers. On average it’ll take you 500,000 peeks to find the ball. A quantum computer can perform the same search while looking into only 1,000 drawers.”
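The arithmetic in the example is easy to check: a classical search over n drawers peeks into (n + 1)/2 of them on average, while Grover’s quantum search needs on the order of √n queries. A quick sanity check (the (π/4)·√n query count is the standard Grover bound, not a figure from the quoted article):

```python
import math

n = 1_000_000  # drawers

# Classical linear search: each drawer is equally likely,
# so the expected number of peeks is (n + 1) / 2.
classical_expected = (n + 1) / 2   # 500000.5 — the article's "500,000"

# Grover's algorithm needs on the order of sqrt(n) oracle queries;
# the usual bound is about (pi / 4) * sqrt(n) of them.
order_sqrt_n = math.isqrt(n)                            # 1000
grover_queries = math.floor(math.pi / 4 * math.sqrt(n)) # 785

print(classical_expected, order_sqrt_n, grover_queries)
```

The “1,000 drawers” in the quote is just √1,000,000; the hidden constant makes the actual query count a bit smaller.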

http://www.newscientist.com/article/dn18272-google-demonstrates-quantum-computer-image-search.html

http://www.bloo.us/graphics/qualex-ms-flaw-add-node/render/

This can be repeated multiple times with the same effect.

Adding a new vertex connected to every other vertex doesn’t change anything – Qualex-MS catches such cases at the preprocessing stage: it pre-selects vertices that must be in the maximum clique because they are non-adjacent only to a set of vertices whose total weight is not greater than the vertex’s own weight. In the unweighted case this means being connected to at least $n-1$ vertices.
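The unweighted version of that preprocessing rule can be sketched as follows (this is an illustration of the idea, not Qualex-MS’s actual code): a vertex adjacent to all other vertices must belong to some maximum clique, so it can be peeled off and the search continued on the remaining graph.

```python
def peel_universal(adj):
    """Preprocessing sketch: a vertex adjacent to all n-1 others must lie
    in some maximum clique, so take it and repeat on the rest.
    `adj` maps node -> set of neighbours."""
    adj = {u: set(nbrs) for u, nbrs in adj.items()}  # work on a copy
    forced = []
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) == len(adj) - 1:  # universal in current graph
                forced.append(u)
                del adj[u]
                for nbrs in adj.values():
                    nbrs.discard(u)
                changed = True
                break
    return forced, adj

# A 5-cycle plus a vertex 5 connected to everything:
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
g = {u: nbrs | {5} for u, nbrs in c5.items()}
g[5] = set(c5)

forced, rest = peel_universal(g)
print(forced)        # [5] — the universal vertex is pre-selected
print(sorted(rest))  # [0, 1, 2, 3, 4] — the cycle remains
```

This is exactly why grafting a universal vertex onto a hard instance doesn’t make it harder: preprocessing strips it off before the heuristic ever runs.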

http://www.bloo.us/graphics/qualex-ms-flaw-minimal/render/

For different random orderings of this graph, F, Qualex-MS succeeds (at least for those orderings that I tried).

I also tried merging F with non-complete graphs of 5 nodes and the majority of the resulting graphs still cause Qualex-MS to return a less-than-maximum clique size.

Finally, by adding a node N to F such that N is connected to all nodes in F, we obtain a graph that still causes Qualex-MS to return a less-than-maximum clique size, this time larger by 1. This process can be repeated with the same result (I tried it 100 times).

I’ve seen these two behaviors in my more elementary algorithms as well. I’m guessing that given a graph G that “breaks” a non-randomized algorithm M, we can generate an arbitrary number of graphs that also break M, by successively:

1. Merging with any graph not containing a k-clique.

2. Introducing a new node N such that N has maximum degree.
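Both constructions can be sketched on a plain adjacency-dict representation (the brute-force `clique_number` checker here is mine, for verification only – it is not the algorithm under test). A disjoint-union merge with a graph containing no larger clique cannot raise the clique number, and a universal vertex raises it by exactly one, which is why one might expect a wrong heuristic answer to persist under either operation:

```python
from itertools import combinations

def clique_number(adj):
    """Brute-force maximum clique size; fine for tiny graphs only."""
    nodes = list(adj)
    for k in range(len(nodes), 0, -1):
        for s in combinations(nodes, k):
            if all(v in adj[u] for u, v in combinations(s, 2)):
                return k
    return 0

def disjoint_union(a, b):
    """Merge two graphs with no edges between them; nodes of `b` are
    relabelled past those of `a` to avoid clashes."""
    shift = max(a, default=-1) + 1
    out = {u: set(nbrs) for u, nbrs in a.items()}
    for u, nbrs in b.items():
        out[u + shift] = {v + shift for v in nbrs}
    return out

def add_universal_vertex(adj):
    """Add a new vertex adjacent to every existing vertex;
    the clique number grows by exactly one."""
    new = max(adj, default=-1) + 1
    out = {u: set(nbrs) | {new} for u, nbrs in adj.items()}
    out[new] = set(adj)
    return out

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # clique number 3
path2 = {0: {1}, 1: {0}}                       # triangle-free

merged = disjoint_union(triangle, path2)
print(clique_number(merged))                       # still 3
print(clique_number(add_universal_vertex(merged))) # 4
```

In a real experiment the hypothetical counterexample F would take the place of `triangle`, and the heuristic’s answer (rather than `clique_number`) would be compared against the known optimum.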

Does this mean that if any non-randomized, polynomial-time algorithm M has at least one flaw (returns an incorrect answer for some graph F), then M has a >99% error rate on instances of size > E, for large enough E?
