Ok, for some context on why finding just one path is important (rather than summing exponentially many):

Let’s try to prove the Sipser–Lautemann theorem for BPP, and its analog for BQP, using quantum circuits. For BPP you can use T&H (Toffoli and Hadamard) gates, just by summing amplitudes rather than subtracting them. Start with a BPP probability distribution in NP^#P, amplify the amplitudes, and add a number of ancillary bits; then use controlled logic gates conditioned on the original random bits to scramble the ancillary bits, flattening the overall probability distribution to the point that each output string has at most one satisfying assignment. In effect, a #SAT instance is padded with extra clauses until it becomes a SAT instance, and BPP \in NP^NP.
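A toy sketch of that flattening step (hypothetical Python, with computational paths represented explicitly rather than as a circuit): tagging each path's output with ancilla bits determined by the path's own random bits makes every padded string the endpoint of at most one path, so counting accepting paths collapses to deciding whether any path exists.

```python
from collections import Counter

def flatten(paths):
    """Pad each path's output with ancilla bits that are a function of
    the path's random bits (here, simply the random bits themselves),
    so each padded output string is reached by at most one path."""
    padded = Counter()
    for rand_bits, output in paths:
        padded[output + rand_bits] += 1  # ancillas scrambled per path
    return padded

# Four computational paths: (random bits, output bit).
paths = [("00", "0"), ("01", "1"), ("10", "1"), ("11", "0")]
flat = flatten(paths)
assert all(count == 1 for count in flat.values())  # at most 1 path per string
```

Before padding, output "1" would have count 2 (a #SAT-style quantity); after padding, every string has count 0 or 1, which is exactly the "at most one satisfying assignment" condition above.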

When we try to extend the technique to BQP, we have to deal with the Born rule and with the cancellation of amplitudes, problems which manifest themselves as round-off errors. If one state has a probability amplitude of 100 and another has an amplitude of 1, the first should be 10,000 times more probable. But if the distribution is flattened, the first state will only be 100 times more probable: the second state should end up with an amplitude of 0.01, but the amplitudes are rounded to integers. The Born rule problem can be dealt with by amplifying the amplitudes. The other round-off problem is that amplitudes which should cancel can fail to do so. This can be counteracted by flattening the amplitude distribution less (to a maximum height and depth of k) and by defining new sub-distributions by fixing the outcomes of Hadamards. I am not sure how many such distributions are needed to make the error sufficiently small, but as long as the number is not superpolynomial it shouldn’t be a problem.
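As a sanity check on the arithmetic above (a toy calculation under one reading of the flattening, where the amplitude-100 state gets spread over 100 unit-amplitude strings; not part of the construction itself):

```python
# Toy arithmetic for the Born-rule round-off issue described above.
amp_a, amp_b = 100, 1

# Born rule on the raw amplitudes: probabilities go as the squares,
# so state A should be 100^2 / 1^2 = 10,000 times more probable.
raw_ratio = amp_a ** 2 / amp_b ** 2

# After flattening, amplitude 100 is spread over 100 strings of unit
# amplitude each, so the total probability mass now goes as the
# amplitudes themselves: A is only 100 times more probable.
flat_ratio = (amp_a * 1 ** 2) / (amp_b * 1 ** 2)

assert raw_ratio == 10_000 and flat_ratio == 100
```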

If you start with a BQP probability amplitude distribution in NP^GapP, amplify the amplitudes further to deal with the Born rule issues, and use ancillary qubits and controlled logic gates to flatten it, you get a non-zero-amplitude condition of the form (A \land \lnot B) \lor (\lnot A \land \lnot B) (where A means a positive coefficient and B a negative coefficient), which is a SAT instance. This doesn’t work properly because of the pixellation issues, so instead you can define 2k new distributions by fixing both the output coefficient and the outcomes of Hadamards in the circuit. (You need to use O(log k) Hadamards.) Then define a ‘gadget’ Boolean circuit which takes the square of the difference between the positive and negative amplitudes and simulates the Born rule by concatenating a 4k^2-size alphabet to the output strings and turning on the number of strings called for by the Born rule. The whole circuit can then be written as a SAT formula, so BQP \in NP^NP.
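A minimal sketch of the gadget's counting behaviour (hypothetical Python; `pos` and `neg` stand for the positive and negative path counts feeding one output string, each bounded by k as above):

```python
def born_gadget(pos, neg, k):
    """Turn on (pos - neg)^2 symbols out of a 4*k*k alphabet, so the
    number of satisfying assignments attached to this output string
    matches the Born-rule weight of its net amplitude."""
    assert 0 <= pos <= k and 0 <= neg <= k
    weight = (pos - neg) ** 2      # Born-rule probability weight
    assert weight <= 4 * k * k     # always fits inside the alphabet
    return weight

assert born_gadget(3, 1, 3) == 4  # net amplitude 2 -> 4 symbols on
assert born_gadget(2, 2, 2) == 0  # cancelling amplitudes -> none on
```

The 4k^2 alphabet size is what makes the squared difference always fit: with both counts bounded by k, (pos − neg)^2 can never exceed 4k^2.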

I’m not sure whether you can reliably flatten the amplitude distribution down to the range in which this construction works.

(Perhaps the question could be clarified if one could determine whether circuits whose outputs are limited to (1) one positive coefficient and no negative coefficients, (2) one negative coefficient and no positive coefficients, or (3) one positive and one negative coefficient which cancel each other out are or aren’t BQP-complete.) If they are, we would know that every BQP problem could be flattened, and we could use (A \land \lnot B) \lor (\lnot A \land \lnot B) without worrying about Born rule pixellation violations.

Anyway, the motivational punchline: it seems we’ve expressed arbitrary BQP problems as combinations of amplitude distributions. If the pathfinding problem is easy, those distributions are in BPP, and by the closure properties of BPP (and because everything is reversible), shouldn’t we have BPP = BQP?

Legendre conjectured a statement which implies the prime number theorem in 1798. Shortly before that (I don’t have the exact year), he proved that the primes have density zero. That’s about 150 years after Fermat. That said, if one looks at the sort of things Fermat, Descartes and others were proving and conjecturing in that period, it certainly looks like they had an intuition that the primes become more sparse as one goes up. Between those two periods one has Euler doing a lot of work that, from a modern standpoint, looks like an attempt to get a handle on the density of the primes (see for example his second proof that there are infinitely many primes).

One of the difficulties is that prior to the modern era, explicitly conjecturing things wasn’t that common. Fermat and Descartes are two of the first to do so, so for others one often needs to infer what they were conjecturing from what they were actually proving. One fun example: there’s some disagreement about when the question of whether any odd perfect numbers exist dates from. Descartes mentions it explicitly, but earlier writers were proving things that certainly look like attempts at special cases, such as that no perfect number is a power of a prime.

But to be less pedantic: almost every question similar to the one you asked is believed to be exponentially hard for classical computers (and in most cases for quantum computers as well). In many cases the problems are actually known to be #P-hard.

Also, he probably had a chart of prime numbers, but I’m guessing the chart wasn’t that big. Maybe the first thousand primes at most.
