(And I totally agree with you about open-access mega-journals such as *Science Advances* and *PLoS ONE*: their results should be taken with a grain of salt. Having said that, I’ve been a reviewer for *Science Advances* and was not pushed to return my reviews on any shorter timeline than at other journals. Moreover, the paper I reviewed most recently was not accepted. So it’s not as if everything makes it through.)

At the risk of annoying you further with another URL, I thought you might also be interested in this recent result from May 2015. There’s no P=NP claim or anything like that, but it’s another example of how analog computers can, in principle, emulate some aspects of quantum computing that arise purely from the properties of waves.

**Popular news:** “Quantum computer emulated by a classical system”

http://phys.org/news/2015-05-quantum-emulated-classical.html

**Primary source** (caveat: open-access journal):

“Signal-based classical emulation of a universal quantum computer”

by La Cour and Ott

*New Journal of Physics* (2015), **17**:053017

http://dx.doi.org/10.1088/1367-2630/17/5/053017

Abstract:


In this paper we present a novel approach to emulating a universal quantum computer with a classical system, one that uses a signal of bounded duration and amplitude to represent an arbitrary quantum state. The signal may be of any modality (e.g., acoustic, electromagnetic, etc), but we focus our discussion here on electronic signals. Unitary gate operations are performed using analog electronic circuit devices, such as four-quadrant multipliers, operational amplifiers, and analog filters, although non-unitary operations may be performed as well. In this manner, the Hilbert space structure of the quantum state, as well as a universal set of gate operations, may be fully emulated classically. The required bandwidth scales exponentially with the number of qubits, however, thereby limiting the scalability of the approach, but the intrinsic parallelism, ease of construction, and classical robustness to decoherence may nevertheless lead to capabilities and efficiencies rivaling that of current high performance computers.
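To make the “bandwidth scales exponentially” point concrete, here is a minimal toy sketch in Python (my illustration, not the authors’ analog circuitry): an n-qubit state has 2^n complex amplitudes, and a signal encoding of this kind gives each amplitude its own carrier frequency, so the spectrum you must support grows like 2^n.

```python
import numpy as np

# Toy signal encoding of an n-qubit state (an illustration, not
# La Cour & Ott's actual scheme): each of the 2^n amplitudes rides
# on its own carrier frequency, so the required bandwidth is ~2^n.

n = 3                                   # number of emulated qubits
dim = 2 ** n                            # 2^n complex amplitudes
rng = np.random.default_rng(0)

# A random normalized "quantum state".
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
amps /= np.linalg.norm(amps)

# Encode: amplitude k modulates carrier frequency k + 1 (arbitrary units).
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
signal = sum(a * np.exp(2j * np.pi * (k + 1) * t)
             for k, a in enumerate(amps))

# Decode amplitude k by correlating the signal against its carrier.
k = 5
recovered = np.mean(signal * np.exp(-2j * np.pi * (k + 1) * t))
print(np.allclose(recovered, amps[k]))  # True, up to numerical error
```

Adding one qubit doubles the number of carriers that must stay spectrally separated, which is exactly the exponential scaling the abstract concedes.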

http://arxiv.org/pdf/1412.0650.pdf

by Igor L. Markov.

I took a look at the link you provided, and I don’t see that *anything* substantive has changed. In particular, I’m still missing any explanation of how an NP-complete problem gets solved, in any proposal of this kind, without an exponential blowup in the physical resources, ultimately violating the Bekenstein bound.

If someone wants to try to explain it to me, they’re welcome to do it right here. But please answer directly, rather than providing yet another link that, when followed, turns out not to address the question (or where you can’t tell whether it does at all). I don’t want a URL; I want an explanation!
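For reference, one standard form of the bound: a system confined to a sphere of radius *R* with total energy *E* has entropy at most

```latex
S \le \frac{2\pi k R E}{\hbar c},
\qquad\text{and hence at most}\qquad
N \le e^{S/k}
\quad\text{reliably distinguishable states.}
```

So representing 2^n distinguishable configurations requires the product *RE* to grow at least linearly with *n*: any scheme that claims to pack exponentially many reliably distinguishable analog states into bounded space and energy collides with this.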

“For example in (8), two of the authors (F.T. and M.D.) proposed a different way to encode a quadratic information overhead in a network of memristors that is not subject to this energy bound.”

which refers to:

F. L. Traversa and M. Di Ventra, *IEEE Transactions on Neural Networks and Learning Systems*, DOI: 10.1109/TNNLS.2015.2391182 (2015); preprint at arXiv:1405.0931.

You can find that article at http://dx.doi.org/10.1109/TNNLS.2015.2391182 or, if you don’t have an institutional subscription, via the arXiv ID. You may want to update your post (since it is frequently referenced) to address this IEEE TNNLS encoding and whether it comes close to answering your concerns.

The relevant bits seem to be

We show an experimental demonstration of an actual memcomputing architecture that solves the NP-complete version of the subset sum problem in only one step and is composed of a number of memprocessors that scales linearly with the size of the problem. We have fabricated this architecture using standard microelectronic technology so that it can be easily realized in any laboratory setting. Although the particular machine presented here is eventually limited by noise—and will thus require error-correcting codes to scale to an arbitrary number of memprocessors—it represents the first proof of concept of a machine capable of working with the collective state of interacting memory cells, unlike the present-day single-state machines built using the von Neumann architecture.

from the abstract, and

We therefore conclude that our machine compresses data in an exponential way with constant capacity. At the output, we have an exponentially decreasing probability of finding one solution of the SSP when we implement the algorithm by brute force, that is, without any error-correcting coding. However, from the Shannon theorem, there exists a code that allows us to send the required information with bounded error. The question then is whether there is a polynomial code that accomplishes this task. We briefly discuss the question here heuristically. If our machine were Turing-like, then the polynomial code could exist only if NP = P. Instead, our machine is not Turing-like, and the channel can perform an exponential number of operations at the same time. Because this is similar to what quantum computing does when solving difficult problems such as factorization, and we know that for quantum computing polynomial correcting codes do exist (9), we expect that similar coding can be applied to our specific machine as well.

That sounds incredibly hand-wavy to me: “quantum computing does something similar to what ours is doing, therefore ours is at least as good as that.” It would be nice if you could analyse that claim more rigorously. At the very least: would the existence of polynomial error-correcting codes allow solving NP-complete problems in polynomial time and with polynomial resources, and is there reason to think such codes won’t exist?
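For what it’s worth, here is a toy software rendition of the frequency encoding as I understand it from the paper (my reconstruction in Python, not the authors’ hardware): each memprocessor multiplies one factor into a collective signal, and the spectrum of the product then contains one line at every subset sum, so n memprocessors expose all 2^n sums “in one step”.

```python
import numpy as np

# Toy rendition of the memcomputing frequency encoding (a reconstruction,
# not the authors' hardware): memprocessor j contributes the factor
# (1 + exp(2*pi*i*a_j*t)), so the product signal has one spectral line
# at every subset sum of {a_1, ..., a_n}.

a = [2, 3, 7, 11]          # the subset-sum instance
target = 16                # is there a subset summing to 16? (2+3+11: yes)

N = 64                     # FFT length; must exceed sum(a) to avoid aliasing
t = np.arange(N) / N
g = np.ones(N, dtype=complex)
for aj in a:
    g *= 1.0 + np.exp(2j * np.pi * aj * t)

spectrum = np.fft.fft(g) / N
# spectrum[s] counts the subsets of `a` that sum to s.
print(abs(spectrum[target]))   # ~1.0: exactly one subset hits the target
print(abs(spectrum[1]))        # ~0.0: no subset sums to 1
```

The catch is visible here too: the product signal peaks at 2^n (at t = 0), so each individual spectral line is exponentially small relative to the machine’s dynamic range, which looks like exactly the noise problem the abstract concedes and that the hoped-for error-correcting codes would have to overcome.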

Well, okay: factoring integers requires some type of prime-structure algorithm, so something like the Riemann hypothesis with a local search.

Other problems, like traveling salesman, can be solved by a physical continuum computer: basically something that can solve meta-graph theory, which is scanning a map for everything exterior to the points. That is really just matrix-equation calculations.

When you get to the number theory of prime structure, there seems to be no numerical “DNA” that can unfold the prime structure: nothing more complex, or homeomorphically modular, that yields quick steps to factoring integers. I think the reason is that, locally (well, Einstein said space can be finite but unbounded), there are limits to what you can measure. So more clarification is going to be needed in the future as to what the classes of P/NP are. I think it’s not a “stable” question yet, because of number-theory concerns, but I do think more complex problems will be solved in the future. I would put number theory in the most unsolvable class.

The article in *Scientific American* starts by pointing out how much electricity we collectively spend on computers. That’s of great importance. So at the end of the article, I wondered: how exactly can memcomputing do the same amount of work with less electricity? They don’t answer that. I think it’s all smoke and mirrors.
