That has been known for a long time if “generic” means random input. You can color a graph by its valence, then recolor it by its colored valence, and continue until the coloring stabilizes. But, unlike the simplex method, this generic result doesn’t help you at all with hard cases. If you perturb a polytope, then not only is the simplex method usually fast, but its output also solves the original problem. No one knows a way to do that with graph isomorphism.
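The recoloring procedure just described can be sketched in a few lines. This is a minimal, illustrative implementation of the standard color-refinement (1-dimensional Weisfeiler-Leman) routine; the adjacency-list encoding and function names are my own, not from any particular package:

```python
def color_refinement(adj):
    """Refine vertex colors until stable: start from valence (degree),
    then repeatedly recolor each vertex by its old color together with
    the multiset of its neighbors' colors."""
    color = {v: len(nbrs) for v, nbrs in adj.items()}  # initial color = degree
    while True:
        # Each vertex's new signature: (own color, sorted neighbor colors).
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        # Compress signatures to small integers so colors stay comparable.
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_color = {v: relabel[sig[v]] for v in adj}
        if new_color == color:  # coloring has stabilized
            return color
        color = new_color

# Toy graphs as adjacency lists: a 3-vertex path and a triangle.
path = {0: [1], 1: [0, 2], 2: [1]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

On these toy graphs the stable coloring distinguishes the path from the triangle, but, as the CFI construction shows, there are non-isomorphic graph pairs that no such recoloring can separate.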

One of the important classes of graphs to think about for graph isomorphism is the ultra-homogeneous graphs of Cai, Fürer, and Immerman. None of the basic recoloring algorithms can possibly work for these graphs. As best I understand it, the widely used program nauty, which is a more sophisticated graph-recoloring algorithm, also performs poorly on these graphs. On the other hand, if you know that the graphs come from the CFI construction, then they are not hard to canonically color with a special-case algorithm.

As I understand it, graph isomorphism is at a stalemate. By which I mean: If you show the community any known algorithm to make instances, then it can make an algorithm that usually solves them quickly. If you show the community any known algorithm to solve graph isomorphism, then it can algorithmically generate hard instances. I do not know for certain, but that is my impression.

The state x = x(t) of many systems can be modeled by equations of the form

x' = f(x, p)

where p is a set of parameters in some space P. The evolution of x(t) might have one or more qualitative properties, such as being oscillatory or chaotic. This behavior is

**generic**

if it occurs for all parameter values in an open subset S of P. The reason for this definition is presumably that nonempty open sets have positive measure. Also, for any p in S, there is a small “ball” containing p that lies entirely in S. This implies that slight perturbations of the problem preserve the property.
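As a toy illustration of this definition (my own example, not from the comment above), take the one-parameter scalar system x' = p − x². Having a stable equilibrium is a generic property here: it holds on the open set p > 0, and it persists if p is perturbed slightly. A simple Euler integration shows the state settling to sqrt(p):

```python
def simulate(p, x0=0.0, dt=0.01, steps=4000):
    """Euler-integrate x' = p - x^2 and return the final state."""
    x = x0
    for _ in range(steps):
        x += dt * (p - x * x)
    return x

# For p in the open set (0, infinity), trajectories settle to the
# stable equilibrium x = sqrt(p); a small perturbation of p moves
# the equilibrium slightly but does not destroy it.
x_at_1 = simulate(1.0)        # approximately 1.0
x_perturbed = simulate(1.05)  # approximately sqrt(1.05)
```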

For a related (but extreme) example, n-by-n real matrices are “generically nonsingular.” This is because the set Z in R^(n^2) where det(A) = 0 is closed, (n^2 − 1)-dimensional, and of measure zero; so the complement of Z, where det(A) != 0, is open and, in fact, pretty big.
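A quick numerical illustration of the matrix example (a sketch using only the Python standard library, not anything from the original comment): sampling matrices with random entries essentially never lands on the measure-zero set Z where the determinant vanishes.

```python
import random

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

random.seed(0)  # fixed seed so the run is reproducible
trials = 1000
nonsingular = 0
for _ in range(trials):
    m = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]
    if abs(det3(m)) > 1e-12:  # "numerically nonsingular"
        nonsingular += 1
# Since Z = {det(A) = 0} has measure zero, randomly sampled matrices
# avoid it with probability 1, so every sample should be nonsingular.
```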

It is not clear if this definition is appropriate for algorithm analysis in general, but it can probably be applied to linear programming.

The above post is mainly to keep a slow thread “ticking over”, until Scott starts a new one! 🙂

As far as software is concerned, there is still a tremendous amount that the hobbyist can do: stuff that is necessary and hasn’t been done yet. There might even be room for a small quantum computing software company. You should start your own.

I also think new, clever software, just like new, clever experiments in physics, often raises very interesting theoretical questions that nobody had thought to work on before.

Kurt, the following post will argue that you are absolutely right. And furthermore, the post will respectfully (and with deliberate optimism) submit that what you ask for already exists, in the form of Chapters 2 and 8 of “Mike and Ike”. The only other ingredient you need is a laptop-scale computer.

Our QSE Group came to this realization via an epiphany that occurred when two summer interns from the Electrical Engineering Department walked into our control room. The two interns were greeted with “Welcome aboard! Let’s get you started. Would you like to help us measure the noise level of the interferometer?” … and we handed them the business-end of a BNC signal cable.

To our amazement, these junior-year students flinched as though the signal cable were a tree-snake unexpectedly descending from the canopy! It emerged that they had never before measured a real-world signal with professional intent … all of their prior experience had been with *in silico* simulations.

Yet these two students proved to be *very* capable engineers. It took them only a week or so to adapt to the “quaint” notion of working with physical signals. 🙂 So evidently there was nothing wrong with the way these students had learned electrical engineering … instead there was something very good about it.

This experience motivated our QSE Group to write out an explicit set of recipes that link the quantum formalism and fundamental results of Chapters 2 and 8 of “Mike and Ike” to the real world of voltages and power spectral densities, via a set of explicit design rules for *ex silico* numerical simulation. The idea was to teach quantum system engineering along lines similar to the way that subjects like modern electrical engineering *really* are being taught.

Writing out these recipes has been fun. In order to motivate, optimize, and justify these recipes, we had to link the Mike and Ike formalism to (for example) the engineering discipline of model order reduction, the chemical discipline of *ab initio* quantum calculations, and the mathematical discipline of algebraic Kähler geometry. We will be presenting our QSE Group’s recipes at a Gordon Conference later this month (at which time we’ll post them to the arXiv server). And of course, many other groups are working along similar lines.

The overall point is that modern electrical and mechanical engineers (and chemists and condensed matter physicists … and even biologists) do a substantial proportion of their most productive work *in silico*. Why should quantum system engineering students and researchers be any different?

To us, the possibilities of these avenues of research seem pretty much unbounded, whether one’s interests are classical or quantum, practical or fundamental, theoretical or experimental or observational. It is an exciting time for everyone … and this excitement is all based upon the informatic, algebraic, and geometric foundations that QIT provides.

There. How’s *that* for academic optimism! 🙂

D-wave’s claim of having “demonstrated” “the world’s first commercial quantum computer” seems blatantly fraudulent to me.
