The Computational Expressiveness of a Model Train Set: A Paperlet

Update (April 5, 2021): So it turns out that Adam Chalcraft and Michael Greene already proved the essential result of this post back in 1994 (hat tip to commenter Dylan). Not terribly surprising in retrospect!

My son Daniel had his fourth birthday a couple weeks ago. For a present, he got an electric train set. (For completeness—and since the details of the train set will be rather important to the post—it’s called “WESPREX Create a Dinosaur Track”, but this is not an ad and I’m not getting a kickback for it.)

As you can see, the main feature of this set is a Y-shaped junction, which has a flap that can control which direction the train goes. The logic is as follows:

• If the train is coming up from the “bottom” of the Y, then it continues to either the left arm or the right arm, depending on where the flap is. It leaves the flap as it was.
• If the train is coming down the left or right arms of the Y, then it continues to the bottom of the Y, pushing the flap out of its way if it’s in the way. (Thus, if the train were ever to return to this Y-junction coming up from the bottom, not having passed the junction in the interim, it would necessarily go to the same arm, left or right, that it came down from.)
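
For concreteness, the flap semantics above can be sketched in a few lines of Python (a toy model with made-up names, nothing that comes with the train set):

```python
class YJunction:
    """One Y-junction: the flap stores a single bit ('left' or 'right')."""

    def __init__(self, flap="left"):
        self.flap = flap

    def enter_from_bottom(self):
        # The train *reads* the bit: it exits via whichever arm the flap
        # selects, leaving the flap as it was.
        return self.flap

    def enter_from_arm(self, arm):
        # The train *writes* the bit: the flap gets pushed toward the arm the
        # train came down, so a later bottom-entering train exits that same way.
        self.flap = arm
        return "bottom"
```

So a train coming down the right arm forces a subsequent bottom-entering train back out the right arm, exactly as described above.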

The train set also comes with bridges and tunnels; thus, there’s no restriction to planar track layouts. Finally, the train set comes with little gadgets that can reverse the train’s direction, sending it back in the direction that it came from:

These gadgets don’t seem particularly important, though, since we could always replace them if we wanted by a Y-junction together with a loop.

Notice that, at each Y-junction, the position of the flap stores one bit of internal state, and that the train can both “read” and “write” these bits as it moves around. Thus, a question naturally arises: can this train set do any nontrivial computations? If there are n Y-junctions, then can it cycle through exp(n) different states? Could it even solve PSPACE-complete problems, if we let it run for exponential time? (For a very different example of a model-train-like system that, as it turns out, is able to express PSPACE-complete problems, see this recent paper by Erik Demaine et al.)

Whatever the answers regarding Daniel’s train set, I knew immediately on watching the thing go that I’d have to write a “paperlet” on the problem and publish it on my blog (no, I don’t inflict such things on journals!). Today’s post constitutes my third “paperlet,” on the general theme of a discrete dynamical system that someone showed me in real life (e.g. in a children’s toy or in biology) having more structure and regularity than one might naïvely expect. My first such paperlet, from 2014, was on a 1960s toy called the Digi-Comp II; my second, from 2016, was on DNA strings acted on by recombinase (OK, that one was associated with a paper in Science, but my combinatorial analysis wasn’t the main point of the paper).

Anyway, after spending an enjoyable evening on the problem of Daniel’s train set, I was able to prove that, alas, the possible behaviors are quite limited (I classified them all), falling far short of computational universality.

If you feel like I’m wasting your time with trivialities (or if you simply enjoy puzzles), then before you read any further, I encourage you to stop and try to prove this for yourself!

Back yet? OK then…

Theorem: Assume a finite amount of train track. Then after an amount of time linear in the number of track pieces, the train will necessarily enter a “boring infinite loop”—i.e., an attractor state in which at most two of the flaps keep getting toggled, and the rest of the flaps are fixed in place. In more detail, the attractor must take one of four forms:

I. a line (with reversing gadgets on both ends),
II. a simple cycle,
III. a “lollipop” (with one reversing gadget and one flap that keeps getting toggled), or
IV. a “dumbbell” (with two flaps that keep getting toggled).

In more detail still, there are seven possible topologically distinct trajectories for the train, as shown in the figure below.

Here the red paths represent the attractors, where the train loops around and around for an unlimited amount of time, while the blue paths represent “runways” where the train spends a limited amount of time on its way into the attractor. Every degree-3 vertex is assumed to have a Y-junction, while every degree-1 vertex is assumed to have a reversing gadget, unless (in IIb) the train starts at that vertex and never returns to it.

The proof of the theorem rests on two simple observations.

Observation 1: While the Y-junctions correspond to vertices of degree 3, there are no vertices of degree 4 or higher. This means that, if the train ever revisits a vertex v (other than the start vertex) for a second time, then there must be some edge e incident to v that it also traverses for a second time immediately afterward.

Observation 2: Suppose the train traverses some edge e, then goes around a simple cycle (meaning, one where no edges or vertices are reused), and then traverses e again, going in the same direction as the first time. Then from that point forward, the train will just continue around the same simple cycle forever.

The proof of Observation 2 is simply that, if there were any flap that might be in the train’s way as it continued around the simple cycle, then the train would already have pushed it out of the way its first time around the cycle, and nothing that happened thereafter could possibly change the flap’s position.

Using the two observations above, let’s now prove the theorem. Let the train start where it will, and follow it as it traces out a path. Since the graph is finite, at some point some already-traversed edge must be traversed a second time. Let e be the first such edge. By Observation 1, this will also be the first time the train’s path intersects itself at all. There are then three cases:

Case 1: The train traverses e in the same direction as it did the first time. By Observation 2, the train is now stuck in a simple cycle forever after. So the only question is what the train could’ve done before entering the simple cycle. We claim that at most, it could’ve traversed a simple path. For otherwise, we’d contradict the assumption that e was the first edge that the train visited twice on its journey. So the trajectory must have type IIa, IIb, or IIc in the figure.

Case 2: Immediately after traversing e, the train hits a reversing gadget and traverses e again the other way. In this case, the train will clearly retrace its entire path and then continue past its starting point; the question is what happens next. If it hits another reversing gadget, then the trajectory will have type I in the figure. If it enters a simple cycle and stays in it, then the trajectory will have type IIb in the figure. If, finally, it makes a simple cycle and then exits the cycle, then the trajectory will have type III in the figure. In this last case, the train’s trajectory will form a “lollipop” shape. Note that there must be a Y-junction where the “stick” of the lollipop meets the “candy” (i.e., the simple cycle), with the base of the Y aligned with the stick (since otherwise the train would’ve continued around and around the candy). From this, we deduce that every time the train goes around the candy, it does so in a different orientation (clockwise or counterclockwise) than the time before; and that the train toggles the Y-junction’s flap every time it exits the candy (although not when it enters the candy).

Case 3: At some point after traversing e in the forward direction (but not immediately after), the train traverses e in the reverse direction. In this case, the broad picture is analogous to Case 2. So far, the train has made a lollipop with a Y-junction connecting the stick to the candy (i.e. cycle), the base of the Y aligned with the stick, and e at the very top of the stick. The question is what happens next. If the train next hits a reversing gadget, the trajectory will have type III in the figure. If it enters a new simple cycle, disjoint from the first cycle, and never leaves it, the trajectory will have type IId in the figure. If it enters a new simple cycle, disjoint from the first cycle, and does leave it, then the trajectory now has a “dumbbell” pattern, type IV in the figure (also shown in the first video). There’s only one other situation to worry about: namely, that the train makes a new cycle that intersects the first cycle, forming a “theta” (θ) shaped trajectory. In this case, there must be a Y-junction at the point where the new cycle bumps into the old cycle. Now, if the base of the Y isn’t part of the old cycle, then the train never could’ve made it all the way around the old cycle in the first place (it would’ve exited the old cycle at this Y-junction), contradiction. If the base of the Y is part of the old cycle, then the flap must have been initially set to let the train make it all the way around the old cycle; when the train then reenters the old cycle, the flap must be moved so that the train will never make it all the way around the old cycle again. So now the train is stuck in a new simple cycle (sharing some edges with the old cycle), and the trajectory has type IIc in the figure.

This completes the proof of the theorem.
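
To see the type-III behavior concretely, here’s a toy simulation (my own sketch, not part of the proof) of a lollipop track—reversing gadget, stick, Y-junction, candy—tracking which arm the train exits into on each lap:

```python
def simulate_lollipop(laps, flap="left"):
    """Simulate a type-III lollipop: reversing gadget -- stick -- Y-junction -- cycle.
    Returns, for each lap around the candy, which arm the train exited into."""
    orientations = []
    for _ in range(laps):
        # Coming up the stick, the train enters the junction from the bottom
        # and exits via whichever arm the flap selects (reading the bit).
        exit_arm = flap
        orientations.append(exit_arm)
        # It circles the candy and re-enters from the *other* arm, pushing the
        # flap toward that arm (writing the bit); it then exits the bottom,
        # hits the reversing gadget, and comes back up the stick.
        flap = "right" if exit_arm == "left" else "left"
    return orientations
```

Running this, the orientations alternate left, right, left, right, …, matching the claim in Case 2 that each trip around the candy reverses orientation and toggles the flap on exit.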

We might wonder: why isn’t this model train set capable of universal computation, of AND, OR, and NOT gates—or at any rate, of some computation more interesting than repeatedly toggling one or two flaps? My answer might sound tautological: it’s simply that the logic of the Y-junctions is too limited. Yes, the flaps can get pushed out of the way—that’s a “bit flip”—but every time such a flip happens, it helps to set up a “groove” in which the train just wants to continue around and around forever, not flipping any additional bits, with only the minor complications of the lollipop and dumbbell structures to deal with. Even though my proof of the theorem might’ve seemed like a tedious case analysis, it had this as its unifying message.

It’s interesting to think about what gadgets would need to be added to the train set to make it computationally universal, or at least expressively richer—able, as turned out to be the case for the Digi-Comp II, to express some nontrivial complexity class falling short of P. So for example, what if we had degree-4 vertices, with little turnstile gadgets? Or multiple trains, which could be synchronized to the millisecond to control how they interacted with each other via the flaps, or which could even crash into each other? I look forward to reading your ideas in the comment section!

For the truth is this: quantum complexity classes, BosonSampling, closed timelike curves, circuit complexity in black holes and AdS/CFT, etc. etc.—all these topics are great, but the same models and problems do get stale after a while. I aspire for my research agenda to chug forward, full steam ahead, into new computational domains.

PS. Happy Easter to those who celebrate!

46 Responses to “The Computational Expressiveness of a Model Train Set: A Paperlet”

1. Watson Ladd Says:

So I think the piece that would work is two Y junctions ganged together. This can form an AND gate: if oriented vertically, the train has to enter from the bottom each time to enable the train to return out the bottom. It can also form a simple counter: the train passes through the first Y junction then the second, and on passing through the second turns the first to bypass it.

You can also make a three port device from this ganged junction device where the train goes from 1 to 2, then when it goes backwards from 2 it comes out on port 3.

A convenient memory cell isn’t hard to design either from this: use the inputs of one device as 0 and 1 and read by sending through the common terminal of the other. If we can gang 3 Y junctions a T flip-flop can be realized, but likely there is a better physical realization out there, akin to the role of the pass gate in CMOS design.

2. Ryan O'Donnell Says:

There is a related simple problem with a very cool complexity status: it’s not known to be in P, but it’s in NP ∩ coNP (and also UP ∩ coUP, and also CLS, and also PLS). It’s the “ARRIVAL” problem, due to Dohrau, Gärtner, Kohler, Matoušek, and Welzl (https://arxiv.org/abs/1605.03546):

Given is a digraph where every vertex has 2 out-edges, labeled “even” and “odd”. Initially, for each vertex v, its “even” out-edge is considered “active” and its “odd” out-edge is considered “inactive”. Also given is a source vertex s and a target vertex t. Now imagine a train starts at s, and at each step deterministically follows the “active” out-edge of its current vertex v, with the twist that after this edge is traversed, v “switches” which of its out-edges is the “active” one. The computational question is: does the train ever reach t?
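
In case it helps, the switching rule is easy to simulate (a throwaway sketch, not from the paper—the hard question is deciding reachability faster than simulation, since the walk can run for exponentially many steps before settling):

```python
def arrival(out_edges, s, t, max_steps=10**6):
    """Simulate the ARRIVAL walk.  out_edges[v] = (even_successor, odd_successor);
    each vertex's 'even' out-edge starts active and switches after every visit."""
    active = {v: 0 for v in out_edges}  # 0 = even edge active, 1 = odd
    v = s
    for _ in range(max_steps):
        if v == t:
            return True
        nxt = out_edges[v][active[v]]
        active[v] ^= 1                  # the switch flips after the edge is used
        v = nxt
    return v == t
```

Here max_steps is just a cap so that tiny examples terminate; it is not a bound valid for all instances.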

3. Andrew Kay Says:

Points that always flip when you pass are more interesting, and you can have multiple trains where the first to a Y deflects the next one. It certainly made for an amusing puzzle game:

4. Scott Says:

Watson Ladd #1: What does “ganged together” mean? (A picture would help enormously, but short of that some more description!) Incidentally, one thing illustrated by the DigiComp II is that even if you have AND, OR, and NOT gates, that still might not suffice for universality, because of an inability to compose the gates in arbitrary ways.

5. Scott Says:

Ryan O’Donnell #2: Thank you for the extremely relevant reference! It sounds like Daniel’s train set could be considered the variant of ARRIVAL where
(1) all vertices have degree 3 (we’ll ignore the degree-1 vertices), and
(2) crucially, we only switch which outgoing edge is the “active” one, when the train arrives at a vertex via the inactive outgoing edge.

I confess that I’m utterly perplexed as to why the “original” ARRIVAL problem should be in NP intersect coNP … I guess I’ll either need to figure it out or read the paper!

6. Scott Says:

Andrew Kay #3: Is “Trainyard Express” available for iPhone by any chance? I just searched the App Store and couldn’t find it…

7. Sniffnoy Says:

Andrew Kay #3: Points that always flip when you pass is a less-restricted DigiComp II, right?

8. Watson Ladd Says:

Ganged means that the moving pieces are tied together, so if a train forces one left, it forces the other left and vice versa for right. Sometimes you’ll see ganged pots in parts catalogues, or ganged varicaps. This means that the knobs are mechanically linked so they turn together.

I figured out how to get the toggle, and now I think I can embed arbitrary circuits, whose time to evaluate is the total number of wires. Sadly it’s going to be a bunch of work to explain since there isn’t a great vocabulary for talking about the trains.

We signal a 1 by a train having gone along a path, and 0 by it going along another path. Gates will persist their state for multiple readouts on a single path: they have a 0 or 1 output path, and each input has a 0 path and 1 path. Each input combines through a Y gate onto a waste path that goes back to a common distribution tree. We’ll talk about that later.

The most basic gate is a “memory”. The lower half is an input, the upper half splits the output path into two. If the input gate is in the 0 position, the upper half directs the train to 0. If the input gate is in the 1 position, the upper half directs it to the 1 output. Note that the upper half doesn’t get flipped: it’s steering the readout onto each choice.

Take two of these memories in series. If we have the 1 output path from the first gate go into the readout of the second gate, then we can combine the 0 from the first and second gates into the 0 output of the AND gate, and the 1 output of the second gate will be an AND output.

OR sends 00 out to 0, and the rest to a common 1 output. NOT is a simple reversal.

To obtain fanout we can use the promised T gate in a mini distribution tree to steer the 1 outputs multiple times, and make sure to send that many readout signals from the distribution tree.

Now it’s time to talk about the distribution tree: it’s a binary tree of toggles. The toggle is a bit funky. We start with the 3-port one-way gate. A train enters on port A, goes to port B, and when it returns from port B, goes out port C. This can be made by wiring up two gates so that the train flips the first gate it passed as it passes the second, and then on the return path it doesn’t change either.

The toggle has switches A and A prime. A takes the common input and steers it to the left or right half. Each of the left or right halves has a one way gadget, these go straight into a free switch B. B then connects to the common input of A prime, which has its switched inputs connected in a loop.

What happens? A train enters A and steers, WLOG, to the left. It goes into the free switch, setting it leftwards, then into A prime, which steers it out the left into the loop. But then it loops back, coming in the right side, switching A to steer to the right, as well as A prime. The free switch is set leftwards, so the train exits to the left, and then is sent by the one-way out on the left side. The next train will go to the right side, and then reverse the switches.

Arrange these toggles in a tree, and you can send as many signals to the readouts of the gates as you need for the circuit.

I want to go make these now, but sadly don’t have the mechanical skills.

9. Michele Sim Says:

Cool, now do quantum annealing.

10. pancelor Says:

I’m at the “stop and prove it for yourself” part so I don’t have much to say; I just wanted to mention that this reminds me of a site that goes into some depth on model trains and computation (there’s even a small section on busy beavers!)

http://www.cr31.co.uk/stagecast/trains/tt8_duplo_lout.html

11. Scott Says:

Incidentally, I showed Daniel this post, explaining how the figure classified all the possible things the train set can do. He stared at it for a while before exclaiming, “but you forgot about the bridges and tunnels!” I started explaining how the bridges and tunnels were computationally irrelevant before giving up and saying “OK, you’re right, I forgot.”

12. William Gasarch Says:

To paraphrase, you wrote that alas, the possible behaviors are very limited, and that you classified them all.

You should be happy. You completely solved the problem.

So why the `alas’? I ask seriously. It’s more about the issue of working on a problem while rooting for it to go in a certain direction, versus just wanting to solve it and being happy with either direction.

The worst case would be to show that the train set was not that powerful but not get a complete characterization.

13. Martin Mertins Says:

Hey Scott, if you like harnessing computational power from rail transport you might enjoy this video! “A calculator made from rollercoasters in Rollercoaster Tycoon 2”

14. Scott Says:

William Gasarch #12: On reflection, the reasons for the “alas” are that
(1) I was hoping to get some nontrivial computational model, even if it fell short of P (as happened with the Digi-Comp II), and
(2) I was hoping the problem would be more of a challenge.

15. fred Says:

I was wondering if you could consider the train as representing a bit.
Then you could build a system using multiple trains.
Then, assuming that when two “input” trains reach a Y section at the same time, the flap gets pushed by both trains, going in a middle position, resulting in both trains being blocked.
So that, considering the Y junction as a 2-input/1-output gate, we get:

A, B, out (XOR)
0 0 0 (no train on either input, no train comes out)
1 0 1 (one train in only one input, one train comes out)
0 1 1
1 1 0 (two trains come in, no train comes out)

which is an XOR gate.

You could use this to do a NOT by setting A to 1 (always a train coming), and the output is ~B:

A, B, out (NOT(B))
1 0 1 (one train in only one input, one train comes out)
1 1 0 (two trains come in, no train comes out)

But I don’t think you can get universality, because you’d need a NAND or NOR.

16. fred Says:

Regarding my comment above, to be clear, it is possible to use trains/billiard balls to do universal computation.

https://en.wikipedia.org/wiki/Billiard-ball_computer

17. Scott Says:

Watson Ladd #1 and #8: Thanks!! I just read through both your comments with pen and paper, drawing pictures.

I can now roughly see how ganging Y-junctions can yield a FANOUT gate: gang together two Y-junctions one above the other; then if the train enters from the bottom left it will exit from the top left, and if it enters from the bottom right it will exit from the top right, and meanwhile the flaps will store the information about which way it went. And I suppose we do an AND gate by serial composition, and a NOT gate by just swapping one track with another?

Alas, while it’s entirely plausible to me that it’s possible (by analogy to the Digi-Comp II, where FANOUT/amplification/transistors turned out to be the sole missing ingredient), I still don’t understand how we compose all the ingredients to get a general Boolean circuit.

For example, suppose we use the FANOUT gate to copy a bit encoded in the train’s position. Great, now how do we maneuver the train back to the gadget with the two ganged-together Y-junctions, in order to read and make use of the stored bit? How do we do this without destroying the gadget’s original functionality?

If it were possible for you to post links to some drawings (photographed hand-drawings are fine), or email them to me, that would probably help enormously!

Incidentally, in your construction, can we assume that Y-junctions are only ganged together in pairs, and not in triples or any larger groupings? Presumably we’ll run into mechanical problems if we try to gang together too many… 🙂

18. Mark Says:

I wonder if “set up a groove” can be formalized more generally. Concretely, you could imagine the state of each junction is a partition $$P$$ of the incident edges such that each set $$S\in P$$ in the partition contains one or two edges.
– If the train comes in on an edge $$u\in \{u,v\}\in P$$, the train then leaves on edge $$v$$ and the state of the junction is *not* updated.
– On the other hand, if the train comes in on an edge $$u\in\{u\}\in P$$, then the state $$P$$ is updated by some transition function $$f(P,u)$$ to a new partition $$P'$$. The guarantee is that $$u\in \{u,v\}\in P'$$ for some edge $$v\neq u$$. The train then leaves on $$v$$. We will imagine the transition function $$f$$ is a fixed part of the description of the junction.

In the case of WESPREX, the only partitions are $$\{\{1,2\},\{3\}\}$$ and $$\{\{1,3\},\{2\}\}$$, with $$f$$ toggling between the two. The point of the generalization is to preserve the property that the junction isn’t updated if the train returns on either of the edges it visited last time.

Perhaps it’s possible to show that all degree-3 vertices exhibit the same behavior?
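
In (hypothetical) code, the generalized junction might look like this—a sketch of the partition semantics above, with invented names, not a claim about any particular toy:

```python
class GeneralJunction:
    """State = a partition of incident edges into singletons and pairs.
    Paired edges form a 'groove': the train passes through without updating."""

    def __init__(self, partition, transition):
        self.partition = partition    # e.g. [{1, 2}, {3}]
        self.transition = transition  # f(partition, entry_edge) -> new partition

    def route(self, u):
        block = next(S for S in self.partition if u in S)
        if len(block) == 2:
            # Entered on a paired edge: exit on its partner, state untouched.
            return (block - {u}).pop()
        # Entered on a singleton: update the state, which must now pair u.
        self.partition = self.transition(self.partition, u)
        block = next(S for S in self.partition if u in S)
        return (block - {u}).pop()


def wesprex_flap(partition, entry_edge):
    """The WESPREX special case (edge 1 = bottom, 2 = left, 3 = right):
    the two states toggle whenever the train arrives via the singleton edge."""
    return [{1, 3}, {2}] if {1, 2} in partition else [{1, 2}, {3}]
```

This reproduces the rule from the post: the junction is only updated when the train arrives on the edge outside the current groove.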

19. fred Says:

Scott,
maybe this is what you’re looking for?

http://www.cr31.co.uk/stagecast/trains/tt8_duplo_lout.html

20. fred Says:

They even have your favorite – the Busy Beaver
http://www.cr31.co.uk/stagecast/trains/tt7_turing.html

21. Daniel Says:

No no no, you forgot about the bridges and tunnels. I told you: bridges are about ganging, tunnels are about going back, then that’s Turing-complete. Obviously.

22. dylan Says:

Alas, I believe you’ve discovered something already known: see http://www.monochrom.at/turingtrainterminal/Chalcraft.pdf.

And as pancelor #10 mentions, it becomes PSPACE-complete if you introduce coupled switches, which always have the same state: http://www.cr31.co.uk/stagecast/trains/.

Lastly, here’s a recent paper I was involved in which considers a bunch of models similar to this one, including showing that the type of switch in Trainyard (as Andrew Kay #3 mentioned) is PSPACE-complete: https://arxiv.org/abs/2005.03192.

23. fred Says:

The lazy point is exactly the setup Scott had
http://www.cr31.co.uk/stagecast/trains/tt1_points.html

What’s then needed is a few extra things: linking switches, and spring-loaded switches.

24. Tomer Altman Says:

Hey Scott,

Thanks for this post, it stirred up some memories of Cal that I wrote up to share with you. Please check out the set of train-computation links that I think will be of interest:

https://tomeraltman.net/2021/04/05/models-of-biological-computation.html

25. Scott Says:

Ryan O’Donnell #2: OK, I just read the ARRIVAL paper—thanks!!! Along with Huang’s paper on the Sensitivity Conjecture, it’s one of the most beautiful papers I’ve ever read that can be fully understood in ≤20 minutes. Even the constant use of the notation “(G,o,d)” seemed bizarrely appropriate to such divine proofs-from-the-book.

Their ARRIVAL model is (indeed) almost exactly equivalent to the Digi-Comp II—I even used the same trick of “run profiles” to analyze the latter—but with the one crucial difference that ARRIVAL allows loops whereas the Digi-Comp II requires a dag. Or to put it more provocatively, ARRIVAL could be seen as “Digi-Comp II augmented by closed timelike curve.” 🙂

So, just as I showed that the Digi-Comp II has exactly the power of CC (comparator circuits), I’m guessing (though this needs to be verified) that ARRIVAL should be complete for a complexity class CCCTC (CC with closed timelike curves). We’d then have CCCTC ⊆ NP∩coNP, and the question would be whether CCCTC ⊆ P.

26. Scott Says:

dylan #22: Yep, you’re right, they beat me to this! Damn. I’ll add a note to the post.

27. mg Says:

I’ve got a stupid question about quantum computing, if this is not an appropriate place then sorry and no need to read further.

The question is whether there is any way to play quantum chess, where a strategy consists of choosing an amplitude for each possible move in each game state.

Today two people can easily play probabilistic chess, it’s especially easy if their strategies only involve probabilities of the form n/6, then on each turn they can assign their moves to faces of a die so that a roll of the die gives the appropriate distribution of moves. If they follow their strategies with the help of the die, they will end up in a victory for one of the players or a draw with the appropriate probability according to the matchup between their strategies. The probabilities for victory of player 1, player 2 and for draw can be approximated by repeating the experiment many times.

Now if we wanted to move with amplitudes instead of probabilities, it feels rather impossible for humans; however, one could write two (classical) programs that implement quantum strategies, that is, for an input game state, print out the amplitudes for each possible move. Now if I’m not mistaken, two such programs would have a theoretical “matchup” with a probability for victory of one, of the other, and for draw (summing to 1). The question is, is there any way (perhaps with quantum computing) to physically test this matchup like it was possible with probabilistic strategies?

28. Ken Miller Says:

The brilliant David MacKay (author of Information Theory, Inference, and Learning Algorithms and of Sustainable Energy – Without the Hot Air), at a celebration of his life and work a month before he died of cancer, gave a lovely talk on the mathematics of his son’s train set. Very much like what you’re doing here, you would enjoy it: https://www.youtube.com/watch?v=psvTTNOJgfA

29. Pramod Mathai Says:

Here’s a different science/toy correspondence.
The spatially- and temporally- periodic components of the Langevin equation (1) in
https://arxiv.org/pdf/1901.10439.pdf,
are used to tunably separate Brownian particles in a mixture. The frequency of the temporal force is adjusted w.r.t. the mass of a desired particle to separate it from a mixture against a statically biased force.
The periodic components above can be mapped to the saw-tooth profile and harmonic drive that is used to transport the penguins against gravity in this toy:
The penguins are each seated on a cylinder (not clearly seen in the video) which slides along the tracks. In this case, the friction between the cylinder and the saw teeth is the criterion w.r.t. which the harmonic frequency needs to be adjusted. The friction determines how soon the penguin reaches the bottom of each tooth. If the harmonic frequency is small enough to give the penguin enough time to slide to the bottom of the tooth then the penguin will be transported upwards.
Two differences w.r.t. the paper: every penguin’s rod friction is the same (so every penguin is transported upwards, yielding a satisfactory toy); and the penguins are too massive to worry about Brownian motion.

30. fred Says:

Which goes to show yet again that it’s very hard to come up with anything original these days, unless you’re a specialist working in a super narrow field, aware of the state of the art for that tiny domain.
So, if you’re an amateur, or an expert moving a bit away from his field of focus, you have to rely on google before publishing any insight you’ve just come up with on your own.

A bit like what happened recently with that supposed new relation on eigenvalues (Terence Tao), which turned out to have been independently found and published many times before.

Personally, recently I had been working for a while on an idea for some new data structure, and, by coincidence, I listened to a recent interview of D.Knuth where he mentioned that his currently favorite “new” data structure was the Binary Decision Diagram, and when I looked it up I realized it was exactly what I had just implemented.

31. Anon Says:

@mg, I don’t think the way you describe quantum chess makes it hard enough. Unless I miss something, in your setting each player can measure the previous move before choosing her own policy, meaning it’s mainly classical. In other words: yes, because this scheme can run on a standard computer.

32. Scott Says:

fred #30: I did google it—finding plenty of hits about computation via model trains, none about this specific type of Y-junction—although obviously I didn’t look hard enough. As it turns out, writing this blog post was effectively
(1) my fastest way to query the Internet for what was known and
(2) also my fastest way to understand the problem for myself,
so in that sense I regret nothing. I hope at least some readers found the experience tolerable as well! 🙂

(Yes, I could’ve posted to MathOverflow or CS StackExchange, but that would’ve just been embarrassing, with a problem that I knew I could solve myself in an hour or two.)

33. mg Says:

@Anon

That’s not how I wanted it to be. The players’ strategies give weights to the graph of chess positions (where edges are valid moves); then the probability of white’s victory is the squared absolute value of the sum, over all paths p to a white checkmate, of the product of the weights of the edges in p. Analogously for black’s victory and for draw.

34. Anon Says:

mg #33, not sure I got it. Could you provide a nano-example of how you would construct some random policy (e.g. some amplitude distribution), assuming only two possible moves for two steps?

35. Raoul Ohio Says:

fred #30:

Related experience: Decades ago I was calculating (Fortran, of course) basins of attraction for various roots of functions (mostly polynomials) in the complex plane to create cool fractal pictures. I figured out a sweet way to get the n-th power with at most 2 lg(n) multiplications. I was pretty impressed (although the implementation was awkward: a loop using the binary expansion of n in an array).

When I started reading some CS books, I found that this is a standard example of an easy recursive function. Somewhere I read (something like) “this trick has been discovered by more people than any other trick in the history of computing”.
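
For what it’s worth, the trick is ordinary binary (“square-and-multiply”) exponentiation, and the recursive form makes the bit-array loop unnecessary:

```python
def fast_pow(x, n):
    """Square-and-multiply: compute x**n with at most
    2*floor(log2(n)) multiplications for n >= 1."""
    if n == 0:
        return 1
    half = fast_pow(x, n // 2)    # recurse on the high bits of n
    result = half * half          # one squaring per bit of n...
    if n % 2:
        result *= x               # ...plus one multiply per 1-bit
    return result
```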

36. Wes Hansen Says:

I just found this crazy lecture of yours on closed timelike curves while looking into hypercomputation and I come here and find this post. Why does it matter? Well, you should know!

Some time ago I tried to leave a comment here (on another post, obviously) linking to the pre-stimulus response experiments from the team at UC Santa Barbara. As you may know, they used quantum random event generators to generate events, with results seeming to indicate that the human brain has meaningful information regarding an event up to two seconds prior to the event happening. Their work was inspired by a 2012 meta-analysis. A skeptic’s critique of that work inspired an update of the 2012 meta-analysis, conducted by a different team.

These experiments seem to indicate the existence of closed timelike curves precisely like the ones you describe in your lecture! The experiments using classically generated events seem to indicate that the human heart and brain have meaningful information regarding events up to 18 seconds prior to the events occurring. Perhaps you would write a blog post discussing these experiments and what you infer from them? Perhaps you would post this comment!?!

37. Terete Says:

Dylan #22: Thx for reminding me of that! Around the time Adam and Michael wrote it, I was tinkering with a 2-state CA where each cell becomes the NAND of any two (non-exclusive) neighbours. So take both inputs from the same neighbour and you have a NOT; string these together for a wire. It took Michael about 3 seconds to work out why this would never work :)

38. fred Says:

Scott #32

Oh, to be clear, I don’t think that this kind of thing is a waste of time, because it’s important and fun to always try to learn new things.

Just that the initial enthusiasm is often eventually tamed a bit.
But then at some point, as you dive deeper and deeper, you’ll get a feel for the current limits of the field, and then possibly find something new to add to it. Or you’ll be able to connect two separate fields together.

39. Scott Says:

Wes Hansen #36: I found a total of 2 citations of the linked paper on Google Scholar since it appeared in 2017, which does seem astoundingly small for a paper that if correct would demonstrate the reality of precognition and overturn the foundations of physics for the last 400 years. 🙂 I’d be curious to know if there have been replication attempts—as you must know, the history of parapsychology for the past century has been “yeah, but once they got careful enough, the effect failed to replicate,” repeated maybe 10,000 times.

Please, though, no other off-topic comments, since I don’t want to “get off track” (har har…)

40. Zeb Says:

What happens if you have two trains, both moving at the same speed? Can you get more interesting behaviors?

41. fred Says:

Scott, what’s your take on that Muon G-2 thing? Will you post about it?

You think it’s an experimental error (a systematic bias), a theoretical error (something off in the handling of the Feynman diagrams), or new physics?

(I know it’s not your field, but I wonder what your opinion is.)

42. Scott Says:

fred #41: Yeah, it’s totally not my field, but just going by what I’ve read, if I had to guess, I’d say the likeliest explanation is some issue with the approximations they’re using for the Feynman diagrams, followed by some experimental issue, followed by beyond-Standard-Model physics (which, given all the disappointments of the past half-century, one’s prior should weigh very strongly against).

43. Adam Chalcraft Says:

Thank you for the reference. Being referenced by both my quantum computing hero and my complexity theory hero makes me very proud. The article was in [Eureka 53] – a college math society journal from Cambridge University, England. I wish now we’d kept the pdf!

44. Wes Hansen Says:

Scott,

I question why you did not see fit to post my reply to your reply. I’ve spent some time reading here, especially your QC ethics and hype post, and I know for a fact that my reply is relevant! I wrote about it on Quora.

This is all too common in the financial sector, and it’s always mom and pop, barely making it, who suffer! They’re going to destroy the damn economy! I’ve just been debating this with a CEO guy on Quora. He shared an answer from a guy who runs a liquor store. The guy is trying to explain why he cannot justify more than \$11 per hour for his employees. I engaged with the guy. Turns out he’s an MBA, 62 years old, retired from the IT industry, and running his liquor store for “something to do.” Yeah, right! He admits his liquor store is a “pass-through LLC,” and these are commonly used for tax write-down purposes!

I know how these people operate, dammit!

Wes Hansen’s answer to Should you patent your idea before using Kickstarter?

If you don’t, then read that. Better yet, read:

Here, this is beautiful, updated Nov. 7, 2020: Securitization: How Debt Makes You Money

It hasn’t been much more than a decade and here we go again. Don’t think for a minute these people are not going to figure out a way to pass the risk off on to the taxpayer, all the while they dodge taxes.

Rich people are dodging more taxes than the IRS realized, study finds

“The richest Americans use crafty methods to dodge taxes on far more income than the feds previously thought, a new study shows.

The Internal Revenue Service tries to catch high-income tax evaders with random audits — but they often fail to spot complex schemes that the wealthy employ to hide income, such as offshore bank accounts and pass-through businesses, according to the paper published Monday.”

And you have the audacity to talk about ethics!?!

45. The Ongoing Connection Between Programming and Model Railroads – Technology Due Diligence | Duevia Says:

[…] bought for his four-year-old son Daniel, ultimately resulting in a blog post titled “The Computational Expressiveness of a Model Train Set: A […]

46. Simon Gay Says:

Using train sets to illustrate type theory rather than computability: http://www.dcs.gla.ac.uk/~simon/publications/CablesTrainsTypes.pdf
