Force multiplier

We live in perilous times.  Within a few days, the United States might default on its debt, plunging the country into an unprecedented catastrophe.  Meanwhile, the tragedy in Norway (a country I’ll visit for the first time next month) reminds us that the civilized world faces threats from extremists of every ideology.  All this news, of course, occurs against the backdrop of record-breaking heatwaves, the decimation of worldwide fish stocks, the dwindling supply of accessible oil, and the failure of the Large Hadron Collider to find supersymmetry.

But although the future may have seldom seemed bleaker, I want people to know that we in MIT’s complexity theory group are doing everything we can to respond to the most pressing global challenges.  And nothing illustrates that commitment better than a beautiful recent paper by my PhD student Andy Drucker (whom many of you will recognize from his years of insightful contributions to Shtetl-Optimized: most recently, solving an open problem raised by my previous post).

Briefly, what Andy has done is to invent—and demonstrate—a breakthrough method by which anyone, including you, can easily learn to multiply ten-digit numbers in your head, using only a collection of stock photos from Flickr to jog your memory.

Now, you might object: “but isn’t it cheating to use a collection of photos to help you do mental math—just like it would be cheating to use pencil and paper?”  However, the crucial point is that you’re not allowed to modify or rearrange the photos, or otherwise use them to record any information about the computation while you’re performing it.  You can only use the photos as aids to your own memory.

By using his method, Andy—who has no special mental-math training or experience whatsoever—was able to calculate 9883603368 x 4288997768 = 42390752785149282624 in his head in a mere seven hours.  I haven’t tried the method myself yet, but hope to do so on my next long plane flight.

Crucially, the “Flickr method” isn’t limited to multiplication.  It works for any mental memorization or calculation task—in other words, for simulating an arbitrary Boolean circuit or Turing machine.  As I see it, this method provides probably the most convincing demonstration so far that the human brain, unaided by pencil and paper, can indeed solve arbitrary problems in the class P (albeit thousands of times more slowly than a pocket calculator).  In his paper, Andy discusses possible applications of the method for cognitive science: most notably, using it to test conjectures about the working of human memory.  If that or other applications pan out, then—like many other research projects that seem explicitly designed to be as useless as possible—Andy’s might end up failing at that goal.
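
For intuition, the basic encoding trick can be sketched in code. The sketch below is my own simplified model, not Andy’s exact protocol (all the names in it are made up for illustration): each decimal digit of a number is stored by deliberately familiarizing yourself with one photo out of ten candidates assigned to that digit position, and retrieved by checking which of the ten candidates feels familiar.

```python
# Toy model of the "recognition method" (illustrative only, not Drucker's
# exact protocol).  Familiarity acts as a write-once memory: you can make a
# photo familiar by studying it, but you can't un-see it at will.

class RecognitionMemory:
    def __init__(self):
        self.familiar = set()  # photos we have deliberately studied

    def learn_digit(self, cell, digit):
        # store `digit` at `cell` by studying the photo tagged (cell, digit)
        self.familiar.add((cell, digit))

    def recall_digit(self, cell):
        # retrieve by scanning the ten candidate photos for the familiar one
        for d in range(10):
            if (cell, d) in self.familiar:
                return d
        return None

mem = RecognitionMemory()
for i, ch in enumerate("9883603368"):       # memorize the first factor
    mem.learn_digit(("x", i), int(ch))
recalled = "".join(str(mem.recall_digit(("x", i))) for i in range(10))
assert recalled == "9883603368"
```

Running an actual algorithm then just means interleaving such stores and retrievals with ordinary mental arithmetic on a few digits at a time.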

82 Responses to “Force multiplier”

  1. TJIC Says:

    > We live in perilous times. Within a few days, the United States might default on its debt, plunging the country into an unprecedented catastrophe, all because of the intransigence of what Thomas Friedman calls the “Hezbollah faction” of the Republican Party.

    I read your blog because you cover a lot of fascinating material.

    This screed-like bit does not fall under that category.

    While not a Republican, I note that the debt crisis could be averted if we only used a spending budget from…oh, about four years ago.

    Hezbollah is a militant group that seeks genocide for Jews.

    Republicans are one of two big government political parties in the US, and they stand for social security, farm subsidies, industrial policy, a big military, and prescription drug benefits…just like the Democrats.

    To compare the Republicans to Hezbollah makes about as much sense as comparing the League of Women Voters to the Nazi party.

    It might feel fun for a few seconds, but it discredits you and makes you look like an intellectually stunted fool who can’t think rationally about politics.

  2. Scott Says:

    TJIC: What’s strange is that I was just quoting a notoriously middle-of-the-road New York Times columnist! Yes, I agree that the Tea Party is not Hezbollah. I think what Friedman meant is that they appear to find a sort of purifying value in destruction—welcoming an economic disaster as a way to advance their goals. Maybe a better comparison would have been early-20th-century Communists. Anyway, I removed the Friedman quote, in the hope that I can still prevent this thread from degenerating into another political bloodbath… 🙂

  3. Marcus Says:

    Drucker should get a doctorate for this work. His idea is brilliant and will revolutionize the fields of psychology, computer science, and education. Can you encourage him to keep working on the concept?

  4. Scott Says:

    Marcus: Andy already has more than enough “straight complexity” results to qualify for a PhD in any case. And I learned a while ago that, while I can “encourage” him to pursue any of his numerous good ideas, ultimately he’s just going to do whatever interests him!

  5. Jay Says:

    What a great paper (and I don’t just say that because it is one of very few here I can understand).

    I don’t know if it was on purpose or not, but the paper actually explains (to a degree) the “debt crisis,” which was created by people who want their faces on TV so they can be familiar to the voters who will ignorantly re-elect most of them next November!

  6. Alpha Says:

    I am ready to bet 10 to 1 that Iran is not building a nuclear weapon and will not have one by 2015 unless Israel/US attacks it. If you really believe in what you say then come and bet on http://www.intrade.com. Right now no one is betting on your position, the bets are if and when Israel/US will attack Iran.

  7. David Speyer Says:

    This is a beautiful paper!

    Andy mentions the issue that, after retrieving a bit, one may become partially spoiled on the non-matching photos, and need to replace them with other non-matching photos. This suggests an interesting question: How should one design algorithms in order to minimize the number of times one reads from memory, with only very limited ability to store previously read bits? Is this something people are already studying?

  8. TeaParty Says:

    “To compare the Republicans to Hezbollah makes about as much sense as comparing the League of Women Voters to the Nazi party.”

    The Republicans want to blow up the whole US economy to keep Obama from winning in 2012. How is such selfish insanity not comparable to a terrorist organization?

  9. Scott Says:

    David:

    How should one design algorithms in order to minimize the number of times one reads from memory, with only very limited ability to store previously read bits? Is this something people are already studying?

    Certainly, in applied algorithms, there’s a large body of work devoted to minimizing the number of cache misses—i.e., the number of accesses made to a large slow memory, assuming that a small fast memory is also available.

    (Note that, if you don’t want to differentiate between registers, L1 and L2 cache, RAM, hard disk, etc., then in some sense, every computational step “reads from memory”—so minimizing the number of times one reads from memory just means minimizing the computation time!)

    But maybe your question was getting at something else?
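
    To make the cache-miss count concrete, here is a minimal sketch (the cache size, eviction policy, and access patterns are all made up for illustration):

```python
# Count accesses that miss a small fast memory, modeled as a simple LRU
# cache; a miss stands in for a fetch from the large slow memory.

from collections import OrderedDict

def count_misses(accesses, cache_size):
    cache = OrderedDict()
    misses = 0
    for addr in accesses:
        if addr in cache:
            cache.move_to_end(addr)        # refresh LRU position
        else:
            misses += 1                    # fetch from slow memory
            cache[addr] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return misses

# Cycling through 4 addresses with a cache of size 2: every access misses.
assert count_misses([0, 1, 2, 3] * 3, cache_size=2) == 12
# The same accesses grouped by locality: only 4 compulsory misses.
assert count_misses([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3], cache_size=2) == 4
```

    The same accesses, reordered for locality, trigger a third as many slow-memory fetches; exploiting that is the whole game in cache-efficient algorithm design.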

  10. Scott Says:

    Alpha: I’m not confident that Iran will test a nuclear weapon in (say) the next 5 years, but it seems like a reasonable guess that if they don’t, it will be because they were deterred or prevented (e.g., by the large covert efforts it appears are being devoted to that goal), not because they lacked the will or the capability. Now, what would be the terms of a bet on that belief?

  11. Shehab Says:

    Last spring I was taking my first grad-level algorithms course. I was trying to determine whether a recurrence relation is an oscillating function, i.e. has no Big Oh/Omega/Theta bound, just by looking at it (no pencil and paper allowed). Yet to see any success. 🙁

  12. Mohsen Says:

    As an Iranian, the killing of people all over the world makes me really sad! Really! Also the use of weapons against people, especially nuclear weapons! It becomes more sorrowful when we hear it from a person living in a country that has already used them! Much worse when he is a professor at a great university.
    But I also want to remind people that there are people in Iran trying to solve global challenges; with all the hardship we are continuing our work, under sanctions both from the U.S. and from our own governors.

  13. Scott Says:

    Mohsen: OK, just for you, I removed the reference to Iranian nuclear weapons from the post.

    I expect that next, I’m going to get a complaint from fishermen about my reference to the depletion of fish stocks … and in the limit, I’ll have to take down my entire blog. 🙂

    Look, you don’t have to tell me about the existence of enlightened Iranians who are trying to solve global challenges—not only do I passionately support their efforts and enjoy reading about them (the novel I’m reading right now, Zendegi, hopefully imagines a peaceful revolution in Iran), but I’m privileged to have some as friends. (Well, Iranian expats—if I wanted to enter Iran myself, then at the very least I’d need a new passport without Israeli stamps on it.)

    I was going to respond to some of your other points, but on reflection, we’ve already discussed these things in previous threads and I think it’s enough for this one.

  14. Douglas Knight Says:

    Intrade only has a market on Iranian testing through the end of the year, though it has markets on the US or Israel bombing Iran quarterly through 12/2012. I was quite surprised to learn that the market on Ahmadinejad leaving office by the end of the year is 1/4-1/3.

    Bueno de Mesquita makes very specific, but not easily measured, claims about the Iranian nuclear program. He thinks that few of the leaders particularly care about it, but they care a lot about not getting publicly pushed around. Thus he thinks bombing would encourage the program. It’s not clear whether he thinks clandestine methods would. Stuxnet got a lot more publicity than assassinations of physicists.

  15. Mohsen Says:

    🙂
    It is just a hard time for someone who would love to do research. You have to do compulsory military service, spending most of the time in a town other than your home town, in a hopeless lab, receiving no money!
    With a passport limited to Syria, Venezuela (perhaps North Korea 😉 ). And then to be mentioned like that in your blog!! 😀
    BTW, if I wanted to enter any country without problems, I would need a new passport without any mention that I was born in Iran. 🙂

  16. Andy D Says:

    Thanks, Scott!

    David, to address your question (#7): I agree with Scott that there are a number of possible models by which the notion of limiting memory-access in computation can be (and has been) investigated.

    Now, what kind of model is closest to the activity I explored, namely, computing with our recognition-memory capacity? One very-idealized model is that our recognition memory consists of a large table of bits, with a single bit-position corresponding to each possible memorable photo. (The understanding is that extremely-similar photos actually share a cell.)

    When we see a new image, we look in the corresponding cell to check if it’s familiar or not (0/1). So we have random access to these cells. However, we can’t just “un-familiarize” ourselves with images at will, and by looking at them, they become familiar involuntarily. So one could model this by requiring that every time we read from a position in memory, we automatically write a ‘1’ there afterward.

    I know there have been CS theory papers looking at machines that can only write ‘1’ into their memory, although I can’t recall the references. As for machines that *have to* write a 1 with every read, this might be unexplored.

    I considered doing some formal modeling of this general type in the paper, but decided not to, because

    (a) I wanted to keep things simple and focused;
    (b) I didn’t have the gall to propose a formal model of recognition memory, which would almost certainly be a caricature;
    (c) no one has ever exceeded the storage capacity of human recognition memory, at least in the domain of general imagery. To my knowledge, no one has even put forward a convincing estimate of this capacity. Until someone does, we can safely consider it unbounded.
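
    For concreteness, here is a toy version of that model in code. To be clear, this is my illustrative sketch for this thread, not a model proposed in the paper: a random-access array of bits in which every read leaves a ‘1’ behind, just as viewing an image makes it familiar.

```python
# Toy "read-spoils" memory: bits are random access, but reading a cell
# automatically writes a 1 into it afterward (viewing makes it familiar).
# An illustrative sketch, not a model from the paper.

class ReadSpoilsMemory:
    def __init__(self, n):
        self.bits = [0] * n

    def write_one(self, i):
        self.bits[i] = 1          # only 1s can ever be written deliberately

    def read(self, i):
        b = self.bits[i]
        self.bits[i] = 1          # the read itself spoils the cell
        return b

# Backup copies let a logical 0 survive repeated reads: store one logical
# bit in k cells, and let each read consume one still-unspoiled backup.
mem = ReadSpoilsMemory(3)
reads = [mem.read(i) for i in range(3)]
assert reads == [0, 0, 0]         # each backup answers 0 exactly once
assert mem.read(0) == 1           # re-reading a spoiled cell returns 1
```

    This mirrors the backup-image trick: once a non-matching photo has been viewed, it has to be replaced by a fresh one.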

  17. TeaParty Says:

    To those who aren’t impressed, note that even Google, with its millions of servers, can’t multiply ten-digit numbers without an overflow. Try it.

  18. Alpha Says:

    We need an observable event to decide the winner of the bet. The capability of developing a nuclear weapon, yes, I won’t bet on that; as for the intention of building a bomb, I don’t know, I don’t like getting into the game of “predicting” intentions.

    The fear of terrorism, extremism, and nuclear weapons reminds me of Dr. Strangelove. Unfortunately there are too many paranoid pigheaded idiots in governments and militaries all over the world.

  19. Interzone Says:

    The only genuine catastrophe in the first paragraph of this post is the failure of the LHC to find supersymmetry.

  20. . Says:

    Would you prefer a default now if you find that the odds of defaulting are higher (from a worse situation) 10 years later?

  21. Jiav Says:

    @Scott

    “this method provides probably the most convincing demonstration so far that the human brain, unaided by pencil and paper, can indeed solve arbitrary problems in the class P”

    I don’t get this one. The idea is to replace the use of working memory by the use of visual recognition, an interesting trick since the former is severely limited in humans. But both capacities are known to be limited (as Andy D himself acknowledges, discussing additional tricks to optimize visual recognition). So how could this help solve arbitrary problems?

    @Andy D

    As you mentioned in your paper, we’re prone to error, and I was wondering if we could use some error-tolerant computation to fix this problem. However, our errors tend to follow a power law: Michael Jordan may shoot well 99% of the time, yet miss five consecutive shots, a very unlikely event if his errors followed a normal distribution. If we allow several independent humans to work in parallel, then we have a series of power laws that turns into a single normal distribution. But for a single human, the power-law distribution holds. Would that prevent error-correction schemes?

  22. Ian Finn Says:

    I don’t think this paper directly addresses questions about pure recognition memory parameters for humans, but it does say something about our capacity to store associative memories:

    http://www.ncbi.nlm.nih.gov/pubmed/19966258

    I think it is highly significant that the conclusion from his self study is that associative memory is very much limited, and almost exactly on par with that of our animal brethren.

    disclaimer: the author is a friend of mine and so I might be a little biased, but I think the study is awesome 🙂

  23. Andy D Says:

    Jiav,

    Thanks for your comments. A few notes:

    -There are definitely finite upper bounds to the computational abilities of human brains. What I’m convinced of after my experiment, however, is that we can easily, drastically increase our effective working memory by using the ‘recognition method’.
    Visual recognition memory is not perfect, but it’s very powerful and, as I explain in the paper, its success probability can easily be “boosted” by using backup images.

    -The bottleneck in my experiment–the risk factor that probably would’ve derailed me if I’d attempted, say, a 15-digit multiplication without further practice–was not a failure of the recognition-memory component, but a failure of logic or “control flow”. That is, the risk of looking at the wrong picture by mistake, or forgetting where I was in the algorithm, etc.

    Even with 10 digits, it took a lot of concentration and double-checking to get this right. However, I could’ve simplified my task by storing more of the control flow information into recognition-memory. Given a larger input, or running a more complicated algorithm, this is what I’d do.

    -You’re right that human errors tend to occur in bursts, and that this might threaten schemes of this kind. Using techniques from the theory of fault-tolerance might help address this in very large computations, but at moderate scales I think it’s better to use simple techniques.

    One thing I would say is that, while humans are more error-prone than computers, we are also more conscious of our own mistakes. When I was uncertain whether an image was familiar, I knew it, and could back up and review my work. When I got a not-so-memorable image to learn, I could study it extra-closely or use a backup image. When I was getting tired, I knew it, and would take a break.

    -There are no theorems in my paper. We can speculate about the power of this approach to human computation, but ultimately it’s an empirical question. My hope is that the paper will find readers ambitious enough (perhaps crazy enough) to put it to the test, to take on a wild project and do something really unprecedented with their brains.

  24. Jair Says:

    Very cool! This seems like the kind of thing Martin Gardner would have appreciated.

    I wonder how well this method would work over long periods of time. Some savants can hear a long string of digits once and recall it years later – perhaps a method like this could produce similar results? It would be difficult, since we are constantly bombarded by images in everyday life, some of which may be similar to the ones used in the test.

  25. Amirali Says:

    I’m amazed at how a brilliant paper like this has gotten mixed up in such heated political discussion! As much as I want to avoid commenting on the political part, I can’t resist (as another Iranian) the temptation to add to Mohsen’s comment

    “It becomes more sorrowful when we hear it from a person living in a country that have already used it!”

    that the appearance of *a country* in this sentence should’ve been more appropriately replaced with *the only country*.

    That being said, I can assure you, Mohsen, having lived in the US for a while, that Scott really means the kind words he wrote about Iran in his response to you, and that he, like many other good-hearted Americans, has nothing against the people of Iran.
    —————————-

    But closer to my interest, I had two thoughts after reading this post that I wonder whether you or Andy would have some comments on.

    (1) Somehow in my brain, it’s much easier to add 350 to 150 than it is to add 378 to 549, whereas to a computer this doesn’t seem to make the slightest difference.

    One could argue that our brain has learned over the years a set of simple rules for adding some special “easy” numbers, and these rules can of course easily be coded into a computer so the computer also performs these additions potentially faster than general additions. However, the point is that (I think) if one were to compare the computational cost in any reasonable model of computation, it would take the computer longer just to sort through these sets of rules than to perform the addition the traditional way.

    So it seems like the computer cannot benefit from this structure whereas our brain can. Does that suggest that in this tiny sense a human brain is fundamentally different than a computer brain and maybe even a little better?

    (2) On the remark by Scott:

    ” As I see it, this method provides probably the most convincing demonstration so far that the human brain, unaided by pencil and paper, can indeed solve arbitrary problems in the class P (albeit thousands of times more slowly than a pocket calculator).”

    Is there a fundamental reason why (an extension of) Andy’s Flickr method wouldn’t also greatly enhance our ability to mentally solve considerably larger instances of 3SAT than we currently can?

  26. Anthony Says:

    @ Andy D

    This is a really nice idea.

    One quick question: do you think base 10 is close to optimal? One could imagine working in a much larger base, say 100, maybe with more backup images.

  27. Amirali Says:

    When I think more about my comment (1) above, I see that this is just an artifact of us being seemingly superior to computers in “sorting” and inferior in “number crunching”. I think the same phenomenon is more generally true: if I see a picture of my grandma, I can almost instantaneously sort through the millions of images stored in my brain and realize that this is a face I know, whereas this seems to be a much more cumbersome task for a computer.

  28. Martin Schwarz Says:

    Somehow this method reminds me of off-loading the CPU by utilizing the resources available in the GPU. 😉

  29. Jiav Says:

    Andy D,

    Thanks for your answers. I especially like the idea that we are conscious of our errors, although it’s likely false at least for our most interesting errors (e.g. those known as “cognitive biases”). Two questions, please:

    Do you have some reference in mind for fault-tolerance techniques that can deal with a power-law distribution of errors (without the obvious trick of allowing independent processors/minds)?

    Forgetting about human finiteness, what would be the upper bound on the running time of your algorithm?

  30. Andy D Says:

    Anthony, Jiav,

    Thanks for the feedback! Regarding the runtime/optimality of my approach to multiplication: nothing about my approach is optimal. For one thing, there are faster and less memory-intensive multiplications out there. I chose the multiplication task because the usual naive algorithm *is* memory-intensive (quadratic time and space), and I wanted to simulate such an algorithm. I also wanted to run a familiar algorithm that readers could visualize for themselves, and that wouldn’t confuse me too much.

    Nor is base-10 optimal in any precise sense (although it’s familiar, and probably faster than base 2). As you suggest, one could work in a much larger base B; this would speed up the *learning* of numeric data by a factor ~ (log B), but would slow down the *retrieval* of numeric data by a factor ~ B. For large B this becomes a bad tradeoff for most purposes.
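
    The tradeoff is easy to make quantitative under a deliberately crude cost model (my own back-of-the-envelope accounting, not an analysis from the paper): one photo studied per stored symbol, and up to B candidate photos scanned per retrieved symbol.

```python
import math

# Crude cost model for storing a 10-digit number in base B:
#   learning cost  ~ number of base-B symbols (one photo studied per symbol)
#   retrieval cost ~ symbols * B (scan up to B candidate photos per symbol)

def costs(decimal_digits, base):
    symbols = math.ceil(decimal_digits * math.log(10) / math.log(base))
    return symbols, symbols * base

assert costs(10, 2) == (34, 68)      # base 2: cheap scans, many symbols
assert costs(10, 10) == (10, 100)    # base 10: the familiar middle ground
assert costs(10, 100) == (5, 500)    # base 100: half the learning, 5x the scanning
```

    So a larger base buys roughly a (log B) saving on learning at a linear-in-B cost on retrieval, which quickly becomes a bad deal.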

  31. Scott Says:

    I enjoyed skimming the paper and I think it’s a cute idea. However, I’m a bit confused as to why some think this is such an important idea. Why is it important what we can do without specific tools like pencil and paper, but with something like Flickr? Pencil and paper allow us to record a small amount of information in an easily retrievable way. This Flickr method allows us to record a small amount of information in a less easily retrievable, but still doable, way. I honestly don’t see a huge difference, but maybe I’m not a deep enough thinker. Are we going to start dividing problems into classes based on how easy it is for humans to solve them without specific tools? (E.g., which problems can humans most easily solve with one hand? Which problems can they most easily solve with their eyes closed? Etc.)

  32. steve Says:

    amirali:

    there *is* something special about the fact that those numbers are easier for you to add together, and computers *do* (used to, actually, now they just use more die space) take advantage of exactly the same thing: for many of the x86 processors that we’ve all used, the multiply instruction took a number of clock cycles that was an increasing function of the number of 1’s in the binary representation of the two numbers to be multiplied. fewer 1’s meant it went faster, because there were fewer things to add together. so it literally took longer to multiply harder numbers together. just like it would for us in base 10 — more zeros means more places that we can skip.
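
    in code, the effect looks roughly like this (a sketch of generic shift-and-add multiplication, not actual x86 microcode):

```python
# Shift-and-add multiplication: each 1-bit of the multiplier costs an
# addition, each 0-bit is skipped, so "harder" numbers take more steps.

def shift_add_multiply(a, b):
    result, adds, shift = 0, 0, 0
    while b:
        if b & 1:
            result += a << shift  # only 1-bits trigger an addition
            adds += 1
        b >>= 1
        shift += 1
    return result, adds

assert shift_add_multiply(12, 8) == (96, 1)  # 8 = 0b1000: one addition
assert shift_add_multiply(12, 7) == (84, 3)  # 7 = 0b0111: three additions
```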

    jiav:

    this is what error-correcting codes are for, in particular codes that can handle bursty errors, such as reed-solomon.
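
    to illustrate the burst idea without full reed-solomon machinery, here's a toy construction of my own: a repetition code plus a block interleaver, so a single burst touches each logical bit's copies at most once.

```python
# Repetition code + block interleaving: send the whole message three times
# in a row, so the copies of any one bit sit len(msg) positions apart and a
# single burst of length at most len(msg) corrupts at most one copy per bit.

def encode(bits, copies=3):
    return bits * copies

def decode(stream, n, copies=3):
    # majority vote over the copies of each bit
    return [int(sum(stream[k * n + i] for k in range(copies)) > copies // 2)
            for i in range(n)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
sent = encode(msg)
corrupted = sent[:]
for i in range(8, 16):        # a burst wiping out eight consecutive positions
    corrupted[i] ^= 1
assert decode(corrupted, len(msg)) == msg
```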

    s.

  33. Amirali Says:

    Steve,

    Thanks for your comment. Yes, I agree as I wrote before that all the rules that we humans use for addition can be implemented in a computer and in some cases (like the case of having many 0’s that you mentioned) this can speed things up for a computer as well depending on the algorithm being used.

    But the point I was trying to make is that the numbers that are on average easy to add for a human brain and those that are (or can be made) easy for a computer brain are not always the same. And maybe it would be interesting to try to classify them.

    A better example than what I wrote before perhaps would be: 249+251
    I don’t think anyone would perform this addition the “paper and pencil way” — we would just say: OK, 49+51 is obviously 100 and then we have two 200’s, so the sum is 500.

    This thought process (a.k.a. algorithm) can obviously be implemented in a computer, but I don’t think any computer would benefit from it because as I wrote before it should first go through the task of *recognizing* that this instance has this structure. I argued that the human brain seems to be good at the recognition phase, maybe because when we see 49+51 we are immediately reminded of our experience with 50+50.

    (I understand that when we talk about “easy for a computer” we should think about what happens in binary, but let’s abstract away one level from that.)

  34. Jiav Says:

    steve:

    Thanks for the tentative answer, but error-correcting codes are for unreliable communication channels, not for unreliable computations.

    http://arxiv.org/PS_cache/arxiv/pdf/1105/1105.3068v1.pdf

  35. A Says:

    Beautiful

  36. steve Says:

    jiav,

    weird. i thought that they could be used for both.

    steve

  37. Jiav Says:

    steve,

    I couldn’t find any proofs. It seems this subfield does not attract a lot of attention.

    Andy D,

    I forgot to mention two things: 1) you could improve visual recognition by using very significant pictures, such as familiar faces, words and symbols; 2) as far as I remember, the number of sets of inputs a neural net can discriminate is about sqrt(nb_of_neurons), so 100,000 pictures for the 10G neurons involved in visual recognition is likely close to the limit.

  38. James Andrix Says:

    My takeaway is that we need an audiovisually distinctive base-100 number system.

  39. Andrew S. Says:

    I’m curious why Scott claims “the human brain, unaided by pencil and paper, can indeed solve arbitrary problems in the class P,” instead of the stronger “… can solve arbitrary problems (recursive, etc.).”

    I.e., why the restriction to P?

    Can’t one use this to simulate a universal TM?

  40. Scott Says:

    Andrew: Yes, you can compute stuff outside P, but it will take you more than polynomial time—and because you’re a human, after >>poly(n) steps you’re dead! 🙂

    By contrast, one might have thought that the class of problems “efficiently solvable by the unaided human intellect” would correspond more closely to (say) LOGSPACE.

  41. Jonathan Shewchuk Says:

    Scott, are you going to Oslo? If so, you must see the Vigelandsparken. It is one of the most beautiful and impressive human accomplishments I’ve ever seen.

  42. Scott Says:

    Jonathan: Alas, no. I’ll be departing from Bergen, on a conference/cruise organized by FQXi.

  43. Angelo Says:

    While this paper is indeed fascinating (congrats to the author), the pitch was IMHO mostly marketing. It’s a neat way to “emulate” a computer in our brain, but it doesn’t prove much about the complexity classes the brain is able to tackle, or about the way the brain works (though it does say something about memory), since for small inputs we are clearly able to follow any algorithm without tricks anyway; this only expands the range of inputs we can reason about…

  44. Mohsen Says:

    I agree with Scott (comment #31)! I am also confused by some of the comments; maybe it is because I did not get some of the points.
    @Martin: I have been a GPU programmer since the advent of CUDA, yet I did not get your point. Would you please explain?
    And I think what Amirali said is also interesting; I want to add that I guess there is a different complexity class for the human brain, one totally different from the prevalent Turing classes.
    Although I am not a fan of fuzzy logic, I remember a new approach based on fuzzy logic + neural networks discussed by one of our professors (who I think is a pioneer), which was claimed to solve NP problems in P time. I don’t even know whether they accomplished it or not, but I think it was a cute idea.

  45. mkatkov Says:

    Scott, I’m glad that there is finally an application of abstract theoretical constructions to the “best” (computers still are not great mathematicians) physical model of computation. I’m also glad that you are convinced that humans are at least in P (at the limit of practical computations ;). I’m also sure that even if humans can solve NP-complete problems, you will not be convinced of that until you see a formal proof, with paper and pencil, that it is possible.
    Finally, the representation of the information (paper or cards) is important for the limits of the thinking process, and that is what limits progress in math and science (it even limits the objects that can be represented), so indeed it is very important which problems can be solved using which tools.

  46. Scott Says:

    I’m also sure that even if humans can solve NP-complete problems, you will not be convinced of that until you see a formal proof, with paper and pencil, that it is possible.

    No, I’ll be convinced when I can give you a thousand-vertex Hamilton cycle instance and you can solve it, or a thousand-digit number and you can factor it, in a reasonable amount of time! Or if not you, then someone. I don’t know what about that is so hard to understand.

    The term “NP-complete problem” has a precise meaning in computer science, and you don’t get to redefine it at will. If you think humans have some faculty of creativity or insight that can never be replicated by machine, then fine, just say that! But don’t identify the faculty in question with “solving NP-complete problems,” unless you literally think humans can factor huge numbers and do all the other things implied by such a statement.

  47. John Sidles Says:

    Andy might want to add to his references two delightful works. One is psychologist Alexander Luria’s biography of the Russian mnemonist Solomon Shereshevsky, titled Mind of a Mnemonist:

    “Take the number 1. This is a proud, well-built man; 2 is a high-spirited woman; 3 a gloomy person; 6 a man with a swollen foot; 7 a man with a moustache; 8 a very stout woman—a sack within a sack. As for the number 87, what I see is a fat woman and a man twirling his moustache.”

    The other is Jorge Luis Borges’s short story Funes the Memorious (1942).

    The link between these two is surveyed in a brief article Borges, Luria and hypermnesia–a note.

    It would be very interesting (perhaps) to discover (by experiment) whether these methods worked not with familiar and unfamiliar pictures, but with familiar and unfamiliar books; the point being that human memory for narrative is perhaps even more stable and pattern-selective than human memory for images.

    If there is a common theme among these works, perhaps it is they provide further confirmation that “People are not rational animals, they are rationalizing animals.”

  48. Andy D Says:

    Hi John,

    I’ve read Funes and Mind of a Mnemonist and agree that they’re great fun. Anyone fascinated with the idea of superior memory should read them.

    For those who haven’t read them: Mnemonist‘s hero, S. (a real person, studied by the author), had a powerful synesthesia that seemed to automatically, effortlessly transform words and numbers into sensorily rich, memorable form. He combined this with the “memory palace” technique to develop a powerful trained memory. OTOH, Borges’ protagonist I. Funes, a fictional character, suffers a fall and develops a seemingly unlimited memory. Both individuals are depicted as suffering due to their condition, in particular as incapable of abstract thought.

    I didn’t cite these works because both concern savants, and I’m more interested in what’s possible with an ordinary memory. For this, Joshua Foer’s book Moonwalking with Einstein is a much better resource. What S., the Mnemonist savant, does effortlessly (translate numbers to images), can be done with training as well. The memory-palace technique can also be learned by anyone; but it’s recall-based, in contrast with the “recognition method” I explored, and it seems much more demanding to learn and use.

Borges’ Funes, on the other hand, has little to teach us about how memory actually works. In particular, his visual memory is suggested to be unlimited; yet in truth, there are no reliably documented cases of photographic memory, in which a person can record their visual field at a “pixel level,” if you will. If there were such cases, we would expect one of them to become a dominant force in international memory contests, but this hasn’t happened. The top competitors are ordinary people who’ve trained the bejeezus out of their memory, using memory-palace and related methods.

  49. Andy D Says:

    To my knowledge, two people stand out in the world today as having abilities that might be described as near-photographic memory.

    One is the British artist-savant Stephen Wiltshire, who can draw architecture and cityscapes from memory amazingly well. I know that drawing from memory is an esteemed skill in the art world, and I’m curious whether it’s developed and measured in any systematic way (in contests, e.g.).

The other is Ramon Campayo of Spain, the Usain Bolt of the competitive-memory world, who holds most of the records for “flash memorization” (a discipline he’s largely built up himself). For example, he’s memorized 19 random digits in 1 second, or 48 bits in the same time; see here. He seems to have a teachable technique, and his students do impressively well. He probably has some kind of hardware advantage too, but not a “photographic memory” as we conceive it.

  50. Scott Says:

    Mohsen #44: The moment someone claims they can use fuzzy logic to solve NP-complete problems in polynomial time is the moment I stop listening… 🙂

  51. Amirali Says:

    In case you haven’t seen this already:

  52. John Sidles Says:

    Andy D, thank you for a great topic.

Laura Ingalls Wilder’s book Farmer Boy vividly describes a kind of memorization competition that is surely among the most ancient (yet difficult to quantify):

    “Nick Brown could tell more funny stories and sing more songs than any other man. He said so himself, and it was true. ‘Yes, sir,’ he said, ‘I’ll back myself, not alone against any man, but against any crowd of men. I’ll tell story for story and sing song for song, as long as you’ll bring men up against me, and when they’re all done, I’ll tell the last story and sing the last song.’ Father knew this was true. He had heard Nick Brown do it, in Mr. Case’s store in Malone.”

    I have observed that surgeons too are possessed of prodigious memories, and yet they only occasionally resort to mnemonic devices … instead the process of surgery is conceived as a sequential and partially tactile narrative, whose individual elements an experienced surgeon could no more forget than Nick Brown could forget the middle of a story.

    Yet another book on the various process(es) by which ordinary people systematically learn extraordinary cognitive skills is Stefan Fatsis’ Word Freak: Heartbreak, Triumph, Genius, and Obsession in the World of Competitive Scrabble Players (2001).

  53. Andy D Says:

    Thanks for the references, John! Farmer Boy sounds fun. Indeed, there are many manifestations of “memory power,” each wonderful in their own way.

    There’s one interesting detail in Borges’ Funes that I had forgotten about (one can find it online). Borges says that, even before the fall that gave Funes his godlike memory, the boy had some unusual abilities; in particular, he could tell time down to the minute without a watch.

    I’m curious whether anyone actually has this ability, since keeping time is another fundamental task on which computers seemingly have us beat.

  54. Mohsen Says:

@Scott #50: I agree; just to be accurate, I’d add that obviously no NP-complete problem can be solved in polynomial time using fuzzy logic.
However, if you use another machine with different instructions (e.g., the brain or a quantum computer), the class of feasible algorithms can change.
They claimed that there is a specific instruction in the brain whose function neither fuzzy logic nor neural networks can assimilate. With the brain as our machine, we would have a new class of algorithms, in which prevalent NP problems could be solved in polynomial time.
I myself think that maybe there is a universal theory that can explain the complexity of algorithms together with the machines they run on. (I’m reminded of DNA computers now 🙂 )

  55. Mohsen Says:

I couldn’t wait to comment on Amirali’s link (#51)! It was brilliant.
It also suggests there are instructions in our brains (if we see them as computers) that need to be simulated in our models.
Plus, such instructions must be easy to implement in nature. This reminds me of a TED talk about how smart plants can be!
Perhaps we (plants, animals, and humans) all use the same methods to be smart!

  56. . Says:

This is probably the most confusing way to multiply. There are probably easier tricks better suited to humans.

  57. Scott Says:

    Mohsen #54: Well, whether the brain can be efficiently simulated by a classical computer is not a question that we can presuppose the answer to! What’s known today about biochemistry, neurobiology, etc. would lead us to believe that it can be—so if someone thinks it can’t, then the burden is on them to:
    (1) explain the nature of the computational problem not in BPP that humans can solve efficiently,
    (2) explain the evidence that that problem really isn’t in BPP (i.e., that it’s indeed hard for classical computers), and
    (3) probably most importantly, explain what principles of physics the brain exploits to violate the Extended Church-Turing Thesis. (Is it “just” quantum mechanics? If so, then is the brain at least simulable in BQP? If not, then why not?)

BTW, DNA computers definitely won’t do the job (that is, of solving NP-complete problems exponentially faster than classical computers), the reason being that they’re “just” massively-parallel classical computers. In other words, if P≠NP, then one would expect the amount of DNA needed to solve an instance of size n to increase exponentially with n.
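[To see how fast the molecule count blows up, here is a back-of-the-envelope sketch in Python. The per-nucleotide mass (~330 g/mol) is a rough textbook figure, and the 100-base strand length is an arbitrary illustrative choice; neither comes from this thread.]

```python
# Brute-force DNA search over n-bit candidate solutions needs ~2^n strands,
# one strand per candidate. Constants below are rough illustrative figures.
AVOGADRO = 6.022e23
KG_PER_BASE = 330e-3 / AVOGADRO  # ~330 g/mol per nucleotide -> ~5.5e-25 kg

def dna_mass_kg(n_bits, bases_per_strand=100):
    """Rough mass of DNA needed to encode all 2^n candidate solutions."""
    return (2 ** n_bits) * bases_per_strand * KG_PER_BASE

print(f"{dna_mass_kg(40):.1e} kg")   # a brute-force search over 40 bits is cheap
print(f"{dna_mass_kg(200):.1e} kg")  # dwarfs the mass of the Earth (~6e24 kg)
```

The test tube’s massive parallelism buys a huge constant factor, but the exponential in n swallows any constant almost immediately.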

    For more, you might enjoy my survey article NP-complete Problems and Physical Reality.

  58. Scott Says:

    There probably are easier tricks suitable to humans.

    . #56: Like what, for example? I’m sure everyone is eager for your alternative proposal!

  59. . Says:

I said ‘probably’. I remember that Knuth’s book had a comment on a particular multiplication technique that no calculating wizard had actually used. But to find a better technique (one suited to humans), one probably has to find out how these guys do it. We all have only one known way to get to the answer: logic. I’m sure these guys have plenty of tricks for up to 10-digit numbers.
    http://www.recordholders.org/en/list/memory.html#8×8

But considering that the Flickr method was done in 7 hours on the first attempt, it probably is very suitable for humans, although I find it unintuitive (which is my personal opinion).

  60. Raoul Ohio Says:

    Andy D, re “learning time telling”.

For whatever it is worth, for a half century or so, I have always tried to predict the time before I look at a clock, or the corner of my monitor. My usual method is guessing how long it has been since I last knew the time, then correcting for things that “go fast” or “go slow”. (It is indeed true that time “goes faster” when you are having fun, or doing deep thinking.) My prediction is typically within about 10% of the time interval since I last knew the time. Not too impressive, but fun.

  61. Andy D Says:

    Thanks Raoul! I think I’m pretty good at predicting the 1-minute mark–probably ’cause it’s a common microwave setting. I’ll test myself and report back.

    Hi “.”,

    I’ve acknowledged that there are faster, easier ways to multiply numbers. This holds both for computers and for humans. Moreover, multiplication is LOGSPACE-computable, which means that in theory you can probably multiply huge numbers without any sort of heuristics. I chose multiplication, and the grade-school algorithm, for their illustrative value only.
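[For reference, a minimal Python sketch of the grade-school algorithm under discussion, checking the ten-digit product from the post; the helper name is mine, and this is of course the textbook algorithm rather than anything from the paper itself.]

```python
def gradeschool_mult(a: int, b: int) -> int:
    """Grade-school multiplication: scale a by each digit of b,
    shift by that digit's place value, and sum the partial products."""
    result = 0
    for place, digit in enumerate(reversed(str(b))):
        partial = int(digit) * a          # one partial product
        result += partial * 10 ** place   # shift left by `place`, accumulate
    return result

# The product Andy computed in his head over seven hours:
print(gradeschool_mult(9883603368, 4288997768))  # 42390752785149282624
```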

    The “recognition method” might seem unintuitive at first, but if you sit down and try it, I think you’ll find it’s easy to learn and apply. And the payoff is a greatly-enhanced memory, not just for mental multiplication, but for all kinds of structured data and calculations. I think it’s wild that this ability is just sitting there in our heads, waiting to be tapped. (This is not to claim the technique is “practical”, but just to explain my motivations in doing the experiment.)

  62. Per Hansa Says:

    Scott, I’m dead scared and I don’t know who else to turn to. The ‘resolution’ of the debt ceiling crisis is possibly worse than an actual default–Obama once again showed himself to be a very poor negotiator who holds no principle safe from rape and sacrifice. Can you, with all your wisdom, illuminate something, anything, in this nightmare that might give me reason to hope for the future of America? I know you’re not a political scientist and that is why I am writing to you.

  63. . Says:

    Hi Andy D. Ok I will try it. Thank you.

  64. Scott Says:

    Per Hansa: No, I don’t think this is worse than an actual default. And defense spending will be cut, which is good. And while we can hardly expect basic research to prosper, it seems like Obama is determined to protect it from termination.

    At the same time, I’m sorry: I don’t know how not to see this as an important step in America’s long slide downward. The Republicans showed that they could get almost everything they wanted by holding the country hostage. They’ll no doubt continue to do that year after year, and the Democrats will continue capitulating. The Republicans won on their “principle” of 100% avoidance of the first thing any sane country would do in this situation (higher taxes for the rich). And Obama failed his supporters while also hurting his reelection chances; I still don’t understand what could have been going through his mind to cause him to act (or rather, not act) as he did.

  65. . Says:

    @Scott “I still don’t understand what could have been going through his mind to cause him to act” Was there any other choice when you have a bunch of Yale educated monkeys on the other side? Yah thats right. Seriously does an education at Yale mean anything to monkeys?

  66. . Says:

    Or may be the educational system at Ivy league is giving the wrong ideals. I don’t know since I am not educated at an Ivy league institution.

  67. try harder Says:

    “.”,

    Please try to see the difference between what Scott (#64) wrote–a pointed criticism–and what you wrote: an offensive, pointlessly inflammatory insult.

  68. . Says:

    http://www.theatlantic.com/politics/archive/2011/08/how-political-psychology-explains-washingtons-debt-limit-deadlock/242894/

  69. John Sidles Says:

    Collectively, the STEM professions have always had at least as much to do with economic prosperity and security as any political party or individual leader. For example, since 1946 the Shannon channel capacity of magnetic resonance spectroscopy and imaging devices has doubled (on average) every 11.7 months. Unsurprisingly, this science-and-technology progress was accompanied by a doubling of STEM jobs roughly every seven years during this same interval.

QIT tells us that we are still 28 doublings away from fundamental limits … and so a natural STEM strategy is simply this: sustain historical rates of technological progress. And very fortunately, QIT provides us with precisely the required toolset. We need only set out our STEM roadmaps thoughtfully and plainly to discover that governments and enterprises alike are eager for our success.
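[Taking the figures in this comment at face value, the implied horizon is a one-liner; the 11.7-month and 28-doubling numbers are the comment’s own claims, not independently checked.]

```python
# Figures as stated above: capacity doubles every 11.7 months,
# with 28 doublings remaining before fundamental limits.
months_per_doubling = 11.7
doublings_left = 28

years_left = doublings_left * months_per_doubling / 12
headroom = 2 ** doublings_left

print(round(years_left, 1))  # 27.3 -> about 27 years of historical-rate progress
print(headroom)              # 268435456, i.e. ~2.7e8-fold improvement remaining
```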

  70. Benjamin Says:

    I hope you enjoy your visit here in Norway, Scott! Although I’m not in computer science, your blog is among my favorites.

  71. rrtucci Says:

    Before his visit to Norway, Scott would like everyone there to know that he is a very uncontroversial guy.

  72. Benjamin Says:

    rrtucci, is that a (rather obscure) Alan Dershowitz reference? If so, I’m amused.

  73. rrtucci Says:

    It’s a bad joke. I was ashamed of it as soon as I posted it. I guess I’m scared of the times we live in, just like many others in this blog

  74. . Says:

@try harder: Actually, I was trying to be funny… maybe it isn’t funny! But seriously, I wonder: is it right to refrain from being strongly vocal about something wrong, in order to avoid a bigger calamity?

  75. John Sidles Says:

    For me (and I think for most folks) the civility, humility, sobriety, and good humor of the preceding posts by Benjamin, rrtucci, and “.” … are appreciated, welcome, and encouraging.

  76. Terrence Cole Says:

    @Andy D
    Human image recognition is a subject near and dear to me as well. I wired up your algorithm in javascript yesterday to my image generator, vash. You can find it on my blog here: http://blog.thevash.com/2011/08/how-to-multiply-10-digit-numbers-using.html

    I know how it works, and felt it working, and I was still surprised when I got the right answer.

  77. Mohsen Says:

    Tons of fun reading your survey Scott. It is great!
    @Terrence cool idea!

  78. Andy D Says:

    Hi Terrence,

    Your applet looks wonderful! I will give it a spin shortly.

    I know how it works, and felt it working, and I was still surprised when I got the right answer.

    That’s exactly how I felt 🙂

    I didn’t find Vash, your underlying visual-hashing system, during my literature review (looks like it’s pretty new); but the images look great, and seem memorable too.

  79. John Sidles Says:

    Terrence’s visual-memory web site is terrific, and I urge everyone to give it a try.

That said, I was completely unable even to multiply a pair of two-digit numbers using it … in consequence of a kind of visual “dazzle” that (for me) is associated with an overload of pictorial images.

    Had the web-site been cued not to images, but to the punch-lines of jokes, or Bible stories, or even to minor Harry Potter/ Aubrey/Maturin/ Huckleberry Finn/ Moby Dick characters, then I know (from personal experience) that narrative-centric cues would have worked much better (for me) than visual cues.

    No doubt, for other folks short musical chords, or rhyming limericks, or drum rhythms, would be appropriate cues … the point being, that human cognition is both variable and elastic.

    Terrence, if you created a parallel site, in which the cues were punch-lines of jokes, and memorizing the cue consisted of learning the joke, then for at least some folks, the process of “multiplication” would become very much faster, easier, and more reliable (and funnier too).

  80. Terrence Cole Says:

    @Andy

    Thanks!

    Don’t feel bad about having missed it: Vash launched 3 weeks ago today.

  81. DaveL Says:

    Excellent and thought-provoking paper. I’m struck by the similarities and differences when compared with the venerable “Memory Palace” method of organizing facts.

In a memory palace, facts are associated with objects in a house or room or palace, usually striking and visually memorable objects. The big difference is that during retrieval you mentally revisit many such objects that are (theoretically) very familiar, and each brings its needed fact to mind. Some users of this method (popular in the Middle Ages) could memorize huge corpora of information.

  82. Itai Bar-Natan Says:

For people trying to use this method, I would recommend the Internet Oracle:
http://cgi.cs.indiana.edu/~oracle/digests-all.cgi
It consists of 14,860 (and growing) humorous questions and answers, conveniently grouped into tens. I don’t know how well this works compared to pictures, and I haven’t tried it myself (though I am planning to), but it seems worth a try, and John Sidles (#79) might benefit from it. The Internet Oracle is also pretty fun, quite apart from any “practical applications” in mental arithmetic.