Better safe than sorry

After reading these blog posts dealing with the possibility of the Large Hadron Collider creating a black hole or strangelet that would destroy the earth — as well as this report from the LHC Safety Assessment Group, and these websites advocating legal action against the LHC — I realized that I can remain silent about this important issue no longer.

As a concerned citizen of Planet Earth, I demand that the LHC begin operations as soon as possible, at as high energies as possible, and continue operating until such time as it is proven completely safe to turn it off.

Given our present state of knowledge, we simply cannot exclude the possibility that aliens will visit the Earth next year, and, on finding that we have not yet produced a Higgs boson, find us laughably primitive and enslave us. Or that a wormhole mouth or a chunk of antimatter will be discovered on a collision course with Earth, which can only be neutralized or deflected using new knowledge gleaned from the LHC. Yes, admittedly, the probabilities of these events might be vanishingly small, but the fact remains that they have not been conclusively ruled out. And that being the case, the Precautionary Principle dictates taking the only safe course of action: namely, turning the LHC on as soon as possible.

After all, the fate of the planet might conceivably depend on it.

120 Responses to “Better safe than sorry”

  1. DanF Says:

    Love it! My favorite though (sidebar: just discovered my spellchecker was set to British English; favourite) is that the category “The Fate of Humanity” has so many entries…

  2. James D. Miller Says:

    You are attacking a straw man.

    Let P = the probability of the LHC destroying the earth.

    You are implying that most of the critics of the LHC believe it should be stopped if P>0, but what evidence do you have of this?

    For what values of P would you advocate stopping the LHC?

    (I’m the author of the first article you linked to.)

  3. Adam Says:

    It’s not attacking a strawman: it’s not an exact argument, but it outlines the problems with the claims of people like you.

    What it is saying is that at very, very small levels of probability, the Precautionary Principle does not make sense. The implication of the post is that worries about the LHC are worries about something with a very, very small probability. This claim is true.

    For it to be a strawman, he would have to be claiming what you say he is claiming. He isn’t doing this. He is claiming that people want to stop the LHC based on a very small probability of danger and that this is not a suitable principle.

  4. Scott Says:

    James, my point was not that the critics think the LHC should be stopped if P>0, but that they seem to assume without any evidence that P>Q, where Q is the probability of the earth being destroyed if the LHC is not turned on. If both probabilities could be calculated and if P were orders of magnitude greater, then I think a strong case could be made against the LHC. As it is, I think both probabilities are nonzero, neither can be calculated (nor can their relative order of magnitude), and neither is far enough from zero to be worth expending many brain cycles on.

    I think it’s easy to fall victim to “premature Bayesianism”: that is, trying to be rigorous by demanding probabilities for specific astronomically unlikely events, while implicitly assigning many other related events a probability of 0 because you haven’t even considered them. Economists might consider this an instance of salience bias. From my perspective, though, something like it is probably inevitable when computationally-bounded agents like ourselves try to simulate Bayesian rationality. We’re never going to succeed, since the space of potentially-relevant events is exponentially large, and summing over them would be #P-complete even if we knew the right prior.

  5. Walt Says:

    I knew it, Scott. I knew you were trying to kill us all.

  6. John Sidles Says:

    Of course, I immediately did a Fermi calculation to see if these worries made sense. Let’s see … a cosmic ray proton would have to have an earth-frame energy of 10^17 eV to match the center-of-mass energy of the LHC … consulting Wikipedia for the observed cosmic ray energy spectrum, I find that one of these 10^17 eV rays strikes my house about once per year … so everything looks fine.
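    In case anyone wants to check the arithmetic: for a cosmic-ray proton hitting a stationary proton, the fixed-target equivalence is E_lab ≈ s / (2 m_p c^2), so taking the LHC’s design center-of-mass energy of 14 TeV gives roughly 10^17 eV. A quick sketch (rounded values assumed):

    ```python
    # Fixed-target equivalent of the LHC's center-of-mass energy:
    # a cosmic-ray proton striking a proton at rest matches sqrt(s) = 14 TeV
    # when its lab-frame energy is E_lab ~ s / (2 * m_p c^2).

    m_p = 0.938e9        # proton rest energy, eV
    sqrt_s = 14e12       # LHC design center-of-mass energy, eV

    E_lab = sqrt_s**2 / (2 * m_p)
    print(f"required cosmic-ray energy: {E_lab:.1e} eV")  # ~1.0e17 eV
    ```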

    Uhhh … unless there was a non-obvious flaw in the above reasoning … oh yeah … there was such a flaw … when I used the phrase “cosmic ray proton” … doh!

    What evidence do we have that ultra-high energy cosmic rays are protons? None, really, as far as I know.

    Ok, Bayesians … here’s a challenge for you! … what is the probability that ultra-high energy cosmic rays are *not* hadrons? Because that number lower-bounds the probability that the LHC will create physics not seen since the start of the universe.

    Obviously, I am risking a lot in asking this question … because if the Bayesians all agree, then I may have to start studying Bayesian probability … unless of course the end of the universe should happen to intervene! 🙂

  7. math idiot Says:

    If a mini-blackhole will be created by LHC, there is a significant chance that our earth will be destroyed!

  8. Job Says:

    If we’re not going to run it, then let’s at least threaten our way into lower gas prices.

  9. Scott Says:

    If a mini-blackhole will be created by LHC, there is a significant chance that our earth will be destroyed!

    math idiot: Even conditioned on mini black holes being created, I’d still put the probability of the earth being destroyed at extremely close to 0. (Note that for these questions you can’t even demand probabilities as betting odds, since one side could never collect on the bet.) Firstly, if QM is anywhere close to correct, then whatever process converts subatomic particles into mini black holes has to be able to convert the black holes back into particles (and this conclusion seems independent of the details of Hawking radiation); secondly, if there were an irreversible black hole production process, then it should have destroyed the earth and other astronomical bodies a long time ago.

    As a side note, it’s always struck me how people get more worked up about civilization being destroyed by grey goo or malevolent AI-bots or particle physics disasters, than they do about its destruction by completely non-hypothetical methods: say chopping down all the forests, filling the oceans with garbage and the atmosphere with billions of years’ worth of accumulated carbon… Maybe the fact that the real dangers are (relatively) slow creates a false sense of security, or maybe the fact that they’re real makes them less fun to worry about.

  10. Raoul Ohio Says:

    I suspect the root of James D. and friends’ angst is the statement “will recreate energies and conditions last seen a trillionth of a second after the Big Bang”. I don’t know if this was supposed to be a joke or what, but it has a very low probability of being true for many reasons, such as:
    1. Cosmic rays have much higher energy,
    2. More advanced civilizations probably have much bigger LHC’s,
    3. Consider the LH flux at the business end of a blazar,
    4. etc.

    Does anyone know the background of that statement, and where the estimate T = E-12 s comes from? Sounds like joking around while drinking beer after a seminar.

  11. Job Says:

    I thought that, according to the History channel, there were already black holes on earth – namely in the Bermuda triangle.

  12. Peppeprof Says:

    About the statement: “will recreate energies and conditions last seen a trillionth of a second after the Big Bang”.
    In the http://lsag.web.cern.ch/lsag/LSAG-Report.pdf you can read at page 9:
    “This programme is expected to produce, in very small quantities, primordial plasma of the type that filled the Universe when it was about a microsecond old”.
    I guess the beer had been drunk by the journalist reporting the “statement”.

  13. John Sidles Says:

    I checked that the above LSAG Report does correctly calculate the relativistic center-of-mass correction (Section 2, The LHC compared with Cosmic-Ray Collisions) … which is unsurprising since the authors are experts … but the report then explicitly errs in inferring:

    This means that Nature has already completed about 10^31 LHC experimental programmes since the beginning of the Universe.

    This is true if (and only if) the high-energy cosmic rays are protons. But nowhere in the LSAG Report is there any discussion of whether this assumption is true … the LSAG Report in fact seems completely oblivious even to the fact that this assumption is made.

    Yikes! Because in front of a jury or a judge, any good trial lawyer will tear the LSAG Report to pieces for this omission.

    Scott, shouldn’t the LSAG Report be amended to correct this (rather grave) omission?

    IMHO, a case can be made either way … and either way, there is sure to be trouble over it. That is why it is regrettable that this omission occurred in the first place.

  14. Dan Riley Says:

    wrt #6, according to balloon-borne emulsion and calorimeter experiments, as well as observations of the shower shape and Cerenkov radiation from air-showering cosmic rays, most high energy cosmic rays behave just like hadronic matter with the same mass and charge as the proton. The balloon experiments extend up to around 10^14 eV with fairly good statistics, Cerenkov observations to around 10^16 eV with small numbers of events observed, and shower shapes have been observed to be consistent with hadronic showers at 10^17 eV and above (with quite low statistics).

    wrt the rest, I’m reminded once again why I stopped reading “Overcoming Bias”.

  15. math idiot Says:

    Some people say that if the high energy beams produced by LHC can destroy our earth, then the cosmic rays should have destroyed our earth a long time ago. May I ask if anyone has the data about how high the energy of the cosmic rays can be? Is it higher or much higher than that produced by LHC?

  16. John Sidles Says:

    Dan’s fine post is an example of how hard it is—impossible really—to assign probabilities to eventualities for which evidence is sparse and contextual ignorance is broad … I am still hoping that two or more Bayesians will provide entertainment by trying. 🙂

    I seem to recall well-founded theoretical arguments that there should be no cosmic rays above a certain cutoff (10^20 eV?), essentially because the cosmic microwave background is a damping medium … yeah, here is the Wikipedia page on the Greisen-Zatsepin-Kuzmin limit … it will be very interesting (and great fun too) to hear the implications of the GZK limit debated in court!
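    For what it’s worth, the order of magnitude of that cutoff can be recovered from the photopion-production threshold, p + γ_CMB → p + π; a rough back-of-envelope sketch (my own rounded numbers, assuming a head-on collision with a typical CMB photon):

    ```python
    # Back-of-envelope GZK cutoff: a proton can produce a pion off a CMB photon
    # (head-on collision) once E_p exceeds roughly m_pi * (m_p + m_pi/2) / (2 * E_gamma).

    m_p = 938.3e6        # proton rest energy, eV
    m_pi = 135.0e6       # neutral pion rest energy, eV
    E_gamma = 6.4e-4     # typical CMB photon energy, eV

    E_threshold = m_pi * (m_p + m_pi / 2) / (2 * E_gamma)
    print(f"GZK cutoff: ~{E_threshold:.0e} eV")  # ~1e20 eV
    ```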

    This whole topic has direct ties to broad (and urgent) issues regarding the federative aspects of science and engineering … because the arguments regarding the communal risks of high-energy physics research obviously have correlates in biology … and yes, also in QIT … I would be prepared to argue especially in QIT … IMHO that is why Scott’s keen instincts have recognized these topics as being blog-worthy.

  17. Eyal Ben David Says:

    If finding the Higgs is needed to prevent destruction by aliens, I fear we’re doomed.
    And this is the expert opinion of a psychology undergrad.

  18. milkshake Says:

    1. The fossil carbon sources we are burning are perhaps 200 million years old, so it is not true that we are releasing “billions of years’ worth of accumulated carbon”
    2. Past levels of CO2 were much higher than the present levels and it was good – the crocs were prospering in arctic lakes and conifers were growing in Antarctica (the oceans were a bit higher though, by some 100 meters)
    3. The aliens are going to wipe us out, LHC progress notwithstanding – they (the aliens) have authoritatively proven a theorem that they are alone in the universe. They are naturally upset and more than willing to erase the error

  19. James D. Miller Says:

    The average high energy physicist will receive a much greater benefit from the LHC than will the average human. Consequently, the self-interests of high energy physicists don’t align with those of the rest of humanity in deciding whether to turn on the LHC, so we should be very suspicious of their claims that the LHC is safe.

    Also, has anyone at LHC factored the “great filter concept” into their safety calculations? If not, we really shouldn’t trust their judgement on this issue.

    As a commentator at Overcoming Bias wrote:

    “Because keep in mind that there’s still the Great Filter to explain. Something like the LHC is probably feasible for most civilizations, well before robust interplanetary colonization. If running the LHC experiments results in something (like a black hole?) that destroys the planet, that may well be what happened to all the other advanced civilizations that don’t seem to be around.

    Perhaps we’re about to meet the Great Filter first-hand…”

  20. komponisto Says:

    The average high energy physicist will receive a much greater benefit from the LHC than will the average human. Consequently, the self-interests of high energy physicists don’t align with those of the rest of humanity in deciding whether to turn on the LHC, so we should be very suspicious of their claims that the LHC is safe.

    I don’t understand this argument. They may stand to gain slightly more, but they surely don’t stand to lose any less!

    Their knowledge of physics has led the LHC physicists to the conclusion that the benefits outweigh the risks. Since this is entirely a question of physics, how can we dispute their conclusion when they know more about physics than we do?

  21. Chris Granade Says:

    What? Wiped out by the Great Filter before we become a Type I civilization? That’s a great and crying shame…

  22. Dan Riley Says:

    James,

    How do you propose factoring a “great filter” concept into the safety estimates? It seems to me that any such calculation requires an estimate of the deficiency in observed advanced civilizations (how many we should have observed by now vs. how many we have observed) and an estimate of how likely it is that LHC-type experiments are the cause of the deficiency vs. other mechanisms. I don’t see how any calculation with those factors as primary inputs could have a non-negligible weight in the overall safety estimates.

    wrt the self-interest argument, I would have thought that possible destruction of the earth, even at a minuscule probability, would dominate whatever benefits high energy physicists preferentially derive from the LHC.

    -dan (an LHC physicist)

  23. Raoul Ohio Says:

    Too bad James D. is not in TCS, or he would have generalized his theorem: In most undertakings, smart people have more to gain than the less smart.

    Remark: This inequity can be rectified by putting doofuses in charge.

    Exercise: Generalize to politics, talk radio, great filters, etc.

  24. Robin Hanson Says:

    Scott, I can’t imagine that you really think the chance of immediate destruction given not turning it on is greater than the chance otherwise. I agree that the difference between these numbers is more relevant, and that the other number is rarely discussed. You’d have a stronger case if you argued that the machine will help us understand things faster, which will help us avoid other problems in the future.

  25. KaoriBlue Says:

    So here’s a question – Let’s say it’s 50-100 years from now and we’ve built ultra-advanced and compact, uh… wakefield particle accelerators. Furthermore, let’s do away with reality and say that these accelerators can be built arbitrarily cheaply, can have arbitrarily high particle-collision energies, and any desired luminosity. At what energies/luminosities should I begin to get nervous about the fate of the universe, and how should my fear scale?

  26. Greg Egan Says:

    there’s still the Great Filter to explain

    The most likely (post-technological) “Great Filter” consists of the fact that any sufficiently advanced civilisation will be intelligent enough to make at least the following two observations about reality:

    (a) Exponential growth in resource use is unsustainable in this universe.

    (b) If you’re bound by the physical resources of the universe, introducing a constant factor and limiting yourself to, say, 10^-10 of available physical resources will make no difference to the complexity class of problems you can solve; far better to keep the rest in reserve for any unforeseen dire emergencies (not to mention enjoying the view).

    Additionally, if technological life is common there will be peer pressure to limit resource use to avoid friction with the neighbours.

    That’s why we have witnessed no unmistakable signs of very large-scale resource use. As for why we haven’t been deliberately contacted yet … given that it’s been less than 100 years since we were even aware that the universe was larger than the Milky Way, this question is roughly like asking why an average 2-year-old child hasn’t yet been invited to Princeton. (Throw in lightspeed time lags, and it’s quite possible that nobody else has even noticed that the 2-year-old has been born.)

  27. John Sidles Says:

    Robin Hanson says: Scott, I can’t imagine that you really think the chance of immediate destruction given not turning it on is greater than the chance otherwise.

    I agree with Scott, but for reasons other than the tongue-in-cheek ones he gave … and different also from the “knowledge-is-good” reasons that mathematicians, scientists, and engineers usually give.

    Resolved: The most important thing about the LHC is not what we learn after we turn it on … but rather, what has happened before we turn it on … namely, many tens of thousands of people have to devote a pretty large chunk of their lives to this effort.

    Throughout most of humanity’s past, LHC-scale enterprises were organized around war, religion, and nationality. More recently, large corporations have grown to this scale … and then, starting with the Apollo Program, and continuing with the HGP, the Sloan Digital Sky Survey, etc., more and more scientific and technical enterprises are growing to this global scale.

    A great portion of my interest in modeling and simulation methods is driven by their emergent role in the 21st century as the vital “glue” that holds large-scale enterprises together.

    It’s true that quantum simulation is somewhat harder than classical simulation … but in general, it is not exponentially harder … and quantum simulation’s role as a federative “glue” is (potentially) immensely stronger even than classical simulation.

    Thus (IMHO) the single most important aspect of the LHC is the federative role of the modeling and simulation tools that design it … that give us confidence that it will work … that smooth over nationalistic differences … that keep the people involved … particularly the young people … happy and optimistic about the future.

    It would be cool if we were similarly happy and optimistic, with similarly good technical reason, about the integrity and stability of our planetary biome … wouldn’t it?

    IMHO, we should be that optimistic.

  28. Markk Says:

    Re: Cosmic Rays are Protons
    Having talked to the Amanda and Ice Cube Detector guys at Wisc, I think the physical parameters of these high energy cosmic rays are pretty much those of protons – the decay showers match as well as the showers in accelerators do. So to a high degree of certainty (do we know we are really accelerating protons in the LHC?) many high energy cosmic rays are protons.

  29. James D. Miller Says:

    John Sidles,

    You wrote “The most important thing about the LHC is not what we learn after we turn it on … but rather, what has happened before we turn it on … namely, many tens of thousands of people have to devote a pretty large chunk of their lives to this effort.”

    But what would these people have accomplished had they not worked on the LHC? These lost accomplishments are the greatest cost of the LHC (assuming it doesn’t destroy the earth).

  30. John Sidles Says:

    James D. Miller Says: “But what would these people have accomplished had they not worked on the LHC? These lost accomplishments are the greatest cost of the LHC (assuming it doesn’t destroy the earth).”

    Well, that’s a good point … but tough to answer. My own opinion is straightforward … our planet presently has many more wonderfully talented young people … than it has wonderfully great enterprises to employ them. So IMHO we need more LHC-type enterprises (and 787-type enterprises, Intel-type enterprises, Apple-type enterprises, Disney-type enterprises, Phoenix-type enterprises, etc.) … not fewer.

    That is why (for me) the principal attraction of science and technology in general—and QIT in particular—is its role in creating new enterprises. This creative challenge is (for me) a central theme of the 21st century. The LHC is an outstanding example of such an enterprise.

    However, it is *very* important that not everybody share my opinion … because a diversity of opinion with regard to enterprise is (IMHO) a mark of a healthy and vigorous community.

  31. Jonathan Vos Post Says:

    Scott’s right. It’s not a Physics/hardware problem. It’s a computational problem. And the solution is:

    Program the LHC computers with these three Laws of Asimov:

    1. An LHC-controlling robot may not injure a human being or, through inaction, allow a human being (or the fate of the planet) to come to harm.
    2. An LHC-controlling robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
    3. An LHC-controlling robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    The rest is mere implementation detail.

  32. Scott Says:

    At what energies/luminosities should I begin to get nervous about the fate of the universe, and how should my fear scale?

    KaoriBlue: Personally, I’d get slightly nervous as you approached the Planck energy and started altering the local spacetime geometry, and not before then. To calibrate, I’m someone who gets nervous crossing the street.

    (Michio Kaku had a wonderful passage along the above lines in one of his books, which involved gradually heating an ice cube so that it melts, then vaporizes, then undergoes nuclear disintegration, etc. and eventually reaches the Planck energy—at which point it would “probably be advisable to leave the kitchen.”)

  33. alex Says:

    I’m tempted to just say there has got to be something dangerous about playing with laws of nature we don’t really understand. It’s one thing to invent the wheel, and quite another to run experiments on the particles that make up the universe.

    Now maybe we are in grave danger right now (from aliens or some other source) and we just don’t know it. But I think we can discount this possibility, since aliens or whatever have not seen fit to enslave humanity in the last few thousand years. We can assume that this is likely to be the case for at least a few thousand more years. You’ve got to allow me to make basic inductive arguments like this one, because you accept such arguments all the time – it’s how you are sure the laws of nature will not change from today to tomorrow.

  34. Scott Says:

    Scott, I can’t imagine that you really think the chance of immediate destruction given not turning it on is greater than the chance otherwise.

    Robin, if we only care about “immediate” destruction then I’ll happily agree that P>Q. But when it comes to the eventual destruction of civilization (to fix a date, let’s say by 2100), I really genuinely have no idea which probability is greater. Whatever difference exists between

    Pr[civilization destroyed by 2100 | LHC turned on] and
    Pr[civilization destroyed by 2100 | LHC not turned on]

    seems to me to be dominated by basically unforeseen chains of events, rather than the astronomically-implausible foreseen disasters (mini black holes and strangelets) that everyone likes to focus on. Here’s one example, off the top of my head: the thousands of physicists needed to run the LHC open up jobs elsewhere in physics, some of which are filled by brilliant Iranian physicists who would otherwise work for their country’s nuclear program. No, I don’t actually think that’s likely, and I could have rattled off thousands of similar stories — but that’s exactly the point.

    (Note: You might argue that if I can’t foresee the long-term consequences of turning the LHC on vs. not turning it on, then those consequences should precisely cancel each other out in my expected-utility calculations, which would take us back to the strangelet scenarios. But they don’t precisely cancel out. There’s a small residue, stemming from my belief that scientific understanding has generally been for the good—and that residue completely swamps the infinitesimal probability of a strangelet disaster.)

  35. Daniel Says:

    To Greg Egan, Comment #26:

    I really can’t imagine such isolationist civilizations. Let’s take an isolationist (environment-friendly?) civilization, confining itself to a well-protected volume of fixed radius. And let’s take an expanding civilization, terraforming everything it encounters as fast as it can. It is highly unlikely that the isolationist civilization will survive their encounter in the long run.
    I have my own half-serious answer to the Great Filter puzzle. It is something like the opposite of yours:
    I believe that the expansion speed of an expanding civilization very quickly reaches the exact speed of light (say, within one million years of scientific advance). So an outside observer has just a small time window to observe the expanding civilization before the civilization hits her with full force, terraforming/integrating all the atoms of the poor soul.
    If we believe this, then we can use the Anthropic Principle to finish the reasoning: Why didn’t we observe other civilizations? Because if we had, we would have been killed by now.

  36. Greg Egan Says:

    Daniel, frankly, who knows? Maybe the universe is full of rampaging arseholes just as you describe, but if so there can’t be many of them or we wouldn’t be here. The whole “paradox” part of the supposed Fermi paradox is that if life generally opts for the replicating airheads model, it should have trashed the galaxy long ago.

    In either case, any claim that the mere lack of observable alien infrastructure is evidence for an existential threat that wipes out most advanced civilisations is just naive. Whether advanced civilisations are restrained and subtle, or just plain stealthy, not seeing them certainly doesn’t mean they’re not there.

  37. Anders Sandberg Says:

    Or rather, there cannot be many rampaging civilizations since we are here *on a middle aged planet*.

    The Bostrom and Tegmark paper (How Unlikely is a Doomsday Catastrophe?, Nature, Vol. 438 (2005): p. 754) puts a neat bound on physics risks using anthropic reasoning – in a dangerous universe we should expect ourselves to live on a younger planet. I’m not sure it is applicable against accelerator risks if they are sub-galactic in scale, unfortunately. But it seems to work against rampaging civilisations.

    The collective action assumption inherent in “old civilizations are quiet civilizations” seems problematic: it only takes one post-alien teenager or missionary to send out self-replicating spam to clutter up the universe. For it to work, *every* civilization has to become very good at keeping itself in line.

    I’m personally leaning more towards some variant of Robert Bradbury’s idea that communication lags, energy efficiency, or other practical evolutionary factors act as a strong evolutionary attractor, always producing discreet civilizations.

  38. Greg Egan Says:

    For it to work *every* civilization has to become very good at keeping itself in line.

    And for “accelerator risks” to explain anything either they have to be literally impossible to anticipate, or every technological civilisation needs to have identical blind spots in the way they reason about the physics.

    I’m not sure that we’re really in disagreement, though; you talk about “practical factors” and my argument is practical as much as social/ethical. Indefinite exponential growth is physically impossible. Anyone who is not an idiot will, sooner or later, arrange their affairs to cope with that reality; the fact that a small proportion of present-day humans think “Whee! If we go into space all the bounds are lifted!” is pretty much irrelevant to what mature space-faring civilisations will do.

  39. James Says:

    Compare: arguments against Pascal’s Wager.

  40. John Sidles Says:

    James says: Compare: arguments against Pascal’s Wager.

    We should pray that the LHC turns on successfully? 🙂

  41. Bilal Says:

    You’ve got a good sense of humor Scott! Ha ha ha …. 🙂

  42. tover Says:

    What is the probability that studying and analyzing quantum computers and string theory will give benefit? 1/10^10? 1/10^20? After each year?

  43. John Sidles Says:

    Tover Says: What is probability that studying and analyzing quantum computers and string theory will give benefit?

    That’s easy. One hundred percent. And the benefits already are large.

    Here I am speaking as someone whose main practical interests are in quantum modeling and simulation. Present-day methods of quantum modeling and simulation already are widely used, in every branch of science and technology that presses against the physical limits to size, speed, sensitivity, and power efficiency — which is to say, pretty much every branch of science and technology.

    The mathematical ideas and tools of QIT and geometric quantum mechanics are now revolutionizing our ideas and tools for quantum modeling and simulation. For the past (say) fifty years or so, our ability to simulate quantum systems has been doubling every three or four years, and there seems to be no obvious mathematical or physical obstruction to continuing that doubling for the indefinite future.

    If you think about it, the implications of that statement are pretty profound.

    Thus even if fifty years from now, quantum computers still are less powerful than classical computers (although both will be greatly evolved and it is very possible that they will have merged by then) … and even if nature should turn out not to use the ideas of geometric quantum mechanics (of which it is possible to regard string theory as a subset) … research in these fields will still have been of great value … because it already is of great value.

    The above is very exciting, to be sure … it’s why I read the QIT literature (including the QIT blogs) with great enjoyment and excitement … but I have come to appreciate that the purely mathematical and technical aspects of QIT and geometric quantum mechanics are not the most exciting aspects.

    The most exciting aspects of modern QIT are federative … the application of quantum modeling and simulation tools for team-building … for launching new enterprises … for speeding the cadence and retiring the risk of these enterprises … for creating and tying together meaningful work and new jobs for young people.

    Because what resource does our planet have in greatest abundance? Young people. And if we could ask to have just one resource in abundance, young people would be that resource.

    So this is a good time to say to Scott, thanks for running a great blog, and to thank also all the QIT folks who post so many wonderfully interesting ideas and questions on it! 🙂

  44. KaoriBlue Says:

    John –

    You claim – “For the past (say) fifty years or so, our ability to simulate quantum systems has been doubling every three or four years, and there seems to be no obvious mathematical or physical obstruction to continuing that doubling for the indefinite future.”

    I’m a bit confused by this comment, and while I certainly believe you (about the last 50 years that is), your projection surprises me. Are you talking about better hardware (scaling according to Moore’s law, parallel computer architectures), or better algorithms for simulating quantum systems? I.e., methods for quantum MD, semi-classical approximations, etc.? As I understand it, the scaling is still exponential and pretty bad in the limit of the size of many of the systems that are interesting to simulate.
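    To make that exponential scaling concrete: a brute-force state-vector simulation of n qubits stores 2^n complex amplitudes, so memory alone doubles with every added qubit. A toy sketch (the helper name is mine, not from any library):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a dense n-qubit state vector: 2**n complex
    amplitudes at 16 bytes each (complex128)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    gib = statevector_bytes(n) / 2 ** 30
    # 50 qubits already needs ~17 million GiB (16 PiB) just to hold the state.
    print(f"{n} qubits: {gib:.6g} GiB")
```

    Better hardware buys a constant number of extra qubits per doubling, which is why clever approximations (quantum MD, semi-classical methods, compressed representations) carry so much of the load in practice.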

  45. rrtucci Says:

    Tover Says: What is probability that studying and analyzing APPLES and ORANGES will give benefit?

    That’s easy. One hundred percent. And the benefits already are large.

    Here I am speaking as someone whose main practical interests are in quantum modeling and simulation. Present-day methods of quantum modeling and simulation already are widely used, in every branch of science and technology that presses against the physical limits to size, speed, sensitivity, and power efficiency — which is to say, pretty much every branch of science and technology.

    The mathematical ideas and tools of APPLES and ORANGES are now revolutionizing our ideas and tools for quantum modeling and simulation. For the past (say) fifty years or so, our ability to simulate quantum systems has been doubling every three or four years, and there seems to be no obvious mathematical or physical obstruction to continuing that doubling for the indefinite future.

    If you think about it, the implications of that statement are pretty profound.

    Thus even if fifty years from now, APPLES still are less powerful than ORANGES (although both will be greatly evolved and it is very possible that they will have merged by then) … and even if nature should turn out not to use the ideas of APPLES (of which it is possible to regard ORANGES as a subset) … research in these fields will still have been of great value … because it already is of great value.

    The above is very exciting, to be sure … it’s why I read the APPLES literature (including the APPLES blogs) with great enjoyment and excitement … but I have come to appreciate that the purely mathematical and technical aspects of APPLES and ORANGES are not the most exciting aspects.

    The most exciting aspects of modern APPLES are federative … the application of quantum modeling and simulation tools for team-building … for launching new enterprises … for speeding the cadence and retiring the risk of these enterprises … for creating and tying together meaningful work and new jobs for young people.

    Because what resource does our planet have in greatest abundance? Young people. And if we could ask to have just one resource in abundance, young people would be that resource.

    So this is a good time to say to Scott, thanks for running a great blog, and to thank also all the APPLES folks who post so many wonderfully interesting ideas and questions on it! 

  46. John Sidles Says:

    Kaori Blue, there’s a pretty sharp division between the class of no-noise quantum systems (which includes quantum computers) and the class of noisy systems.

    The latter class encompasses also low-temperature systems ('cuz how does a system get to be low-temperature, except by thermalizing contact with a reservoir?) and also measured-and-controlled systems.

    When it comes to noisy quantum systems, the usual arguments that quantum systems are infeasible to simulate simply don’t apply. There is an excellent FOCS’06 article by Buhrman and collaborators, New Limits on Fault-Tolerant Quantum Computation, that establishes this point with particular clarity and rigor.

    From a heuristic point of view, what has been improving exponentially is our ability to simulate noisy and/or low-temperature and/or continuously observed and/or measured-and-controlled quantum systems. Practical capabilities in this area are becoming quite astonishing … recent articles in Nature and Science by the Kendall Houk / David Baker collaboration are good examples (look for keywords like “theozyme” and “compuzyme”).

    Modern QIT helps us appreciate that this is (fundamentally) a single class of quantum systems—systems that have algorithmically compressible quantum trajectory unravellings. This is probably a deeper definition than “noisy” … but there are a number of tricky informatic issues … like “can we compute physical quantities wholly within the compressed representation?”

    To borrow a phrase from Terence Tao, this is “the type of mathematics where progress generally starts in the concrete and then flows to the abstract.”

  47. John Sidles Says:

    LOL … I wish rrtucci’s reply hadn’t overlapped with mine!

    Yes, the APPLES of algorithmically compressible quantum dynamics and the ORANGES of algorithmically incompressible quantum dynamics … are equally delicious and nourishing! 🙂

  48. Scarybug Says:

    Ha ha! This is pretty much the same satirical argument used to demolish Pascal’s Wager.

    Oh, damn. Reading some of the in-between comments I notice that someone already brought that up.

    Still, brilliant.

  49. Markk Says:

    Wandering far afield… About Greg Egan’s point on expanding vs isolationist civilizations: Geoffrey Landis published some wonderful work in the JBIS about 10 years ago about expansion rates and lifetimes of civilizations in a galaxy. He showed that there is just a lot of space out there! Given species lifetimes, or rather expansive-phase lifetimes, on the order of 10 million years or so — and of course a list of other assumptions that make it all a total WAG — there would be little bubbles of civilization only. It was all parameterized, so you could get anything, but I thought it was interesting that for “conventional” assumptions, you generally got a lot of nothing.

  50. Daniel Says:

    Maybe the universe is full of rampaging arseholes just as you describe, but if so there can’t be many of them or we wouldn’t be here. The whole “paradox” part of the supposed Fermi paradox is that if life generally opts for the replicating airheads model, it should have trashed the galaxy long ago.

    I think “rampaging arsehole” is a loaded term. 🙂 I don’t see non-expansionism as an a priori obvious inter-civilization moral imperative.

    Even if a typical region of the spacetime continuum is a fighting ground of expanding civilizations, kind of like the stable state of this cyclic cellular automaton, it is still possible that very atypical untouched regions exist, and we are placed in one by some sort of anthropic reasoning. (Or, as in His Master’s Voice by Stanislav Lem, our planet is already in a terraformed region, but the terraforming process was very gentle. I don’t find this plausible, but it makes excellent science fiction.)
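    For anyone who wants to play with the analogy: in a standard cyclic cellular automaton with k states, a cell in state s is “eaten” by any neighbour in state (s+1) mod k, and random initial conditions organize into travelling waves of mutually consuming states. A minimal 1-D sketch in Python (all names are mine; this is the generic automaton, not necessarily the specific one linked above):

```python
import random

def step(cells, k):
    """One synchronous update: a cell in state s advances to
    (s+1) % k if either neighbour (wrapping) is in state (s+1) % k."""
    n = len(cells)
    return [
        (cells[i] + 1) % k
        if (cells[(i - 1) % n] == (cells[i] + 1) % k
            or cells[(i + 1) % n] == (cells[i] + 1) % k)
        else cells[i]
        for i in range(n)
    ]

random.seed(0)
k, n = 4, 60
cells = [random.randrange(k) for _ in range(n)]
for _ in range(200):
    cells = step(cells, k)
# By now the line has organized into waves of states chasing
# each other -- the "stable fighting ground" of the analogy.
print("".join(str(c) for c in cells))
```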

  51. Greg Egan Says:

    I think “rampaging arsehole” is a loaded term. I don’t see non-expansionism as an a priori obvious inter-civilization moral imperative.

    Having posited in #35 a civilisation expanding at close to the speed of light and making exclusive use of everything in its sphere of influence to the point that if we’d seen them, we’d soon be dead … you now suggest that I should be more polite about these people? Which part of genocide do you find morally equivocal?

    BTW, I’m not advocating “isolationism”, in the sense of no communication and no [benign] exploration (though as I’ve already remarked, it’s absurd to consider the fact that humanity has not yet been contacted to be some kind of Great Silence). I’m advocating, er, not wiping out your neighbours, and not pointlessly exploiting 100% of available resources when even that won’t give you the exponential growth that (apparently) some people imagine they can’t live without. It’s pretty funny that the only argument anyone can summon against this possibility is “Oh no, that’s impossibly utopian, so either there’s some utterly freakish exception that has allowed us to survive the universal Law of the Jungle, or strangelets must destroy every advanced civilisation”. Yeah, so all your assumptions are unimpeachable, but anthropic let-out clauses and alien LHCs ate all the evidence.

    Markk, I’m not sure what the fundamental difference is between civilisations expanding for 10 million years then stopping their expansionist phase, and civilisations restraining themselves in the manner I’ve suggested. I’m assuming that Landis doesn’t suggest that every space-faring civilisation just dies out, so if their expansionist phase ends, that must be a voluntary act. Which is precisely what I’m claiming: civilisations choose to put a bound on their growth that is far lower than that imposed by the laws of physics alone.

    There’s nothing wrong with civilisations spreading beyond their planet of origin; what’s both morally repugnant and ludicrously anthropomorphic is imagining that the process would be like the Mongol empire or European colonisation in overdrive. If an advanced civilisation wanted to insure against localised disasters, it could fill the galaxy with computing devices the size of pinheads, with, say, a dozen in the vicinity of every star. This could have happened already, and we’d be entirely oblivious to it. If people imagine that there’s an evolutionary imperative that would force such a civilisation instead to keep growing until it’s used up every available crumb of matter and energy, I guess they’re entitled to their hypotheses, but all the evidence (such as there is) is against it.

  52. Markk Says:

    ” I’m assuming that Landis doesn’t suggest that every space-faring civilisation just dies out, so if their expansionist phase ends, that must be a voluntary act.”

    Yes, a combination. I’m trying to find that magazine now, but having no luck. As I recall he meant a combination of species lifetime and just interest in expanding, as you describe. Using my wonderful reconstructive memory (this could all be constructed…): a civilization needs technology, resources, and the will to do long projects, all at the same time and lasting for many tens of thousands of years, to have real expansion. As expansion progresses there will be a loss of coherence due to communication delays, so any civilization could break up culturally. Again, total speculation, but interesting.

  53. John Sidles Says:

    So far, the theorists are doing all the talking on this thread … the experimentalists are listening (SETI), but have no ET signals to report so far … but engineers have said nothing.

    Yet the engineering questions are just as interesting as the theoretical and experimental ones. Here’s one … suppose we humans wanted to get our “expansion” phase underway … as quickly as possible. How fast would that be? How much would it cost?

    One possibility would be to seed the oceans of Callisto, Ganymede, and Europa with Earth bacteria … this might be attempted with resources comparable to the Galileo probe … in fact Galileo was crashed into Jupiter deliberately in order to forestall the (inadvertent) possibility of interplanetary seeding.

    The point being that, from an engineering point of view, we humans already are perfectly capable of initiating our expansionist phase … even though we’re (socially speaking) rather inept … so why is everyone on this forum so confident that other expansionist species in our galaxy—supposing that they exist—have their act together?

    Perhaps our neighbors are neither good, nor evil, nor wise … perhaps they are basically shmendriks pretty much like us.

    In other words … Douglas Adams was right. 🙂

  54. Jonathan Vos Post Says:

    Re: #52, 53:

    “I’m assuming that Landis doesn’t suggest that every space-faring civilisation just dies out…”

    If by “Landis” this means Dr. Geoffrey Landis, the Nebula- and Hugo-Award winning science fiction author, who has written on and simulated percolation models of galactic colonization, then he is by definition an Engineer, having been NASA Professor of Astronautical Engineering at MIT. Though he is most surely not just an engineer.

    See his web site for what I think is the essay in question.

  55. JK Says:

    Here’s a totally different question: let’s say the LHC will meet all scientific expectations. It is expected to cost $8B (but will likely end up costing more than that). What price would be too high to pay for whatever knowledge we might obtain? (Note that no matter how much you value knowledge, that $8B could also have been spent in other fields acquiring other knowledge.)

  56. rpenner Says:

    JK — LHC is complete. The minor contributions of the USA (2% of the magnets, 4% of the expense) were approved over eight years ago, and completed. It’s in the commissioning phase now. The “on” light should be glowing by August. To not use the LHC would be throwing that money away, so your question is moot — unless the LHC reveals how to unwind the past history of the entire planet…

  57. Daniel Says:

    There’s nothing wrong with civilisations spreading beyond their planet of origin; what’s both morally repugnant and ludicrously anthropomorphic is imagining that the process would be like the Mongol empire or European colonisation in overdrive.

    What may be morally repugnant is to suggest without enough evidence that this is a Law of Nature, and then advocate that humanity should choose this path, a self-fulfilling prophecy. I did not intend to advocate anything, especially without any evidence.

    What I did suggest and you may have found morally repugnant was that the maxim of “not rearranging nicely arranged quarks is a nicer thing to do than rearranging them” is anthropomorphic itself. Of course, “rearranging quarks is a cooler thing to do than not rearranging them” is just as anthropomorphic. I’m a big fan of both maxims, and I’m a really big opponent of genocide, honest. But I am a human, so my opinion does not count.

    I think it is very hard for humans to find non-anthropomorphic ideas. But evolution may be one such idea. I’m sure that any civilization contains imperfectly self-replicating agents. Now, let me try to play with emotionally loaded language. You talk about a civilization that bans all of its citizenry (all self-replicating agents, really) from breaking through the Berlin Wall of Reserved Resources. And it maintains this soft-yet-hard Wall perfectly successfully for millions of years. This must be an iron-fisted police state I wouldn’t want to live in.

    …Okay, of course this was just demagoguery. But my real point is: if your civilization spans several thousand light years or more, then you simply don’t have the physical means to control or synchronize all the imperfectly self-replicating agents in your sphere of influence. The rules of evolution apply. (At least on the billion years timescale I’m talking about here.) Now we can argue about what exactly the rules of evolution predict in this case. I don’t know, but I imagine Expanding Space Amoebae, and I really think that the cyclic cellular automaton I already linked to is a useful analogy here, on more than one level.

    As I imagine it, the expansion of a civilization is more like the Big Bang, the birth of new Universes. Another civilization being destroyed is more a natural disaster than an act of evil recklessness. (That’s why I may have sounded so forgiving about it.) By the way, total annihilation is not an inherent part of this picture. The information content of a configuration of particles may have an evolutionary advantage, similarly to their energy content or negative entropy or whatever.

    If you find this fatalistic vision depressing, then probably every healthy humanoid organism agrees with you. I certainly do. But I also find it plausible.

  58. Greg Egan Says:

    The rules of evolution apply.

    And yet here we are, fourteen billion years from the Big Bang, and those who believe that evolution leads to Expanding Space Amoebae are reduced to special pleading and “Great Filter” excuses for the lack of any shred of evidence that it leads to anything of the kind.

    But I am a human, so my opinion does not count.

    On the contrary, the opinions of humans (though probably none who are living today) will certainly shape the opinions of the space-faring civilisation to which we give birth. One of the reasons I get so passionate about this issue (and I hope you’ll forgive me if I’ve seemed discourteous towards you, as opposed to those rampaging arseholes that neither of us would welcome) is because we will eventually reach a point where we have the technology to influence the values of our descendants far more decisively than any previous, purely cultural, processes ever allowed. We will eventually have the means to make our children in our own self-image, and while it would be monstrous to stunt or restrain them in a way that robbed them of the ability to change their own nature and make their own choices about their own children in turn, neither will we be able to shrug off responsibility for their nature and leave it to heredity and chance.

    Now, I grant you that whatever we do with that ability will be both imperfect and unpredictable in the very long term, but we will certainly have the opportunity to inject a massive bias away from an evolutionary random walk. Would it really be such an unbearable “police state” if every child was born smart enough to realise that the total resources accessible to her whole civilisation, whatever she did, obeyed certain inviolable physical laws, and to grasp the fact that using a fixed proportion of those resources would make no qualitative difference to her life? (If I can’t preach constant factors make no difference on Scott’s blog, where can I preach it?) Evolution has wired a very strong urge into us to deny reality on precisely these issues, but if we choose to hack that self-deception out of our nature, it could take evolution a lot longer than 4 billion years this time to slip it back in.

    And granted, there is a value judgement in suggesting that it ought to be universally preferred to live in a peaceful universe with restraint than to fight wars over resources for the sake of growth that is only going to slam against a ceiling that differs by no more than a constant factor from the ceiling you could have lived beneath peacefully … but is that really such a tough call? Remember: the choice is not between restraint forever or exponential growth forever, it is between restraint by choice or restraint at the hands of the laws of physics.

    Now, if it turns out that my whole premise is wrong and advanced civilisations can actually just bud off artificial baby universes with tailored laws of physics where they can run exponentially wild forever without bothering anyone else, then … fine. Let them go crazy. Needless to say, that might be the happiest solution to the Fermi paradox.

  59. Niel Says:

    Regarding spacefaring expansion: we already have plenty of case studies in a toy model — albeit one where the space was the surface of the Earth and the technology ranged as far back as pre-agricultural.

    One instance was the expansion and competition between different species of the genus Homo. Expansion and travel speeds were limited in this regime too, albeit for technological/practical reasons rather than the physical principles we suspect apply in the interstellar regime.

    At a later time, around the dawn of history, there are perhaps more case studies: civilizations rising and falling, often rising from the ashes of previous ones, expanding outwards and contracting. There were more resources and better organization, making propagation and communication times much shorter. The less sophisticated peoples of the time (e.g. the Celts) retreated further and further until they couldn’t retreat any more, either succeeding in finding a place obscure enough that no-one wanted to try too hard to displace them, finding the organization required to resist being displaced, or being swept away entirely.

    The next regime of civilizing space is where all space is taken up by bodies that resist expansion, being either sovereign states, quasi-sovereign states, or vassal regions which are frequently conquered by different civilisations (and sometimes breaking free again) — but where, short of systemic genocide, the populations are persistent across these political epochs. This is the regime where we find ourselves today.

    It is quite plausible that these scenarios may play themselves out in space, separately or in sequence, many times over. If life is rare, I suspect the first paradigm will rule: different civilisations will usually find themselves at great (dis-)advantage relative to others, and will find it difficult to raise resistance against others, but it may take a while for one group to be completely wiped out. If life is common, but not so common that it fills space (yet), conquest will be possible but expensive, and for a time either fight or flight will be viable options — but defeat may be terrible, and flight is ultimately a temporary solution.

    Probably the closest we can come to a strong analogy between Earth history and possible space-faring scenarios is Earth ca. 1000 — 1500: many warring states, limited travel times but reasonably fast communication across any one country, far-separated regions of space just coming into contact with one another, and limited but non-zero population migration between nations (and also limited but non-zero acts of genocide). This is what I would expect in a mature galaxy or universe: many states which resist each other, but not everyone knows about everyone else.

    We can stretch these analogies further forward in history, but ultimately, the invention of the telegraph and the railroad (without a commensurate dilation in the radius of the Earth, of course) ruins the vague analogy of travel and communication times relative to the number of co-existing countries between life on Earth and life in space. (Unless physics is much different than we know.)

    The question most people have been asking is whether we are in the position of the Neanderthals, just as the Cro-Magnons are about to enter Europe, or whether it is our descendants who will play the role of the Cro-Magnons some day; or, less starkly, whether we are more like ancient Sumeria or a modern-day tribe in the Amazon basin. But perhaps a more important question is this: would the dynamics of world-politics be very different, depending on the number of neighboring states and the relative communication and travel times across them? Or does the transition to three spatial dimensions, but with sporadic discrete positions, have an effect which totally undermines any possible analogy to Earth-based geopolitics?

  60. Ian Durham Says:

    The problem is that no one’s coming out in defense of black holes. Would the creation of a black hole by the LHC really swallow up the earth? I’m guessing someone at LHC has run a calculation to estimate the size of any black hole that might be created (I’m fairly certain someone at Brookhaven did this for RHIC) and found it to be fairly small. Plus, black hole dynamics doesn’t immediately mean everything in the vicinity would get sucked in past the event horizon. There are variables to consider, including whether the hole rotates or not, not to mention the conceivable benefit that might be reaped from the streams of energy that might be radiated from the poles.

  61. Nick Tarleton Says:

    Scott: Taking the outside view, what was the last time a discovery in high-energy physics had any practical significance, let alone an impact on existential risk? (This may not be very meaningful, though, given the small sample size.) Also, I don’t know where you’re looking, but it seems to me there are a heck of a lot more people worked up about “chopping down all the forests, filling the oceans with garbage and the atmosphere with billions of years’ worth of accumulated carbon” than gray goo or AI. (Are the first two of those existential risks at all?)

    rpenner: I don’t think JK’s question was about what we should do at this moment.

    Greg Egan: “a ceiling that differs by no more than a constant factor from the ceiling you could have lived beneath peacefully” – there’s also a constant-factor difference between one dollar and a million, but I know which I’d rather have in my bank account. (I realize this may be a terrible misunderstanding of your point.) More generally, I don’t see how the impossibility of exponential growth favors throwing in the towel altogether over going for continued cubic growth (which looks possible until space starts expanding inconveniently fast) and trying to cooperate with any aliens once we meet them. The best way to “keep the rest in reserve for any unforeseen dire emergencies” (or for survival as far into the future as possible), taking into account the vast rate of energy and negentropy wastage by stars, would seem to be to dismantle all the stars you can reach, or if that’s impossible, build all the Dyson spheres you can producing and storing matter-antimatter. Finally, I really don’t see the “special pleading” in saying something like “because expansionist aliens are evidently very rare, we should raise the probability that intelligent life is rare, and the prior probability that we will wipe ourselves out before leaving the solar system.” (Though saying any one thing must be the Great Filter is still silly.)

  62. John Sidles Says:

    Nick Tarleton Says: Scott, taking the outside view, what was the last time a discovery in high-energy physics had any practical significance, let alone an impact on existential risk?

    I’ll take a swing at that … mentioning (confessing?) that I have a degree in particle theory.

    Those who look for a “magic bullet” of science and technology to come from particle theory have to be severely disappointed by the amazing success of the Standard Model over the last thirty years. It sounds odd … but *many* particle physicists would be *much* happier if the Standard Model worked *less* well.

    But Nick asked a very different question — whether particle theory has had any impact on humanity’s planetary-scale existential risk.

    Here the answer is clear — the mathematical tools of particle theory have hugely advanced condensed matter theory … and the resulting exponential growth in computing and in ab initio condensed matter studies has exerted a huge (and arguably highly beneficial) impact on the Malthusian aspects of humanity’s existential risk.

    I will further assert that projects like the LHC help to mitigate another planetary-scale existential risk: global warfare. Here it is the engineering toolset of accelerator physics that provides the “glue” that binds together peaceful planetary-scale enterprises … partly because this toolset helps retire the technical risks and speed the pace of development in LHC-scale enterprises … but mainly because this toolset is (reasonably) objective and non-partisan … thus making it feasible for very different cultures to collaborate effectively and peacefully.

    Do these (indirect) benefits justify an $8B LHC investment? Absolutely (IMHO). Should the LHC be turned on as soon as possible and left on as long as possible, as Scott advocates? Absolutely (IMHO).

    And are the above benefits reasonably invariant under LHC→QIT? … absolutely (IMHO) … and there is an important lesson here for the QIT community.

  63. Phil Says:

    I must also add: what if the LHC were the key to mankind creating new vehicles capable of traveling space in seconds? Or what if it helped us research alternative fuels? Antimatter-powered vehicles, or maybe even using a particle accelerator as a vehicle. 😉

    If we don’t learn from the giant pool of information the LHC would give to us, then we are damning ourselves to primitive ignorance.

  64. Hoju Says:

    Wow, that was almost clever.

    Well done. No wonder you work at MIT. Perhaps someday if you are ever “actually” clever, you might get a nice position at Harvard.

  65. Romu Says:

    All those who’ve played role-playing games know that even with a 100-sided die, you hit the bullseye if you play long enough.
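    Romu’s point is just the arithmetic of repeated independent trials: the chance of at least one success in n rolls at probability p each is 1 − (1 − p)^n, which creeps toward certainty as n grows. A quick sketch (function name mine):

```python
def p_at_least_one(p, n):
    """Probability of at least one success in n independent trials,
    each with success probability p."""
    return 1 - (1 - p) ** n

p = 1 / 100  # one particular face of a 100-sided die
# Around 69 rolls the odds of having seen the "bullseye" pass 1/2,
# and by a few hundred rolls it is close to certain.
print(p_at_least_one(p, 69))
print(p_at_least_one(p, 459))
```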

  66. Greg Egan Says:

    there’s also a constant-factor difference between one dollar and a million, but I know which I’d rather have in my bank account

    I’m not talking about anyone living in poverty. But how hard would you fight to have 10^16 dollars over 10^10? Or to have 10^26 people in your civilisation over 10^20? Personally I’d be very close to indifferent to both of these choices. (Right now I’d also be indifferent to an offer of living 10^10 or 10^16 years, but ask me again in 10^10 years’ time.)

    I don’t see how the impossibility of exponential growth favors throwing in the towel altogether over going for continued cubic growth

    If cubic growth in accessible resources per civilisation can be rigorously established as realistic, then cubic growth in resource use is fine. What I said was a fixed proportion of what is accessible; a constant factor, not a constant function.

    What I am arguing against, primarily, is the notion that exponential growth up until a resource crash or a fight with a competitor is just the natural state that evolution will always manage to instil in any species, and that no matter how smart we are it’s futile to resist that tendency.

    I also don’t consider it some kind of absolute imperative that civilisations will “harvest” 100% of available resources, whatever that might be. Plenty of humans right now (especially those who aren’t in material poverty) consider, say, preserving an area of wilderness to be the best way of “using” it. This might be very quaint and culturally specific, but I find the idea of aliens marching around chanting “Must … turn … stars into … antimatter hoard” just as quaint.

    I realise that some people take the view that if the universe could in principle perform X clock cycles worth of consciousness over its entire history, and if it ends up merely performing Y, then we should treat X-Y as a measure of a kind of tragedy, akin to genocide or at least some terrible natural disaster. I can comprehend this view, but I don’t share it, and I don’t consider it all that likely to be a near-universal cultural attitude. However high X happened to be … if we actually achieved it would we then weep with anguish at the sheer horror of the fact that 10^10 X wasn’t possible?

    In real cultures, I think there’s just as likely to be some kind of long-term accommodation to the fact that we’re (probably) limited to a finite X, and while that’s kind of a shame, maximising Y isn’t necessarily the most satisfying way to get over it (especially if it involves getting partisan and fighting tedious wars over just who gets to be those clock cycles). I realise that I have my own biases on this subject, but a lot of people do seem to come up with models that treat maximising Y as completely beyond debate, or as the only sensible “default” assumption for how advanced aliens would be behaving. When I read someone like Nick Bostrom weeping in Technology Review about how sad it will be if we find life on Mars because it would mean we’re all doomed by the Great Filter, I’m afraid all I can do is laugh. All this po-faced Bayesian “reasoning” about data sets of size zero has descended into farce.

  67. Greg Egan Says:

    The problem is that no one’s coming out in defense of black holes.

    If they really thought a stable hole might arise, you’d hope they’d make plans to trap it and play around with it. And if it equilibrated at a certain mass where anything you fed into it got Hawking-radiated back at you, that would be incredibly cool. Look Ma, no conservation of baryon number!

    But how do you grab a small, neutral black hole? The LHC isn’t going to be producing anything large enough to scatter visible light (that would require a hole mass of the order of 10^20 kg), so an ordinary laser trap wouldn’t stand a chance.
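    Egan’s 10^20 kg figure is just the Schwarzschild radius formula r_s = 2GM/c² run in reverse, asking what mass has a horizon comparable to a visible wavelength. A quick sanity check (constants rounded; names mine):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Horizon radius r_s = 2 G M / c**2, in metres."""
    return 2 * G * mass_kg / c ** 2

lam = 5e-7  # ~500 nm, middle of the visible band
# Invert r_s = 2GM/c^2 for the mass whose horizon matches that scale:
mass = lam * c ** 2 / (2 * G)
print(f"{mass:.2e} kg")  # a few times 10^20 kg, as stated above
```

    Any hole the LHC could conceivably make would be some twenty-plus orders of magnitude lighter, hence far below anything an optical trap could see.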

  68. Rich Gautier Says:

    I believe I read in one of Feynman’s biographies that while attending the testing of the hydrogen bomb, some of those present thought it might set the atmosphere on fire. Thankfully, it didn’t. Not to say that I completely agree with the end results of that experiment… just pointing out that in his biography, one of the most brilliant men to have ever lived didn’t seem all that concerned about seriously unlikely events stopping important physical experimentation.

  69. Jim Says:

    What exactly is so bad about destroying the Earth anyway? At least our deaths would be quick, right? A few thousand tiny black holes would zip through the planet pretty fast and eat it all before we realized.

    Frankly, this seems like a much easier death than we’ll experience when our computers take over the world and bludgeon us to death with their giant metal robot hands.

  70. Nick Tarleton Says:

    But how hard would you fight to have 10^16 dollars over 10^10? Or to have 10^26 people in your civilisation over 10^20? Personally I’d be very close to indifferent to both of these choices.

    Selfishly I suspect I’d be indifferent, but ethically? If the people have good lives (that are different enough from each other that we don’t have to worry about whether multiple instances of the same experience add value), it’s intuitively very implausible to me that the first civilization (or one of the same size that lasts 10^6 times as long, with the same diversity caveat) isn’t ~10^6 times as good even though any individual wouldn’t notice much difference. (See also: scope insensitivity. Though I may have to abandon this intuition in light of the problems with unbounded utility functions… let alone the Repugnant Conclusion.) Also, there’s a question of quality as well as quantity – it’d be nice if individuals could grow their minds indefinitely, without the population or non-mind resource use having to shrink.

    What I am arguing against, primarily, is the notion that exponential growth up until a resource crash or a fight with a competitor is just the natural state that evolution will always manage to instil in any species, and that no matter how smart we are it’s futile to resist that tendency.

    Agreed that we must, and can, resist, but this requires fighting evolution.

    Plenty of humans right now (especially those who aren’t in material poverty) consider, say, preserving an area of wilderness to be the best way of “using” it. This might be very quaint and culturally specific, but I find the idea of aliens marching around chanting “Must … turn … stars into … antimatter hoard” just as quaint.

    Maybe, but humans still exploit the majority of available resources. Anders Sandberg’s “*every* civilization has to become very good at keeping itself in line” also applies here (and to “a lot of people do seem to come up with models that treat maximising Y as… the only sensible “default” assumption for how advanced aliens would be behaving.”). And, very speculatively, infomorphs might eventually come not to care that basement-level matter is arranged in beautiful ways.

    However high X happened to be … if we actually achieved it would we then weep with anguish at the sheer horror of the fact that 10^10 X wasn’t possible?

    As someone generally in favor of maximizing Y, I would say ‘no use crying over impossibilities.’ I agree that we should emotionally accept constraints, but we should still resist scope insensitivity and try to do as much good as reality allows.

  71. Nick Tarleton Says:

    Dangit, why doesn’t <small> work?

  72. Nick Tarleton Says:

    Oops. I said (post awaiting moderation right now) “Maybe, but humans still exploit the majority of available resources.” This is wrong… still, most of the Earth isn’t protected parkland.

  73. Joe Says:

    Jim,

    Well, we hope it would be quick. Because something drawn out over time would really suck.

  74. Sean Says:

    Wow, that was almost clever.

    Well done. No wonder you work at MIT. Perhaps someday if you are ever “actually” clever, you might get a nice position at Harvard.

    You can’t be serious.

  75. Jasmin Says:

    Perhaps the title of the thread should be changed to “When physicists attack…”?

    I haven’t laughed this hard in ages.

  76. Jasmin Says:

    Re: Harvard vs MIT

    MIT 28 Physics Nobel Laureates
    Harvard 10 Physics Nobel Laureates

    BUT Harvard boosts their number by claiming work done outside the university by former faculty members, i.e. they formerly taught at Harvard and later (for work not done at Harvard) went on to become laureates.

  77. G. H. Diel Says:

    Mr. Aaronson, meet Mr. Twain. Thank you.
    GHD

  78. joelpt Says:

    Great post. I dub thee “Aaronson’s Wager”.

    Great comments too. It’s always nice to find some at least halfway intelligent discussion on the interwebs (an increasing rarity).

    To Daniel #35 re: quickly expanding civilizations: it is just as reasonable to presume that there are also civilizations out there that have taken it upon themselves to prevent the unchecked growth of such hypothetical hyper-expanding civilizations.

    Evidently, these “space cops” have either been successful in their efforts, or just haven’t been needed so far in our particular neck of the woods. (Or they just don’t exist as such.)

    Perhaps equally likely is the possibility that our own planet’s lifeforms ARE in fact the product of another hyperexpansive civilization’s successful efforts (e.g. panspermia).

    Who can say?

  79. KaoriBlue Says:

    Hoju – There must be some seriously funny people at IAS.

  80. Michael Says:

    Considering the many-worlds QM interpretation, I don’t think we have anything to fear regardless of the probability of catastrophe. There will always be a universe in which no strangelets or black holes form in the LHC. And since we wouldn’t be around in the ones that did, we will only experience the safe ones for eternity. [See Wikipedia:Quantum_suicide]. Theoretically any conscious being could explode a nuclear bomb while standing on top of it, and would always miraculously survive from his own reference universe.

  81. KV Fitz Says:

    Well,

    we’ll either be destroyed completely, or we won’t be.

    Tell me, folks: how is that any different from any other day you (personally) spend on Planet Earth?

  82. GREG SCHITT Says:

    DOES ANYONE HERE EVER WONDER WHY THERE ARE NO SPACEFARING CIVILIZATIONS OUT THERE BUT FUCKLOADS OF BLACK HOLES???

  83. Bob Holness Says:

    DanF: you mean ‘English’.

  84. milkshake Says:

    it can be explained by the capslock abuse – once the small caps were abandoned, adding emphasis required invention of progressively larger fonts and this eventually exhausted the available resources and the civilisations collapsed under the accumulated weight of gigantic capital letters

  85. John Sidles Says:

    joelpt Says: … it’s always nice to find some at least halfway intelligent discussion on the interwebs (an increasing rarity).

    Hmmm … “intelligent discussion an increasing rarity” … could it be that the Great Filter is already operating?

  86. Jeff Says:

    Surely, the most important point is that the theorists don’t know even approximately what to expect from the LHC experiments, yet the downside is cataclysmic.

  87. Marc Holt Says:

    The day the LHC opening was announced I wrote this story about it just for fun….

    http://www.planetwriters.com/article/fiction/lhc-countdown-report.html

  88. Michael Gogins Says:

    Regarding extra-terrestrial interlopers, is it not clear enough that we don’t have the knowledge base that would be required to establish many useful prior probabilities?

    Is it not also obvious that with our rapidly increasing observational capabilities, we are increasing this knowledge base rapidly?

    Isn’t that cool?

    In the meantime, I would argue that given increases in knowledge that already have occurred, the “filter” that must somehow be at work in order to have kept those pesky interlopers out of our skies, or hair, is increasingly being pushed into the biological, cultural, and political end of the Drake equation.

    Insofar as that makes sense, I would just say: Well, _somebody_ has to be first. Looks like it might be us.

  89. Michael Gogins Says:

    About the speed of light advance of ETI:

    I think this is much easier to do with probes than with actual vessels carrying explorers, colonists, or soldiers.

    Also, it looks like in space you can make a telescope good enough to detect ETI at quite some distance.

    So, I would expect an ETI to be able to detect other ETIs much sooner than it can meet or colonize them.

  90. Michael Gogins Says:

    On the military implications of SETI.

    We have on Earth a “balance of terror.” This looks quite hard to actually get rid of, but if it persists indefinitely it will surely end our civilization.

    There is no reason to think similar balances would not exist between neighboring ETIs. Megaweapons installed and operated remotely are not hard to imagine: near-c massive impacts, focussed starlight from mega-mirrors, plain old nuclear weapons of large size, etc.

    Identifying an attacker to direct retaliation, however, might be difficult if not impossible. This would lead the particularly paranoid to prepare for automated retaliation against all neighbors.

    If this type of military situation exists, then one response is to abandon planets as too unsafe and commit to living exclusively in smaller, space-located habitats.

    That, in turn, would be much harder to detect from afar.

    At the extreme, this could even be a solution to the Fermi paradox. They’re out there, just stealthed.

  91. Anders Sandberg Says:

    Steven B. Giddings and Michelangelo M. Mangano, Astrophysical implications of hypothetical stable TeV-scale black holes, arXiv:0806.3381, is just out and has a nice treatment of the black hole issue. The main part of the paper analyses (using fairly well-tested theory and conservative assumptions) how quickly black holes can be stopped by planets and other objects, showing that if black holes could absorb planets quickly, cosmic-ray-generated holes would also be easily stopped by white dwarfs and would absorb them quickly too – but we observe lots of old white dwarfs, so either the holes are extremely slow or do not exist. They also have a short separate section arguing that black hole decay is robust across theories.

    So the trick to catch stable black holes is to slow them by a block of degenerate matter or a flask of neutronium liquid.

  92. Greg Egan Says:

    So the trick to catch stable black holes is to slow them by a block of degenerate matter or a flask of neutronium liquid.

    I’ll check on eBay.

  93. ScentOfViolets Says:

    Surely you’d have to pin down the components of the exponential growth? Suppose one civilization completely converts the universe into its flavor of computronium. But then, competition for resources would continue within the computronium itself, wouldn’t it? It wouldn’t be an undiluted hegemonic rule by just one individual? Without comment on the falsifiability of the proposition, it seems that proponents of (successful) exponential growth are suggesting that the universe is made over once into the Cosmic Computer, then again as entities inside the computer achieve dominance in the memory space to create their own virtual machine, and then again as one of the residents of the virtual machine achieves complete control, and then again inside the virtual machine inside the virtual machine inside the computer inside the universe . . . is there any meaningful way to assign probabilities to this case?

    Secondly, given the ability to engineer its collective psyche, what would be a good reason for a civilization to prefer unlimited exponential expansion? The only reason I can think of off the top of my head would be to head off any potential competition, whether its precursors are actually observed or not.

    Finally, Lebensraum to the stars seems a bit, er, premature. There might be other directions to expand. In particular, I’m intrigued by the idea of stable structures smaller than atoms, perhaps much, much smaller, perhaps not many orders of magnitude above the Planck length. Then instead of a galaxy-wide empire, there would be the equivalent of thousands of galaxy-wide empires inside the space of a marble, and instead of a duration of millions to billions of years, the equivalent of trillions in a span of seconds. Greg’s notion of limiting oneself to 10^20 rather than 10^26 might seem arbitrary and small, but what about 10^120 instead of 10^126? Surely there must be, at least in principle, a finite upper limit that people would be satisfied with, whether it’s contained inside the universe or not. One species insisting that it and it alone occupy a billion galaxies for ten billion years seems extreme, but one can concede it’s not outside the bounds of mathematical possibility. But insisting on the equivalent of a trillion trillion trillion galaxies for quadrillions upon uncounted quadrillions of years? That strikes me as ludicrous.

  94. KaoriBlue Says:

    I find it a bit strange that folks always assume space-faring civilizations will be something out of Star Trek. The universe, by all accounts, appears to be an extremely hostile place for the development of (human-like) intelligent life – extreme temperatures, extreme pHs, equivalently large and rapid swings in temperature & pH, constant high-energy radiation bombardment, the list goes on.

    On earth, we have plenty of examples of unicellular life (microbes, archaebacteria) that can survive these kinds of extreme conditions, but almost no equivalent examples for slightly more complex multicellular organisms (beyond, say, films of bacteria). And that’s no surprise – relationships between cells in multicellular organisms require both an enormous increase in the level of complexity, and importantly, a suppression of evolutionary change within the subunits. Consider that an individual cell in a multicellular organism, like Scott, is significantly more likely to commit suicide (via an ‘environmentally friendly’ method of apoptosis) if damaged or exposed to toxins than an average bacterium in a film. It’s much harder to work out and benefit from these kinds of complex arrangements if conditions are extreme and constantly changing. Individual cells will be strongly pressured to ‘defect’ when exposed to such conditions – the origin of many cancers. Complexity is also hella-expensive to maintain – I’d like to see any moderately complex multicellular organism match Deinococcus radiodurans’s ability to survive desiccation, space-like vacuums, and having its genome shattered by repeated doses of 5,000 Grays.

    So I’d (personally) place the great filter somewhere between colonial unicellular organisms and truly multicellular organisms like yeast. It’s entirely unsurprising to me that SETI hasn’t turned up anything yet (why would these organisms necessarily care about broadcasting in the EM spectrum?). In my opinion we should be focusing on looking for signatures of the moderately complex organics that are most likely to be used for metabolic purposes.

    Finally, there’s no reason that these ‘dumb’ bugs can’t be space-faring, especially where the planetary gravity well is more forgiving and the atmosphere is thin enough to support some form of bombardment-catalyzed panspermia.

  95. KaoriBlue Says:

    “I find it a bit strange that folks always assume space-faring civilizations will be something out of Star Trek.”

    I should add – “…or seeded by human-like intelligences.”

  96. Daniel Says:

    joelpt: To Daniel #35 re: quickly expanding civilizations: it is just as reasonable to presume that there is also a civilization(s) out there who have taken it upon themselves to prevent the unchecked growth of such hypothetical hyper-expanding civilizations.

    My whole reasoning collapses if the speed of light is exceedable. If it is not, then there are only two possible cases:
    1. There is no physical possibility for anything or anyone to become a space cop.
    2. Some space cop civilization already conquered and rearranged the whole visible universe. What we see as the Laws of Nature are in fact laws introduced by them. I think we can know nothing about their motives, so for most of the time, we can go on with our lives pretending that these are still the unaltered Laws of Nature.

  97. Daniel Says:

    Michael Gogins: About the speed of light advance of ETI: I think this is much easier to do with probes than with actual vessels carrying explorers, colonists, or soldiers.

    Trying to find other civilizations by visiting them with vessels seems really hopeless to me. Self-replicating probes are a more suitable method. But I believe that eventually (in less than some million years of technological progress) civilizations achieve an improved version of teleportation. Let me describe a more environment-friendly version of this, a version Greg Egan will hopefully approve: We emit specially prepared photons or neutrinos in every direction. This light doesn’t interfere with “uninteresting” material, and appears to be background noise to observers. But when it shines on some “interesting” material, e.g. signs of alien life, it transforms a small amount of the material into a probe. The probe figures out the best way to safely interact with its new environment, and emits a signal back about its findings. One can combine this idea with self-replication to improve throughput, but then Greg will not approve it. 🙂

  98. Daniel Says:

    To Nick Tarleton referencing Nick Bostrom (“Agreed that we must, and can, resist, but this requires fighting evolution.”):

    I’ve just skimmed the Bostrom paper. He states that fighting evolution requires a Singleton that prevents unwanted scenarios. Basically, a Singleton is either a Dictator of Civilization, or a State Religion of Self-Preservation. As far as I could understand, he is completely in line with Greg’s posts here (normatively and descriptively). In footnote [22] of the paper, he addresses the very point I already detailed in a previous post: that the speed of light limits the Singleton in interfering with these unwanted scenarios.

    “One solution would be to ensure that the goal-system of all colonizers it sends out include a fundamental desire to obey the basic laws of the singleton. And one of these laws may be to make sure that any progeny produced must also share this fundamental desire. Moreover, the basic law could stipulate that as technology improves, the safety-standards for reproduction (the degree of verification required to ensure that progeny or colonization probes share the fundamental desire to obey the basic constitution) improve correspondingly, so that the probability of defection asymptotically approaches zero.”

    I really think that he underestimates the power of random mutations. My gut feeling is that this might work in a million years scale of technological advancement, but will fail on a billion years scale. I’m interested in the billion years timescale exactly because I believe it is easier to make predictions there. (C.f. “In the long run, we are all dead.”) I think this is true regardless of whether my particular predictions are any good.

  99. Daniel Says:

    To Scott: After this whole future-of-civilizations thread dies down, I promise to cut back to a more reasonable volume of posts.

  100. Ian Durham Says:

    The LHC isn’t going to be producing anything large enough to scatter visible light (that would require a hole mass of the order of 10^20 kg), so an ordinary laser trap wouldn’t stand a chance.

    Mmm, interesting question. Well, if it’s rotating and thus has a small ergosphere, you might be able to tweak it with small pulses of particles or something – use finely tuned ordinary momentum to hold it in place. :p Of course, if it has no ergosphere I have no idea how you’d trap it.

    Considering the many-worlds QM interpretation, I don’t think we have anything to fear regardless of the probability of catastrophe.

    Ah, yes, that would be a convenient way out, wouldn’t it. Well, here are my thoughts on MWI

  101. nomad Says:

    I must admit I haven’t read all the posts here, but I’m also concerned about the startup of the LHC; the chance of a disaster is very small, but still there.

    It reminds me of research about changing the movement and/or form of matter when it is looked at through a microscope — I think it was about quantum research/theory.
    Then the ATLAS detector comes to mind. It isn’t a human being, but how about the people studying this when it starts — can they have a role in the outcome? Maybe even people thinking about and focusing their minds on this can change the numbers in a minuscule way (I’m getting close to talking about my perspective on religion here, but no worries, I won’t start).
    The fact is, the source of the fear some of us have is the chance of creating a black hole. We understand that there is one in the center of a galaxy, which makes lots of solar systems spin around and near it, and also that strange stuff is happening in the center; we just don’t understand exactly what’s going on, or what a black hole is, for sure.

    Creating one such black hole that can sustain itself may provide that answer, if we can study it long enough — because it’s not exactly healthy to be around. Anyway, science seems kind of desperate today, and I wonder why. Do the founders and/or sponsors know something we do not? Is something going to happen (so we are going to need it online for sure)?

    Here are a couple of mind-farts I’ve thought about.

    When/if the potato hits the fan, do we need the LHC to throw Earth into another dimension through a black hole so the species can survive?
    Is it just a big search for the ultimate weapon?

    And last: do we have to know everything this fast? Why the hassle in these experiments? I can’t believe the scientists have hit a wall and the only way through is the LHC.

  102. Greg Egan Says:

    Well, if it’s rotating and thus has a small ergosphere, you might be able to tweak it with small pulses of particles or something

    I don’t think having an ergosphere makes any significant difference. Classically, a 1 TeV hole would have a Schwarzschild radius of about 10^{-51} metres, and the most the ergosphere can be is twice as wide. The cross section for hitting either with any ordinary matter or radiation would be zilch.

    10^{-51} metres is way below the Planck length, though, so quantum gravity would come into play … but it’s hard to see that boosting the cross-section much; I suspect that’s more likely just to quantise the hole’s mass, with the lowest mass around the Planck mass, which is about 10^16 TeV.

    Of course, you don’t have to hit a black hole to have an exchange of momentum; we’ve sent space probes to the outer planets that have stolen momentum from Jupiter without crashing into it. But … gravity is very, very weak, and 1 TeV is very, very light (1.78 x 10^{-24} kg).
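    The numbers quoted above check out. A minimal numeric sketch using standard SI constants (and assuming, as in the comment, the classical Schwarzschild formula r_s = 2GM/c² and the Planck mass √(ħc/G)):

```python
# Verify the figures in the comment: the mass-equivalent of 1 TeV,
# the classical Schwarzschild radius of a 1 TeV hole, and the Planck
# mass expressed in TeV. Rough SI constants; order-of-magnitude check.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
eV = 1.602e-19     # joules per electron-volt

m_tev = 1e12 * eV / c**2          # mass of 1 TeV/c^2: ~1.78e-24 kg
r_s = 2 * G * m_tev / c**2        # Schwarzschild radius: ~2.6e-51 m
m_planck = (hbar * c / G) ** 0.5  # Planck mass: ~2.18e-8 kg
m_planck_tev = m_planck / m_tev   # Planck mass in TeV: ~1.2e16

print(m_tev, r_s, m_planck_tev)
```

    so a 1 TeV hole is indeed ~1.78 × 10^-24 kg with a classical radius around 10^-51 m, and the Planck mass comes out at about 1.2 × 10^16 TeV.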

    Which is pretty much why it’s not a remotely scary prospect … but it’s also a bit sad that we have no hope of catching such a thing, in the unlikely event that it’s created.

  103. Kea Says:

    Clearly the planet cannot be destroyed soon enough. Re Cosmic Rays are Protons: try reading Subir Sarkar (Oxford) for an informed analysis of such questions.

  104. Greg Egan Says:

    Selfishly I suspect I’d be indifferent, but ethically?

    If maximising Y is going to be a “duty” not a pleasure (a bit like Catholic doctrine on birth control), that only makes it seem more likely to me that in reality there’ll be a trade-off far below the true maximum. It’s good to react against self-satisfaction and myopia, but unless “many souls good” also gets to be balanced against other competing factors it degenerates into a kind of extremism.

    Once you accept that exponential growth is impossible, I think the upsides of avoiding conflict and ensuring room for other civilisations (adding to the diversity and interest of the universe in ways that might prove both more just and more interesting than if you tried to engineer those things from the same resources yourself) could easily lead to ancient, highly advanced, but utterly invisible civilisations, full of beings who are very happy with their actual numbers and prospects (which while far less than the absolute carrying capacity of the universe are still astronomically greater than anything in our experience; I agree with ScentOfViolets, but I don’t think they’d even need near-Planck-scale computers to saturate their desire for growth with, say, a pinhead orbiting every star).

  105. John Sidles Says:

    Greg Egan speaks of: “… a pinhead orbiting every star…”

    Thank you Greg … for pointing to yet another instance of Douglas Adams’ being absolutely right! 🙂

    Mr. Adams was among the first to articulate the fundamental informatic bound that civilizations can program themselves to be happy at very low entropic cost, whereas programming themselves to be smart necessarily incurs a much higher cost.

    Gary Larson, Terry Pratchett, Bill Watterson, Scott Adams, Austin Grossman, Max Barry (and many more) have capably extended Douglas Adams’s pioneering informatic insights. Will WALL•E continue the tradition? Let us hope! 🙂

    ——

    The above is written during a lengthy lecture on metastatic osteosarcoma … the kind of lecture where the punch line of a long and horrific case history is “… in the end it turned out to be a metastatic lung cancer (MD audience chuckles appreciatively)”.

    Another zinger: “I’ve only seen two scapular tumors … but I wonder how many scapular tumors have seen me?”

    The point being, that the cognitive needs of any civilization (advanced or not) are subtle indeed … it seems very doubtful to me that any civilization—however advanced—will outgrow its need for humor.

    This is why (to me) the story of The Island of the Blue-Eyed People has always had a strongly comedic element.

    So am I the only person who smiles when reading even the driest informatic article?

    Perhaps galactic civilizations refrain from contacting us because our mathematical and scientific literature is too lacking in humor … which they take as proof we don’t understand it.

  106. Nick Tarleton Says:

    If maximising Y is going to be a “duty” not a pleasure (a bit like Catholic doctrine on birth control), that only makes it seem more likely to me that in reality there’ll be a trade-off far below the true maximum.

    Good point. I didn’t mean to advocate the Repugnant Conclusion, or suggest that our descendants should dutifully but joylessly breed as much as possible; only that while considering what to do now to influence the far future (like engineering motivations) we should consider more life linearly better, all else being equal.

    About ensuring room for other civilizations, also good point but I can’t resist quoting your words back:

    What I regret most is my uncritical treatment of the idea of allowing intelligent life to evolve…. Sure, this is a common science-fictional idea, but… anyone who actually did this would have to be utterly morally bankrupt. To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way. Yes, this happened to our own ancestors, but that doesn’t give us the right to inflict the same kind of suffering on anyone else.

    …or stand by while it happens.

  107. Pat Says:

    So, I should save myself some money and cancel my life insurance policy? 😉

  108. Greg Egan Says:

    I can’t resist quoting your words back

    And I stand by them, but there’s a vast spectrum of possible situations that range from what I’m specifically condemning there — creating a universe-in-a-box and standing back with our arms folded, regardless of what happens, just for the sake of watching the experiment proceed — to our responsibilities in a universe where countless evolved civilisations likely already exist and simply deserve not to be crowded out of existence.

    If we’re ever smart enough to be absolutely confident that we can create new life that will neither suffer in the way our ancestors did, nor be warped or diminished by its relationship to us to the point where it would still be better not to exist, there might be an argument for radical intervention in every biosphere, should we ever have the means. (And maybe my goal in life really should be driving poor Roger Scruton mad by genetically engineering all carnivores into vegetarians.) But whether you find it inconsistent or not, I don’t accept that there’s absolute moral equivalence between active and passive scenarios; I’m sure that bewilders people who think that life is all about maximising utility functions, but there it is.

  109. mitchell porter Says:

    It would be of interest to revisit Robin Hanson’s “Burning the Cosmic Commons”, but with a stealth factor added, to represent caution. Even aggressively expansionist civilizations have reason to avoid creating detectable astronomical signatures, such as a wall of colonization, if they wish to avoid attack by rivals. You could turn that proposition into a whole cosmology, in which the galaxies are populated with vast silent armadas which lie quiet for geological ages, pursuing a policy of mutual non-interference just out of self-preservation, but springing into action whenever an “aggressive hegemonizing swarm” shows up. I wonder whether, with those premises, even a “desert” the size of our past light-cone could look natural.

  110. Varuka Salt Says:

    Linked you up here. http://suicidegirls.com/boards/Current+Events/251609/page15/#post13855683

  111. A. Sceptic Says:

    Where is the Flying Spaghetti Monster when we need him/her/it so much?

  112. Charles Gallagher Says:

    I think all the noise about turning on the collider is just… silly. I mean, besides the fact that scientists are pretty certain nothing’s going to happen, you have to figure there are other more advanced civilizations out there that have tried this, and have survived. I just don’t think it’s that easy to destroy the planet, or Tesla woulda done it.

  113. Philip Polkovnikov Says:

    OMG, I just want to shut that bunch of cowards up! Scientists won’t stop just because “unpros” are asking them to, without any math-proven reasons. If you’re really sure the experiments must be stopped, act decisively. Try to start a demo or whatever!
    P.S. Sorry for emotional style, I can’t stand it anymore.

  114. sysboy Says:

    It’s quite obvious that the LHC won’t destroy the world by creating a black hole.

    That would mean that the World would be sucked to nothingness, which is in no way compatible with the Word of God laid down in Revelation.
    .
    .
    .
    .

    Unless of course there are by some strange coincidence, 144000 people working at CERN.

  115. Asymptote Says:

    Awesome!

  116. KaoriBlue Says:

    I’m convinced that the folks at the LHC are quietly seeding paranoia about the supercollider via a seemingly endless stream of news reports and ~4th-page snippets in the NY Times. Forget traditional press releases, physicists have finally taken a hint from Hollywood and gone all-out on a viral marketing campaign. There couldn’t possibly be a more effective strategy for raising public awareness and interest in particle physics.

  117. ScottO Says:

    So, in these LHC experiments, where do the particles created during the collisions in the detectors go? I know many will just decay, but what about the rest?

    For instance, I assume something like a neutrino could be created. Since it’s stable, where does it go? Doesn’t it just fly off into space?

    What I’m getting at is, if micro black holes were created, and they didn’t decay, where would they go? Sure, they might briefly interact with some earth matter. But wouldn’t they have some huge momentum imparted on them from the collision, and so zip off into space, never to be seen again? And if so, why worry about it?

    Scott

  118. Sparrow Letov Says:

    This is so pathetic! It’s taken 3 million years to evolve monkeys smart enough to begin asking real questions about the cosmos. So far as we know, we’re the only ones who have ever gotten this far. Our time may well be limited. If we don’t solve several very big problems in a hurry, our technological civilization will be over in a century, at most. We’re running a whitewater river between high canyon walls. The only way out is through. There’s no stopping or going back. We’ve built a wonderful tool that may well give us knowledge we’re going to need very badly, but some of the monkeys are scared to turn it on.

    Throw the switch! The chance of the LHC destroying the world is zero, for crying out loud! And if I’m wrong, so be it. I’d rather die asking questions and finding answers than die from stupidity and ignorance, freezing to death in the dark.

  119. John Sidles Says:

    In sympathy and agreement with Sparrow Letov …

    Fundamental science aside, the LHC is a highly public, highly federative, global-scale enterprise.

    From this federative point of view, the LHC is already functioning admirably. This is a good sign for the future … because a planet with ten billion people on it requires many such enterprises.

    It is therefore interesting (to me) to inquire “How is it that this very good thing has come about?” This IMHO is a very fruitful question to ask.

    As for answering that question … well … IMHO we are still pretty far from having a satisfactory answer. So for the present maybe we had better just enjoy our good luck. 🙂

    There is a little bit of “zen” in the above point of view. Maybe the real breakthrough of the LHC is already in our hands?

  120. Jonathan Vos Post Says:

    Let me resubmit with care to avoid the less-than sign which the system mistook.

    Do we know anything interesting about the QC equivalent of:

    Notes from Richard Schroeppel (rcs(AT)CS.Arizona.EDU) concerning A057241
    Tue, 9 Jan 2001

    I’ll offer the sequence 0, 0, 3, 6, 10, ≤ 23.
    “Circuit cost of hardest boolean functions of N inputs.
    Metric: 2-input And-gates cost 1, Not is free,
    Fanout is free, Inputs are free, no feedback allowed.”

    The next term may be pretty hard to compute.
    The 10 took several weeks of Alpha time, and the following term
    is thousands (millions?) of times harder. (There are more than half a
    million functions of five variables; you have to locate a function
    that requires an alleged maximum gate count, do the search to
    show it can’t be done in fewer gates, and supply no-worse circuits
    for all the other functions.)
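
    Schroeppel’s cost model (2-input AND gates cost 1; NOT, fanout, and inputs are free) can at least be sanity-checked by brute force for the smallest nontrivial case. The sketch below (Python; all names are my own invention, not from any published code) does a breadth-first search over sets of computed truth tables for N = 2 and recovers a hardest cost of 3 gates (for XOR and XNOR), consistent with the “3” in the sequence quoted above. It is nowhere near the search Schroeppel describes for five inputs.

```python
from collections import deque
from itertools import combinations_with_replacement

N = 2                       # number of circuit inputs
SIZE = 1 << N               # truth-table length (2^N rows)
MASK = (1 << SIZE) - 1      # bitmask covering one truth table

def input_tables():
    # Truth table of input k: row i holds the k-th bit of i.
    return [sum(((i >> k) & 1) << i for i in range(SIZE)) for k in range(N)]

def closure(tts):
    # NOT is free, so every available function comes with its complement.
    out = set()
    for t in tts:
        out.add(t)
        out.add(~t & MASK)
    return frozenset(out)

def min_and_gates():
    """Minimum number of 2-input AND gates needed to compute each N-input
    boolean function, with NOT, fanout, and inputs free.  Free fanout is
    modeled by keeping every previously computed table available for reuse."""
    start = closure(input_tables())
    cost = {t: 0 for t in start}        # first gate count at which each table appears
    seen = {start}
    queue = deque([(start, 0)])         # BFS by gate count, so first hit is minimal
    while queue:
        state, g = queue.popleft()
        for u, v in combinations_with_replacement(sorted(state), 2):
            t = u & v                   # one AND gate (OR/NAND come free via De Morgan)
            if t in state:
                continue                # a gate recomputing a known table is useless
            for f in (t, ~t & MASK):
                cost.setdefault(f, g + 1)
            nxt = frozenset(state | {t, ~t & MASK})
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, g + 1))
    return cost
```

    For N = 2 this walks at most a few dozen complement-closed states and finishes instantly; `max(min_and_gates().values())` comes out to 3, with XOR (truth table `0b0110`) the hardest case. The state space explodes long before N = 5, which is presumably why the next term took weeks of Alpha time.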