Lecture notes! Intro to Quantum Information Science

Someone recently wrote that my blog is “too high on nerd whining content and too low on actual compsci content to be worth checking too regularly.”  While that’s surely one of the mildest criticisms I’ve ever received, I hope that today’s post will help to even things out.

In Spring 2017, I taught a new undergraduate course at UT Austin, entitled Introduction to Quantum Information Science.  There were about 60 students, mostly CS but also with strong representation from physics, math, and electrical engineering.  One student, Ewin Tang, made a previous appearance on this blog.  But today belongs to another student, Paulo Alves, who took it upon himself to make detailed notes of all of my lectures.  Using Paulo’s notes as a starting point, and after a full year of procrastination and delays, I’m now happy to release the full lecture notes for the course.  Among other things, I’ll be using these notes when I teach the course a second time, starting … holy smokes … this Wednesday.

I don’t pretend that these notes break any new ground.  Even if we restrict to undergrad courses only (which rules out, e.g., Preskill’s legendary notes), there are already other great quantum information lecture notes available on the web, such as these from Berkeley (based on a course taught by, among others, my former adviser Umesh Vazirani and committee member Birgitta Whaley), and these from John Watrous in Waterloo.  There are also dozens of books—including Mermin’s, which we used in this course.  The only difference with these notes is that … well, they cover exactly the topics I’d cover, in exactly the order I’d cover them, and with exactly the stupid jokes and stories I’d tell in a given situation.  So if you like my lecturing style, you’ll probably like these, and if not, not (but given that you’re here, there’s hopefully some bias toward the former).

The only prerequisite for these notes is some minimal previous exposure to linear algebra and algorithms.  If you read them all, you might not be ready yet to do research in quantum information—that’s what a grad course is for—but I feel good that you’ll have an honest understanding of what quantum information is all about and where it currently stands.  (In fact, where it already stood by the late 1990s and early 2000s, but with many comments about the theoretical and experimental progress that’s been made since then.)

Also, if you’re one of the people who read Quantum Computing Since Democritus and who was disappointed by the lack of basic quantum algorithms in that book—a function of the book’s origins, as notes of lectures given to graduate students who already knew basic quantum algorithms—then consider these new notes my restitution.  If nothing else, no one can complain about a dearth of basic quantum algorithms here.

I welcome comments, bugfixes, etc.  Thanks so much, not only to Paulo for transcribing the lectures (and making the figures!), but also to Patrick Rall and Corey Ostrove for TA’ing the course, to Tom Wong and Supartha Podder for giving guest lectures, and of course, to all the students for making the course what it was.

• Lecture 1: Course Intro, Church-Turing Thesis (3 pages)
• Lecture 2: Probability Theory and QM (5 pages)
• Lecture 3: Basic Rules of QM (4 pages)
• Lecture 4: Quantum Gates and Circuits, Zeno Effect, Elitzur-Vaidman Bomb (5 pages)
• Lecture 5: Coin Problem, Inner Products, Multi-Qubit States, Entanglement (5 pages)
• Lecture 6: Mixed States (6 pages)
• Lecture 7: Bloch Sphere, No-Cloning, Wiesner’s Quantum Money (6 pages)
• Lecture 8: More on Quantum Money, BB84 Quantum Key Distribution (5 pages)
• Lecture 9: Superdense Coding (2 pages)
• Lecture 10: Teleportation, Entanglement Swapping, GHZ State, Monogamy (5 pages)
• Lecture 11: Quantifying Entanglement, Mixed State Entanglement (4 pages)
• Lecture 12: Interpretation of QM (Copenhagen, Dynamical Collapse, MWI, Decoherence) (10 pages)
• Lecture 13: Hidden Variables, Bell’s Inequality (5 pages)
• Lecture 14: Nonlocal Games (7 pages)
• Lecture 15: Einstein-Certified Randomness (4 pages)
• Lecture 16: Quantum Computing, Universal Gate Sets (8 pages)
• Lecture 17: Quantum Query Complexity, Deutsch-Jozsa (8 pages)
• Lecture 18: Bernstein-Vazirani, Simon (7 pages)
• Lecture 19: RSA and Shor’s Algorithm (6 pages)
• Lecture 20: Shor, Quantum Fourier Transform (8 pages)
• Lecture 21: Continued Fractions, Shor Wrap-Up (4 pages)
• Lecture 22: Grover (9 pages)
• Lecture 23: BBBV, Applications of Grover (7 pages)
• Lecture 24: Collision and Other Applications of Grover (6 pages)
• Lecture 25: Hamiltonians (10 pages)
• Lecture 26: Adiabatic Algorithm (10 pages)
• Lecture 27: Quantum Error Correction (8 pages)
• Lecture 28: Stabilizer Formalism (9 pages)
• Lecture 29: Experimental Realizations of QC (9 pages)

And by popular request, here are the 2017 problem sets!

I might post solutions at a later date.

Note: If you’re taking the course in 2018 or a later year, these sets should be considered outdated and for study purposes only.

Here’s a 184-page combined file. Thanks so much to Robert Rand, Oscar Cunningham, Petter S, and Noon van der Silk for their help with this.

If it wasn’t explicit: these notes are copyright Scott Aaronson 2018, free for personal or academic use, but not for modification or sale.

I’ve freely moved material between lectures so that it wasn’t arbitrarily cut across lecture boundaries. This is one of the reasons why some lectures are much longer than others.

I apologize that some of the displayed equations are ugly. This is because we never found an elegant way to edit equations in Google Docs.

If you finish these notes and are still hankering for more, try my Quantum Complexity Theory or Great Ideas in Theoretical Computer Science lecture notes, or my Barbados lecture notes.  I now have links to all of them on the sidebar on the right.

111 Responses to “Lecture notes! Intro to Quantum Information Science”

1. John G Faughnan Says:

On page 1 of lecture 2 you use a * superscript, but the notes don’t say what it means. (The PDF pages could also use page numbers :-).

2. Scott Says:

John #1: I’m looking at page 1 of lecture 2, and I don’t see the superscript you’re referring to. In general, though, a superscript * means complex conjugate.

3. Rand Says:

Awesome!

Can we get these in a single PDF? Would be nice to have a single document to reference. (I would combine them for myself, but I imagine that I’m not the only one who would like a single PDF.)

Also, I’m disappointed that John Watrous’ amazing lecture notes didn’t get a shout-out. They got me started in quantum computing and I still point people to them if they want to understand the basics of the subject: https://cs.uwaterloo.ca/~watrous/LectureNotes/CPSC519.Winter2006/all.pdf

4. Scott Says:

Rand #3: Indeed, looking again at Watrous’s notes just now reminded me of how awesome they are! I added a shout-out to them in the post.

Collating everything into a single PDF is a great idea (which had occurred to me as well), but I don’t know a super-convenient way to do it. If you do and were going to do it anyway, any chance you could email me the result and I’ll link to it from the post, with thanks?

5. Sniffnoy Says:

Scott: Quick note, the Watrous link is currently broken due to smart quotes.

6. Job Says:

Suppose you are given an implementation of a function f(x)=y with an associated probability distribution p(y) that is periodic.

That is, p(y) = p(y + d), for all possible y. And we want to find d.

First of all, this variant of period-finding is not in NP, right?

Also, would a quantum computer be able to solve this problem given a quantum implementation of f(x)?

7. Petter S Says:

Here is the combined PDF: https://www.dropbox.com/s/d8fpyzkhwtke6ym/scott.pdf?dl=0

If anyone wants to regenerate it (after e.g. updates from Scott), here is the script: https://www.dropbox.com/s/3mmgscw9wggx0lf/scott.py?dl=0

8. Oscar Cunningham Says:

I made a combined version and emailed it to you.

9. jonas Says:

> At a hate site that I’ve decided no longer to mention by name (or even check, effective today),

I think these promises work better if they have some definite deadline. This could be some specific calendar date (like 2022-08-26), or some other event whose time you can objectively determine and won’t feel tempted to cheat on.

For example, I decided that while the twitch.io website is nice and has its good sides, it is addictive, took up too much of my time, and caused me to stay up too late at night. So I swore that I won’t visit twitch.io until the next time I replace the central components of my primary home computer (motherboard, CPU, RAM) and the new components start to work. This is a deadline that will take a while but not forever, will motivate me to shop for a new computer (a necessary but unpleasant task), and even ties in to the subject of the promise (twitch.io is a website for streaming videos live, and with a more powerful computer I’ll be able to stream a video while working on other resource-intensive tasks at the same time, so twitch.io will disrupt me less).

10. Andrei Says:

What is a good reference for the “superquantum” theories mentioned in Lecture 14 i.e. theories strictly between Tsirelson’s bound and the no-communication theorem? (I only know of the papers of Bradler et al. that you asked about on Physics.SE some years ago.)

11. Scott Says:

Sniffnoy #5: Thanks, fixed! The perils of trying to update HTML on an iPhone…

12. Scott Says:

Petter S #7 and Oscar #8: Thanks so much! (I see that Robert Rand and Noon van der Silk have also made combined files, so now I’ll need to choose which one to link to… 😀 )

13. Scott Says:

Incidentally, Petter S: Thanks so much for your Python script that automatically regenerates a combined file! I would like the ability to run that script myself, since there have already been edits and I foresee more.

So, I set up a Python installation on my computer. But the script didn’t work, because I was missing the PyPDF2 library. So then I downloaded the PyPDF2 library. But I have no idea how to make the Python installation realize that the PyPDF2 library is there. When I run its “setup.py” file, it just says

RESTART: C:\pypdf\setup.py

(This, in a nutshell, is why I became a theoretical computer scientist—because around the age of 19, I somehow completely lost whatever ability I’d ever had to deal with stuff like this. 🙂 )

14. Scott Says:

jonas #9: Yes, you’re right. Let’s give it a 3-year deadline, to be optionally extended (same as I did with Lubos Motl, on the opposite extreme of the ideological spectrum, when his attacks on me started interfering with my life).

15. Scott Says:

Andrei #10: The terms to Google are “nonlocal boxes” or “Popescu-Rohrlich (PR) boxes.” See for example this 2005 paper, which showed that PR boxes would render communication complexity trivial. If you work backwards and forwards from that paper (e.g. using Google Scholar), you can probably find most of the relevant literature.

16. [commenter name missing] Says:

Thanks for putting up your course notes.
In Lecture 2 you give a brief discussion of why the Schroedinger atomic model resolves the radiation problem of the old quantum theory.
This is a point which caused me confusion and I feel is not well explained in most text books and courses, and often overlooked completely, even though it is very straightforward.
The question of why mathematically the model does not describe electrons spiralling into the nucleus, is actually irrelevant. If it predicted electron distributions which were kinematically stable, but which ought to be very lossy according to classical electromagnetic theory, then there would be a contradiction, just as there is for the Bohr-Sommerfeld model.
The real reason why the Schroedinger model resolves the problem is simply that it predicts electron densities for the energy eigenstates which are constant in time, and probability currents which are d.c., so the dipoles, higher multipoles, and current distributions of these states are predicted by classical EM theory not to radiate.

17. Jake Argent Says:

I have a question about the unitary matrix on page 10, with rows [1 0] and [0 i].

When I multiply that matrix by the column vector [1 0], it gives me [1 0] back. So shouldn’t it be interpreted as mapping |0> to |0> and |1> to i|1>?

Or in other words, the given mapping should be realized by the matrix of rows [0 1] and [i 0].

Am I getting this wrong?

18. Scott Says:

Jake #17: That was a mistake. It’s fixed now; thanks!
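For anyone following along, Jake’s reading of the corrected matrix is easy to verify numerically. A quick sketch in Python with numpy (my own addition, not part of the original thread):

```python
import numpy as np

# The corrected matrix: rows [1, 0] and [0, i].
U = np.array([[1, 0],
              [0, 1j]])

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# U|0> = |0> and U|1> = i|1>, exactly as Jake says.
assert np.allclose(U @ ket0, ket0)
assert np.allclose(U @ ket1, 1j * ket1)

# And U is indeed unitary: U† U = I.
assert np.allclose(U.conj().T @ U, np.eye(2))
```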

19. Phylliida Says:

These are amazing!! Thank you so much! I’ve wanted to pick up quantum computing for a while but kept running into people assuming I already had a basic idea of how quantum stuff worked (I don’t). I understand algorithms and modern computational complexity research, but quantum stuff was always just weird and confusing to me. When I found tutorials, people kept discussing things alongside them that weren’t relevant algorithmically, using gross, weird notation. I had this problem with Quantum Computing Since Democritus.

I finally found the Quantum Computation section in Computational Complexity: A Modern Approach. It was the first bit of content of quantum computation I found that didn’t require knowing anything about quantum mechanics, yet explained Bell’s Inequality, Grover, Simon, and Shor’s algorithms in ways I could understand (most of which didn’t even use complex numbers! Just negative amplitudes). I’ve been going through your lecture notes now and they seem to be equally as approachable, which is nice 🙂

They do seem like they get on a lot of tangents though. I guess that is nice for a comprehensive picture, but I wish there were more guides like in “Computational Complexity: A Modern Approach” that only discussed the concepts needed for quantum algorithms in clean ways that didn’t require any knowledge about quantum computing beforehand.

Anyway, still, thanks for sharing, they are enjoyable reads 🙂

20. Scott Says:

Phylliida #19: Allow me to recommend the time-honored technique of skipping the parts that aren’t relevant to you. 🙂

21. pat Says:

You can try the auto-latex equations extension for Latex in google docs:

22. b Says:

There’s not enough nerd whining in this post. Too much compsci.

23. Daniel Seita Says:

Scott #13

From the path, it looks like you are using Windows. Ubuntu might be easier to deal with for running Python code.

24. Israel Says:

Listened to your Strachey Lecture on Quantum Supremacy.
It was witty, entertaining and informative.

Thank you.

My best wishes to you and your loved ones.

PS: Please ignore the haters. They will be consumed by their own hatred.

Mitzvah gedola lihiyot b’simcha tamid. ( even when it is very, very hard )

25. James Gallagher Says:

These are standard (i.e., boring) lecture notes, so OK, just teach the prerequisites. But why try to indoctrinate newbies with all kinds of non-science?

Your lecture on “interpretations” fails to even mention the most obvious “interpretation”, that nature is just random and that is that.

The fact that we have unitary evolution and preservation of the 2-norm is just simple observational evidence, after that, once you allow randomness then Gleason’s Theorem gives you the Born Rule as the only sensible mathematical law for understanding observations.

If one is trying to construct the Born rule from their interpretation, then they are already lost, the only proper question left to answer, 100 years after Solvay is, “where does the randomness come from?”

(not, “how do we mimic the Born Rule with a silly deterministic system” (or dumb dynamical collapse model))

26. Michael Says:

Thanks Scott, the notes look great!

Flicking through, it looks as though a diagram on page 122 has slid off the page.

27. Petter S Says:

Scott #13: “pip install PyPDF2” should hopefully make it work. Then you need to do the same for “requests”.

28. Petter S Says:

Scott #13: So you lost the ability to deal with stuff like this? I don’t believe you! You still write papers in Latex, right? 🙂

29. Egg Syntax Says:

Have you considered doing video recordings of the lectures themselves? That would pair beautifully with these lecture notes! Might be too late for the first week, but it seems like the university’s AV Department would be willing to do that for you. Barring that, even phone recordings of the lectures would be a treat!

30. Scott Says:

James Gallagher #25: There’s a reason why “nature is just random and that is that” isn’t called an “interpretation”—because it doesn’t even try to answer the glaring question of how Nature decides exactly when to suspend unitary evolution and introduce randomness (or whether no decision ever needs to be made, it being all a matter of perspective—as for example the MWIers say, and ironically enough modern Copenhagenists also say) (or whether all the random choices were already made at the beginning of time, as the orthodox Bohmians say).

If you find the notes boring (or if you already know this material), then don’t read them. While I may or may not have done them justice, I certainly think the intellectual developments being covered were some of the most exciting ones of the past half-century, in any part of math, science, engineering, or philosophy, and I was gratified that many of the students thought so too.

31. Scott Says:

Michael #26: Thanks! That was a strange display error that wasn’t present in the Google Doc, and only appeared when creating the PDF. It’s fixed now.

32. Scott Says:

Petter S #27-28: I use Scientific Workplace (a graphical front-end for LaTeX), but I still need to waste days of every year messing around with LaTeX packages and fonts, and I despise every minute of it.

Fortunately, I was pointed to a wonderful program called PDFtk, which lets me concatenate PDFs super-quickly without needing to mess around with Python scripts—so my learning how to do the latter has been (again) deferred to another day! 😀

33. Scott Says:

Egg Syntax #29: When I post video lectures, people complain about my stuttering and mannerisms (oddly enough, my lecture style seems fine in person; it just doesn’t translate to video). When I don’t post video lectures, people complain that they’re not available. So I can’t win either way. I probably won’t do video this year, but maybe in a future year, once the delivery is more fluid?

34. fred Says:

This is awesome! Thank you so much!

35. Ben Bevan Says:

Great stuff Scott. Always enjoy your approach as it’s often quite different from the other sources out there. Really nice to have your take on this material.

36. lewikee Says:

Thank you so much for this, Scott. Remarkably succinct for the amount of (counter-intuitive!) information being presented.

37. fred Says:

In lesson 2, it’s written that the column state [1/2 0 0 1/2] isn’t feasible, but right after, a transformation (controlled-NOT) is introduced that leads to that state.

38. Joe Says:

Scott this is great! I’m exactly the person you described — I’ve read QCSD but I wanted some more basics too. Any chance you will upload the problem sets you used for this course? I’d love some exercises to make sure I’m following along properly.

39. lewikee Says:

fred #37: Yeah I noticed that. The only way I could reconcile the (apparent?) contradiction was that it said the vector could not be a tensor product. Perhaps the latter, transformed vector is not technically a tensor product?

40. fred Says:

lewikee #39

Oh, I see – if you start flipping the two coins/bits based on some rules (depending on the combined values), you end up with a state that can’t (always) be obtained from combining the separate (independent) vectors for each coin/bit.

41. Scott Says:

Joe #38: OK, homework sets are now posted!

42. Scott Says:

fred #37: Yeah, the resolution is simply that in this case, the CNOT gate maps a product distribution to a distribution that we previously proved can’t be a product distribution, but is instead correlated. I edited the notes to clarify.
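Scott’s resolution is easy to check numerically: treat a classical two-bit distribution as a 4-vector over the outcomes {00, 01, 10, 11}, and CNOT as the permutation it induces on those outcomes. A sketch in Python with numpy (my own illustration, not from the lecture notes):

```python
import numpy as np

# CNOT as a permutation on outcomes |00>, |01>, |10>, |11>:
# flip the second bit whenever the first bit is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Product distribution: first bit is a fair coin, second bit is always 0.
p = np.kron([0.5, 0.5], [1.0, 0.0])   # [1/2, 0, 1/2, 0]

q = CNOT @ p                           # [1/2, 0, 0, 1/2] -- correlated

# q can't be a product (a, 1-a) x (b, 1-b): that would require
# a*b = 1/2, a*(1-b) = 0, and (1-a)*b = 0 simultaneously, which is impossible.
```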

43. Jonathan Paulson Says:

Minor typo, I think. Lecture 3 page two has “And we define as the inner product of ket x with ket y”. Should it be “bra x with ket y”?

44. Jonathan J Paulson Says:

Minor typo? Lecture 3 page 2 the line “Remember: the way we change quantum states is by applying linear transformations” has U transforming a column vector into a row vector. Shouldn’t the output still be a column vector?

45. Ryan O'Donnell Says:

I’m teaching a Quantum Computation and Information course at CMU this semester and will have videos 🙂 I’m not an expert in the topic like Scott, but perhaps you may still like it. Check out my YouTube (click on my name, above) starting in about a week…

46. Confused Says:

Hey Scott, I have a general question about quantum information, I am sure it would require a lot to actually answer but hopefully there is some sort of vague answer that suffices.

So, take the stance of “it from qubit” as a genuine claim of ontology, that everything physical in the universe can be described in terms of two-valued quantum systems. My question is basically, how does the reduction of all quantum systems to two-valued systems work?

As an example, if we want to apply “it from qubit” in that way, we would say “to explain the physical process of taking an egg and frying it, we would need to describe a very large quantum state that describes the system in its original state, and then apply a series of gates that transform that system into the desired output state”. My question is really just, when real-life quantum mechanical systems are so much more complicated than simple two-valued systems like electron spin or photon polarization, how is the reduction actually done?

Is it just a coding argument, in the same way that computer science, as well as mathematical logic and metamathematics, encodes information? I am familiar with and understand how Turing machines can only deal with 1s and 0s but those 1s and 0s encode sets of natural numbers… is the “it from qubit” ontology also just an encoding of physical systems as qubits? I am still confused about the idea even if the answer is just “yes”.

47. James Gallagher Says:

Scott #30

oh come on, Nature just does the random choices at Planck timescales, so several trillion trillion trillion random choices every second, each one followed by the entire universe updating via a unitary evolution.

There are probably many universes existing like this with different unitary evolution rules, and many not existing for more than a small timescale because they have non-unitary updates which cause infinite or zero states.

And only a few with a sufficiently stable unitary evolution law to allow ~13 billion years of evolution and Solar Systems to form etc

I mean, come on, if you’re gonna teach MWI and Penrose gravitational collapse models, then teach this crap too

48. Scott Says:

James #47: So Nature makes random choices once per Planck time. Then presumably the wavefunction is also collapsing in some local basis once per Planck time (since that’s the only time random choices are needed in QM)? But that would lead to a drastically different world than the one we observe, a world where the double-slit experiment and other basic quantum phenomena (which happen over MUCH longer timescales than a Planck time) wouldn’t work. Or alternatively, if the once-per-Planck-time random choices are simply “saved up” for whenever a wavefunction reduction happens, then what DOES determine when (and in which basis) reduction happens? You’ve then simply pushed the measurement problem somewhere else and made zero progress on it. Either way, this proposal is dead on arrival.

49. Scott Says:

Confused #46: The spins of electrons and the polarizations of photons are literally qubits. But I don’t know anyone who believes that ALL the basic constituents of our best description of reality must literally be qubits. At the least, some of them would presumably be qutrits (3-dimensional quantum systems) or other higher-dimensional things, and more generally, the Hilbert space need not have any “canonical” factorization into qubit-like components (as the Hilbert spaces of boson and fermion occupation numbers already don’t).

For a computer scientist like me, though, the more important question is whether the Hilbert space of the observable universe is finite- or infinite-dimensional. The holographic entropy bound, together with the assumption that the dark energy is a cosmological constant, would imply that it’s finite (roughly 10^122) dimensional. And if so, then the state of the universe could always be thought of as a finite collection of qubits—via “coding,” as you put it.

Of course, this doesn’t imply that ideas from quantum information will actually be useful for fundamental physics, but that’s something that an increasing number of high-energy physicists separately believe is true, for example because of the insights that emerged from the study of the black-hole information problem and entanglement entropy in AdS/CFT.

50. Ashley Says:

Scott,

Ah, problem sets! Thank you!!

Also, consider this as putting my hand up for you posting the solution set too.

51. jonas Says:

Re “Confused” #46: Funnily, David Madore just complained in “http://www.madore.org/~david/weblog/d.2018-08-12.2545.html#d.2018-08-12.2545” that after studying some quantum electrodynamics, he still doesn’t understand how the real world (not counting its relativistic part) is encoded into a quantum mechanical system. It is possible that some physicists do understand this, but it’s so complicated that David just hasn’t learned enough yet. The more terrible possibility is that even the particle physicists don’t know, but I really hope that’s not the case.

52. Jaikrishnan Janardhanan Says:

Scott,

It would be nice if you could kindly post these notes to AMS Open Notes:
https://www.ams.org/open-math-notes

53. fred Says:

Scott, how many qubits could that $60M (the UT supercomputer grant) have bought you?

54. Scott Says:

fred #53: I hadn’t heard about that supercomputer grant. In any case, $60M could fund a pretty nice experimental quantum computing effort, but you wouldn’t want to put me in charge of it. 🙂

55. Scott Says:

Jonathan Paulson #43: That’s actually slightly subtle, since the definition of “inner product” could involve automatically conjugating-transposing one of the two vectors. In any case, edited to clarify.
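Scott’s point about the conjugation convention is easy to see numerically. A sketch in Python with numpy, using the states (|0>+i|1>)/√2 and (|0>−i|1>)/√2 (my choice of example, not from the notes):

```python
import numpy as np

# Two qubit states: (|0> + i|1>)/sqrt(2) and (|0> - i|1>)/sqrt(2).
x = np.array([1, 1j]) / np.sqrt(2)
y = np.array([1, -1j]) / np.sqrt(2)

# <x|y>: conjugate-transpose the bra before multiplying.
inner = np.vdot(x, y)   # np.vdot conjugates its first argument -> 0 (orthogonal)
naive = np.dot(x, y)    # no conjugation -> 1, a different (and wrong) answer
```

The conjugation matters: with it, these two states come out orthogonal, as they should; without it, you’d wrongly conclude they’re parallel.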

56. Scott Says:

Jonathan Paulson #44 and others: OK, I fixed this. But unfortunately, doing nice equations in Google Docs turns out to be difficult. Paulo found a hacky way to do it (inserting the equations as images), and he’s no longer available. As a result, many of the equations are already ugly, and if I need to edit them, then they become even uglier.

I’ll tell you what: if anyone is willing to go through all 29 lecture notes (as Google Docs) and beautify the equations, I’m willing to offer $500 for that.

57. Adam H Says:

Thank you for this generous primer, Scott! I think you are becoming the Richard Feynman, the great explainer, of computation (as well as making huge contributions to your field, just like he did in physics).

58. James Gallagher Says:

Scott #48

haha, didn’t we do this argument ~5 years ago?

I just hope you’re well and in good condition to tackle the new academic year starting soon; you should be in a great mood with all the positivity towards quantum computing coming from major funding sources in your own government. You ought to be a major player in the coming years in this quantum computing renaissance. And whether it succeeds or not, what it discovers will be scientific progress for humankind, which is usually a very good thing.

Just move on from silly incidents like the tip-jar debacle and social media herds swarming on what they perceive as an easy, weak target. None of these things matter compared to the importance of the work you do and the career and family life you have.

But anyway, you’re wrong in your argument about collapse per ~Planck-time, since unless it’s mathematically wrong, it’s not wrong, and since a random collapse per ~Planck-time model is virtually mathematically indistinguishable from a deterministic many-worlds model, it can’t be wrong. It will be wrong when science probes the EM spectrum finer than Planck scales, or a several-million-qubit quantum computer models the microscopic world correctly.

59. Scott Says:

James #58: No, once again, one collapse per Planck time would have empirical consequences that are dramatically different from what we observe. It is distinguishable from Many-Worlds, because it would rule out the practical observation of interference effects—but we do see interference effects (which is how we know about QM in the first place).
The proposal is flat-out wrong. Having said that, I do indeed vastly prefer someone who’s stubbornly wrong about QM to someone who stubbornly considers me a terrible person. 🙂 So I appreciate your comments. And yes, it’s an exciting time for the field, and yes, with teaching just having started yesterday, and with a couple of research papers urgently needing to be written, there are indeed much more exciting things to think about than people being mean on social media.

60. fred Says:

Regarding the MWI: suppose there’s a qubit measurement with Prob(1) = 2/3 and Prob(0) = 1/3. Supposedly the current branch where the measurement is performed would “split” into one branch where the qubit == 1 and one branch where the qubit == 0. But those two branches aren’t exactly created “equal” (because of the difference in probability). So how does each branch remember/record that the probabilities were 2/3 and 1/3? Is this information preserved in the wave function of the entire universe for each branch? Is it preserved in the brain of the experimenter? 😛

61. Pat Says:

I’ll fix your equations for you if you like.

62. Semiclassical Says:

To provide some context regarding the “1 collapse per Planck time” discussion: in the original GRW theory, the mean localization frequency is f = 10^-16 s^-1, with a localization accuracy of 10^-5 cm. Quoting from the Stanford entry on collapse theories (section 5 of https://plato.stanford.edu/entries/qm-collapse/): “It follows that a microscopic system undergoes a localization, on average, every hundred million years, while a macroscopic one undergoes a localization every 10^−7 seconds.” (They don’t clarify what definitions of micro/macro are being used, but this can presumably be found in the original papers.)

Note that the microscopic rate of localization (1 per hundred million years) is tens of orders of magnitude slower than “1 per Planck time” ~ 1 per 10^-51 years.
The latter macroscopic rate (1 per 100 nanoseconds) is very fast on human time scales, but is still many orders of magnitude short of “trillion trillion trillions of localizations per second”; for that, you’d need to scale up to a much, much larger system. So the localization rates in the GRW theory are vastly, vastly different from “1 per Planck time.”

The upshot is that a rapid rate of localization is not needed to get a realistic proposal: the sheer number of particles present in a macroscopic system means that a little localization goes a long way. By contrast, “one collapse per Planck time” involves so much localization as to be entirely unrealistic and not in accord with any realistic proposal of spontaneous localization. It may be a cute slogan, but quantitatively it is simply not credible.

63. fred Says:

Semiclassical #62

What’s really interesting is that those theories seem testable experimentally (in the near future).

64. James Gallagher Says:

Scott #59

If the interference exists mathematically, then that is good enough.

If you take a vector of 100 complex numbers (say), then randomly change the phase (say) of one of them, and then evolve the vector by multiplying by a 100×100 unitary matrix, you have a new vector of 100 complex numbers. But we don’t know what it is until we look at it, since the (phase) change was random.

Now, without looking, we allow another random change and a second evolution via multiplication by the unitary matrix. We then have yet another vector of 100 complex numbers, but we do not know what it is until we look at it. In fact it is in a superposition of all possible two-step time evolutions, with a phase change allowed on any of the 100 complex numbers. (That’s a big superposition.)

Now, if this goes on for billions of iterations, there is still only one vector of 100 complex numbers, but we have no idea what it is until we look.
It is, mathematically, in an incredibly huge superposition, with all the possible interference effects, depending on which branch the evolution took. And yes, there are worlds, with tiny probabilities, where in a double-slit experiment the photons all end up on one side, same as with deterministic many worlds; but the most probable outcomes are the ones we usually observe. The interference effects are a result of a mathematical calculation which just uses superpositions and Schrödinger (unitary) evolution. Note also that an observation of only a small part of the vector will still leave a large part in (mathematical) superposition; local observations don’t fix the global state of the universe. I’m just suggesting that the superpositions are only a mathematical reality, due to randomness at Planck timescales, not an actual real branching of many worlds. At least in my (silly?) model we have a speed limit, due to the finite time it takes for a random jump; in deterministic models the universe all happens at once, which is a shame really.

65. Scott Says:

James #64: I confess that none of that made the slightest bit of sense to me. But even supposing it’s meaningful, and the failure is on my end—even then, isn’t it simpler just to use standard QM? I feel like the burden would still be on you to show what we gain by adopting the elaborate-sounding picture you describe.

66. James Gallagher Says:

Scott #65

A natural origin of a speed-of-light limit?

67. Bhavishya Says:

Unrelated: the link to Ryan O’Donnell is broken. The correct link -> http://www.contrib.andrew.cmu.edu/~ryanod/

68. fred Says:

“Say we’re at a quantum airport and there’s an unattended tipping jar which could have a $5 bill in it, but reaching into the jar could alert the police.
How do we take the $5 bill without getting arrested?"

69. Scott Says:

fred #68: No, doesn’t work. The analogue of the Elitzur-Vaidman bomb experiment would only tell you non-invasively whether the $5 is there, not take it if it is.

70. fred Says:

Scott #69

I figured this was too good to be true…

Regarding the beginning of lecture 5, with the biased coin.
Can’t we classically simulate the qubit solution by counting only in multiples of (1/eps) radians?
Wouldn’t that require just log(1/eps) bits (instead of log(1/eps^2) bits)?

71. Scott Says:

fred #70: log(1/ε) and log(1/ε²) are the same thing, up to changing the base of the log.
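To spell out the arithmetic behind Scott’s reply (a quick sketch, not part of the original comment):

```python
import math

eps = 0.01

# log(1/eps^2) = 2 * log(1/eps): the same quantity up to a constant factor,
# which is exactly what a change of logarithm base amounts to.
assert math.isclose(math.log(1 / eps**2), 2 * math.log(1 / eps))

# Equivalently, measured in base 1/eps the two bit counts are just 1 and 2.
assert math.isclose(math.log(1 / eps, 1 / eps), 1)
assert math.isclose(math.log(1 / eps**2, 1 / eps), 2)
```

So the two proposed storage costs differ only by a factor of 2, which O() notation for bit counts ignores.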

72. JimV Says:

In a real event, such as a beam-splitter experiment with a partially silvered glass pane that reflects 75% of photons and transmits 25%, there are many branches, one for each possible photon trajectory. In 75% of those branches the photon was reflected and in 25% it was transmitted–or that’s the way I see it, anyway.

I’m not sure if this is what James G. has in mind, but it sounds a bit like this: any quantum experiment such as two-slit interference could be simulated by programming the quantum effects according to the rules of QM in a computer, correct? (Using a random or pseudo-random number generator.) What if we postulate that the universe itself is running that simulation? (Granted, no computer in our universe could possibly do all the calculations for all the observable particles, but maybe the universe itself can.)

73. James Gallagher Says:

JimV #72

Yes, that’s what I mean, but I’m a little surprised it’s not even considered an obvious possible mechanism, especially since MWI is accepted by people so easily!

When I first thought of this idea about 6 or 7 years ago I excitedly posted my thoughts on Lubos Motl’s blog, but didn’t get much feedback, same as on Scott’s blog about 5 years ago. So although I personally couldn’t see anything wrong I assumed it wasn’t something worth making a fuss about.

In any case, I was much more concerned about deriving 3D space from such a mechanism, since that is the obvious thing we observe that does not exist in a large dimensional Hilbert Space. And I noticed a peculiar result about unitary evolution followed by subtraction of the previous state.

i.e. if we have the evolution rule U(t+dt) = ( exp(h x Anti_Symmetric_Matrix) – I ) x U(t), where h is a real number and I is the identity matrix, then there exists an h which gives you a stable period-3 global oscillation which is an attractor for measure 1 of initial states

(this is actually an easy undergraduate (or advanced high school) level result, but not something I had seen before)

The idea was that, because of the constant random collapses, we need to discard the entire previous state of the universe at each evolution step, and I bet that if I had suggested this in the 1960s, when they were fixing QED etc. by subtracting infinities, it would have gotten some interest.

However, we now know that the renormalisation is justified by a well understood mathematical mechanism due to Wilson, so it sounds a bit silly to suggest we “subtract the entire Universe at each step”.

(Also, we get a pretty boring universe unless we introduce a (small) subspace which avoids the period-3 global attractor and allows for particles – eg a 6-dimensional subspace which is just the eigenspace of a certain eigenvalue determined by h)

74. wolfgang Says:

>> What if we postulate that the universe itself is running that simulation?

If you really believe in the m.w.i., then you believe that there are (infinitely?) many branches where quantum computers simulate your experience …

75. Scott Says:

Pat #61: Seriously? Thank you so much!! Shoot me an email, and I’ll give you the link to the Dropbox folder.

76. Jonathan J Paulson Says:

Typo in Lecture 17, page 6: “If you want no possibility of error, then it’s hard to see that this is the best you can do.” Should be “not hard to see that this is the best you can do”.

77. JimV Says:

RE: “If you really believe in the m.w.i. then you believe that there are (infinitely ?) many branches where quantum computers simulate your experience …” [Wolfgang]

I don’t believe in (wouldn’t bet my life on) either the MWI or the simulation method, I just think they are quite interesting ideas to think about. Why do people on the Internet make assumptions about my unstated beliefs which are always wrong? Is it something about my face?

Anyway, there have been only a finite number of events in our light cone since the Big Bang with a finite number of quantum possibilities each. (He says, happening to think that like motion (see Zeno), matter (see Democritus), and energy (see QM), every physical quantity is not continuous but comes in finite numbers of discrete increments; although he wouldn’t bet his life on that either; almost, though.)

78. Jeremy Says:

Hi Scott, Thanks for the notes!

I read the first four parts, and it was breezy for me, but I found myself confused by the quantum bomb detector section and I had to reread it a couple times. Maybe the only difference was my level of prior knowledge?

Anyways, there were a couple of things that I thought could be improved:

First, emphasize that the 0 and 1 are not the outcomes of the query, but are in fact themselves representing different queries. The idea of treating “no query” as a query is surprising to someone thinking classically! Using “query” in both senses of the word in the same phrase is confusing. When I read it through the first couple times, I just assumed that 1 and 0 were the outcomes of the query.

Second, when you say upgrade to a qbit, and give the equation |b> = alpha|0> + beta|1>, it’s not clear that the |0> and |1> correspond to the 0 and 1 in the classical set above. Since we have been discussing qbits on a more abstract level earlier using the ket notation, it led me astray into thinking that it was not a simple linear combination of the above states. Maybe you could use the ket notation in the classical equation too?

Third, in the exposition of how to perform the rotations to do the quantum bomb measurement, you make the assumption that your starting state is |0>, without explicitly stating it.

Finally (I think this is the only genuine bug that I spotted), you said to repeat the operation pi/2 times, which I think it should have been pi/(2 epsilon) times.

Thank you again for sharing this, and I hope you take no offense at my suggestions; they are truly out of appreciation!

79. wolfgang Says:

>> physical quantity is not continuous but comes in finite numbers of discrete increments

I think it is helpful to clarify what we are talking about.
Most people discuss m.w.i. in the context of quantum mechanics in a classical (flat) background. In this case, a Geiger counter measuring the decay of a radioactive source would “branch” into a continuum of possible worlds, parametrized by the classical time parameter.

If you assume “finite numbers of increments” you may refer to something like the Bekenstein bound, i.e. quantum gravity. We don’t really know how to describe quantum gravity and what its m.w.i. would look like. Are worlds “branching” in some “emergent time”?
It is not clear to me how one would have to think about the “light cone since the big bang” in this description.
String theory is a possible description of quantum gravity and it seems to come with a multiverse of possible worlds, perhaps realized in eternal inflation. In this case the world would be much larger than what we assume to be the visible patch since the big bang.
I would not know how to estimate the number of quantum events in this case …

80. fred Says:

from lesson 5 –

“That’s because particles need to be close to become entangled, but once they’re entangled you can separate them to an arbitrary distance and they’ll stay entangled.”

But how does that work in light of the fact that QM is time reversible? (two particles flying from infinity are actually entangled, until they meet and become disentangled?)

81. Scott Says:

fred #80: Yes. If two particles flying in from infinity happened to be entangled with each other and not with anything else—something that I don’t think has ever been observed in a cosmological setting (though maybe there are models of early-universe cosmology that predict the existence of such pairs??)—in that case, depending on the Hamiltonian and so forth, they could become unentangled when they met, as indeed we know must be possible by reversibility.

Of course in practice, the thermodynamically vastly more likely thing is that long before this happens, the entanglement will become unobservable due to interactions between one or both of these particles and their environments.

82. Scott Says:

Jonathan #76: Thanks! Fixed.

83. Scott Says:

Jeremy #78: Thanks!! I fixed the error with π/(2ε), and will circle back to the other things later.
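For anyone checking Jeremy’s count, here is a minimal numerical sketch (mine, not from the notes) of why π/(2ε) repetitions is right, modeling the bomb as a measurement of the |1⟩ amplitude after each small rotation:

```python
import numpy as np

eps = 0.001
steps = int(round(np.pi / (2 * eps)))   # the pi/(2*eps) repetition count

# No bomb: the eps-rotations compose freely, so |0> rotates all the way to |1>.
p_one_no_bomb = np.sin(steps * eps) ** 2

# Bomb present: each step the bomb "queries" the |1> amplitude, collapsing the
# state back to |0> with probability cos(eps)^2 per step; surviving every step
# leaves us in |0>.
p_survive = np.cos(eps) ** (2 * steps)
```

With ε = 0.001 the no-bomb probability of ending in |1⟩ is essentially 1, while the survival probability with a bomb is about 1 − πε/2 ≈ 0.998, so smaller ε makes the procedure as safe as you like.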

84. fred Says:

in chapter 12

“We live in the classical world, where objects have definite locations, the objects can be measured without disturbing them”

But isn’t this only true if we have some quantization?

If space and energy are continuous, then by the action==reaction principle it’s impossible to measure anything without disturbance.
And since any location is a real number, any measurement can never be at full precision.

On the other hand, if we drop the continuum and assume quantization, we can meet the requirements, but don’t we end up with a world that’s basically some version of The Game of Life?

85. fred Says:

chapter 12

“On their face, all these views seem contradictory to our understanding of physics, which relies on reductionism: each atom just keeps obeying the same simple equations, regardless of how big or complicated a system the atom might be part of”

But what about something like superconductivity or BEC?
In the sense that individual atoms lose their identity and the behavior of the whole can’t be understood by summing the behavior of the parts.

86. asdf Says:

The notes look cool! Unrelated: is this interesting?

https://arstechnica.com/science/2018/09/engineering-tour-de-force-births-programmable-optical-quantum-computer/

87. Scott Says:

asdf #86: It looks cool! Though clearly there’s quite a bit of catching up for optics to get to where superconducting qubits and trapped ions are now…

88. Nathan Pellegrin Says:

The notes are great – thanks for posting! #14 is fantastic. It is helpful to me in digesting recent results on teleportation of quantum gates. Any plans for an audiobook version?

89. mjgeddes Says:

Excellent Scott! Based on these lecture notes, I realized I was missing some important concepts in the field of computational complexity; I’m trying to learn and I’ve added them to my wiki-books.

Just wanted to mention that I’ve had another surge of big ideas Scott! 😉

What I think I’ve realized is that mathematics may not work in the way many think! In physics, more complex things are always built up from less complex building blocks, so far as we know (that’s the meaning of methodological reductionism). But does *mathematics* have to work this way? Can more complex mathematical constructions always be decomposed into simpler ones? Constructivists think yes, but I think no!

Now it’s clear that you *can* put mathematical concepts on a ‘ladder of abstraction’, where you rank things from less abstract to most abstract, just as you can for physics concepts. But in the case of math, my dawning realization is that something strange is happening – reductionism may not work for math!

For instance, algebra seems to be the most concrete domain, analysis is more abstract and mathematical logic is the most abstract. On the basis of this ‘ladder of abstraction’ one might be tempted to apply methodological reductionism and try to pick one of these domains as the ‘foundation of math’ (the most fundamental level). But I think it doesn’t work!

Rather, the top- and bottom-levels of abstraction seem to rely on each other in some sort of peculiar fashion, which refutes methodological reductionism! That is to say, analysis *doesn’t* seem to be fundamental, but paradoxically, mathematical logic *and* algebra *do* both seem to be fundamental! (The most abstract *and* the least abstract math domains seem to depend on each other in a peculiar circular loop.) This refutes methodological reductionism!

This is related to Bayesianism and the ‘Less Wrong’ crowd, who strongly favor Bayesian inference as the foundation for rationality. Now I think they’ve based this on methodological reductionism. The ladder of abstraction for computer science seems to be ‘Probability&Stats’ (most concrete) -> ‘Computational Complexity’ (middle-level) and ‘Computational Logic’ (most abstract). So they regard Bayesian inference as fundamental based on this ladder of abstraction, and they think that the more abstract domains of computer science are all ‘reducible’ to Bayes (more complex math built-up from simpler math). But if methodological reductionism fails for math, as I now strongly suspect, this falsifies the Bayesian philosophy!

Some seriously heavy mathematics philosophy to mull over on the weekend 😉

90. fred Says:

In chapter 9.

To confirm, Alice’s qubit in the example is the second one, right?
The gates she supposedly applies to her qubit only touch the second one (although her change of phase seems to be affecting both qubits, but I guess phase is special?).

91. fred Says:

In chapter 14

“The experiments don’t quite get to 85% success probability, given the usual difficulties that afflict quantum experiments.”

If sharing two entangled qubits and doing two independent measurements on them is that complicated, how can we ever hope to build a QC with vast numbers of qubits?

92. Scott Says:

fred #91:

1) In QC, you at least don’t have the severe difficulty that you have in loophole-free Bell tests, of spreading entanglement across a mile or more.

2) Quantum fault-tolerance shows you don’t need perfect gates, just ~99.9% perfect.

3) But yes, building a universal QC is staggeringly hard. That people can now do much simpler things, that a few years ago they couldn’t do, means that at least there’s clear progress in the right direction!

93. Scott Says:

fred #90: No, Alice’s qubit is supposed to be the first one. Were they switched somewhere?

And no, phase is not really special, it’s just another unitary transformation. But you can apply a phase gate to any qubit of |0…0⟩+|1…1⟩ to get |0…0⟩-|1…1⟩ — think about it; it follows from linearity. That has no effect on the local density matrices of any of the qubits you didn’t act on—and for that matter, no effect on the local density matrix of the qubit you did act on! But it has a global effect that you could see if you measured all the qubits together.
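Scott’s claim here is easy to verify directly; a small numerical sketch (mine, not from the notes) for the two-qubit case |00⟩+|11⟩:

```python
import numpy as np

# (|00> + |11>)/sqrt(2), amplitudes ordered |00>, |01>, |10>, |11>.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Phase gate Z = diag(1, -1) applied to the first qubit only:
phi = np.kron(np.diag([1, -1]), np.eye(2)) @ psi   # (|00> - |11>)/sqrt(2)

def reduced(state, keep):
    """Local density matrix of one qubit (keep = 0 or 1), via partial trace."""
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    return np.einsum('ikjk->ij', rho) if keep == 0 else np.einsum('kikj->ij', rho)

# No local effect: every single-qubit density matrix is I/2 before and after...
for q in (0, 1):
    assert np.allclose(reduced(psi, q), np.eye(2) / 2)
    assert np.allclose(reduced(phi, q), np.eye(2) / 2)

# ...yet the global effect is maximal: the two states are orthogonal.
assert abs(np.vdot(psi, phi)) < 1e-12
```

Note the phase gate was applied to the first qubit, yet even *that* qubit’s local density matrix is unchanged, exactly as Scott says; only a joint measurement can see the difference.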

94. Gabriel Says:

Hi Scott, What do you think about the story surrounding Theodore Hill’s “greater male variability” paper?

95. fred Says:

Regarding superdeterminism, ‘t Hooft is definitely correct when he points out

“In quantum physics, there’s a notion of counterfactual measurement. You measure what happens if I put the polarizer this way, and then you ask, what if I had it that way? In my opinion, that is basically illegal. There’s only one thing you can measure.”

but that’s just a fraction of what would be needed to show that a classical world can fake out the results of QM (this seems far-fetched for sure).

I just don’t understand the claim that “QM just works” and that questioning it is always some fool’s errand… what about addressing the elephant in the room: making QM work with gravity?

96. fred Says:

Scott,

I often see you refer to the maximum amount of info that can be stored in a given region of space (related to the study of blackholes, etc).

Similarly, can we ask how much (quantum) randomness can be generated from a given region of space?

I’m not sure the question makes sense since you also often mention that quantum information can never be destroyed – is there something being conserved as we keep generating randomness from a closed region of space?

97. Scott Says:

Gabriel #94: I think we can safely reach the conclusion that it’s a travesty, without needing to pass judgment about the correctness or importance of the paper’s technical content.

Even if everyone agreed that a paper was 100% wrong, trivial, plagiarized, whatever—even then, once it’s published, you can’t just “disappear” it from the record; that’s a flagrant and almost Soviet-level violation of academic norms. You can only retract a paper after a long and careful review process. (I know people who are currently advocating such a process with one of Joy Christian’s papers—which he managed to get into a journal!—claiming to have disproved the Bell inequality. But I personally have no stomach for such a fight, feeling that Joy’s claims have by now been completely unmasked as crackpot to anyone who knows anything, and also that the retraction process would take years … as it should.)

In the case at hand, from what little I know, it seems like there are plausible arguments that NYJM maybe shouldn’t have accepted the paper in the first place—e.g., because the model it proposes is just not that interesting, or the paper isn’t written to a high enough standard. I couldn’t say without reading it more carefully, and also learning more about NYJM’s scope and standards. In any case, I appreciate that people like Tim Gowers have been honestly grappling with the paper’s arguments, refusing to avail themselves of that all-purpose modern refutation, “I can’t even.”

On the other hand, it also seems clear that NYJM took the extraordinary step of “disappearing” the paper not for mundane reasons of technical quality, but because they’d been successfully intimidated by prominent people who simply wanted to place the “Greater Male Variability Hypothesis” beyond the pale of discussion. It seems clear, moreover, that the GMVH is taken extremely seriously by experts in evolutionary biology, going back to Darwin himself—certainly if one restricts the application of the hypothesis to non-human animals, rather than to hot-button debates involving humans. And I read the paper, and found no inflammatory language; the paper just summarizes various positions people have taken on the GMVH and then goes on to study its mathematical model. So I think that anyone concerned about academic freedom is absolutely right to be terrified.

If there’s one bright spot for the author, it’s that his work has now been rocketed from the obscurity it would’ve surely otherwise enjoyed to people all over the world reading and debating it, due to a predictable Streisand effect.

98. fred Says:

Regarding the Magic Square Game in chapter 14, has the quantum solution been implemented?
If so, wouldn’t that be enough to prove quantum supremacy? (Because it’s proven that there is no classical approach that wins with probability 1.)

99. fred Says:

I guess the definition of what is quantum supremacy is a bit of a moving goal post…

In a sense, showing that one can use the wave function associated with a qubit to implicitly store extra information, which would have to be stored explicitly in a classical computer, is enough to show quantum supremacy. So the example of the biased coin flip using a single qubit is enough! (But it can’t be generalized to complex problems.)

“We’ll call a set of quantum gates universal if, by composing gates from it, you can approximate any unitary transformation on any number of qubits to any desired precision. Note that, if the set is finite, then approximation is all we can hope for, because there are uncountably many unitary transformations, but only a countable infinity of quantum circuits built out of gates.”

Is this a flaw that’s specific to QC? Or is there an analogy in classical computers?
Do we know for sure this isn’t going to make it impractical to realize many quantum algorithms?

100. Scott Says:

fred #98-99: When people use the phrase “quantum supremacy,” they really mean “quantum computational supremacy.” They emphatically do not mean: “demonstrating any quantum phenomenon whatsoever that has no classical explanation.” If that’s all they meant, then “quantum supremacy” would already have been demonstrated by around 1910. But we’re talking specifically about asymptotic computational speedups! Accept no substitutes or imitators.

Also, the “flaw” that you can’t perfectly realize an arbitrary unitary transformation using a discrete set of gates, but can only approximate it to any desired precision, has a precise analogue in classical probabilistic algorithms. Can you program your computer to set a certain variable to “1” with probability exactly 1/e? Not in ≤T steps you can’t, if T is any uniform upper bound on the program’s running time! But it doesn’t matter, because you can get the probability so close to 1/e that the discrepancy makes no difference. And the analogous issue in quantum computation doesn’t matter either, for the same reason. This is not a matter of opinion; it follows from theorems proved by Bernstein and Vazirani in 1993, and by Solovay and Kitaev a few years later. For more, see Nielsen & Chuang or nearly any other QC textbook or lecture notes (including mine 🙂 ).
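A concrete version of the 1/e example (my sketch, not from any textbook): T coin flips give a uniform T-bit number X, and outputting 1 iff X/2^T < 1/e yields a probability that is never exactly 1/e but is within 2^-T of it.

```python
import math

T = 30
target = 1 / math.e

# X is uniform over [0, 2^T); we output 1 iff X < target * 2^T.
cutoff = target * 2**T                  # not an integer, since 1/e is irrational
count = math.floor(cutoff) + 1          # number of X values strictly below it
p = count / 2**T                        # the achievable output probability

assert p != target                      # never *exactly* 1/e...
assert abs(p - target) <= 2**-T         # ...but exponentially close to it
```

The same "approximate, exponentially fast" story is what Solovay-Kitaev guarantees for unitaries built from a finite gate set.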

101. Andrew Says:

Nice! Thanks a million for doing this, between this effort, “Burn Math Class”, “Our Mathematical Universe” and “The Particle at the End of the Universe”, I might actually learn some of this stuff eventually. I will now celebrate with a wild bout of uninformed speculation:

1.) Big Bang happens. At this point, there is no “time”.

2.) First observer appears. Not quite sure “what” this is, exactly, but basically it coincides with the appearance of the Higgs field. At this point or somewhere thereabouts, “time” happens (and so does quantization of fields, at least for observers “inside” time).

3.) So, there are now two halves of physical reality, separated by (this is a dumb name) a “time membrane”. Basically, on one side is everything we observe as espoused by the Standard Model, QFT, and so on. On the other side, there is no time and therefore no quantization of the constituent fields. This “latent” energy distributed through the various fields doesn’t manifest on our side as particles, but still can affect the Universe (i.e. this is dark energy and dark matter).

4.) We can observe the crossing of this “time membrane” in experiments. Weird things happen w.r.t. time because, on the other side, there is no time and therefore things “evolve” instantaneously.

5.) And, as icing on top, the imposition of “time” requires huge amounts of energy, which accounts for the difference between ZPE and the Cosmological constant. Basically, time is the energy that prevents the Universe from evolving in a wavefunction immediately from the Big Bang to a statistical distribution of the various quantum fields in space. It introduces a sort of “viscosity” or back pressure that prevents this unitary evolution. The total energy released by the Big Bang is thus ZPE + Cosmological constant, and the difference between the two is in essence the amount of energy “locked up” by this imposition of time.

There, I got that out. Now I can continue to learn the real stuff. Thanks again!

102. mjgeddes Says:

Scott,

(1) I think that both “Probability&Stats” *and* “Computational Logic” form an ‘abstract duality’. If both fields are equally fundamental to computer science (but at opposite ends of the ladder of abstraction), then the idea is that the field of “Computational Complexity” is a sort of *composite* (middle-out) of the other two fields (the middle between the two poles).

(2) There are ‘ladders of abstraction’ for each of the 3 main areas of Computer Science: (a) “Probability&Stats”, (b) Computational Complexity Theory, (c) Computational Logic.

If we ‘zoom-in’ on the field of “Probability&Stats” first (decomposing the field into more fine-grained sub-domains), I propose this ordering:

Probability theory (bottom) — Statistics (middle) — Stochastic processes (top)

In the other direction, ‘zooming-in’ on the field of “Computational Logic” (decomposing the field into sub-domains), I’m proposing:

Formal language theory (top) — Type theory (middle) — Modal logic (bottom)

And for “Computational Complexity Theory” my proposed decomposition and ordering is:

Automata (top) — Complexity classes (middle) — Information&Coding Theory (bottom)

Zooming out again and looking at the 3 main areas of CS a whole, I propose this global ordering:

Probability&Stats (bottom) – Computational Complexity (middle) – Computational Logic (top)

For these 3 main areas, I decomposed into a total of 9 sub-fields (3 sub-fields for each), and given the orderings I postulated, “Complexity Classes” are dead-center on the ladder of abstraction.

We then view this as an ‘abstract duality’, with “Probability&Stats”/”Computational Logic” being the two poles, such that one could construct ‘Complexity Classes” *both* from the top-down (starting from formal languages) *and* from the bottom-up (starting from probabilities).

Thus, everything converges to computational complexity theory! All roads lead to the ‘Complexity Classes’… 🙂

103. fred Says:

Something just came up about some perplexing thought experiment in QM:

https://www.sciencedaily.com/releases/2018/09/180918114438.htm

104. Andrew Says:

@fred #103 — I propose an experiment, ’cause I’m curious (and maybe this has already been done), that goes something like this:

Set up a double-slit apparatus, like in this paper: http://iopscience.iop.org/article/10.1088/1367-2630/15/3/033018

Fire individual electrons (let’s say 10k) and construct a data set that records, for each electron, a.) electron number (1st fired, 2nd fired, 3rd fired, and so on), b.) time detected, c.) x position, d.) y position, e.) z offset/position. This can be done from the .mov files included with the above experiment, along with a value for the “z” offset.

Now, move the apparatus 1mm in the x or y direction, and conduct the experiment again. Continue doing this to cover a “square” of 1 cm x 1 cm in the x/y plane, stopping every millimeter to re-do the experiment. After the whole square is “covered”, move the apparatus backwards or forwards 1mm and re-do the square. Repeat this process, moving in the “z” direction 1mm each time, until the experiment has been conducted within a 1 cm cube.

Also set up a “control” where the same experiment is run, the same number of electrons are fired, but the apparatus stays in the same physical position the whole time instead of moving after every 10k electrons are fired.

With that data, then, we can conduct a statistical analysis to determine if location affected the outcome in any way. For instance, where does the 4th electron fired tend to land, or is it completely random? Are all the events actually random, or are there any underlying patterns?

If my concept of continuous QFT fields is correct, then differences in field energies “outside” of the quantized values may somehow “skew” the results — not in a way that affects the overall scattering pattern, but perhaps in some other way. I, for one, would like to know how “random” events are placed within the confines of what’s predicted by the wave function.

105. skishore Says:

In Lecture 4, is the “Root NOT” matrix correct? Rotating twice maps |0> to |1> but |1> to -|0>. I think a true Root NOT needs to have imaginary coefficients.
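Both halves of skishore’s observation check out numerically; a quick sketch (mine, not from the notes):

```python
import numpy as np

# The real 45-degree rotation: applying it twice gives [[0, -1], [1, 0]],
# which sends |0> -> |1> but |1> -> -|0>, so it's a NOT only up to a sign.
c = 1 / np.sqrt(2)
R = np.array([[c, -c],
              [c,  c]])
assert np.allclose(R @ R, [[0, -1], [1, 0]])

# An exact square root of NOT needs complex entries:
NOT = np.array([[0, 1], [1, 0]])
sqrt_not = 0.5 * np.array([[1 + 1j, 1 - 1j],
                           [1 - 1j, 1 + 1j]])
assert np.allclose(sqrt_not @ sqrt_not, NOT)
```

Note that R² differs from NOT by a sign in a single entry, not by an overall global phase, which is presumably the point of the question.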

106. Michael Says:

At various locations in lecture 19, the notes are missing the symbols p and q for the two primes.

107. Medved Says:

Sorry if it has been fixed in the separate lectures — on page 43 of the combined pdf there seems to be a misprint in the 3rd formula — |111> instead of |101>.

108. Sheikh Abdul Raheem Ali Says:

In Lecture 5 (in both the combined and separate pdfs) there’s an error that makes the words around the kets appear in the wrong order. My guess is that LaTeX messed up? Anyway, something to look at.

Thanks for the notes!

109. Sheikh Abdul Raheem Ali Says:

Oh, and typo in Lecture 4, under Elitzur-Vaidman bomb, after making a query with a classical bit, “But then we either learn find nothing”.

110. Quantum computing since Democritus – Computação e Informação Quântica Says:

[…] link, lecture notes for the course “Quantum Information Science“. There are 29 lectures, covering many important topics of this […]

111. Shtetl-Optimized » Blog Archive » Quantum Computing Lecture Notes 2.0 Says:

[…] years ago, I posted detailed lecture notes on this blog for my Intro to Quantum Information Science undergrad course at UT Austin. Today, with […]