I have no idea how well legacy admissions does or doesn’t work at universities that do it, but 20 years ago I worked in admissions at a major *public* university and can shed some light on how real admissions processes work at universities that don’t do the legacy stuff.

**TL;DR: Whatever you think the ideal criteria would be is just hot air unless it's backed by hard evidence, because the evidence sometimes shows the opposite of what you'd expect.**

This university was state-funded, and the state’s board of regents mandated that state-funded schools must admit a certain number of students (which at this school was less than the number that apply, hence competitive admissions criteria), and that they must grant degrees to the people of the state generally (so the student body must be diverse on graduation day, not just the first day of freshman year). So a faculty committee sets and annually refines the admission criteria based on empirical evidence and statistical analysis, and they try to predict “success” as defined by when the applicant, if admitted, would complete the requirements for a bachelor’s degree in a reasonable amount of time. In other words, when deciding who to admit to college, **past performance only matters to the extent that it impacts future performance.**

The upshot of this is that only factors which have been *proven* at that institution to have a meaningful, positive impact on a student's success are taken into consideration, and the factors are weighted according to how much impact they actually have. For students coming in from high school, the #1 predictor of their degree-completing "success" was in fact their class rank. If the rank isn't top-tier, it's weighted by a rating for the school they attended (if in-state), because the lower the class rank, the more the quality of the school matters. The school's quality rating is determined by how well students admitted from that school in the last decade tended to do at the university. The #2 predictor, IIRC, was performance in a college-prep curriculum: each prerequisite the kid completed with a C or better helped that much more, and not having enough of them hurt.

**The actual impact of standardized test scores was almost zero.** Low scores didn’t hurt, and very high scores could help, but they were only the deciding factor in a tiny sliver of borderline cases where the class rank was lower than average and test scores were higher than average. It was so seldom, I personally would not advise anyone to waste money and effort on those tests unless they’ve talked to an admissions counselor at the university, are borderline not gonna get in, and they’d do very well on the test. And even then I’d question whether it’s really necessary to seek admittance directly to the university as a freshman in the fall.

See, at this school, admission criteria vary depending on when the applicant wants to start. Competition is highest and the criteria toughest if they want to start in the autumn after they graduate from high school. If they can wait till the following spring and just work and save money in the meantime, it's much easier to get in. And if they go to the local community college for 2 terms, then everything they did in high school no longer matters: they only need a 2.0 college GPA to transfer to the university, because the biggest success predictor for transfer students is their recent performance at an accredited college. So, applicants too far below the academic cutoff are encouraged to go to another college for a semester or three and transfer as a sophomore or junior. Applicants just slightly under the cutoff are encouraged to either do that, or to start in the spring as a freshman if they qualify. (At the community college, they'll have smaller classes and sometimes the same professors as they would at the university, just with fewer extra services and non-academic perks. Oh, and it's cheaper too! But so many people think, erroneously, that they have to start and finish at the same school. Parents want to brag, you know.)

Applicants who want to start as freshmen and who are just below the academic cutoff could also get in by including a short essay that might put them over the line. There's a subjective element (someone has to read it), but on the whole, the essays fall into certain categories of explanations for the kid's sub-par academic performance, and these excuses are quantified and scored based on rigorous statistical analysis. By far, the best thing a kid could write was that they'd been through some rough times and made bad decisions but were now highly motivated, had turned their life around, were the first in their family to go to college, had something to prove, etc.; that is, expressions of high motivation correlate very strongly with future academic success. The most common thing I saw, but also the one worth the bare minimum statistically, was trauma due to divorce, physical health problems, or a death in the family. Major mental health issues scored slightly higher but still fell in the "sh*t happens" category if they were merely mentioned without the "I've got it under control now" / "I'm so motivated" followup.

This was a large university with many services that kids generally don't have access to in high school. So if the application or essay makes it sound like the student qualifies for support from (e.g.) university offices that specialize in helping minorities, kids with disabilities, etc., and they're just a hair shy of the academic cutoff, then the application gets flagged for review and possible contact by the appropriate office, which figures out whether there are other factors that need to be taken into consideration. Quite often applicants would get pulled in for a learning disability test. You'd be shocked how many bright kids struggle all the way through high school because they have no idea they're dyslexic or whatever, and who will actually do great in college with different study habits and (e.g.) extra time on tests. An endorsement from the relevant office then puts the applicant over the top: they get to start in the autumn, and they end up doing just as well as the "regular" kids who barely got in on class rank alone.

This system worked really well. Maybe it’s different now, but given the glacial pace at which change happens in large bureaucracies, I doubt it.

——————–

Of course, you could also “instantiate” Fourier Fishing with an explicit Boolean function. If you do that, then you get something that’s basically equivalent to Bremner, Jozsa, and Shepherd’s IQP model. For that model, we have results about the hardness of exact sampling directly analogous to what we have for BosonSampling (i.e., if you can do it efficiently classically, then PH collapses to BPP^{NP}). But we don’t yet have results about the hardness of approximate sampling analogous even to the incomplete results that we have for BosonSampling.

In the slides, you refer to your 2009 result about Fourier Checking. I’m wondering what your current take is on the other result in that paper, about Fourier Fishing.

My interpretation of Fourier Fishing has always been that it gives a black-box problem that's solvable in polynomial time on a quantum computer, but that is not in the classical polynomial hierarchy (that is, under a reasonably natural definition of PH for black-box problems).

To me, it seems like a pretty strong result. But, in conversations over the years, I’ve often found myself defending the interestingness of it.

If you had included it in your talk, what would you have said?

http://arstechnica.com/staff/2014/10/harnessing-depression-one-ars-writers-journey/

Consider quantum states that are more abstract than a superposition of 0/1. For electrons, only 0 and 1 are possible; but suppose I superpose wavelength l1 with wavelength l2. What would be the size of such a quantum register? Would it still be a qubit, or a "qDigit" (what the literature calls a qudit)? And even if we can somehow implement a qDigit, what quantum operators can we define on it?
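For concreteness: a register of n d-level systems (qudits) lives in a d^n-dimensional Hilbert space, and the single-qubit Pauli X and Z generalize to the "shift" and "clock" operators X|j> = |j+1 mod d>, Z|j> = ω^j|j> with ω = e^{2πi/d}. A minimal numpy sketch (illustrative only, not tied to wavelengths or any other physical implementation):

```python
import numpy as np

d = 3                      # qudit dimension (d = 2 recovers the ordinary qubit)
omega = np.exp(2j * np.pi / d)

# Generalized Pauli ("shift" and "clock") operators on a single qudit:
#   X|j> = |j+1 mod d>,   Z|j> = omega**j |j>
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(omega ** np.arange(d))

# Both are unitary, and they satisfy the Weyl commutation relation Z X = omega X Z
assert np.allclose(X.conj().T @ X, np.eye(d))
assert np.allclose(Z @ X, omega * (X @ Z))

# A register of n such qudits lives in a d**n-dimensional Hilbert space
n = 4
print(d ** n)  # dimension of a 4-qutrit register: 81
```

So the "size" question has a clean abstract answer (d^n instead of 2^n), and the shift/clock pair generates a natural operator algebra on a qudit, whatever the hardware.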

There’s some magic about constant factors in the classical case. These days we use DRAM, a brilliant electronic circuit that stores lots of bits in very little chip area and with much lower energy consumption than the more obvious SRAM. But since we currently have no idea how to build practical quantum computers at all, we can’t really speculate about the corresponding details for quantum computers.

Now I think I understand your beamsplitter network example: the n beamsplitters somehow act to encode the N-dimensional vector, so that when a photon passes through, it comes out in the state |psi>, and this takes O(1) effort (the time for the photon to pass through the beamsplitter network).

Is there any physical-implementation-agnostic way to describe how to construct a qRAM? Can we describe the construction and operation of a qRAM in terms of qubits and one- and two-qubit gates? (The small number of papers I’ve seen in the literature so far all seem to construct a qRAM using some very specific set of physical tools, e.g. optical beamsplitter networks, or atoms in cavities and photons.)

In the case of the beamsplitter network qRAM you describe, what is the physical meaning of the qubits in the resulting |psi>? E.g., for n=3, I first thought that |100> is the state with 1 photon in the first mode and 0 photons in the second and third modes. However, I suppose that’s not right, since then if you only put a single photon into the beamsplitter network, only the states |100>, |010>, and |001> could have non-zero amplitude (plus |000> if you consider photon loss). Suppose I want to make a beamsplitter qRAM that can produce the following |psi>; how would I do it?

|psi>=(1|000> + 1.1|001> + 1.2|010> + 1.3|011> + 1.4|100> + 1.5|101> + 1.6|110> + 1.7|111>)/a

(where “a” is the normalization factor)
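Just to make the target state concrete: with amplitudes 1.0 through 1.7, the sum of the squared amplitudes is 15, so a = sqrt(15). A quick numpy check of the arithmetic (this only verifies the normalization; it says nothing about how a beamsplitter network would actually realize the state):

```python
import numpy as np

# The 8 unnormalized amplitudes for |000>, |001>, ..., |111> from the question
amps = np.array([1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7])

a = np.linalg.norm(amps)   # normalization factor "a"
psi = amps / a             # the normalized state |psi>

print(a ** 2)              # sum of squared amplitudes: 15 (up to float rounding)
assert np.isclose(np.linalg.norm(psi), 1.0)
```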

Thanks again, and good luck with the article. If you have even some of your discussion points from this comment thread in it, I think it’s going to be a really helpful article.
