19

David Deutsch is very fond of pointing out that Shor's integer factorization algorithm is a demonstration of the many-worlds interpretation. As he often asks: where else did all the exponentially many combinations happen?

Are there any other alternative interpretations of quantum mechanics which can explain Shor's algorithm, or the Deutsch-Jozsa and Simon's algorithms?

David Z
  • 76,371
Piet
  • 191

6 Answers

26

No. Shor's algorithm isn't a demonstration of the MWI. The MWI is one way to think about Shor's algorithm, just like the other interpretations of QM. Deutsch advocates the MWI as a way to think about quantum algorithms because he finds it an easy way.

where else did all the exponentially many combinations happen?

Why do you need many worlds for exponentially many combinations to occur?

Are there any other alternative interpretations of quantum mechanics which can explain Shor's algorithm, and the Deutsch-Jozsa and Simon's algorithm?

Any correct interpretation of QM can explain these algorithms. Quantum computer scientists use whichever feels comfortable for the task at hand. Read

Interpretations of quantum mechanics, unlike Gods, are not jealous, and thus it is safe to believe in more than one at the same time. So if the many-worlds interpretation makes it easier to think about the research you’re doing in April, and the Copenhagen interpretation makes it easier to think about the research you’re doing in June, the Copenhagen interpretation is not going to smite you for praying to the many-worlds interpretation. At least I hope it won’t, because otherwise I’m in big trouble.

This is a comment by Peter Shor on Scott Aaronson's blog.

15

Every working interpretation of quantum mechanics, the Copenhagen interpretation and Consistent Histories in particular, correctly predicts that Shor's algorithm works, much like the other algorithms. The question

As he often asked, where else did all the exponentially many combinations happen?

is loaded because the computational power of a quantum computer is being implicitly compared with the computational power of a classical computer. When it comes to calculations such as those in Shor's algorithm, the former is exponentially larger than the latter, so a way to "imagine" this exponential increase of the speed is to imagine that there are parallel worlds where the "component" (essentially classical) calculations take place.

But this argument is totally wrong for a simple reason: the real Universe, our Universe, is a quantum system, not a classical one. So it is normal for quantum systems in a single Universe to behave just like the quantum computer running Shor's algorithm. If, on the contrary, we only use classical computers, we exponentially slow down the computation relative to what it could be. In this sense, Deutsch's "argument" shows that the many-worlds interpretation is just another psychological aid for people who can't resist thinking of our world, incorrectly, as a classical world of sorts.

Classical computers may still be very useful, however.

There is one more lethal conceptual problem with the "many worlds" explanation of Shor's algorithm's speed: the quantum computer's whole calculation has to proceed in a completely coherent way, and you're not allowed to imagine that the world splits into "many worlds" as long as things are coherent, i.e. before the qubits are measured. Only when the measurement is completed, e.g. at the end of the Shor's algorithm calculation, are you allowed to imagine that the worlds split. But by that moment it's too late: the whole calculation has already been done in a single (quantum) world, without any help from the parallel worlds. ;-)

Moreover, when the quantum computer works correctly, the final result after the measurement is pretty much completely determined by the initial data. The probability that Shor's algorithm announces the right unique result is nearly 100%, so there is essentially only one possible outcome; in the quantum computer context, parallel worlds are not really created at all, even in the MWI. It's a widespread misconception that "everything" about quantum mechanics is fuzzier, less concentrated, less correlated. In reality, quantum predictions may be as sharp, and as close to 100%, as those in classical physics, and the correlations allowed by quantum predictions are often higher than what would be possible in any classical theory (think of Bell's inequalities and similar setups). Quantum mechanics is just different (GHZ-type experiments etc. locate the qualitative difference nicely), but its difference can't be summarized by oversimplified slogans saying that it's fuzzier and that the fuzziness is "tamed" by some mechanism.
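The claim about super-classical correlations can be made concrete with a short CHSH computation. This sketch is illustrative and not part of the answer: for the singlet state and the standard measurement settings, the CHSH combination reaches $2\sqrt{2}$, above the classical bound of 2.

```python
import numpy as np

# Pauli matrices and the singlet state (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def corr(a_op, b_op):
    """Expectation value <psi| A (x) B |psi> for the singlet state."""
    return np.real(psi.conj() @ np.kron(a_op, b_op) @ psi)

# Standard CHSH settings: Alice uses Z and X, Bob uses rotated combinations
B1 = -(Z + X) / np.sqrt(2)
B2 = (Z - X) / np.sqrt(2)

# S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
S = corr(Z, B1) - corr(Z, B2) + corr(X, B1) + corr(X, B2)
print(S)  # 2*sqrt(2) ~ 2.828, exceeding the classical bound of 2
```

Any local classical theory is limited to $|S| \le 2$, so this single number already shows quantum correlations exceeding every classical one.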

Luboš Motl
  • 179,018
  • "the former is exponentially larger than the latter" This is not true: We currently don't know whether prime factorization is solvable in polynomial time or not. – Lagerbaer May 19 '11 at 19:52
  • Well, given your last sentence, you should have said "I am not sure" instead of "This is not true", right? I am confident that prime factorization can't be solved in polynomial time. – Luboš Motl May 20 '11 at 12:15
  • From the Wikipedia article: "The function problem version: given an integer $N$, find an integer $d$ with $1 < d < N$ that divides $N$ (or conclude that $N$ is prime). This problem is trivially in FNP and it's not known whether it lies in FP or not. This is the version solved by most practical implementations." FP is the class of functions a deterministic Turing Machine can solve in polynomial time. – Lagerbaer May 20 '11 at 15:07
  • Right, that confirms what I say. – Luboš Motl May 20 '11 at 15:46
  • I misunderstood your first comment. Yes, I shouldn't have said "this is not true" but rather "nobody knows yet". – Lagerbaer May 20 '11 at 15:57
  • @Lagerbaer: Even if factoring is in P, and that is extremely doubtful, nature would have to figure out the fast algorithm from Shor's algorithm in order to trick us, and that is a higher degree of implausibility. – Ron Maimon Sep 04 '12 at 04:39
  • If you get the result with 100% probability, it does not mean that the world inside the computer did not split. – Anixx Sep 21 '12 at 16:58
  • And what does it mean? Do you claim that there are also branches even though their probability is zero? My question is rhetorical because the many-worlds ideology is nonsensical whatever interpretation you try to give to it. – Luboš Motl Sep 21 '12 at 18:20
4

Absolutely not! In the computational basis $\{|x\rangle \}_x$, it certainly looks like there is massive parallelism going on. This is the wrong, wrong, wrong way of thinking about it. Everything clears up once you realize the "correct" preferred basis for Simon's algorithm is actually $\left \{ \frac{1}{\sqrt 2} [ | x \rangle \pm | x\oplus s \rangle ] \right\}_x$, and the correct preferred basis for Shor's algorithm is actually $\left\{ 2^{-n/2} \sum_x e^{2\pi i kx/p}|x\rangle \right\}_{k\in {\bf Z}_p}$. In the correct preferred basis, there is no parallelism at all!

In general, the computational basis is NOT the preferred basis. This observation clears up all the confusion. Every time it appears there is parallelism going on, it's really a sign you're working in the wrong "preferred basis"!
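The "no parallelism in the right basis" point can be checked directly for the shift structure underlying these algorithms. The sketch below (plain numpy, with a small toy dimension $p$ chosen for illustration) verifies that the Fourier basis states are eigenvectors of the cyclic shift, so a shift acts on each of them by a mere phase rather than by fanning out over components.

```python
import numpy as np

p = 7  # small cyclic group Z_p, a toy dimension for illustration

# Cyclic shift operator T|x> = |x+1 mod p> in the computational basis
T = np.roll(np.eye(p), 1, axis=0)

# Fourier basis: column k is |k>_F = p^{-1/2} sum_x e^{2 pi i k x / p} |x>
x = np.arange(p)
F = np.exp(2j * np.pi * np.outer(x, x) / p) / np.sqrt(p)

for k in range(p):
    v = F[:, k]
    # T|k>_F = e^{-2 pi i k / p} |k>_F : each Fourier state just picks up
    # a phase, so in this basis a shift carries no "parallel branches"
    assert np.allclose(T @ v, np.exp(-2j * np.pi * k / p) * v)
print("Fourier basis diagonalizes the cyclic shift")
```

Since a shift by $s$ is $T^s$, it is diagonal in the same basis, which is the formal content of the answer's claim.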

Glenn Pissy
  • 41
  • 1
  • 1
3

No, not at all.

Consider the following example, which is superficially closely related to Simon's algorithm and Shor's algorithm. Suppose you have two functions $f$ and $g$ from ${\bf Z}_{2^N}$ to $\{0,1\}^m$ with $m\geqslant N$. Suppose they are random oracles with the property that $f(x+s \bmod 2^N) = g(x)$ for some $s$ picked uniformly at random from ${\bf Z}_{2^N}$, and that $f(x)\neq f(y)$ whenever $x\neq y$. Otherwise, the functions are picked uniformly at random.

Can a quantum computer figure out s in polynomial time? Based on the superficial similarity with Simon's and Shor's algorithms, you might think so. Have three registers: the first a qubit, the second N qubits specifying an element of ${\bf Z}_{2^N}$, and the third giving the function value. It's easy to prepare the state $$\frac{1}{2^{(N+1)/2}} \sum_{x\in {\bf Z}_{2^N}} \left[ |0\rangle |x\rangle|f(x)\rangle +|1\rangle |x\rangle |g(x)\rangle \right].$$

Then, take the quantum Fourier transform of the second register and the Hadamard transform of the first register. Then, measure the value of the first qubit and the second register in the computational basis. Repeat this $cN$ times, where $c$ is around 4 or so.

On each measurement, the measured value $k$ of the second register is uniformly distributed over ${\bf Z}_{2^N}$. With probability $p_0(k,s)\equiv \cos^2(\pi ks/2^N)$, the first register will measure 0, and with probability $p_1(k,s)\equiv \sin^2(\pi ks/2^N)$, it will measure 1.

So, we have the pairs $(r_i,k_i)$. With probability nearly one, this sequence of pairs already contains enough information to figure out the value of $s$. Just evaluate the function $h(x)\equiv \prod_i p_{r_i}(k_i,x)$. $h(s)$ will have the largest value by far, typically exceeding a certain bound with probability nearly 1, while the other values will typically be less than another bound with probability nearly 1.
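This sampling and maximum-likelihood step can be simulated classically. The sketch below is a toy version under stated assumptions: the pairs $(r_i,k_i)$ are drawn directly from the distribution given in the answer rather than from a simulated circuit, the register size and hidden shift are small illustrative choices, and far more than $cN$ samples are used so the estimate is reliable. Note that $\cos^2$ is even, so $h(x)=h(2^N-x)$ and the statistics pin down $s$ only up to sign; both candidates can then be checked by direct evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                   # register size; the group is Z_{2^N}
M = 2 ** N
s = 23                  # hidden shift we try to recover (toy choice)
num_samples = 400       # far more than cN, to make this toy run reliable

def p0(k, x):
    """Probability of measuring r = 0 given k, for candidate shift x."""
    return np.cos(np.pi * k * x / M) ** 2

# Draw the (r_i, k_i) pairs from the stated distribution
ks = rng.integers(0, M, size=num_samples)
rs = (rng.random(num_samples) >= p0(ks, s)).astype(int)  # r = 1 w.p. sin^2

# log h(x) = sum_i log p_{r_i}(k_i, x), evaluated for every candidate x
xs = np.arange(M)
probs = np.where(rs[:, None] == 0, p0(ks[:, None], xs), 1 - p0(ks[:, None], xs))
log_h = np.log(probs + 1e-300).sum(axis=0)

best = int(np.argmax(log_h))
print(best)  # with high probability this equals s or M - s
```

The exhaustive maximization over all $M$ candidates is exactly what has no known polynomial-time quantum counterpart, which is the answer's point below.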

If $s\neq 0$, this can be found out the moment an $r$ value of 1 shows up. Similarly, if $s$ isn't an integer multiple of $2^{N-a}$, this can be found out after on the order of $2^a$ measurements. However, this can also be verified by $2^a$ direct evaluations.

The only problem is that there is no known polynomial-time quantum algorithm which can extract the value of $s$ from these pairs. So, this algorithm can output a sequence of pairs which can be used to test whether a number $x$ equals $s$, but it can't find $s$ by itself.

If there were a quantum algorithm which could, then the closely related problem of finding the seed of the following pseudorandom generator could likely also be solved. Let $p$ be a prime number $N$ bits long. The seed is $y_0\in {\bf Z}_p$, with $y_{i+1}=ay_i +b \bmod p$, and $x_i$ is the last bit of $y_i$. Given the sequence $x_i$ for $i$ up to $cN$, figure out the seed. Note that $y_i=[y_0+b/(a-1)]a^i-b/(a-1) \bmod p$.

0

An alternative view, not much discussed in the literature in this connection, is the quantum-logical approach, which emphasizes the non-Boolean structure of the properties of quantum systems. (The properties of a classical system form a Boolean algebra, essentially the abstract characterization of a set-theoretic structure. This is reflected in the Boolean character of classical logic and in the Boolean gates of a classical computer.) From this perspective, the picture is entirely different. Rather than 'computing all values of a function at once,' a quantum algorithm achieves an exponential speed-up over a classical algorithm by avoiding the computation of any values of the function at all.

A crucial difference between quantum and classical information is the possibility of selecting an exclusive disjunction, representing a global property of a function, among alternative possible disjunctions (for example, the 'constant' disjunction asserting that the value of the function for both arguments is either 0 or 1, or the 'balanced' disjunction asserting that the value of the function for both arguments is either the same as the argument or different from it) without determining the truth values of the disjuncts. Classically, an exclusive disjunction is true if and only if one of the disjuncts is true, so this disjunct-level information is essential classically, but it is redundant in a quantum computation. In effect, Deutsch's quantum circuit achieves its speed-up by exploiting the non-Boolean structure of quantum properties to efficiently distinguish between two disjunctive properties without determining the truth values of the relevant disjuncts (which represent the association of individual arguments with corresponding function values).

The point of the procedure is to avoid evaluating the function, in the sense of producing a value in its range for a value in its domain, when determining the global property, and it is this feature, impossible in the Boolean logic of classical computation, that leads to the speed-up relative to classical algorithms. For some recent work by Giuntini and others on logics associated with quantum gates, see under 'quantum computational logics' in the Other Internet Resources. (For quantum logic not specifically in relation to quantum computation, see the entry on quantum logic and quantum probability.)
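This "global property without the disjuncts" behavior can be seen in a minimal simulation of Deutsch's original one-qubit circuit. The sketch below, assuming nothing beyond numpy, decides constant vs. balanced with a single oracle query while the individual values $f(0)$ and $f(1)$ are never produced as outputs.

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def deutsch(f):
    """Deutsch's algorithm for f: {0,1} -> {0,1}.
    One oracle query decides 'constant' vs 'balanced'."""
    # Oracle U_f|x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix
    U = np.zeros((4, 4))
    for x, y in product((0, 1), repeat=2):
        U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    psi = np.kron([1.0, 0.0], [0.0, 1.0])   # start in |0>|1>
    psi = np.kron(H, H) @ psi               # Hadamard both qubits
    psi = U @ psi                           # single oracle query
    psi = np.kron(H, np.eye(2)) @ psi       # Hadamard the first qubit
    p_zero = psi[0] ** 2 + psi[1] ** 2      # P(first qubit reads 0)
    return "constant" if p_zero > 0.5 else "balanced"

for f, kind in [(lambda x: 0, "constant"), (lambda x: 1, "constant"),
                (lambda x: x, "balanced"), (lambda x: 1 - x, "balanced")]:
    assert deutsch(f) == kind
print("one query decides the disjunction")
```

The measured bit is $f(0)\oplus f(1)$, the global (disjunctive) property, and nothing in the output distribution reveals either disjunct individually.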

Chucl
  • 1
0

No, not really, at least not from Shor's algorithm. Essentially, Shor's algorithm is based on the discrete Fourier transform and period finding. What Shor's algorithm exploits is not some random or decoherent properties of the various components of the wave function, but coherent periodic phase variations: basically, a generalization of the Fourier transform. All the light Shor's algorithm can shed on quantum mechanics is that quantum mechanics can perform Fourier transforms on the components of the wave function!
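A purely classical toy calculation makes this point concrete: the core of Shor's algorithm is Fourier-based period finding. The numbers below ($N=15$, $a=7$) are illustrative choices, not anything from the answer; the classical FFT plays the role of the quantum Fourier transform here, so of course there is no speed-up, only the structure.

```python
import numpy as np
from math import gcd

N, a = 15, 7          # factor N = 15 using base a = 7 (toy choice)
M = 256               # transform length, a power of two much larger than r

f = np.array([pow(a, x, N) for x in range(M)], dtype=float)
r = next(x for x in range(1, M) if pow(a, x, N) == 1)  # true period (here 4)

# The DFT of the mean-subtracted sequence peaks only at multiples of M/r:
# the period shows up as coherent phase structure, nothing "decoherent"
spectrum = np.abs(np.fft.fft(f - f.mean()))
peaks = np.flatnonzero(spectrum > 0.25 * spectrum.max())
assert all(k % (M // r) == 0 for k in peaks)

# With an even period r, gcd(a^{r/2} +- 1, N) reveals the factors of N
print(gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))  # prints: 3 5
```

The quantum version replaces the exponentially long classical table `f` with a superposition and reads a peak location by measurement, but the mathematics being exploited is the same periodic phase structure.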

The case for the many-worlds interpretation would be much stronger if there were some quantum algorithm which could exploit and manifest decoherent properties of the various components in a variable manner. Such an algorithm has yet to be discovered, and might not exist at all.