8

Is there a limit to the speed at which a memory, quantum or classical, can be read or written? I've seen this interesting question: Is there a physical limit to data transfer rate?. Here I'm more interested in the speed at which a state in memory can be accessed.

This question comes in part from something I've heard from a researcher in quantum error correction: what makes a (quantum) memory so difficult to build is that we want a system that is both easy for us to access and hard for the environment to corrupt.

So it seems that a quickly readable memory will be more prone to errors. What's the tradeoff?

Subquestions

What quantity should be used to represent this speed?

I guess the amount of energy you are willing to invest in the process should play a role somewhere, maybe through the uncertainty principle?

user140255
  • 1,480
  • 3
    Would you be happy with an answer by way of a specific example? I can give a quantitative analysis of the write speed of a superconducting qubit compared with the "corruption" time, and show how the ratio of those quantities depends on the signal and noise bandwidth of the control device. I'm reasonably sure that the result is probably pretty close to general, but I haven't proven it. – DanielSank Feb 06 '19 at 19:40
  • 2
    @DanielSank Well I'd be interested in that answer. – rob Feb 06 '19 at 19:52
  • @rob well you aren't offering a bounty ;-) – DanielSank Feb 06 '19 at 21:22
  • @DanielSank I'd be interested in your example if you feel it could give insight on the general case! – user140255 Feb 07 '19 at 01:11
  • If you ignore all the latencies, seek times, command processing delays, etc. and also assume there is no limit in the material acting as the storage itself, then it seems that the electronics supporting everything else may become the bottleneck (e.g., see https://electronics.stackexchange.com/q/66694). If the signal frequency used to read/write is too high, I would assume the materials transmitting said signal would begin to act like a low pass filter. – honeste_vivere Feb 12 '19 at 15:46
  • @DanielSank, still interested in your answer! :) – user140255 Feb 14 '19 at 05:17
  • @Undead done. I hope it's helpful. I really don't like posting answers that I don't think definitively answer the question, so I really hesitated on this one :-P – DanielSank Feb 14 '19 at 08:04

3 Answers

5

Consider a quantum system with two levels separated by an energy $E = \hbar \omega$. In a sense, this is the smallest possible physical system from the point of view of information content. When a two level system can be accurately controlled, measured, and interacted with other two level systems, it's reasonable to call it a "quantum bit" or "qubit".

To operate a qubit, we need a device that can write information to it. In other words, we need a device that can control the qubit's quantum state. This controller device will inevitably be classical in nature, because at some level it has to be something that we can control with our hands/eyes/etc. Such a classical system has, by nature, some degree of noise. So we can ask a simple question: given a classical controller with a certain maximum coherent power output and a certain amount of noise, how does the write speed compare with the rate by which our control system's noise corrupts the qubit quantum state?

Let the power output of the controller be $P$ and its noise spectral density be $S(\omega)$. We also need to know how strongly the controller is coupled to the quantum system. In the case of superconducting qubits, the controller always has some resistance, and so coupling the controller to the qubit causes some energy decay in the qubit even if the controller had no noise. The dimensionless quantity $$Q \equiv \omega \times \left( \frac{\text{energy stored in qubit}}{\text{energy loss rate to controller}} \right)$$ captures the coupling strength between the controller and the qubit. The stuff in parentheses is actually just the energy decay lifetime $T_1$ that the controller imparts on the qubit, so $$Q = \omega \times T_1 \, .$$

  • If the coupling is strong, the qubit feels the controller's resistance more, energy leaks out faster, and $Q$ goes down.

  • If the coupling is weak, the qubit feels the controller's resistance less, the qubit keeps its energy longer, and $Q$ goes up.

The same argument goes for noise: if the qubit-controller coupling is weak and $Q$ is high, the qubit feels the controller's noise less strongly. But there's a tension here: weak coupling and high $Q$ means that the signal we're trying to send in from the controller hits the qubit less strongly, so our write time will go up.

Working through the control and noise theory gives two equations: $$T_\text{write} = \frac{\pi}{2} \sqrt{\frac{Q \hbar}{P}} \qquad \text{and} \qquad T_\text{noise} = \frac{Q \hbar}{S(\omega)}$$ where $T_\text{write}$ is the time it takes to flip the qubit from $\lvert 0 \rangle$ to $\lvert 1 \rangle$ (i.e. the write time) and $T_\text{noise}$ is the time it takes for the controller's noise to make the qubit's state random.

Generally speaking, we want $T_\text{write}$ to be small (fast write speed) and $T_\text{noise}$ to be large (long time before noise kills the qubit), so we want $T_\text{noise}/T_\text{write}$ to be big. Well, we have $$\frac{T_\text{noise}}{T_\text{write}} = \frac{2}{\pi} \frac{1}{S(\omega)} \sqrt{\frac{P}{Q \hbar}} \, . \tag{$\star$} $$ Therefore, to maximize $T_\text{noise}/T_\text{write}$, we want

  • Large signal power $P$.

  • Low noise spectral density $S$ at the qubit's resonance frequency.

What about $Q$? It looks like we want low $Q$, but that's not really true because lowering $Q$ means we're lowering the time the qubit survives before leaking its energy into the controller (remember $Q = \omega T_1$). In real applications, we pick the coupling strength between the qubit and controller by choosing a $Q$ large enough such that $T_1$ is several orders of magnitude larger than the time we need to do quantum computations, e.g. $T_1 \approx 10^4 \times T_\text{write}$. Once you've picked $Q$, the power and noise of your controller determine the ratio of $T_\text{noise} / T_\text{write}$. Fortunately, commercial equipment gets us $P$ and $S$ values that are good enough. Note that the value of $\hbar$ comes into Equation ($\star$), and we're kind of lucky that the value is such that it doesn't prevent quantum computing from working.
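As a rough numerical sketch of these formulas, here is the full chain $Q = \omega T_1 \rightarrow T_\text{write}, T_\text{noise}$ evaluated in Python. Every parameter value (qubit frequency, lifetime, drive power, noise density) is an illustrative assumption chosen to be in a plausible range for superconducting qubits, not a measured or quoted value:

```python
import math

# Physical constants
hbar = 1.054571817e-34  # J*s
k_B = 1.380649e-23      # J/K

# Assumed (illustrative) qubit and controller parameters
omega = 2 * math.pi * 5e9   # qubit frequency, assumed 5 GHz
T1 = 10e-6                  # energy-decay lifetime set by controller coupling, assumed 10 us
Q = omega * T1              # Q = omega * T1, as in the answer

P = 1e-13                   # drive power reaching the qubit, assumed ~ -100 dBm
S = k_B * 0.020             # noise spectral density, assumed thermal at 20 mK (J = W/Hz)

# The two formulas from the answer
T_write = (math.pi / 2) * math.sqrt(Q * hbar / P)
T_noise = Q * hbar / S

print(f"Q                 ~ {Q:.2e}")
print(f"T_write           ~ {T_write * 1e9:.1f} ns")
print(f"T_noise           ~ {T_noise * 1e6:.0f} us")
print(f"T_noise / T_write ~ {T_noise / T_write:.0f}")
```

With these assumed numbers the write time comes out at tens of nanoseconds and the noise time at roughly a hundred microseconds, so the ratio $T_\text{noise}/T_\text{write}$ is in the thousands, which is the kind of margin the answer says real hardware needs.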

DanielSank
  • 24,439
4

Maybe not the most thought-out answer, but one way to think about this is via the universal entropy bound, which states that the largest amount of entropy (information) that can be stored in a sphere of radius $R$ is the entropy of a black hole of radius $R$ (or, more generally, the maximum entropy you can fit into a system scales as $S\sim ER$, where $E$ is its energy and $R$ a characteristic size). Using the Bekenstein-Hawking formula for the entropy of a black hole, the information stored in a sphere of radius $R$ must satisfy

$$S(R)\leq \frac{A}{4L_p^2},$$

where $A = 4\pi R^2$ is the sphere's surface area and $L_p$ is the Planck length.
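To get a feel for the scale of this bound, here is a quick numerical evaluation for an assumed (purely illustrative) sphere radius of 1 cm:

```python
import math

l_p = 1.616255e-35  # Planck length in meters
R = 0.01            # sphere radius, assumed 1 cm for illustration

A = 4 * math.pi * R**2       # surface area of the sphere
S_max = A / (4 * l_p**2)     # Bekenstein-Hawking bound, dimensionless entropy (nats)
bits = S_max / math.log(2)   # same bound expressed in bits

print(f"S_max ~ {S_max:.2e} nats ~ {bits:.2e} bits")
```

Even a centimeter-sized region can in principle hold on the order of $10^{66}$ nats, so the bound is astronomically far above any practical memory.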

To get the universal physical bound on information writing/reading, I would imagine a ball of information whose surface expands at the speed of light. The amount of information written per unit time would be given by

$$c\frac{\mathrm{d}S}{\mathrm{d}R}=\frac{2\pi cR}{L_p^2}=\frac{2\pi c^4R}{\hbar G}.$$

If one imagines this ball expanding forever, then the rate of entropy increase is unbounded as the radius of your newly-formed black hole memory system tends to infinity. Of course, you wouldn't want your system to be an infinitely large black hole, so the maximum rate would be given by

$$\frac{\mathrm{d}S}{\mathrm{d}t}\bigg|_{\text{max}}=\frac{2\pi c^4R_{\text{max}}}{\hbar G}.$$
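Plugging in constants makes the scale of this maximum rate concrete. The value of $R_{\text{max}}$ below is an arbitrary illustrative choice (1 cm), not something the bound itself fixes:

```python
import math

c = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

R_max = 0.01  # maximum radius of the "memory", assumed 1 cm for illustration

# dS/dt|max = 2 pi c^4 R_max / (hbar G), in nats per second
rate = 2 * math.pi * c**4 * R_max / (hbar * G)
print(f"dS/dt|max ~ {rate:.2e} nats/s")
```

For a 1 cm ball this comes out around $10^{77}$ nats per second, which underlines that the gravitational bound is nowhere near a constraint on real memory hardware.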

There are, of course, probably some subtleties I'm missing out on here, but this is at least my first instinct on a reasonable physical bound.

Now for the matter of actually reading the damn thing back. Good luck with that.

Bob Knighton
  • 8,460
1

Landauer's principle gives a thermodynamic bound on the minimum energy required to erase/overwrite information. Together with constraints from GR, I believe this should also constrain the writing speed.
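For scale, the Landauer limit $E_\text{min} = k_B T \ln 2$ per erased bit can be evaluated at an assumed operating temperature (room temperature here, purely for illustration):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # operating temperature, assumed 300 K for illustration

# Landauer's principle: minimum energy dissipated to erase one bit
E_bit = k_B * T * math.log(2)
print(f"Landauer bound at {T:.0f} K: {E_bit:.2e} J per bit")
```

At 300 K this is about $3 \times 10^{-21}$ J per bit, so a memory with a fixed power budget $P$ could erase at most roughly $P / (k_B T \ln 2)$ bits per second.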

alphanum
  • 125