
I'm wondering what is known about the relation between time and memory for polynomial-time algorithms (which are necessarily also polynomial-space). In particular, I would like to learn what is known about less time vs. less memory, in the following sense:

Is it known whether (say) any language $L$ recognized by a polynomial-time algorithm $A$ that uses space $O(n^s)$ (say $s>2$) can also be recognized by a polynomial-time algorithm $B$ that uses space $O(n^{s/2})$? Presumably any such $B$ with less memory would need significantly more time, but can we keep this extra time polynomial?

So, recapping: for poly-time algorithms, is there some general way to compensate for less memory while staying in polynomial time? Answers and references are greatly appreciated.

Update: Dan Brumleve and Timothy Chow have given valuable insights and references in the comments and in this chat. It seems that time-space trade-offs like the one in this question are still a bridge too far.
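In the $\mathrm{DTISP}$ notation that appears in the comments and answers below, where $\mathrm{DTISP}(t, s)$ denotes the class of languages decidable by a single algorithm running simultaneously in time $O(t)$ and space $O(s)$, the question asks: is it true that for all $a$ and $s$ there exists $c$ such that

$$\mathrm{DTISP}(n^a, n^s) \subseteq \mathrm{DTISP}(n^c, n^{s/2}) \, ?$$

(This is the formulation used in Timothy Chow's answer below.)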

  • Surely the answer to this is 'no', because some problems have, say, $\Omega(n^3)$ lower bounds on their time complexity regardless of space usage? – LSpice Jan 09 '18 at 20:46
  • The compensating algorithm gets more time, not less. So your remark doesn't immediately help me see why a memory-rich polytime algorithm cannot be mimicked by a less memory-rich but time-richer polytime algorithm. Or am I missing something obvious? – Franka Waaldijk Jan 09 '18 at 21:29
  • Sorry, you're quite right; I read "polynomial-time $\mathrm O(n^{s/2})$" even though you had clearly written "polynomial-space $\mathrm O(n^s)$". – LSpice Jan 10 '18 at 00:05
  • Don't worry, I'm also quite good at misreading :-) and I'm sure your remark helps other people to understand the question better. – Franka Waaldijk Jan 10 '18 at 00:12
  • Nothing like this is known for general computations, although I'm not sure I could back up this claim with a reference, since it's so far beyond the current frontier of knowledge that people don't usually discuss it. A useful keyword for you may be DTISP, which is used for complexity classes that are simultaneously time-bounded and space-bounded. – Timothy Chow Jan 11 '18 at 19:33
  • @TimothyChow: thanks for your comment. It surprises me a bit, since I've tried to pose this question in a really natural way. People discuss things like whether NP=PSPACE, which seems much harder and far less natural by comparison (and also P=L, see below, which is more natural but also harder). Anyway, if there are so few results on questions like this one, then that in itself would be interesting to me, so if you do think of any reference, please let me know. I think that some (model-invariant) time-memory trade-off theory is minimally required to get a grasp on computational complexity. – Franka Waaldijk Jan 12 '18 at 09:42
  • I'm chatting with Dan Brumleve on this subject (see the link in the comments below his answer), but I'm pressed for time right now and can only continue after the weekend... – Franka Waaldijk Jan 12 '18 at 09:54
  • Another observation which supports Timothy Chow's claim that this problem is hard: if we had $\text{DTISP}(n^a, n^b) = \text{DTISP}(n^{2a}, n^{b/2})$ for all $a, b$ in some model, then by iterating it we would obtain that for all $\epsilon > 0$, $\text{P} \subseteq \text{DTISP}(n^{O(1)}, n^{\epsilon})$ (the iteration is spelled out after these comments). That is really close to $\text{P} \subseteq \text{DTISP}(n^{O(1)}, n^{o(1)})$, which implies $\text{P} \neq \text{PSPACE}$. – Dan Brumleve Jan 13 '18 at 05:24
  • Actually, that implies $\text{P} \neq \text{PSPACE}$ already, by the space hierarchy theorem, so we won't be able to prove it. At best we could have some restricted version like $\text{DTIME}(n^k) \subseteq \text{DTISP}(n^{O(1)}, n^{f(k)})$ with $\liminf_{k \rightarrow \infty} f(k) = \infty$. I'll incorporate these comments into my answer after thinking this over a little more. – Dan Brumleve Jan 13 '18 at 18:22
  • @DanBrumleve: Yes, but that partly justifies the intuition for specifying $s>2$; the iteration then stops way before reaching your $\epsilon>0$ situation. Also, I disagree that things like $\text{P}=\text{PSPACE}$ are clearly impossible to (dis)prove; that would imply that they are independent of PA, which may be true, but then I would like to see some arguments for it. I've added more thoughts in chat... cheers, Frank – Franka Waaldijk Jan 17 '18 at 11:18
  • @FrankWaaldijk : When Dan says that we won't be able to prove it, I think he just means that humanity is currently too stupid and ignorant to prove such a thing. – Timothy Chow Jan 23 '18 at 23:41
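To spell out the iteration in Dan Brumleve's comment above, under his hypothetical identity $\text{DTISP}(n^a, n^b) = \text{DTISP}(n^{2a}, n^{b/2})$: applying it $k$ times gives

$$\text{DTISP}(n^a, n^b) = \text{DTISP}(n^{2a}, n^{b/2}) = \text{DTISP}(n^{4a}, n^{b/4}) = \cdots = \text{DTISP}(n^{2^k a}, n^{b/2^k}),$$

so for any fixed $\epsilon > 0$, choosing the constant $k$ with $b/2^k \leq \epsilon$ yields $\text{DTISP}(n^a, n^b) \subseteq \text{DTISP}(n^{O(1)}, n^{\epsilon})$. Since every language in $\text{P}$ lies in $\text{DTISP}(n^a, n^a)$ for some $a$, this would give $\text{P} \subseteq \text{DTISP}(n^{O(1)}, n^{\epsilon})$ for every $\epsilon > 0$.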

2 Answers


It is an open problem whether every polynomial-time algorithm can be made to run in $O(\log n)$ space. Note that every $O(\log n)$-space algorithm automatically runs in polynomial time, because it has only $2^{O(\log n)} = n^{O(1)}$ distinct configurations, and a halting deterministic computation never repeats a configuration. This problem is usually referred to as $\text{P} = \text{L}$. Amazingly, $\text{NP} = \text{L}$ is also open!
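To make the configuration count explicit (a standard back-of-the-envelope bound, where $Q$ is the machine's state set and $\Sigma$ its work-tape alphabet): a machine with a read-only input of length $n$ and $O(\log n)$ work-tape cells has at most

$$|Q| \cdot (n+1) \cdot O(\log n) \cdot |\Sigma|^{O(\log n)} = n^{O(1)}$$

configurations (one factor each for the state, the input-head position, the work-head position, and the work-tape contents), so a halting run lasts at most polynomially many steps.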

If we want to talk about particular algorithms and polynomial exponents, we also need to be specific about the model of computation. For example, consider the class of models of computation related to a Turing machine by time translations satisfying $T_b \in T_a \cdot S_a^{O(1)}$ and space translations satisfying $S_b \in O(S_a)$. This includes single-tape and multi-tape Turing machines and variations of the $\text{RAM}$ model. Within this class we can't say much, since an algorithm with $T_a = S_a$ in one model can have $T_b = S_b^{O(1)}$ in another; that is, by changing the model, an algorithm can be made arbitrarily more space-efficient relative to the time it takes (the short calculation below makes this concrete).
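Concretely (a sketch, with an illustrative exponent $k$; the constants hidden in the $O(1)$ depend on the pair of models): suppose an algorithm runs in time $T_a = n^k$ and space $S_a = n^k$ in model $a$. The translations above give

$$S_b \in O(n^k), \qquad T_b \in n^k \cdot (n^k)^{O(1)} = n^{O(k)},$$

so in model $b$ the same algorithm satisfies $T_b = S_b^{O(1)}$, where the exponent hidden in the $O(1)$ can be arbitrarily large across the class of models, even though time and space were equal in model $a$.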

I posed a related problem recently over on cstheory.SE.

Dan Brumleve
  • Thanks for your answer, Dan; I will have to ponder this, but now I first need some sleep :-) I will get back to you. Cheers, Frank – Franka Waaldijk Jan 10 '18 at 00:15
  • OK I have given it some thought, hopefully enough to comment sensibly. Your answer addresses a stronger question, the flavor of which is indeed very close to what I'm looking for. But to go from polynomial space $S_a\in O(n^{s})$ to $S_{a'}\in O(\log(n))$ seems far stronger than to go from $S_a\in O(n^{s})$ to $S_{a'}\in O(n^{s/2})$, especially if we specify $s>2$. That P = L is still open gives me some hope that the weaker question that I'm interested in has been (partially) answered in the affirmative. – Franka Waaldijk Jan 10 '18 at 11:56
  • I'm not sure if I understand your remark about having to specify the model of computation. I would think that there is some robustness in the idea of poly-time and poly-space, in the sense that if the above question has an answer in any common model of computation, then that answer applies to other models as well? – Franka Waaldijk Jan 10 '18 at 12:00
  • Could you shed some light on the relation between your question on cstheory.SE and the above question? (This is just my ignorance, but I don't readily understand your question there well enough to link it to my own.) Thanks in advance! – Franka Waaldijk Jan 10 '18 at 12:02
  • (By the way, I'm voting for your answer, but I won't accept it as the definitive answer just yet, in the hope that some improvement can be found.) – Franka Waaldijk Jan 10 '18 at 12:15
  • If we consider the class of models related by polynomial time translations, or the class I defined, then the complexity classes $\text{P}$ and $\text{L}$ are robust, but classes like $\text{DTIME}(n^2)$ are not. The class of models I defined is a little smaller, and within this class $\text{DTISP}(n^a, n^{o(1)})$ is robust for all $a$, even though that's not true for the class of models related by polynomial time translations. But in this smaller class $\text{DTISP}(n^a, n^b)$ is not robust (a short calculation after these comments spells this out). – Dan Brumleve Jan 10 '18 at 16:18
  • So the truth of a statement like "this algorithm can be made to run in a square root of the space, but it will take a square of time" can be relative to the model of computation. Maybe all of this model-invariance business is not interesting; in that case we can assume "on a 2-tape Turing machine" and start finding some examples. – Dan Brumleve Jan 10 '18 at 16:25
  • The problem I asked about on cstheory.SE is sort of in-between $\text{P}=\text{L}$ and your problem: it asks for a way of bringing a polynomial-time algorithm all the way down to subpolynomial ($n^{o(1)}$) space while keeping it polynomial time, but also for making exponential-time algorithms subexponential-space, etc. – Dan Brumleve Jan 10 '18 at 16:31
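To spell out the robustness calculation behind Dan Brumleve's comment above (a sketch using the translations defined in the answer; the $o(1)$ slack in the resulting time exponent is taken to be absorbed into the class): if $T_a \in O(n^a)$ and $S_a \in n^{o(1)}$, then

$$S_b \in O(S_a) \subseteq n^{o(1)}, \qquad T_b \in T_a \cdot S_a^{O(1)} \subseteq n^a \cdot n^{o(1)} = n^{a+o(1)},$$

so both resource bounds essentially survive the translation. By contrast, with a constant space exponent $S_a \in O(n^b)$, $b > 0$, the same translation gives only $T_b \in n^{a+O(b)}$, which can change the time exponent by an arbitrary amount; hence $\text{DTISP}(n^a, n^b)$ is not robust even within this smaller class of models.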

I suspect that what you're hoping for (i.e., that for all $a$ and $b$ there exists $c$ such that $\mathrm{DTISP}(n^a, n^b) \subseteq \mathrm{DTISP}(n^c, n^{b/2})$) is false, but for sure nobody has proved this. In fact, I suspect it may not even be true that $\mathrm{DTISP}(n^a, n^b) \subseteq \mathrm{DSPACE}(n^{b/2})$ (i.e., I suspect you cannot compensate for the loss of space with any amount of time). Certainly, if we consider all problems solvable in space $O(n^b)$ (with no restrictions on time), then the space hierarchy theorem tells us that we will be able to solve certain problems that are simply unsolvable in space $O(n^{b/2})$, no matter how much time we give ourselves. But settling your question will probably require proving a time-space hierarchy theorem, and very little is known unconditionally about such things (see this question on CS Theory Stack Exchange for some information).

As another illustration of our ignorance about time-space tradeoffs, note that it is not known whether a problem that can be solved in polynomial time, and that can also be solved (possibly by a different algorithm) in polylogarithmic space, is in SC (i.e., is solvable in polynomial time and polylogarithmic space simultaneously).
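For reference, the instance of the space hierarchy theorem being invoked here (a standard statement: $n^{b/2} = o(n^b)$ for $b > 0$, and both bounds are space-constructible):

$$\mathrm{DSPACE}(n^{b/2}) \subsetneq \mathrm{DSPACE}(n^b),$$

i.e., some language decidable in space $O(n^b)$ cannot be decided in space $O(n^{b/2})$ at all, regardless of the running time allowed.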

Timothy Chow
  • Thanks, Timothy, for summarizing your comments in the chat and above (by the way, I voted for this answer as well). Still, I believe that for poly-time algorithms some space loss can be compensated for, albeit perhaps not in poly-time; right now I only have a sketchy argument for this, though. – Franka Waaldijk Jan 24 '18 at 09:22