
This is a big-list community question, so I'm sorry in advance if it is deemed too soft but I haven't seen anything similar yet.

I've seen computer scientists post questions looking to learn things from pure maths. This is basically the other way around... My ignorance may prevent me from being as specific as I think I would like to be and so I have separated my main question into two.

What good books - readable introductions - are there for mathematicians to learn about computer science?

By this I really mean the science of how computers work. There are perhaps books out there written in a style mathematicians can relate to: not practicality-focused, starting from the more abstract fundamentals and building up. (I may be wrong, but I'm under the impression that a lot of books in other disciplines shy away from presenting things this way around, whereas mathematicians, for better or worse, are accustomed to it.) Partly to illustrate what the first question is not asking, the second question is

What good books - readable introductions - are there for mathematicians looking to learn about theoretical computer science, as a subfield of maths?

Here is where my ignorance prevents me from explaining the question any more because I can only assume these two aren't the same thing...

It seems quite frustrating that I have made it to grad school and know very little about computers and theoretical CS.

Standard "one recommendation per post" is probably appropriate, + a few sentences about what the books did for you. Also, maybe I should say that I'm not looking to ditch my current interests and become a computer scientist, so things being readable is a fairly strong condition. I'm not looking to become an expert, just to deal with my own ignorance. Thanks in advance.

Spencer
  • I'm not sure I understand the first question. Is it about software or hardware? – Qiaochu Yuan Jan 05 '11 at 17:02
  • I don't want to sidetrack the discussion too much, but: is it clear that the best CS books for mathematicians would be any different from the best CS books for CS people? By contrast, I well understand the need for physics-for-mathematicians books, because math as viewed and practiced by physicists is not a subset of math as viewed and practiced by mathematicians. But is this the case for CS? [Not rhetorical questions. By the way, I don't know much CS myself and am looking forward to the answers.] – Pete L. Clark Jan 05 '11 at 17:06
  • Knuth's TAOCP are the best CS (not programming!) books for anyone. – Steve Huntsman Jan 05 '11 at 18:01
  • @Steve, hey, there's a lot of assembler code in TAOCP, so it's certainly a very programming-oriented set of books ;-) – Suvrit Jan 05 '11 at 18:08
  • @Suvrit: What I mean is that while on the one hand "Teach Yourself Java" isn't a CS book so much as a programming book, TAOCP is more a CS book, and the reader shouldn't expect these to be the same. Moreover, books about computational complexity or category theory for CS are not the place to learn the basic elements of CS proper. Only after studying actual practical algorithms in some detail can a reader really gain the full benefit of the more abstract views of the subject. – Steve Huntsman Jan 05 '11 at 18:18
  • @Steve: I was just attempting to be partially cheeky; I understood what you meant/implied. – Suvrit Jan 05 '11 at 20:22
  • @Suvrit: I do not quite understand your remark about assembly code. There is actually very little assembly code in TAOCP, and it is used to illustrate some concepts that cannot be easily illustrated using high-level languages (like coroutine implementation). The vast majority of the algorithms in TAOCP are written in English. – Dmitri Pavlov Jan 05 '11 at 20:48
  • @Dmitri: My statement was not to be taken too literally (which is why I put a ;-) there). However, recall that Knuth says in TAOCP: "...any person more than casually interested in computers should be well schooled in machine language..."; and then there is the MIX computer, and the MMIX computer, which Knuth uses to show implementations of some of the algorithms that are described in English. – Suvrit Jan 05 '11 at 21:28
  • You may want to check ACM's classification to get a less biased idea about CS: http://portal.acm.org/ccs.cfm – Kaveh Mar 04 '11 at 05:47
  • It depends: do you want highly mathematical/technical references, or would you like to approach it with a "zen beginner's mind"? For the latter, consider computer science books aimed at young adults, which might also be useful to adult newcomers; see cs.se, http://cs.stackexchange.com/questions/888/computer-science-book-for-young-adults – vzn Sep 20 '12 at 20:21
  • You also mention directly "how computers work"; see e.g. cs.se, http://cs.stackexchange.com/questions/3390/how-does-a-computer-work which also has a link to a nearly identical programmers.se question – vzn Sep 20 '12 at 20:23
  • See also tcs.se, "what books should everyone read": http://cstheory.stackexchange.com/questions/3253/what-books-should-everyone-read – vzn Sep 20 '12 at 20:28
  • Learning to program in both C and Haskell can teach you a lot of CS on its own. – wlad Dec 24 '22 at 17:17
  • Basically, get a sample of programming paradigms. Indirectly, you will learn a lot of CS. The two most eye-opening are the systems paradigm (C / Rust / C++) and the functional paradigm (Haskell / Scheme / Clojure / OCaml / Elixir). – wlad Dec 24 '22 at 17:25
  • @wlad What is the systems paradigm? Rust seems to be quite different from C, in the sense that its lifetime system draws heavily on linear logic, and Rust also demands the mut modifier for variables with side effects, so in that sense it has some flavor of functional programming. – Z. M Dec 24 '22 at 19:05
  • @Z.M By "systems", I mean with no runtime, with no garbage collector, and with access to pointers. Rust is most of those things, I guess, but it doesn't expose pointers unless you use "unsafe". – wlad Dec 24 '22 at 19:09

17 Answers


For the second question (theoretical computer science) I strongly recommend Sipser's Introduction to the Theory of Computation. It is a very easy read for someone with a math background, and requires essentially no specific prior knowledge. It amounts to a one-semester first course in computability and complexity theory. As such, you get all the nice classical results, but not the more recent results, which are less complete, polished, or appealing to an outsider who just wants a "taste".

Noah Stein
  • Intro to Automata Theory, Languages, and Computation, Hopcroft, Ullman, Motwani (http://infolab.stanford.edu/~ullman/ialc.html) is a good alternative. – Mitch Jan 05 '11 at 19:00

I'm surprised nobody has recommended Knuth's The Art of Computer Programming.

John D. Cook
  • You are? That's a mathematics book, having very little to do with computing, in my opinion. – Igor Rivin Jan 05 '11 at 18:28
  • Knuth's books are on the shelf of almost every serious programmer I have ever met. – drbobmeister Jan 05 '11 at 19:08
  • @Igor - I was about to say the same thing, but it certainly is a text about mathematics that is inspired by thinking about writing programs... well, those programs that Don Knuth was thinking about in 1960. – Mitch Jan 05 '11 at 19:15
  • @drbobmeister: I have them on my shelf, and they have been quite helpful for questions about polynomials over $\mathbb{F}_p$ and the like, but NEVER, EVER for anything to do with programming. @Mitch Harris: what? You don't program MIX daily???? – Igor Rivin Jan 05 '11 at 19:38
  • @Igor: Theoretical computer science might as well be thought of as a part of mathematics. I don't find Knuth's books particularly useful for my mathematical work, but I have used algorithms from Knuth's books literally hundreds of times in my programs. – Dmitri Pavlov Jan 05 '11 at 20:41
  • @Igor: I also don't understand your remark about MIX. MIX (and its successor MMIX) are used only a few times in the entire series of books to illustrate some concepts that cannot be illustrated easily in a high-level language (like implementing coroutines, for example). The vast majority of the algorithms in Knuth's books are written in English, not in MIX or MMIX. – Dmitri Pavlov Jan 05 '11 at 20:43
  • @Igor: Well, Knuth is where I learned about trees and stacks, both of which I've used -- a lot -- in programming. Sure, since then other books have come along. But Knuth is where a lot of that sort of stuff was first coherently organized. (The MIX business, though, I think is pretty forgettable and ignorable.) – Carl Offner Jan 06 '11 at 02:03
  • @Dmitri Pavlov: The algorithms described in Knuth take up MAYBE 5% of the volume, and the rest consists of related parts of mathematics, used to get exact asymptotics (not very useful in applications, since the actual running time is heavily hardware dependent), to third order (REALLY useless in applications, though mathematically quite cool). Plus (a) the books are very old, and much better (and different kinds of) algorithms now exist, and (b) subroutine libraries in all known programming languages implement optimized versions of the most modern algorithms. So I am not sure what you mean. – Igor Rivin Jan 06 '11 at 02:50
  • @Carl: First does not mean best. Cormen, Leiserson, Rivest, though imperfect, is vastly superior for learning that sort of stuff from. – Igor Rivin Jan 06 '11 at 02:50
  • @Igor: Well, that's a different matter. I was responding to your statement that Knuth's work had "very little to do with computing". Just on the face of it, that's not true at all. I in fact teach an analysis of algorithms class. I use Cormen et al. And I also point out to my class that the very phrase "analysis of algorithms" was coined by Knuth in his preface to Volume 1. His work is fascinating, both for the wealth of information it contains, and also for the historical insights of someone who was "there at the time." – Carl Offner Jan 06 '11 at 22:28
  • @Igor: Knuth's books were updated in 1997. Volume 4A will be published this year. Knuth will update the first three volumes again when Volume 5 is finished. Thus one cannot say that his books are old. Your 5% estimate is very far from the truth; my estimate is that at most 1/3 of the books is devoted to matters that are currently irrelevant for programming. As for (b), it simply does not make any sense. What are you trying to say with it? Should we study proofs of known theorems, provided that these proofs are already written down somewhere? – Dmitri Pavlov Jan 07 '11 at 00:14
  • @Igor: CLRS is meant for theorists, not programmers. For example, to learn linked lists from CLRS one needs to possess a very high level of programming culture, and a programmer with such a high level of programming culture almost certainly already knows linked lists. Knuth's books are much better in this respect, because they actually explain how to implement linked lists. And you wouldn't learn from CLRS that Fibonacci heaps are unsuitable for real programming, even though they have better asymptotics. No offense meant, but to claim that CLRS is "vastly superior" is simply ridiculous. – Dmitri Pavlov Jan 07 '11 at 00:22
  • @Dmitri and @Carl: I have nothing but respect for Knuth and his books -- but as mathematics books, not practical computing books. Just look at the exercises: they are all math problems. Calling the books "The Art of Computer Programming" was great marketing, but not exactly truth in advertising. CLRS is far from perfect, but I have (very successfully, in the sense that the students appeared to have learned a lot) taught algorithms from that book. As for my comment on subroutine libraries: Dmitri said he used algorithms from Knuth "hundreds of times". – Igor Rivin Jan 09 '11 at 03:56
  • @Dmitri and @Carl, continued: This, to me, seems to indicate an inefficiency in your work. C++ or Java or Python or OCaml -- well, any currently used programming language (except possibly Fortran) -- implements a Map type (C has many hash table libraries available). Is it implemented as hash tables? B-trees? Do I really care? Do I REALLY want to implement an optimized quicksort myself? Yes, I have done such things, but I have no interest in doing them again. Should you know how these things work? Sure, but plowing through $\aleph_0$ pages of Knuth seems not very efficient... – Igor Rivin Jan 09 '11 at 04:01
  • @Igor: It might be the case that the phrase “the students appeared to have learned a lot” characterizes the situation much more accurately than you would like it to. I really don't see any point in understanding how linked lists work if you cannot implement one yourself efficiently. For me it's much easier and faster to implement an algorithm myself than to try to look it up in some library. Also, the implementations of standard algorithms such as linked lists or hash tables in languages like C++, Java, Python, or OCaml are simply way too slow. – Dmitri Pavlov Jan 15 '11 at 04:56
  • @DmitriPavlov I am probably late. I remember that I learned linked lists from CLRS when I was a high school student (more than 15 years ago) without a "very high level of programming culture". I believe that I looked at the second edition; glancing at it now on Google Books, I would not agree with your judgment. For example, §10.3 covers the necessary material on memory management. – Z. M Dec 24 '22 at 18:37
  • @CarlOffner I wonder whether "analysis of algorithms" (in Knuth's way) is useful in practice; I mean, the techniques for seeking asymptotic expansions of running time instead of a coarse asymptotic equivalence. I guess that, in industry, one simply profiles different algorithms with the "same" time complexity: the current architecture is much more complicated than MMIX (e.g. there are out-of-order executions). – Z. M Dec 24 '22 at 18:56
  • @Z.M: Looking at my comments, I should have specified that I meant “learn to use _efficiently_”. While CLRS do discuss how to pack lists and trees into arrays from a theoretical point of view, they precede the whole discussion with the sentence “How do we implement pointers and objects in languages, such as Fortran, that do not provide them?”, which does create an impression of the irrelevance of the material for pretty much any other programming language. And the fact that one also has to use such ideas in languages that offer dynamic memory management is not mentioned at all. – Dmitri Pavlov Dec 24 '22 at 19:14
  • @Z.M: There are plenty of algorithms out there (in many areas, the majority) whose running time is dominated by memory access (i.e., moving things between RAM and registers). For such applications, constants most certainly do matter, and it is important to understand how memory access works and how to make it more efficient. This can easily result in a speedup factor of 10 or more, and cannot be ignored. – Dmitri Pavlov Dec 24 '22 at 19:17
  • @Z.M: For example, the fast sorting algorithms in CLRS have running time O(n log n), and CLRS explain a bunch of them. In practical implementations, the running time is dominated by memory access. Using asymptotics alone, we cannot say which algorithm is better. Variants of quicksort have the smallest constant, and exploit the benefits of caching by using mostly sequential memory access. Of course, in the case of sorting algorithms these facts are well known, but this applies equally well to other situations, where such an analysis need not exist. – Dmitri Pavlov Dec 24 '22 at 19:35
  • @DmitriPavlov This is machine/platform dependent, and I am afraid that the efficiency difference between arrays and pointers is no longer significant, or even existent. Look at this answer on StackOverflow. The modern compilers of today are very different from those of decades ago. Years ago, I wrote some code in C and Rust to test, and the Rust code ran faster than the C, both compiled with best optimizations. I looked at the assembly code and did not understand it, but it was very different from the 8086 assembly that I learnt; see e.g. https://en.wikipedia.org/wiki/Automatic_vectorization – Z. M Dec 25 '22 at 12:51
  • @Z.M: You are talking about an entirely different set of issues (vectorization, pointers), none of which are discussed above. What is discussed above is whether CLRS give reasonable advice on implementing linked lists, trees, etc. I posit they do not, since a typical software engineer reading their book would be led to think that it's okay to dynamically allocate individual elements of a list, when in reality the resulting program will be orders of magnitude slower than one that packs the list into an array. This has nothing to do with vectorization. – Dmitri Pavlov Dec 25 '22 at 17:46
  • @DmitriPavlov Thanks. Now it is clearer (it was clear, but I was distracted by the comments about "caching", which led me to think that you were talking about technicalities), and fair enough. But the comment "you wouldn't learn from CLRS that Fibonacci heaps are unsuitable for real programming" seems to be incorrect. In the second edition, the introduction to Fibonacci heaps reads: "From a practical point of view, however, the constant factors and programming complexity ... less desirable than ordinary binary (or k-ary) heaps for most applications." – Z. M Dec 25 '22 at 18:52
  • @Z.M: Concerning Fibonacci heaps, the book does not explain the reasoning behind its claim, nor how to identify “most applications”. For me, “learning” means understanding why this is the case and being able to make such judgments for other data structures, not just communicating some vaguely stated claims without any evidence, like CLRS do. – Dmitri Pavlov Dec 25 '22 at 20:05
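The recurring point above about packing a linked list into an array (the technique CLRS discuss in §10.3 and that Knuth treats at length) can be sketched as follows. This is a minimal illustrative sketch, not code from either book: all nodes live in two parallel arrays, with a free list threaded through the unused slots, so no per-node heap allocation ever happens.

```python
# Minimal sketch of an array-backed ("pooled") singly linked list:
# nodes live in parallel arrays, so traversal walks contiguous
# memory instead of chasing individually heap-allocated nodes.

class PooledList:
    def __init__(self, capacity):
        self.key = [None] * capacity                 # node payloads
        self.nxt = list(range(1, capacity)) + [-1]   # successor indices
        self.free = 0                                # head of the free list
        self.head = -1                               # head of the live list

    def push_front(self, k):
        if self.free == -1:
            raise MemoryError("pool exhausted")
        i, self.free = self.free, self.nxt[self.free]  # pop a free slot
        self.key[i], self.nxt[i] = k, self.head
        self.head = i

    def pop_front(self):
        i = self.head
        if i == -1:
            raise IndexError("empty list")
        self.head = self.nxt[i]
        self.nxt[i], self.free = self.free, i          # return slot to pool
        return self.key[i]

    def to_list(self):
        out, i = [], self.head
        while i != -1:
            out.append(self.key[i])
            i = self.nxt[i]
        return out
```

In a language like C, the same layout replaces per-node malloc calls with index arithmetic into one contiguous block, which is where the memory-locality argument in the comments above comes from.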

Theory A (Algorithms/Complexity):

  1. Kleinberg, Tardos - Algorithms
  2. Easley, Kleinberg - Networks, Crowds, and Markets
  3. Nisan, Roughgarden, Tardos, Vazirani - Algorithmic Game Theory
  4. Arora, Barak - Computational Complexity: A Modern Approach

Theory B (Logic/Semantics/Automated Reasoning):

  1. Benjamin Pierce - Types and Programming Languages
  2. Benjamin Pierce - Software Foundations (in Coq; really lets you see how to translate theory into practice)
  3. The links here
  4. Harrison - Handbook of Practical Logic and Automated Reasoning

"Computational Complexity" by Christos Papadimitriou - very good introduction to logic/theory of computation (Turing machines etc.) and computational complexity. One of the best textbooks IMHO.


Abelson and Sussman's Structure and Interpretation of Computer Programs is a great introduction (but not a read-before-going-to-bed book: you must do the exercises, a comment that applies no matter what book you are learning the subject from).

Igor Rivin

If you want to know how computers work--really work, in the real world--and you have some basic understanding of logic design, so you know how elementary gates and latches are made, and some basic understanding of elementary CPU structure--ALUs, registers, sequencers, and controllers--then I think the books by Patterson and Hennessy are tops: Computer Architecture: A Quantitative Approach and Computer Organization and Design: The Hardware/Software Interface. They're not terribly mathematical, but they show, structurally, how modern processors organize, process, and move data.
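To make "how elementary gates are made" concrete, here is a tiny illustrative sketch (mine, not from either book): every classical gate, and hence a one-bit adder, can be built from NAND alone, which is roughly where such hardware-oriented texts begin.

```python
# Toy sketch: building the standard gates, and a one-bit full adder,
# out of NAND alone. Bits are represented as the integers 0 and 1.

def nand(a, b): return 1 - (a & b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """One-bit adder built from the gates above: returns (sum, carry_out)."""
    s1 = xor(a, b)
    return xor(s1, carry_in), or_(and_(a, b), and_(s1, carry_in))
```

Chaining such full adders bit by bit gives a ripple-carry adder, one of the first ALU components these books construct.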

If you need the more basic stuff first, there are any number of good introductory texts.

On the theoretical side, I would have to agree with John D. Cook that Knuth is the place to go. (He gives a link in his answer.) The theory in Knuth is illustrated in terms of a realistic machine architecture: Knuth's MIX computer, for which simulators are available online if you want to try it out. The model in Knuth is very much a programmer's model, i.e., a von Neumann architecture, rather than the Turing- or state-machine-based models utilized by most theoretical computer scientists, but it shows how things really look in practice. Furthermore, his books are quite sophisticated mathematically, and certainly propose some problems which at the time of publication were definitely open and/or research level.

  • Computer architecture (à la Patterson and Hennessy), as interesting and useful as it is, with mathematical underpinnings and relevance to understanding the motivations for TCS, is in no way TCS and is not mathematics. – Mitch Jan 05 '11 at 19:30
  • @Mitch: The question was about CS in general, not just TCS. – Dmitri Pavlov Jan 05 '11 at 20:45
  • @Mitch: The OP asked, "By this I really mean the science of how computers work." This is not a purely theoretical issue, as evidenced by our participation in MO! – drbobmeister Jan 05 '11 at 21:50
  • Patterson and Hennessy's Computer Architecture: A Quantitative Approach is great for understanding how computers work. – Beren Sanders Jan 07 '11 at 00:03

For those who want to go from zero knowledge to substantial breadth quickly, I recommend A. K. Dewdney's The New Turing Omnibus. Once that book is finished, tackling some of the more sophisticated books like Knuth, Aho-Hopcroft-Ullman, and the like seems more reasonable. Further, the classic books will teach CS theory that is, well, classic, and will leave the reader ill-prepared (in my opinion) for the theoretical and technological developments of this millennium. The New Turing Omnibus will prepare the reader for classic CS theory, but will not impede those who wish to learn more recent theory.

The book has influenced my writing style. One project I am working on involves "moving a mountain one pebble at a time", and is inspired by the mountain of a book Dewdney has created.

Gerhard "Ask Me About System Design" Paseman, 2011.01.05


I guess I've managed to take some of the path you want. My training was as a mathematician but over the last few years I've learnt a lot about theoretical computer science. (I've programmed for many years, but had limited awareness of the existence of theoretical computer science as a field in its own right.)

I mostly learnt from documents available on the web, of which there is no shortage. The exceptions were some parts of Boolos and Jeffrey for the theory of recursive functions and computability and Basic Category Theory for Computer Scientists to help with grasping some of the beautiful connections with category theory. (I know it says "for computer scientists" but I've never met a computer scientist who liked it. On the other hand, I found it very useful as a mathematician.) Years ago, Cormen et al. really opened my eyes to the kinds of non-obvious algorithms that exist and shouldn't be hard for a mathematician to read.

Dan Piponi

While I admire both The Art of Computer Programming and Structure and Interpretation of Computer Programs, I thought I'd mention two other books in answer to the first of your two questions.

J. Glenn Brookshear's Computer Science: An Overview, 10th ed. is intelligently organized and quite well written. It also has worthwhile chapter review problems and useful bibliographies at the end of each chapter.

David J. Eck's The Most Complex Machine: A Survey of Computers and Computing might be worth looking at. If I remember correctly, Eck received his PhD in mathematics from Brandeis University in 1980.

grshutt
  • I'd like to strongly second the recommendation of David Eck's Most Complex Machine. It is a great way for a mathematician to learn what CS is all about. David is not only a first-rate expositor, but he has one foot firmly in Math and the other just as firmly in CS; e.g., he is chair of the Dept. of Math and CS at Hobart & Wm. Smith, and while his thesis "Gauge-natural Bundles and Generalized Gauge Theories" was very pure math, it was the first math thesis written in TeX---and in those days you had to be a programmer to use TeX. (And BTW, Eck also has a great book on how to program with Java.) – Dick Palais Jan 09 '11 at 20:12

Computer Networks by Andrew S. Tanenbaum is the ideal introduction to computer networks. We used that book for a one-semester self-study, distance education course in Computer Science at the UP Open University.

  • The same author has written Structured Computer Organization, which is a very easy-to-read introduction to the actual mechanics of how computers work. It begins with Boolean algebra and transistor-transistor logic, and moves step by step up to high-level programming languages like Java. This seems to be what Spencer was concerned about, perhaps more than learning how to program. I would have included a link, but I couldn't figure out how. – David White Apr 12 '11 at 13:23
  • @David, +1 for sharing! :) Thank YOU! – Jose Arnaldo Bebita Dris Apr 13 '11 at 02:54

You could also be interested in the dragon book: "Compilers: principles, techniques, and tools", by Aho, Lam, Sethi and Ullman, which is considered a classic in computer science, from the angle of compilation theory and practice.

Julien Puydt

This question is meaningless if you don't specify the goal(s) you have in mind. Goals include teaching, using, understanding, linking...
Some people, like me, think that Knuth is bad from a certain point of view: his writings are interesting, of high quality, and contain precise and accurate facts, yet they are "not functional" and misleading in a way.

As a mathematician, try to read some of Wadler (see https://homepages.inf.ed.ac.uk/wadler/). It is specific but may show you what computing is about.

Computing is about neatly describing parts of the real world; it is not about encoding in zeroes and ones. This confusion is akin to someone saying he 'understood maths' when he has mastered calculus.

  • "Not functional" = not in the spirit of functional programming, or not written with usability in mind? – darij grinberg Jan 09 '11 at 13:34
  • The second, mainly, but also in the sense of describing the world. I believe the first meaning (FP programming) is also concerned, but I am not knowledgeable enough to say so; in fact I'd be interested to know if FP people think like this. Note also that I take Knuth as a serious (academically and intellectually), interesting person, and a mathematician too. – Jérôme JEAN-CHARLES Jan 10 '11 at 02:27

Well, if you want to know how computers work, the authoritative reference on computer architecture happens to have been written by a set theorist!

John von Neumann, First Draft of a Report on the EDVAC, University of Pennsylvania technical report (1945), 101 pp.

Ah, the good old days: when logicians understood circuit design and computer scientists believed Rice's Theorem...

Adam
  • Computer scientists don't believe Rice's theorem? – Qiaochu Yuan Jan 06 '11 at 19:56
  • Sort of a (bad) joke. There are entire conferences dedicated to approximating various decision procedures which Rice's Theorem shows cannot exist, under the name "static analysis". It's a thriving field (at least measured by volume of publications) with quite a lot of interest and activity, which is something I think would surprise the recursion theorists of the '30s-'50s -- advances in computer hardware notwithstanding. – Adam Jan 07 '11 at 18:59
  • John von Neumann a set theorist?????????? – Igor Rivin Jan 09 '11 at 04:02
  • John von Neumann was pretty much a universalist. Whoever calls John von Neumann a something-theorist is probably one himself. – darij grinberg Jan 09 '11 at 13:36
  • Igor, Darij, the topic of von Neumann's doctoral thesis was a novel axiomatization of set theory. – Adam Jan 09 '11 at 19:19
  • @Adam: I completely agree with Darij that von Neumann was a universalist, and I disagree that he wrote the "authoritative reference on computer architecture", about as much as I disagree that Euclid wrote the authoritative book on geometry. As for people with his breadth, there are a few around, including some contributors to MO. – Igor Rivin Jan 09 '11 at 21:20
  • @Darij, I agree: von Neumann was very much a universalist; my intention was not to aggrandize set theorists (and though mildly flattered by your suggestion that I might be one, I am not). Perhaps I should have said "written by somebody who first achieved fame from work on set theory". I think it is a really terrible shame that people with his breadth no longer seem to be produced. – Adam Jan 09 '11 at 23:09
  • Agreed. von Neumann covered more between 1925 (thesis) and 1944 (EDVAC) than most of us achieve in our entire lives. – smci Jan 18 '11 at 20:25
  • I don't know much about computer science, but I'd have guessed that "approximating various decision procedures which Rice's Theorem shows cannot exist" would be pretty serious business, done on a large scale, because, to name just one example, isn't that what you're doing when you write a program intended to detect computer viruses? – Michael Hardy Apr 12 '11 at 12:45
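The "static analysis" point in the comments above can be made concrete with a toy example of my own (purely illustrative, not from any reference here): Rice's theorem rules out an exact termination decider, but a conservative analyzer that is allowed to answer "unknown" is perfectly possible, and real static analyzers refine exactly this idea.

```python
# Toy conservative termination "analyzer". By Rice's theorem no
# procedure can decide termination exactly, but a sound approximation
# that says "halts" only when it is sure, and "unknown" otherwise,
# is easy to write.

import ast

def surely_halts(source):
    """Return True if the program provably halts, None if unknown.

    Sound but incomplete: accepts only straight-line code and `for`
    loops over literal lists/tuples/constants (which always terminate);
    any `while` loop, function call, or definition yields "unknown".
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.While, ast.Call, ast.FunctionDef)):
            return None  # might diverge: give up, stay sound
        if isinstance(node, ast.For) and not isinstance(
                node.iter, (ast.List, ast.Tuple, ast.Constant)):
            return None  # loop over something we can't bound
    return True  # straight-line code always halts
```

The analyzer never wrongly claims termination; it just refuses to answer on hard cases, which is the standard escape hatch from undecidability.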

Although it may not be comprehensive for your purposes, one book that's worth mentioning in this category is Modern Applied Algebra by Garrett Birkhoff and Thomas Bartee. It's mainly interesting because the theory it presents has a much more algebraic "flavor" than a typical Computer Science treatment of the same material. There doesn't seem to be a decent preview of the book online, so you may find this AMS review helpful.

datageist

You don't give any indication of what types of CS you're interested in.

For algorithms, Knuth is classic but old-fashioned; CLRS is pretty standard now I think.

For a connection between CS and logic, you might start with "Proofs and Types" by JY Girard (it's online, http://www.paultaylor.eu/stable/Proofs+Types.html )

Oded Goldreich (http://www.wisdom.weizmann.ac.il/~oded/) has a few books online too. Lots of good stuff on his site, browse around. He has some complexity books and an old draft of his "Foundations of Cryptography" that you might like.

Quantum computation: Nielsen and Chuang, Quantum computation and quantum information

Hang out on some CS blogs, like Scott Aaronson's (which has quieted down lately...). Look at some of the papers (http://scottaaronson.com/papers/) and articles (http://scottaaronson.com/blog) on his site.

Etc...


For your first question, you could have a look at The Elements of Computing Systems, by Nisan and Schocken. It covers the workings of a computer from logic gates, via the CPU to programming languages and the OS.

The website of the book is here; via the 'Study Plan' link you can even find most of the book online, and some presentations based on it - those could be useful to get a quick idea of whether the book is right for you.


It's a subset of computer science, certainly, but on the networking front, W. Richard Stevens' TCP/IP Illustrated, Vol. 1: The Protocols is a book that deserves a place on everybody's shelf. It's a very hands-on book, including the sources used to actually implement various parts of the TCP/IP stack (from a variety of Unix systems). It also has fantastically useful diagrams, and makes extensive use of command-line tools to actually test the theory being discussed.

It is certainly the bible for understanding the theory and practice of TCP/IP networking.

  • Despite being an element of the set "everybody", I rather doubt that this book belongs on my shelf... – Andy Putman Jan 06 '11 at 22:59
  • "Belongs," perhaps not, but it does deserve it. ;) – Kjell Wooding Jan 07 '11 at 01:46
  • I always thought that the Dodge Dart repair manual was very lucid, and yet I am not sure it belongs on every physicist's bookshelf... – Igor Rivin Jan 09 '11 at 04:05
  • Unless the physicist in question was asking about learning auto repair. Let me phrase the question backwards: if I were to ask "What is a good book for a computer scientist to learn about mathematics?", would it make sense for someone to recommend Hardy & Wright to me? Number theory is merely a subset of mathematics. Computer networking is a subset of computer science, but greatly contributes to an understanding of software architectures, in particular layered design. This is the best book I know of for understanding this very significant component of "the science of how computers work." – Kjell Wooding Jan 11 '11 at 19:08
  • @Kjell: I was unnecessarily facetious. Networking is engineering, not science of any kind. Just as a civil engineer learns physics before he learns about bridges, so should someone start learning about computers from something much more abstract than the TCP stack. – Igor Rivin Jan 27 '11 at 05:28