All chapters are available on introtcs.org

**Pre-course reading: Mathematical Background (34 pages):** Read the whole chapter with the following exceptions: you can skip the optional section on sets in Python, and only skim the extended example on proving non-connectivity of graphs with too few edges.

**Chapter 0: Introduction (15 pages):** Read the whole chapter (this is fairly light reading).

**Chapter 2: Representation (30 pages):** Read the whole chapter with the following exceptions: you can skip the “proof by Python” section, and you can skim or skip Sections 2.3.4-2.3.8. Make sure to read carefully the sections on prefix-free encoding (Sections 2.3.2 and 2.3.3) and the proof of the theorem that the reals are uncountable (Theorem 2.2).

**Chapter 3: Defining computation (29 pages):** Read the whole chapter with the following exceptions: you can skim Section 3.4 on physical implementation of computing devices (though it’s a fun read). Make sure to read carefully Section 3.3.2, where the equivalence of Boolean circuits and AON-CIRC programs is shown, as well as Section 3.5, where we define NAND circuits and the NAND-CIRC programming language. You should be able to understand the proof of Theorem 3.18 (equivalence of computational models).

**Chapter 4: Syntactic Sugar (29 pages):** Read the whole chapter with the following exceptions: you can skip or skim the optional Section 4.1.2 (“Proof by Python”), and you can skip or skim the extended example on addition and multiplication (Section 4.2). It is important to read the proof that LOOKUP can be computed (Section 4.3) and the corollary that we can compute *every* finite function using Boolean circuits or straight-line programs (Section 4.4). The alternative proof in Section 4.5 is simple and nice to go over. The class SIZE, defined in Section 4.6, will be very important for us.
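
The LOOKUP idea of Section 4.3 is easy to mimic in ordinary code: any finite function can be "computed" by hardwiring its truth table. The sketch below is my own toy illustration of that flavor, not the book's NAND-CIRC construction.

```python
from itertools import product

def circuit_from_truth_table(f, n):
    """Build a function computing f: {0,1}^n -> {0,1} by tabulating it,
    mirroring how LOOKUP turns any finite function into a circuit."""
    table = {bits: f(bits) for bits in product((0, 1), repeat=n)}
    return lambda bits: table[bits]

# example: 3-bit parity, "compiled" from its truth table
xor3 = circuit_from_truth_table(lambda b: sum(b) % 2, 3)
assert xor3((1, 0, 1)) == 0
assert xor3((1, 1, 1)) == 1
```

The table has $2^n$ entries, which is exactly why the size of the resulting circuit grows exponentially in $n$ in the generic construction.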
Make sure you understand the definition of SIZE and why it is a set of *functions* and not of *programs* or *circuits*.

**Chapter 5: Code as data, data as code (22 pages):** This is a hugely important chapter, as the concept that we can *represent* programs as strings and use them as inputs to other programs is pervasive in this course (and in computer science in general!). Make sure to read this chapter deeply and carefully. In particular, you should understand the proof of the “universal NAND-CIRC program”, or “NAND-CIRC interpreter in NAND-CIRC”. You should also understand why the theorems of this chapter are not specific to NAND-CIRC at all: we could have done everything using Boolean circuits, AON-CIRC programs, or a number of other formalisms. Don’t skip the “discussion” sections: they might not contain theorems or proofs, but that doesn’t mean they are “unimportant fluff”. Chapter 5 marks the end of the finite-computation part of the book (and this course). This is a good opportunity to go back to Chapters 3 and 4 and make sure you are in good shape before we embark on the next part.

**Chapter 6: Loops and infinity (31 pages):** In this chapter we introduce two equivalent models for computing infinite functions: Turing machines and NAND-TM programs. Turing machines are the standard model for algorithms in this text and in computer science at large, while NAND-TM programs are simply a programming-language formulation of the same concept, which helps emphasize the ties between Turing machines and both the circuit models we saw before and real-world programming languages.
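
To make the Turing machine model concrete before reading the formal definition, here is a toy simulator in my own minimal formulation (the names, tape conventions, and transition format are mine, not the book's). The machine below computes the NOT of every bit of its input.

```python
def run_tm(transitions, tape):
    """Run a one-tape Turing machine given as a dict
    (state, symbol) -> (new_state, written_symbol, move)."""
    tape = list(tape) + ["_"]          # "_" marks a blank cell
    state, head = "start", 0
    while state != "halt":
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip("_")

# a machine that flips every bit, then halts at the first blank
NOT = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
assert run_tm(NOT, "0110") == "1001"
```

The point of the sketch is the *shape* of the model: a finite transition table acting on an unbounded tape, which is exactly the gap between the finite circuits of Chapters 3-5 and the infinite functions of Chapter 6.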
You should read the whole chapter and make sure you understand **(i)** the difference between *functions* and *programs*, and **(ii)** what these models are and why they are equivalent.

**Chapter 7: Equivalence to other models (31 pages):** In this chapter we show the equivalence of Turing machines to a host of other computational models that correspond closely to real-world programming languages or physical processes. You should read carefully Section 7.1 about RAM machines (also known as NAND-RAM programs) and make sure you follow the discussion in Section 7.2 about the “best of both worlds” paradigm. You should read carefully **Section 7.4.3 (configurations of NAND-TM/Turing machines and one-dimensional cellular automata)** and the proof of Theorem 7.7 that one-dimensional automata are Turing complete. You can skip Sections 7.5-7.7, which cover the *lambda calculus* that underlies functional programming languages such as OCaml, Lisp, Haskell, and more, as well as functional features in languages such as Python, JavaScript, Ruby, Swift, and many others. If you are interested in the lambda calculus, you might want to take a look at my interactive Python notebook for lambda calculus. The discussion in Section 7.8 on the Church-Turing thesis is important to cover.

**Chapter 8: Universality and uncomputability** This chapter contains some of the most important ideas of this course, and should be read carefully. Luckily, much of it does not depend on the particular details of Turing machines or NAND-TM. Read **Theorem 8.1 (universal Turing machine)** and make sure you understand its *statement*. The proof is not as important: you should be able to guess the proof by yourself if you replaced Turing machines with your favorite programming language, and by now we have the toolkit to transform an arbitrary computation to Turing machines. **Theorem 8.3** shows the existence of uncomputable functions.
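
The diagonalization at the heart of Theorem 8.3 can be illustrated on a finite toy example (my illustration; the theorem itself of course concerns an infinite enumeration): given any list of functions, the "diagonal" function that flips the $n$-th function on input $n$ differs from every function on the list.

```python
# a (finite, toy) enumeration of functions from naturals to {0,1}
funcs = [lambda n: 0, lambda n: n % 2, lambda n: 1]

def diagonal(n):
    """Differs from the n-th listed function on input n."""
    return 1 - funcs[n](n)

# the diagonal function disagrees with every entry of the enumeration
for n in range(len(funcs)):
    assert diagonal(n) != funcs[n](n)
```

Replacing the finite list with an enumeration of all programs gives the actual proof: no enumeration of computable functions can cover the diagonal function.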
The proof is very reminiscent of the proof that the reals are uncountable in Chapter 2, so you might want to go back to that proof and compare them. Once more, the heart of the proof is to understand what exactly is being proved.

**Theorem 8.4 (Halting problem)** is one of the most fundamental results of computer science. Make sure you understand *both* the statement *and the proof*. In particular, the proof uses the technique of *reduction*; see **Section 8.4**, which you should read carefully. The Halting problem is so important that it’s worth reading two proofs for it, so I recommend you read **Section 8.3.2** as well, but feel free to skip it if you’re short on time. Once we have the uncomputability of the *Halting problem* and the notion of *reductions*, we can obtain many more uncomputability results. **Section 8.5** contains some such results, and in particular *Rice’s Theorem*: a central theorem in the realm of software verification.

**Chapter 10: Gödel’s Incompleteness Theorem** (Note that we will cover Chapter 10 before Chapter 9 this term.) This chapter is fairly short, so you should read the whole thing. Make sure not to lose the forest for the trees. The chapter has several technical details, but the takeaways from it are *(i)* the idea that uncomputability results immediately translate into *unprovability* results, and *(ii)* the notion of a *configuration* and *computation history* of a Turing machine, which arises time and again in some of the more involved results we see in this course. The fine print of exactly how a configuration is encoded is not so important, but you should have the confidence that you could work it out for yourself if you really needed to.

**Chapter 9: Restricted computational models** This chapter deals with *regular expressions* and *context-free grammars*, which are used in many areas of computer science.
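
Regular expressions should already feel familiar from practical programming. The theoretical regular expressions of Chapter 9 correspond to the "pure" core of the syntax you know from tools like Python's `re` module: concatenation, union (`|`), and star (`*`), with none of the extensions.

```python
import re

# strings built from the blocks "01" and "10", repeated any number of times
pattern = re.compile(r"(01|10)*")

assert pattern.fullmatch("011001")      # "01" + "10" + "01"
assert pattern.fullmatch("")            # zero repetitions are allowed
assert not pattern.fullmatch("011")     # odd length: no decomposition
```

Keep in mind that practical regex engines add features (backreferences, lookahead) that go strictly beyond the regular functions studied in the chapter.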
We will focus on regular expressions only, so you can skip the reading on context-free grammars (though they are very useful for compilers and programming languages, so feel free to read them). Read Sections 9.1-9.5 fully, and then you can jump to the summary in Section 9.8. It is OK to skim the proof of the pumping lemma on a first reading, though do read the proof idea.

**Chapter 11: Efficient computation** This is a fairly short chapter and has no theorems, so just read the whole thing, including the “more advanced examples” section. I am not trying to steal CS 124’s thunder and teach you all about algorithms in one lecture, but you do want to get some intuition for coarsely analyzing algorithms (particularly polynomial vs. exponential time) and an introduction to some of the computational problems we will deal with more formally in the coming weeks.

**Chapter 12: Modeling Efficient Computation** This is a very important chapter, as we see the formal definitions of running time, the invariance of classes such as P and EXP under different models, the time hierarchy theorem, and the relation of uniform and non-uniform computation. You should make sure you understand all the *definitions* and *theorem statements* in this chapter. It is OK not to follow all the fine points of the proof that Turing machines can simulate NAND-RAM with polynomial overhead, and the proof that NAND<< has an efficient universal program. However, you should make sure you understand the *statements* of these theorems, as well as the general idea of the proofs. It is important that you understand the proof of the *Time Hierarchy Theorem*, as well as the proof of the theorem that $\mathbf{P} \subseteq \mathbf{P_{/poly}}$.

**Chapter 13: Polynomial time reductions** This is a fairly short chapter, but reductions can be confusing, so read the whole thing and read it carefully.
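
To get a feel for what a polynomial-time reduction is before reading the chapter's examples, here is a classic one sketched in code (my example, not one from the chapter): a graph has a vertex cover of size at most $k$ if and only if it has an independent set of size at least $n-k$, so any solver for one problem immediately solves the other. The solvers below are brute force, since only the *reduction* is meant to be efficient.

```python
from itertools import combinations

def has_independent_set(edges, n, k):
    """Brute force: is there a set of k vertices with no edge inside it?"""
    return any(all(u not in s or v not in s for u, v in edges)
               for s in map(set, combinations(range(n), k)))

def has_vertex_cover(edges, n, k):
    """Reduction: a vertex cover of size <= k exists
    iff an independent set of size >= n - k exists."""
    return has_independent_set(edges, n, n - k)

# triangle graph: the minimum vertex cover has size 2
edges = [(0, 1), (1, 2), (0, 2)]
assert not has_vertex_cover(edges, 3, 1)
assert has_vertex_cover(edges, 3, 2)
```

The reduction itself is trivial here (just complement the size parameter); the point is the *shape* of the argument, which recurs throughout Chapters 13-14.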
The reduction to $LONGESTPATH$ is a little messy, so feel free to just skip it.

**Chapter 14: NP completeness** This is again a short chapter, but it gives the definitions of **NP**, **NP-completeness**, and the **Cook-Levin Theorem**. These are **extremely important** concepts for this course. Read the *whole chapter* and make sure you understand it.

**Chapter 15: What if P=NP?** This is a short chapter but worth reading completely, as it relates the P vs. NP question to many areas that you will encounter throughout your studies and beyond. Make sure you read carefully and understand the proof of the search-to-decision reduction. Ideally you should also understand the proofs of the theorem showing that we can solve all optimization problems if P=NP, and the theorem showing that we can solve quantified Boolean formulas with a constant number of levels of quantifiers if P=NP. (The latter theorem is also known as “the polynomial hierarchy collapses if P=NP”, and you can search online for resources about it.) However, if you don’t completely follow the *proofs* of these theorems it’s not that bad, as long as you understand their *statements*.

**Chapter 17: Probability Theory 101:** This chapter can serve as a good review of probability theory. If you have already taken or are taking a course such as STAT 110, you probably know most of this already, but still read the chapter for our notation. Note that we will almost exclusively be interested in the finite sample space where we choose an $n$-bit string and every possible output is obtained with probability $2^{-n}$. This is in many ways simpler than the more general setting of STAT 110, but our focus is different. A particular focus for us is *concentration*: what is the probability that a random variable $X$ deviates by a certain amount from its expectation?
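
You can see concentration for yourself with a quick simulation (my sketch, with arbitrary parameters): the sum of $n$ fair coins deviates from $n/2$ by a constant fraction of $n$ only with exponentially small probability, which is exactly what the Chernoff bound quantifies.

```python
import random

random.seed(0)
n, trials = 1000, 2000

# fraction of trials where 1000 fair coins deviate from 500 by >= 100
deviations = sum(
    abs(sum(random.randint(0, 1) for _ in range(n)) - n / 2) >= 0.1 * n
    for _ in range(trials)
)
print(deviations / trials)  # empirically tiny, as the Chernoff bound predicts
```

Here the Chernoff/Hoeffding bound gives probability at most $2e^{-2(0.1)^2 n} = 2e^{-20}$ per trial, so even over 2000 trials you should essentially never see such a deviation.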
In particular, make sure you understand the *Chernoff bound*, which we use time and again in computer science.

**Chapter 18: Randomized algorithms:** This is a short chapter, so read the whole thing, except that you can simply skim the algorithm for perfect bipartite matching. You should, however, make sure you understand the statement of the Schwartz-Zippel lemma. The next chapter (Chapter 19) will be much more significant, so I recommend you start reading it early. In particular, I will mention the definition of BPP and amplification already in this lecture.

**Chapter 19: Modeling randomized computation:** This is a somewhat dense but extremely important chapter, so make sure to read it carefully and fully. The main character here is the class BPP of functions that can be computed by randomized polynomial-time algorithms. You want to make sure you understand the definition of BPP and why the particular constant $2/3$ does not matter. The theorem that BPP is in P/poly is important to read, and also a good opportunity to remind yourself what P/poly is. Also see the discussion of what happens if some NP-complete problem like 3SAT is in BPP. Derandomization is a very important topic that is the underpinning of cryptography (and conversely, insecure pseudorandom generators or randomness generation are often the underpinnings of many famous attacks on deployed systems). You want to read Definition 19.7 (pseudorandom generators) more than once, until you understand what it means, and understand at least at a high level why strong enough pseudorandom generators imply that BPP=P. The Sipser-Gács Theorem that if NP=P then BPP=P is another important result. It’s OK if you don’t follow all the proof details, but make sure you understand the statement.
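
The reason the constant $2/3$ in the definition of BPP is arbitrary is *amplification*: repeating the algorithm and taking a majority vote drives the error down exponentially (by the Chernoff bound). Here is a small simulation of that idea; the toy "algorithm" and all parameters are mine, just to show the effect.

```python
import random

random.seed(1)

def amplify(randomized_alg, k):
    """Majority vote over k independent runs: the standard amplification trick."""
    def amplified(x):
        votes = [randomized_alg(x) for _ in range(k)]
        return max(set(votes), key=votes.count)
    return amplified

def noisy_parity(x):
    """Toy algorithm: answers the parity of x correctly with probability 2/3."""
    correct = sum(x) % 2
    return correct if random.random() < 2 / 3 else 1 - correct

x, runs = [1, 0, 1, 1], 500          # true parity of x is 1
base = sum(noisy_parity(x) == 1 for _ in range(runs)) / runs
boosted = sum(amplify(noisy_parity, 101)(x) == 1 for _ in range(runs)) / runs
# base is around 2/3; boosted is very close to 1
```

With $k$ repetitions the error drops like $e^{-\Omega(k)}$, so the exact success constant (anything strictly above $1/2$) does not change the class.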
Section 19.5, on the non-constructive existence of pseudorandom generators, is a little harder, so I made it optional, though I do recommend you at least skim it.

**Chapter 20: Cryptography:** Cryptography is fast becoming a “must know” subject for everyone who cares not just about technology, but also about the balance of power between governments, corporations, and individuals, and in particular about questions such as privacy and control of decision-making. As such, I highly recommend you read this whole chapter carefully. The chapter is a bit on the long side, but much of it is discussion and history that should not take that much time to read, and is good for context, even if I will not ask you in the exam which battle was won because of breaking the Enigma cipher. On the mathematical side, it is particularly important to read **Section 20.4**, where we define perfect secrecy and show that a perfectly secret encryption scheme exists via the *one-time pad*. **Section 20.5** shows an important theorem, Theorem 20.4 (necessity of long keys), that underlies the reason we use computational assumptions in cryptography. **Section 20.6** is where cryptography meets what we did before, and we see how a pseudorandom generator can be used to create an encryption scheme that bypasses the limitations of Theorem 20.4. In **Section 20.7** we see how this connects to the P vs. NP question. Sections 20.8, 20.9, and 20.10 are really good to read to learn about more advanced topics in cryptography that we’ll probably not have time to go into in depth in this course (aside from zero-knowledge proofs, which we will discuss).

**Proofs:** (Salil Vadhan’s lecture notes) From *Lecture 1*, read Section 1.1 (definitions) and see the graph non-isomorphism example in Section 1.2. From *Lecture 2*, read Section 2.1 (“Definition”) and Section 2.2 (“Zero knowledge proof for **NP**”). You are also very welcome to read the other parts of these lecture notes.
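
The one-time pad of Section 20.4 is simple enough to sketch in a few lines (a minimal sketch, not the book's formal presentation): XOR the message with a uniformly random key of the same length, and decryption is the very same operation.

```python
import secrets

def otp(key: bytes, text: bytes) -> bytes:
    """One-time pad: XOR text with the key. Perfect secrecy requires the key
    to be uniformly random, at least as long as the message, and never reused."""
    assert len(key) >= len(text)
    return bytes(k ^ t for k, t in zip(key, text))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # fresh uniform key
ciphertext = otp(key, message)
assert otp(key, ciphertext) == message    # decryption = encryption
```

Note how the caveats in the comment are precisely where Theorem 20.4 (necessity of long keys) bites, and why Section 20.6 replaces the truly random pad with the output of a pseudorandom generator.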
In lecture we might discuss other notions of proofs, such as computer-verifiable proof systems and probabilistically checkable proofs.

**Chapter 22: Quantum Computing:** Quantum computing is a topic of great interest and excitement, and I would suggest you read the whole chapter if you can, but otherwise skip the sections marked “advanced, optional”. Specifically, you should read Sections 22.1-22.6 (inclusive). You can skip the analysis of Bell’s experiment (Section 22.7), but read Sections 22.8 (quantum computing), 22.9 (physical realizations), and 22.10 (Shor’s Algorithm). You can skip Section 22.11 (quantum Fourier transform).
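
As a tiny taste of the model before you read the chapter, a single qubit is just a pair of complex amplitudes, and gates are linear maps on them. The toy simulator below is my own illustration (not the book's notation) of the Hadamard gate, which creates an equal superposition and is its own inverse.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a one-qubit state (a, b)
    with |a|^2 + |b|^2 = 1."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1, 0)               # the state |0>
plus = hadamard(zero)       # equal superposition (1/sqrt(2), 1/sqrt(2))
back = hadamard(plus)       # H is its own inverse: back to |0>
assert abs(back[0] - 1) < 1e-9 and abs(back[1]) < 1e-9
```

Simulating $n$ qubits this way takes $2^n$ amplitudes, which is one way to see where the conjectured power of quantum computers comes from.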