Nand2Tetris Unit 2: Boolean Arithmetic and the ALU

From iCenter Wiki

Binary Numbers

Last week, we learned how to manipulate bits in a computer. We have yes and no, 1 and 0, and we know how to manipulate them. But what can we do with only 1 and 0? Can we represent more sophisticated things with them? We must, if we want to use a computer to do useful things. Well, you can put two bits together, and then you get four possibilities. Put three together, and you get eight possibilities. In general, if you have n bits, you have 2 to the n possibilities, and you can represent any 2 to the n different things you may want to. For example, you can represent numbers, and this is what we're going to be interested in. How do we represent integer numbers? 0 is easy: 0. 1 is easy: 1. 2 is already the first problem, because we don't have another digit, so we have to use two bits: 10. Three is 11. For the next one we need yet another bit: 100 for 4. In general, it seems we can represent any number we want with some sequence of bits. Now, how do we connect a sequence of bits, the binary representation of an integer, to the integer itself? What's the general system? To understand that, we have to go back to second grade, when we learned about decimal numbers. When you see 789, what does it mean? What do the digits 7, 8, and 9 have to do with the value of the number 789? We learned that we have a positional system, where the rightmost digit is the ones, the next one is the tens, the next one is the hundreds. So we know that 789 is really 9 ones plus 8 tens plus 7 hundreds, and so on. In general, the k-th position from the right is worth 10 to the k. Exactly the same thing is going to happen with binary numbers, but it's going to be much simpler, because we only have the digits 0 and 1 rather than all the digits between 0 and 9.
Thus in binary notation, the different positions are powers of 2: 1, 2, 4, 8, and so on. Suppose you want to know, for example, the value of 101 in binary notation. The rightmost bit is the ones. The next one, the 0, corresponds to the twos. And the third, leftmost bit is the fours. Altogether we have 1 times 4, plus 0 times 2, plus 1 times 1, which is 5, so our number is 5. And what do we do in general? Exactly the same thing. Take any sequence of bits and number them from the right: the rightmost bit is b0, the next one is b1, and so on, all the way to bn if we have n plus 1 bits. The value is b0 times 2 to the 0, plus b1 times 2 to the 1, plus b2 times 2 to the 2, all the way to bn times 2 to the n. That's how we can convert any sequence of bits, any binary representation of a number, to the value of the number. One thing we should note at this point is the maximum number we can represent with k bits. We sum the powers from 2 to the 0 up to 2 to the k minus 1; remember that if we start counting from 0, the last bit has index k minus 1, not k. So we have the sum 1 plus 2 plus 4, all the way up to 2 to the (k minus 1), and altogether that is 2 to the k, minus 1. That's the range of numbers we can represent with k bits. Now, so far we've assumed we have an arbitrary number of bits available, and of course, to represent an arbitrarily large number, we would need an arbitrary number of bits. In computers, you usually have a fixed number of bits allocated, and then you can only represent a fixed range of integer numbers.
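The positional formula above can be sketched in a few lines of Python. This is only an illustration of the arithmetic (the course builds this in hardware, not software), and the function name is my own:

```python
def bits_to_int(bits):
    """Value of a bit string: b0*2^0 + b1*2^1 + ... + bn*2^n, b0 rightmost."""
    value = 0
    for k, b in enumerate(reversed(bits)):  # k counts positions from the right
        value += int(b) * 2 ** k
    return value

print(bits_to_int("101"))       # 5, as in the example: 1*4 + 0*2 + 1*1
print(bits_to_int("11111111"))  # 255, the maximum of k=8 bits: 2^8 - 1
```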
So for example, if we only have eight bits, what are the possible numbers we can represent? We can represent everything from eight 0s, through 00000001, all the way up to eight 1s. Altogether there are 256 such possibilities; the first represents 0 and the last represents 255. Now, that's not exactly the whole story. Really, when we have eight bits, we usually want to reserve part of these possibilities to represent negative numbers. We're not going to talk about this now, but rather in a later unit. In general, half of the 256 possibilities are going to be reserved for negative numbers, and we're only going to be able to use the numbers between 0 and 127 as the positive numbers out of these 256 possibilities of 8 bits. So far we've seen how, given a string of bits, we can convert it to a decimal number. Now we'll do the opposite: suppose you're given a number in decimal, 87 for example. How can you represent it as a sequence of bits? This is also something we should be able to do if we're going back and forth between decimal notation and the binary numbers that are actually represented in the computer. Remember that the way we get the decimal value from the binary is by summing up powers of 2. So we start by figuring out the largest power of 2 that fits into our number 87, and that is 64. Then, what is the next power of 2 that we can add to 64 and still remain at or under 87? That turns out to be 16. And so on: we keep going, and we write the number 87 as a sum of powers of 2.
And it turns out that 87 is exactly 64 plus 16 plus 4 plus 2 plus 1. Once we have this representation of the decimal number as a sum of powers of 2, we can quite directly get the binary representation. How? Whenever a power appears in the sum, we put a 1 in the corresponding bit, and whenever a power is not part of the sum, we put a 0 there. So for example, look at the rightmost bit, which corresponds to the ones: we do have a 1 in the sum, so that bit is 1. On the other hand, the fourth bit from the right corresponds to the eights, and since we don't have 8 in the sum, we put a 0 there. And that's a general way to take any number and convert it to binary. So this concludes this unit, where we discussed how to represent integer numbers in the binary system. In the next unit, we're going to discuss how to perform arithmetic operations, in particular addition, on these binary numbers. Once we get that under our belt, we'll go back and discuss the issue of negative numbers in unit 3.
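The greedy procedure just described, repeatedly taking the largest power of 2 that still fits, can be sketched like this (the function name and the fixed 8-bit width are my own choices for illustration):

```python
def int_to_bits(n, width=8):
    """Write n as a sum of powers of 2, largest first, emitting 1s and 0s."""
    bits = ''
    for k in range(width - 1, -1, -1):  # try 2^(width-1) down to 2^0
        if 2 ** k <= n:                 # this power of 2 fits: write a 1
            bits += '1'
            n -= 2 ** k
        else:                           # it doesn't fit: write a 0
            bits += '0'
    return bits

print(int_to_bits(87))  # '01010111' = 64 + 16 + 4 + 2 + 1
```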

Binary Addition

In the last unit, we saw how to represent numbers in computers using binary bits. Now we can represent numbers, and that's an important thing, but the whole point of representing something is to manipulate it. What kind of manipulations do we want to do with numbers? We want to add them, subtract them, multiply them, and so on. This is what we're going to learn in this unit. Well, not all of it: what we'll really talk about is how to add. Once we do that, we'll get most of the other operations almost for free. It turns out that once we understand how to represent negative numbers, which we will do in the next unit, we'll get subtraction for free, and determining which of two numbers is greater for free. Multiplication and division are more complicated, but nicely enough, we can postpone them to software. We will not build any multiplication or division circuitry in hardware; rather, we will let software do it, and things are much easier in software, because you just have to write little programs rather than wire up little physical devices. So we get two sequences of binary digits. How can we add them? One way we already know, and it's very easy: convert them to decimal, add the decimal numbers, which we know how to do from second grade, and convert the result back to binary, which we just learned how to do. This is fine, but of course that's not what a computer does. A computer doesn't have decimal numbers at all; it works with binary directly. So we need to figure out how to add binary numbers directly, without converting them to decimal. And how are we going to do that? As usual, everything we need to know we learned in second grade. So let's go back to second grade and add 5,783 plus another number.
How do we do this kind of addition? What did we learn? We start with the ones, the rightmost digit: we add 3 plus 6, we get 9, and we're all very happy with that. That's easy. But what happens at the next digit, where we have 8 plus 5? Well, 8 plus 5 is 13, and we cannot write 13 underneath the tens position, because 13 is more than a single digit can hold. So we all learned this important and amazing trick of writing only 3 and carrying the 1 to the hundreds place. Then we continue from there: we add the 1 to the 7, and so on, and after we finish the leftmost digit, we have the complete result. We're going to do exactly the same thing with binary numbers, only it's going to be much, much easier. We take 1 plus 0, the rightmost two digits, and add them. That's easy: 1 plus 0 is 1, and we write it down. If we have 0 plus 0, that's also very easy: write down 0. The first time we get into a problem is 1 plus 1, because 1 plus 1 is 2, and 2 is more than what we can write in a single bit. So we do the same kind of trick: write down 0 and carry the 1 to the next position. Now we continue: we need to add 1 plus 0 plus 1. That's again 2, more than fits in a single bit, so we write 0 and carry 1. Next we have 1 plus 1 plus 1, which is 3, so we write down 1 and carry another 1. And that's how we continue until we get the final answer. That's all there is to it. In the rest of the unit we'll learn how to do this completely mechanically, in the sense of really understanding how every step works in terms of the Boolean operations we've learned so far. But the basic principle is extremely simple, just like we learned in second grade.
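The column-by-column procedure, add from the right and carry whenever a column sum exceeds 1, can be modeled in software like this (a sketch of what the hardware will do; the function name is my own):

```python
def add_binary(a, b):
    """Add two equal-length bit strings, rightmost column first."""
    carry = 0
    result = ''
    for x, y in zip(reversed(a), reversed(b)):
        column = int(x) + int(y) + carry
        result = str(column % 2) + result  # the bit we write down
        carry = column // 2                # the bit we carry leftward
    if carry:                              # a final carry extends the result
        result = '1' + result
    return result

print(add_binary("1011", "0111"))  # '10010': 11 + 7 = 18
```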
Before we continue, there is one thing I want to get out of the way, and that is the question of overflow. Suppose we were somehow unlucky, and the leftmost bits of the two numbers we were adding were both 1. What's the problem with that? The problem is that when we add them, we get a carry that needs to go to the left of the word size, and there is no place to put that carry bit, because we've run out of bits in the word. So what do we do? Do we raise some warning or anything like that? The answer is very simple: what is usually done in computer systems is nothing. We just ignore any carry bit that does not fit into the word. What does that mean, really? If you look at it from a mathematical point of view, it means that the addition we're actually doing in our hardware is not true integer addition, because we cannot go beyond the numbers that fit inside the word size. Instead, what we have is addition modulo 2 to the n, where n is the word size. In other words, the answer is correct, except that it may be off by exactly 2 to the n: if the true result is 2 to the n or more, the hardware effectively subtracts 2 to the n, which is exactly the carry we threw away in the overflow. So that's what is usually done, and anyone using a computer or writing software needs to remember that if you exceed the word size, the result you get is not the true integer result of the addition, but the truncated result after the overflow was disposed of. Now that we've gotten that out of the way, let's go back and try to understand how we're actually going to build this kind of adder.
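This modulo-2-to-the-n behavior is easy to model with a toy 4-bit word (the names WORD and add_mod are my own, chosen for illustration):

```python
WORD = 4  # a toy 4-bit word size

def add_mod(x, y):
    """Hardware-style addition: any carry out of the word is simply dropped."""
    return (x + y) % (2 ** WORD)  # equivalent to discarding the overflow bit

print(add_mod(9, 9))  # 2: the true sum 18 is off by exactly 2^4 = 16
```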
How can we build hardware that gets two numbers as input bits, and outputs another number that represents the sum of the two inputs? We'll do that in three easy stages. In the first stage, we'll just learn how to add two bits. Very simple. In the second stage, we'll learn how to add three bits. It may seem there's a long way to go if we're progressing so slowly, but then, in one big step, we'll be able to add any two numbers of any number of digits. So let's start. Look at a typical step of the addition process we just saw: we took 1 plus 1, added them, and got a sum of 0 and a carry of 1. How did we do that? The most important thing to notice is that the rest of the bits of the two numbers do not matter at all when we're doing just this one-bit slice of the operation. Whatever the other bits are, as long as we're adding 1 plus 1, and as long as one additional important thing holds, namely that the carry into this position is 0, we do exactly the same operation: we write 0 below the two 1s, and we have a carry of 1. This tells us that we really have just a simple Boolean operation: taking two bits, a and b, and producing two output bits that depend only on them. One of them we call the sum, the binary sum of the two bits, and the other is the carry. This is really the first step we need, and it allows us to add two bits. Now, this operation, which is one slice, one step of the process we saw, is naturally abstracted by a chip. The chip gets two inputs, a and b, produces two outputs, and for every combination of inputs we know exactly what the outputs are supposed to be.
This chip is called the half adder, and implementing it is the first thing you'll do in this week's exercise. In fact, we're going to give you the exact HDL that describes the interface of this chip, and you'll just have to do the implementation of the operation we now understand. And with that, we've finished the first step of our journey to add numbers: adding two bits. Now, if you remember, the only real restriction when we were adding two bits was that the carry into that position was 0. But that's not the general case: more generally, there may be a carry. So suppose we get another input bit, called c, which describes the carry from the previous step and can be either 0 or 1. How do we do this addition? We add the three bits, and we still get a sum and a carry. Again, we get a Boolean gate, a chip if you wish, whose functionality we know exactly: three inputs, a, b, and c, and two outputs, sum and carry. There are eight possible combinations of the inputs, and for each of them, we know exactly what the outputs are. This chip is called a full adder, and again, you can go and implement it. This is the second part of our journey to addition; we're giving you the HDL interface of this chip as well, so please go ahead and implement it. And now we're ready for the final step, for doing everything. We get two full numbers and we want to add them. How? We already know how to do every single step of the process, so we just have to repeat that single step. So let's look at what we did. Let's color the bits: the rightmost bit yellow, the next one green, the next one blue. This is just so we can talk about them in terms of colors.
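Modeled in software, the half adder's sum and carry are just Xor and And of the inputs, and the full adder can be composed from two half adders. This composition is one common implementation choice, not the only one; the HDL exercise leaves the implementation to you:

```python
def half_adder(a, b):
    """Add two bits; returns (sum, carry)."""
    return a ^ b, a & b  # sum is Xor, carry is And

def full_adder(a, b, c):
    """Add three bits (two inputs plus an incoming carry); returns (sum, carry)."""
    s1, c1 = half_adder(a, b)  # add the two input bits
    s2, c2 = half_adder(s1, c)  # then add the incoming carry
    return s2, c1 | c2          # a carry can arise from either half adder

print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = 3, written as sum 1 carry 1
```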
But of course, in the implementation there are no colors, just different bits. So we start by adding the two yellow bits, and that's just a half adder, because there's no carry yet. We get a yellow sum and a yellow carry. The next step is to add the yellow carry to the two green bits, and from that we get a green sum and a green carry. Then we take the green carry and add it to the two blue bits. Each one of these colored steps is simply a full adder: the thing that takes the yellow carry and the two green bits, and outputs a green carry and a green sum, is just a full adder, and that we've already implemented. So to implement the whole thing, we just need to connect 16 of these full adders, or maybe 15 full adders and 1 half adder for the rightmost bit, connect them together in the right way, and we get an adder. And this is what you're supposed to build: our 16-bit adder. It accepts two numbers, each a bus of 16 bits, and outputs a single number that is their sum as 16-bit integers. Again, we're going to give you the HDL interface: it specifies that you get two 16-bit numbers as input busses and produce a single 16-bit bus as output, their sum in the two's complement representation, and you can go ahead and implement it. So we've just learned how to add two numbers, in the very concrete sense of building the chips that actually do it. In the next unit we'll go back and look at how to represent negative numbers, something we still owe you from the last unit. Once we do that, it will turn out that we get subtraction for free. After we get that under our belt, we'll go to the capstone of this week's lecture, which is building a complete arithmetic logic unit. And it turns out that most of the cleverness is already done.
The cleverest thing we have in an ALU is just adding two numbers. But, of course, we need a lot of logic around it, and that's what we will do in the fourth unit.
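The chaining just described, a half adder for the rightmost bit followed by a ripple of full adders, can be sketched in software as follows. The bit lists here are least-significant-first, a convention I chose for illustration:

```python
def add16(a, b):
    """Ripple-carry adder over 16-bit lists, index 0 = rightmost bit."""
    out = []
    carry = 0                    # no incoming carry: the first stage is a half adder
    for i in range(16):
        s = a[i] ^ b[i] ^ carry                          # full-adder sum bit
        carry = (a[i] & b[i]) | (carry & (a[i] ^ b[i]))  # full-adder carry bit
        out.append(s)
    return out                   # the final carry is discarded (overflow)
```

A quick way to check it is to encode ordinary integers as 16-bit lists and compare the sums, as the test below does.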

Negative Numbers

In the last two units, we've seen how to represent integers as binary numbers using bits, and how to manipulate them, how to add them. But so far, we've only discussed positive numbers, and of course our computers will have to deal with negative numbers too. So how can we handle them? Let's take the case of 4 bits as an example. We know there are 16 possible values, and what we've done so far is use them to represent the integers between 0 and 15, all 16 of them. In general, so far, if we had n bits, we used them to represent the nonnegative integers between 0 and 2 to the n, minus 1. If we want to represent negative numbers, we'll need to give up part of these 16 possibilities in order to represent them; maybe eight of them will be positive and eight will be negative, or something like that. So how can we do that? The simplest thing you might consider, and it was used at some point, is to take the first bit and use it as a sign bit, leaving three more bits, or n minus 1 bits in general, to represent the actual number. This way, if a pattern starts with 0, it's a positive number; if the first bit is 1, it's a negative number whose magnitude is represented by the next three bits. In this scheme, we can represent 0, 1, 2, all the way up to 7, and then negative 0, negative 1, all the way up to negative 7. So this would be one possibility for representing negative numbers, a possibility that is not very popular. Why? There are a bunch of problems with it. One thing you may immediately notice is that it's very inelegant: we have a negative 0. What is this negative 0, and why is it different from 0? What we learned in math is that 0 equals negative 0. We could, in principle, decide to have two representations of 0 in our computer, but that is inelegant and probably means there's going to be trouble.
Usually, if you have something that's not elegant, it's going to bite you. And indeed, if you actually try to manipulate this kind of negative numbers with hardware, you get into trouble: you need to deal explicitly with pluses and minuses, and the whole thing becomes a mess. So hardly anyone uses this possibility anymore. Here is what people use instead: a system called two's complement, and it has a very simple idea. If you want to represent a negative number, negative x, you instead represent the positive number 2 to the n, minus x; in our case of 4 bits, 16 minus x. That's a positive number, and you represent it just as we've seen so far. For example, in our 4-bit case, you have 0, 1, 2, all the way up to 7 as usual. Now, if you want to represent, say, negative 3, you represent it by the integer 16 minus 3, which is 13. So the bit pattern 1101, which is really the binary number 13, has the value negative 3. This is the table we have so far, and this system is called two's complement representation. So let's look at what we get in this representation. First of all, the positive numbers are half of what we previously had; basically, we're missing a bit, so we can represent not the numbers up to 2 to the n, minus 1, but only up to 2 to the (n minus 1), minus 1. Of the negative numbers, in this system, we get one more: we get all the negative numbers between negative 1 and negative 2 to the (n minus 1). In our case, we get the negative numbers between negative 1 and negative 8, but only the positive numbers between 1 and 7, and of course we have 0 as usual. The main thing that's nice about this trick is that we will basically get addition, subtraction, and almost all the operations we need to do with numbers, almost for free.
So let's see what that means. Suppose we want to add two numbers. What I'm going to show now is that if we just use the same addition circuitry we've built so far, we'll magically get the correct result, and this is true not only in this example but in general. So let's look at what happens if we take negative 2 plus negative 3 using our circuitry, which was designed not for negative numbers but just for positive numbers, as we did in the last unit. In two's complement, negative 2 plus negative 3 is really 14 plus 13. We know how to add 14 and 13 in binary: we take the two binary numbers and add them using the previous circuitry. Now notice that there's a carry bit out of the leftmost position, the one shown in green, which our circuit usually throws away. So what does that mean? The output we really get is 1011, and the most significant 1 was thrown away. The original result, without throwing away the carry bit, is the number 27, which is what it should be, the sum of 14 and 13. Once we threw the carry away, we lost 16, because the bit that was thrown away equals 16, so what we get is only 11. Now, what's nice about 11? Basically, what our hardware computed was that 14 plus 13 equals 11. Well, 11 in two's complement really represents the number negative 5, and that is exactly the answer we needed. So, amazingly, we got the correct result without doing anything new. How does this magic happen? Will it always happen? Well, as we saw in the last unit, our addition is in any case modulo 2 to the n, because we throw away the overflow bits: the result we get is correct up to an additive factor of 2 to the n.
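The example can be replayed in a few lines by encoding and decoding two's complement for a toy 4-bit word (the helper names are my own):

```python
N = 4  # toy word size, as in the example

def to_twos(x):
    """Two's complement encoding: negative x is stored as 2^N - x."""
    return x % (2 ** N)          # -3 -> 13, -2 -> 14; nonnegatives unchanged

def from_twos(u):
    """Read an N-bit pattern back as a signed value."""
    return u - 2 ** N if u >= 2 ** (N - 1) else u

# Plain unsigned addition with the overflow carry dropped, like the adder chip:
s = (to_twos(-2) + to_twos(-3)) % (2 ** N)   # 14 + 13 = 27; drop 16, leaving 11
print(from_twos(s))                          # -5
```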
And our representation is also modulo 2 to the n, in the sense that we treat two numbers as equal when they differ by exactly 2 to the n: negative 3 and 13 are equal up to an addition of exactly that same 2 to the n. Since both the representation and the addition follow the same convention, they fit exactly, and we don't need to do anything else: our hardware, which was designed just for positive numbers, works as is. One thing we may still need to do is, given a number, compute its negation: given an input x as a binary number, output negative x, again in two's complement. Why would we want to do that? Well, for one thing, remember that we still haven't seen any circuit that does subtraction. But once we solve this problem, we've solved the subtraction problem too, because if you want to compute y minus x, you just add negative x to y, and addition we already know how to do. So once we can compute negative x from an input x, we've solved the subtraction problem, and that's a good thing, because we don't have to build new hardware for it. The basic idea of how to do this negation in two's complement is very simple. It uses the mathematical trick that 2 to the n is actually (2 to the n, minus 1) plus 1, and thus we can rewrite negative x, which in two's complement is really 2 to the n, minus x, as 1 plus (2 to the n, minus 1, minus x). That may seem completely pointless; what have we gained? But here's one very nice thing: 2 to the n, minus 1, written as a binary number, is all 1 bits, just n bits of 1. And what's so nice about that? It's very easy to subtract any number from it, because you never need to borrow anything. To compute 1111...1 minus any x in binary, you just flip each one of x's bits. It's very simple: you never need to borrow from the next position.
So this is a very simple Boolean operation, just bitwise negation. Once we've done that, we only need to add 1, and that's very nice, because we already know how to add in general, so of course we know how to add 1. So now we've seen that using this very simple trick, we can do subtraction, again just by the fact that we know how to do addition. Let's see an example. Let's design Boolean circuitry that takes as input x and produces as output negative x in two's complement. For example, it takes 4 as input and produces negative 4. What would negative 4 be in two's complement? It would be the bits 1100, which represent 12, and 12 is 16 minus 4; that is how we represent negative 4 in two's complement. So how do we get there? Our input is 0100, which is 4. Remember the first thing we do: take 2 to the n, minus 1, the all-1s pattern, and subtract our number from it, which just flips the bits. Then we need to add 1, and we know how to add 1, so we get the new number, exactly what we needed: the number 12, which represents negative 4. Now, in general, we need not worry too much about how to add 1 to a given number, because we know how to add any two numbers. But this is a special case, so it's worth noting what actually happens when we add 1 to a number. If you just follow the addition: you start adding 1; if the rightmost bit is 0, you turn that 0 to 1 and you have your new number. If it's 1, you turn the 1 to a 0, because you added 1 plus 1, and then you have a carry, and you move to the next bit from the right. Again, if it's 0, you turn it to 1 and stop; if it's 1, you turn it to 0 and keep going, because you have another carry.
So in general, for the special case of adding 1 to a number, you start from the rightmost bit and keep flipping bits until you reach a 0, which you flip to 1, and at that point you stop. That's very simple hardware to build; you don't need to use the completely general addition circuitry, although you may. And we've got our result. So we've now finished discussing how we deal with negative numbers, and the most important point is that we don't really need to do anything new. We just need to understand what we're doing, and everything that works for positive numbers keeps working, as long as we're careful about it. And now we're ready to design the arithmetic logic unit of the computer, and we will not need any special machinery, neither to subtract numbers nor to handle negative numbers: they will all take care of themselves, once we've understood this way of representing negative numbers.
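The flip-then-add-1 negation, including the shortcut for adding 1 (flip bits from the right until a 0 becomes a 1), can be sketched as follows. The bit lists here are most-significant-first, matching how the example 0100 was written:

```python
def negate(bits):
    """Two's-complement negation: flip every bit, then add 1."""
    flipped = [1 - b for b in bits]            # subtracting from 11...1 = flipping
    for i in range(len(flipped) - 1, -1, -1):  # add 1, starting at the rightmost bit
        if flipped[i] == 0:
            flipped[i] = 1                     # flipped a 0 to 1: we can stop
            break
        flipped[i] = 0                         # flipped a 1 to 0: the carry continues
    return flipped

print(negate([0, 1, 0, 0]))  # [1, 1, 0, 0]: 4 becomes -4, stored as 12
```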

Arithmetic Logic Unit

In this unit we're going to talk about a very important component of every general-purpose computer, called the ALU, or Arithmetic Logic Unit. Back in 1945, the mathematician John von Neumann wrote a seminal paper in which he included a diagram, a description of how general-purpose computers can be built, and this became known over the years as the von Neumann architecture. Now, when you look at this diagram, you see that one key player is the central processing unit, and within this CPU, yet another important piece: the ALU, the Arithmetic Logic Unit. If we abstract away all these details and focus only on the ALU, we can think of an abstraction that receives two multi-bit inputs, call them input one and input two, and a third input, which is the function to be computed. The ALU computes the function, and out comes the result of this computation. Now, the function f is one of a family of predetermined functions that, taken together, define what this ALU is capable of doing. Some of these functions are arithmetic and some are logical. For example, common computations that ALUs typically perform are integer addition, integer multiplication, and integer division, and there may be logical operations like bitwise And, bitwise Or, and so on. So there's a really interesting question: when you set out to build an ALU, how much functionality do you want to build into this hardware device? This is a classical hardware/software trade-off, because if you choose not to implement something in hardware, you can always augment it later with software. So if for some reason you decide that the ALU will not include multiplication or division, presumably at a later point, when you build your software layer, you will complete this functionality with software. All right. Now, so far, everything we've said is true for every ALU on any computer.
From now on, we're going to focus on one specific example of an ALU. We call it the Hack ALU, because this is the ALU that will hum inside our Hack computer. This is the overall gate diagram, and as we see, the ALU has two 16-bit data inputs, which we call x and y, and it outputs a single 16-bit output, which we call out. Which function to compute is determined by six control bits that have strange names like zx, nx, and so on; we'll explain these names in just a few minutes. Based on these six control bits, the ALU computes one of the following 18 functions of interest. I call them functions of interest because in principle the ALU can compute many more functions, but we've decided to focus on these 18 only. Now, some of these functions are very simple, like the constants 0, 1, and minus 1, or x and y themselves, and other functions are more involved and interesting, like x plus y, x And y, and so on. As we see from the diagram, the ALU also computes two 1-bit control outputs, called zr and ng. The role of these two control bits, and the reason for these names, will become clear later in the unit. So let's focus on the output of the ALU on one side, and the control bits that cause the ALU to compute that output on the other, connected by this truth table. This truth table gives you a complete functional specification of the ALU. That is, if you want to compute a certain function, you look it up on the right-hand side of the table, read off the zeros and ones that correspond to this function, enter them into the control bits, and boom, using some sort of black magic, the ALU computes the desired function. So let me illustrate how the ALU works in action, and to do it, I'm going to use our HDL simulator.
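The "black magic" is actually a short pipeline driven by the six control bits. As a preview of what the transcript explains later, here is a software model reflecting the Hack ALU specification, where zx zeroes the x input, nx negates it bitwise, zy and ny do the same for y, f selects addition versus And, and no negates the output (the Python function itself is my own sketch, not course code):

```python
def hack_alu(x, y, zx, nx, zy, ny, f, no):
    """Software model of the Hack ALU on a 16-bit word."""
    MASK = 0xFFFF                          # keep all values to 16 bits
    if zx: x = 0                           # zx: zero the x input
    if nx: x = ~x & MASK                   # nx: bitwise-negate the x input
    if zy: y = 0                           # zy: zero the y input
    if ny: y = ~y & MASK                   # ny: bitwise-negate the y input
    out = (x + y) & MASK if f else x & y   # f: compute x+y, else x And y
    if no: out = ~out & MASK               # no: bitwise-negate the output
    zr = int(out == 0)                     # zr: 1 iff the output is zero
    ng = (out >> 15) & 1                   # ng: 1 iff the output is negative
    return out, zr, ng

# 000111 is the directive for y - x; with x=30, y=20 the answer is -10:
out, zr, ng = hack_alu(30, 20, 0, 0, 0, 1, 1, 1)
print(out - 0x10000 if ng else out)  # -10, read as 16-bit two's complement
```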
So we can fire up the simulator, we can load the built-in ALU chip, and as a result we'll get this chip into the HDL window. Once we do that, notice that we also get some nice GUI. Indeed, some of the built-in chips in our simulator have a GUI side effect that helps you, the user, understand what goes on inside the respective chip. So this is a diagram that we made up to help you keep track of what happens inside the ALU. Moving along, we begin testing. As you can notice, we have set the ALU inputs to two values, which are 30 and 20. And we also set the control bits to 000111, which, if you look at the table that I showed you before, is a directive that tells the ALU to compute y-x. The next thing that we do is tell the simulator to actually do something. We do it by clicking this calculator icon, and this tells the simulator to evaluate the chip logic on the given inputs. So this is what happens next, and then we can inspect the outputs. If we do that, indeed we see that the ALU came out with what was advertised, which is minus 10, the result of y-x. So it looks like the ALU, at least in this example, works well. Let me give you a second example, which demonstrates the logical computational abilities of the ALU. The first thing that I do is tell the simulator to switch to binary format rather than decimal format, which makes it much easier to enter zeros and ones into the various inputs of the ALU, and that's what we do here. So I picked two arbitrary examples of 16-bit zeros-and-ones values, and I entered them. Then I also entered the control bit values 000000, which happens to be the directive to compute x And y. Once again, if you look at the truth table, you will see this row listed there. And indeed, after we click the calculator icon, which I've skipped in this example, we see that the ALU actually computed the bitwise And operation, and the result was the bitwise And of the two given inputs.
So far, it seems like the ALU is functioning quite well, at least in these two examples. But we didn't say anything about how the ALU actually works. Everything so far was kind of magic. So now is the time to open up the black box and understand how the ALU actually performs this magic. Once again, this is the gate diagram, or the interface diagram, of the ALU, and I want to focus on the six control bits and explain the names and operations of every one of these bits. So if zx=1, what we want to do is set the x input to 0. Irrespective of what x is, it can be 17 or 19 or 5,003 or whatever, we set it to zero; that's what we do if zx is 1. If nx is 1, we set the x input to not x. This is bitwise negation. And also notice that these two things happen one after the other. So for example, if zx equals 1 and nx equals 1, first of all we zero the x input, and then we negate it. So we'll get 1, 1, 1, 1, 1, 1, if these are indeed the values of these two control bits. Exactly the same thing happens with the y input, using the zy and ny directives, if you will. Then we have an f bit. If f is 1, we compute x plus y. If f is 0, we compute x And y. Now, these are the processed x and y. Before we do these computations, the x and y inputs have already undergone the manipulations that we talked about before. They were either zeroed, or negated, or left untouched, and then we compute either 16-bit addition or 16-bit And-ing. Finally, we have the no bit. If the no bit is 1, we negate the resulting output, the output that we just computed, and if no is 0, we leave it as is. If we do all of these operations one after the other, then what comes out is the desired function. And now that you understand these semantics, you can actually look at the table and try to convince yourself, you can actually prove, that this table delivers the required results, and I will demonstrate to you how we can do it. So let's pick an example: let's see how the ALU computes not x.
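Under the semantics just described, the whole Hack ALU behavior can be simulated in a few lines. The following Python sketch mirrors the specification above (it is not the HDL implementation you will build in the project); 16-bit two's-complement arithmetic is emulated by masking:

```python
MASK = 0xFFFF  # a 16-bit word

def to_signed(v):
    # interpret a 16-bit word as a signed two's-complement value
    return v - 0x10000 if v & 0x8000 else v

def hack_alu(x, y, zx, nx, zy, ny, f, no):
    # apply the six control bits, one after the other, per the spec
    x &= MASK
    y &= MASK
    if zx: x = 0                            # zero the x input
    if nx: x = ~x & MASK                    # bitwise-negate the x input
    if zy: y = 0                            # zero the y input
    if ny: y = ~y & MASK                    # bitwise-negate the y input
    out = (x + y) & MASK if f else x & y    # add, or bitwise And
    if no: out = ~out & MASK                # negate the output
    zr = 1 if out == 0 else 0               # is out equal to zero?
    ng = 1 if out & 0x8000 else 0           # sign bit set means negative
    return to_signed(out), zr, ng
```

For example, the control bits 000111 from the simulator demo compute y-x: `hack_alu(30, 20, 0, 0, 0, 1, 1, 1)` returns `(-10, 0, 1)`, matching the minus 10 seen on screen.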
So I look up not x on the right-hand side. I see it right there, and then I look up the binary values of the six control bits, and I start simulating on paper what happens inside the ALU. In order to do it, I have to come up with some arbitrary examples of x and y. So I make up two values, and I use 4 bits instead of 16 to make it less tedious. So I have these two examples, x and y, arbitrarily chosen, and then I look at the control bits. zx equals 0 and nx equals 0, which means that we don't touch the x input; we leave it as is. Then we move on to the y input, and we see that zy equals 1, so we zero the y input, and ny equals 1, so we negate it, and what we get is the result 1, 1, 1, 1. Moving along, f is 0, and if f is 0, we want to compute x And y. So we compute x And y and we get 1, 1, 0, 0. This is a bitwise And. And finally, no is 1, so we negate the result, and we get 0, 0, 1, 1. Lo and behold, 0, 0, 1, 1 is exactly not x. If you look at the original x, which was 1, 1, 0, 0, we got not x. So we have proved that this row in the truth table performs as advertised, so to speak. Moving along, we can take another example, and this will be the final example. We look at y-x, an arithmetic operation, and once again we see the binary values, and we begin simulating them. Once again, we start with two arbitrary examples of x and y. I've chosen 2 and 7. This is an arithmetic operation, so it's easier to think about it in both decimal and binary. So we have 2 and 7. And if everything works well, we should get the result 5, because y-x, 7-2, should give us 5. So we see that zx is 0 and nx is 0, so we don't touch the x input; it remains as is. Moving along, we see that zy is 0, so we don't touch the y input. But ny is 1, which means that we have to negate it. So 0, 1, 1, 1 became 1, 0, 0, 0. Moving along, we see that f equals 1, so we compute the addition, and if we add up x and y, we get 1, 0, 1, 0.
no also equals 1, so we negate the result and we get 0, 1, 0, 1. And 0, 1, 0, 1 represents 5, which is exactly what we wanted. So once again you see that the ALU performed as specified, and many of you may still wonder how this magic actually happens. We were told that we have to do subtraction, and actually we did addition, and we got the result that we expected. Well, we don't want to get too much into it, but if you go back and look at the units where we discussed the two's complement method, you will understand the internal mechanics here, and why everything comes out as expected. Now, as we said earlier in the unit, the ALU also computes two 1-bit control outputs. These bits are called zr and ng, and the role of these bits is to say something about the main output of the ALU, denoted out. Specifically, if out equals 0, the ALU sets zr to 1; otherwise zr becomes 0. And if out is negative, then ng equals 1; otherwise, ng equals 0. Now, you may ask yourself why we need these two bits. Well, this will become clear when we put together the overall computer architecture, in which these two bits play an important role. I'd like to make a few end notes about the Hack ALU. I hope that we have convinced you that it's a simple concept. It is very elegant, and surprisingly it's also extremely easy to implement, and let me explain why. If you remember what we do in the ALU given all these control bits: in some cases we have to set 16-bit values to 0s, easy. In other cases, we have to set them to 1s, also very easy. In some cases, we have to negate an incoming value. We've done it before, I think in project one; we built a gate that does exactly 16-bit negation. And in some other cases we have to compute either plus or And, and these two computations are already taken care of using the chips that you designed in previous projects. So all together, there is very little to do.
Everything was done in one way or another by existing chips that you have already developed. So to sum up, this is Leonardo da Vinci, one of the greatest inventors in history, and Leonardo said that simplicity is the ultimate sophistication. I hope that I convinced you that the Hack ALU is both simple and quite sophisticated. So this has been the unit on the ALU, and it leads up to the next unit, in which we will get our hands dirty and build one such ALU, along with some other chips.
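The "magic" mentioned in the walkthrough, getting subtraction out of nothing but addition and negation, follows from a two's-complement identity. A minimal check in Python, using the walkthrough's own numbers:

```python
# In a 16-bit two's-complement word, bitwise negation satisfies ~v == -v - 1
# (mod 2**16). Therefore  ~(x + ~y) == -(x + (-y - 1)) - 1 == y - x,
# which is exactly the y-x recipe: negate y, add x, negate the sum.
M = 1 << 16                      # work modulo 2**16
x, y = 2, 7                      # the example from the walkthrough
out = ~(x + (~y % M)) % M
assert out == (y - x) % M        # 5, as expected
```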

Project 2 Overview

In this unit we're going to describe all the chips that you have to build in Project 2, and give you some guidelines on how to actually implement them. When you set out to build the chips of Project 2, you can certainly use all the chips that you built in Project 1; these are the fruits of your hard labor. So you can simply plug them into your HDL implementations. And using these building blocks, you now have to build the following five chips, ranging from HalfAdder all the way to ALU. Basically this is a family of combinational chips, from simple adders to more complex ones. So let us start with the simplest chip in this family, the HalfAdder. The HalfAdder takes two bits, adds them up, and outputs both the sum of these two bits and the carry bit, which may be either 0 or 1. This is the truth table of the HalfAdder, and here is the stub file of this chip. And if you look carefully at this truth table, you will realize that the sum and carry columns are identical to the outputs of two gates that we have already built in project one. So I think this tip is sufficient to let you understand that building this gate is a rather trivial thing. You have to pick two gates that you already built, plug them in in some way, and what you get is the HalfAdder functionality. Which is kind of interesting, because we use a logical device to effect something which is semantically an addition operation, something that happens all the time when you do digital design. The next chip up the hierarchy is called FullAdder, and the FullAdder is slightly more powerful than the HalfAdder. It is capable of summing up three incoming bits, so to speak, and it outputs the same things: the sum of the three bits and the carry. This is the truth table of the FullAdder, and this is the stub file that we have to complete. Now, in general, you can build a FullAdder using two HalfAdders.
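A behavioral sketch of these two chips in Python (without giving away the HDL): the HalfAdder's sum column matches one Project 1 gate and its carry column matches another, and one possible FullAdder chains two half adders with an Or gluing the carries together:

```python
def half_adder(a, b):
    # the sum of two bits is their Xor; the carry is their And
    return a ^ b, a & b

def full_adder(a, b, c):
    # one way (not the only way) to sum three bits:
    # chain two half adders, then Or their carry bits together
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, c)
    return s2, c1 | c2
```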
You can put them together and add some glue, in the form of some other logic gates that combine the operations of these two HalfAdders. And this, by the way, is the reason why that gate is called a half adder (not this one, but the one we saw previously): it takes two half adders, in addition to some other functionality, to deliver the functionality of a FullAdder. By the way, this is not the only way to implement the FullAdder; you can do it in other ways. So you're welcome to come up with any HDL implementation that makes sense. The next chip that we'd like to talk about is a 16-bit adder. Here we begin to see some industrial-strength addition, so to speak. This is the stub file. And if you think about it, it can be built quite easily from a sequence of 16 full adders. So you put these 16 adders one next to the other, and you pipe the carry bit of one adder to one of the inputs of the next adder up the significance ladder, so to speak, going from right to left. And notice that according to the chip specification, the most significant carry bit is simply ignored. Moving along, the next chip is called an incrementor. It's a simple version of an adder: it takes a single input called in, adds 1 to the incoming value, and delivers the result. The stub file is straightforward, and you can build such an incrementor using the chips that you built already. In doing so, I just want to remind you that in HDL, you can represent the single-bit values 0 and 1 using the keywords false and true, respectively. Finally we get to the most interesting chip in this project, the ALU. We talked about the ALU in great detail in the previous unit, but just to repeat the main functionality of the ALU, I'm giving you this section of the stub file that you will get from us, which documents what the ALU is supposed to do. Once again, all the operations here are straightforward. We've done them before using previously implemented chips.
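The ripple-carry scheme just described can be sketched behaviorally in Python: 16 full-adder stages, each piping its carry into the next stage up the significance ladder, with the final carry simply dropped. The incrementor then falls out as an adder with a constant 1 input:

```python
def add16(x, y):
    # ripple-carry addition: 16 full adders in a row, right to left,
    # each piping its carry into the next stage up the significance ladder
    out, carry = 0, 0
    for i in range(16):
        a, b = (x >> i) & 1, (y >> i) & 1
        out |= (a ^ b ^ carry) << i           # sum bit of stage i
        carry = (a & b) | (carry & (a ^ b))   # carry into stage i+1
    return out  # the most significant carry is simply ignored

def inc16(x):
    # the incrementor is just the adder with a constant 1 input
    return add16(x, 1)
```

Note how ignoring the last carry makes the chip compute addition modulo 2^16, which is exactly what two's-complement arithmetic needs.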
So now you have to put together all this functionality in order to deliver the required ALU functionality. Two useful hints: you can build this ALU using a 16-bit adder and various chips that you built in Project 1. And you can build all this functionality with less than 20 lines of HDL code, which is another manifestation of the simplicity and elegance of this non-trivial chip. In terms of resources, you have to go to the Nand to Tetris website and get the full description of Project 2. Once again I want to remind you that if you have downloaded the course software suite, there is no need to download anything else during the course. You now have a directory on your computer called projects/02, which includes all the files which are necessary for Project 2. There are a few more resources that I described in project one, which are relevant to this project also. Finally, some best practice advice, much of which repeats what I said in project one, so I don't want to read it aloud, but you certainly have to go through it before you set out to work on the project. Now there's something new that we have to say about Project 2. The best practice advice is not to use your own Project 1 HDL implementations, and instead use their built-in versions, which are automatically available when you use our hardware simulator. So let me repeat and explain how we do it. If you look at your Project 2 files, you will see that they include only five chips, beginning with a HalfAdder and ending up with an ALU. And yet I just told you that you're welcome to use previously defined chips like And or Mux and so on. You can use these chips precisely because they are not listed; they are not available in your Project 2 directory. Because the simulator works in such a way that if it tries to evaluate a chip part and it doesn't find it in the current directory, it reverts to using its built-in version of this chip.
So this setting is ideal for our purposes. You just use the chips that we gave you, and you use any previously defined chips as chip parts, and the simulator will kick in and use the built-in versions whenever necessary. So, this has been an overview of Project 2 and all the chips that you have to build in this project. In the next unit, as usual at the end of each week, we will give a perspective, in which we'll give you some more information about combinational logic, about the ALUs of other computers, and so on.

Perspectives

Welcome to the perspective unit of the second week in Nand to Tetris. During this week we built an ALU. In the next week we're going to put this ALU to the side and focus on building some memory systems, leading up to a RAM unit. But this is something that we'll do next week. Let us now look back and bring up some questions that typically come up when we talk about combinational logic and ALU design. The first question is: we've already built about 20 chips in this course. Are these chips standard? Namely, are they typically being used in many other computer systems, or are they specialized to the computer that you build in this particular course? Noam? >> Most of the gates that we build in the course are completely standard. The half adder, the full adder, the adder: these gates are completely standard, as are the gates we built last week, like the multiplexor and the Or and Xor gates and so on. What is not completely standard is our ALU, which is extremely simple. In this course we clearly emphasize simplicity over almost anything else, because we want to fit everything into one course. So for that reason our ALU is extremely simplified in its implementation, and in this respect it's a bit unique among usual ALUs. >> The next question is directly related to the previous one, and the question is: how come the ALU that we built does not feature more operations, like multiplication and division? Well, the answer is that indeed there is no problem to write HDL code that specifies chips that carry out multiplication and division by operating directly on zeros and ones, on bits. And in fact, there are some very elegant and very nice algorithms to do just that. But in general, when you build a computer system, the overall functionality that the system provides is divided between the hardware and the operating system that runs on top of it. So it is the designer's freedom to decide how much functionality to put in each one of these layers.
When we designed the Hack computer, which is the computer that we will continue to build in the course, we decided, as Noam explained before, that the ALU will be extremely simple, and that operations like division and multiplication will be delegated to the software that runs on top of this computer. And indeed, in the second part of this course, in Nand to Tetris part two, we are going to design, among other things, an operating system. The operating system will have several libraries, and one of these libraries is going to be called Math. And the Math library will feature all sorts of very useful mathematical operations, including multiplication and division. So at the end of the day, the programmer who writes programs that have to run on this computer will not feel the difference. The programmer will not really care if a certain algebraic operation is being done by the operating system or by the hardware. It will be completely transparent to the high-level programmer. But of course there's some tradeoff here. Typically, when you implement an operation in hardware, it runs much faster, but it's costly to design and it also costs money to manufacture the more complex hardware unit. So once again it's a matter of tradeoff, of cost effectiveness. And that's how we decided to build the Hack computer: a simple ALU, and many extensions later, when we build the operating system. We hope that we convinced you that the ALU that we designed in this course is indeed simple. And now the next question is: is this ALU actually efficient? >> Almost everything that we did in the construction is completely efficient, so you really can't say much more. But there is one component where some important optimization is still possible, and it's probably worthwhile to talk for one second about the kinds of optimizations that we're talking about, and this is the adder.
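As an illustration of what delegating multiplication to software can look like, here is the classic shift-and-add algorithm in Python. The actual Math library in part II is written in Jack, so this sketch only illustrates the kind of elegant, word-width-bounded algorithm alluded to above:

```python
def multiply(x, y, width=16):
    # shift-and-add multiplication: for every bit j of y that is 1,
    # add x shifted left by j positions to the running result.
    # This needs O(width) additions regardless of the value of y,
    # so it maps nicely onto an ALU that only knows how to add.
    result, shifted = 0, x
    for j in range(width):
        if (y >> j) & 1:
            result += shifted
        shifted += shifted                 # doubling == shifting left by one
    return result & ((1 << width) - 1)     # keep the low `width` bits
```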
So let us see what is the main problem with the adder and how one may decide to improve upon it. Let us recall the implementation of the adder. The adder was constructed with a sequence of full adders inside it. Each full adder got some inputs from the input of the adder gate, and the important thing is that one of its outputs, the carry, went on to the next full adder. Similarly, the carry output from that full adder went on to the next one, and so on. So this chain of connections is really the problematic one that one may wish to optimize. Let us look at the distance, if you wish: how many gates must the signal traverse between the input and the final output of the last full adder? It needs to go inside the first full adder, go through a few gates, then go through another few gates, then another few gates, and so on. So assume that there are, like, three or four gates in each full adder. If this is, say, a 32-bit adder, altogether we have 32 times 3 or 4 gates of delay from input to output. And when you actually run such hardware in a system, you will have to take this delay into account, because it actually takes time for the signal to traverse the implementation of all these full adders and for all the capacitors to completely load, and so on. So having such a long chain is not considered a good thing, because it actually increases the delay. So what could you do instead? Really, if you want to shorten the chain, then maybe you can compute the carry that goes into one of the full adders at the top, at the most significant bits, in a different way, not through this long chain. And in fact, if you actually look at the logic that's computed here, there are more efficient ways, ways that have less delay. And what you could do is what's called carry look-ahead.
Compute this carry independently of this long chain. That may duplicate some effort, if you wish, but at least you're minimizing delay. So instead of just having the carry be computed by this long chain, have the carry be computed separately for each one of the full adders, in the least-delay way possible. That is called carry look-ahead, and it would allow you to run this chip at a much faster rate, because the delays are shorter. >> All right, moving along. The last question that we want to focus on is: why do you recommend using built-in chips in project two instead of the chips that we actually built in project one? Well, first of all, you're welcome to use the chips that you built. We definitely don't prevent it, and it makes a lot of sense to use the chips that you actually built, because this is what this course is all about. We are building a complex machine which consists of various layers of construction, and it is perfectly reasonable that when you build a certain layer, you will use layers that you built before. However, there are very good reasons why you should want to use the built-in chips that we supply rather than the chips that you've built yourself. And the most important of these reasons is the notion of, we can call it, local failures. The idea is that if you use built-in chips as your chip parts, and some problem raises its ugly head in the present project, then you are guaranteed that this problem can be attributed to bugs and problems that were created in this project only, and not in previous projects. This is also sometimes called the notion of unit testing: you test each unit separately from the rest of the system. And this principle of unit testing goes hand in hand with other very important principles like abstraction and modularity. Taken together, one of the things that these principles imply is that once you've finished building a certain module, you can put it away.
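Returning for a moment to the carry look-ahead idea discussed above: each carry satisfies c[i+1] = g[i] Or (p[i] And c[i]), where the generate bit g[i] = a[i] And b[i] creates a carry on its own, and the propagate bit p[i] = a[i] Or b[i] passes an incoming carry along. Hardware expands this recurrence so each carry becomes a flat two-level formula of the g's and p's, so the delay no longer grows with the length of the ripple chain. A Python sketch of the recurrence (computed sequentially here, flattened in real hardware):

```python
def carries(x, y, width=4):
    # generate bit: this position creates a carry all by itself
    g = [((x >> i) & 1) & ((y >> i) & 1) for i in range(width)]
    # propagate bit: this position passes an incoming carry along
    p = [((x >> i) & 1) | ((y >> i) & 1) for i in range(width)]
    c = [0]                      # no carry into the least significant bit
    for i in range(width):
        # c[i+1] = g[i] OR (p[i] AND c[i]); in hardware this recurrence is
        # expanded so every carry is a direct function of the g's and p's
        c.append(g[i] | (p[i] & c[i]))
    return c
```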
You can stop worrying about its implementation, and use only the interface or the API of this module when you build more complex functionality. This really is the only way to manage complex projects. And by adhering to these principles, which we find extremely important, we can really take this super-ambitious project of building a modern computer from first principles and do it in only seven weeks. >> Shimon? >> Yes. >> I think we should also confess that our simulator is not that efficient. Especially when we get to more complex projects, if you actually layer the chips that were constructed in previous projects, our simulator will have to face a huge number of parts and will simply be slow. If you do it using just the built-in versions of the previous chips, just the specifications of the previous chips, then our simulator will be fast and nice to work with. >> Yes. In fact, that's another very important technical reason why we want to use built-in chips, and this problem will come up later in the course, when we build memory systems and more complex functionality. And indeed, this is another very good reason why it makes a lot of sense to use software-based built-in chips.