==Boolean Functions==
So, in the last unit we talked about Boolean functions, Boolean values, Boolean algebra, and Boolean formulas. What we want to do now is talk about how we can construct Boolean functions from more primitive operations. We already saw two ways to represent a Boolean function: a Boolean expression and a truth table. We also already know how to go from the expression to the truth table: you take the expression, evaluate it for each possible value of the input bits, and then you can fill in the truth table. What we want to do now is exactly the opposite. You start with a description of a function, let's say given as a truth table, and the challenge is to come up with a formula that computes the same Boolean function. Why do we need to do that? Well, that's exactly what we have to do when we come to design a computer. We know what we want a certain unit to do, but then we actually have to compose it from primitive gates, from primitive operations. So let's see how we can do it. Again, we continue with our abstract treatment. We're trying to see, in a basic logical way, how we can construct such a Boolean function from primitive operations. Later, once we talk about practically constructing them, we'll be more practically oriented and go step by step. At this point, we just want to establish the principles. So let us start with a specification of
a Boolean function given as a truth table. How can we construct it? Here's what I'm going to describe to you as a standard way of constructing what's called a disjunctive normal form (DNF) formula for it, and it goes like this. We go row by row in the truth table, focusing only on the rows that have a value of 1. For example, the first row here has a value of 1. Now, what we can do is write an expression that gets a value of 1 only at this row. In particular, since in this row the values of x, y, and z are 0, 0, and 0, consider the expression not x and not y and not z: that is a Boolean function, the green one, that gets the value 1 only on this row. So now we have one Boolean function. We do the same thing, constructing another clause, for each row that has the value 1. For example, there's a second row with a value of 1. This time y equals 1 while x and z equal 0, so the clause we write here is not x and y and not z. Again, this gets a value of 1 only on this row and a value of 0 everywhere else. We do that for every row that has a value of 1; now the purple row. At this point we have a bunch of functions, each of which gets the value 1 only in its own row and 0 in all other rows. But we want a single function, a single expression, that gets the value 1 exactly on all of these rows and 0 on the other rows. How do we do that? Well, that's very simple: we just OR them together. Now we have a single expression, a single Boolean function, that gets the value 1 exactly on the rows for which we built clauses and 0 everywhere else. And with that, we've constructed our function as a Boolean expression using only ANDs, NOTs, and ORs.
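The row-by-row construction can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the course's tooling: the truth table below only loosely mirrors the one on the slide (the third 1-row is my own assumption), and all the names are made up for illustration.

```python
from itertools import product

# A three-variable Boolean function given only by its truth table.
# The (0,0,0) and (0,1,0) rows match the two clauses discussed above;
# the (1,1,0) row stands in for the "purple" row and is an assumption.
TABLE = {(0, 0, 0): 1, (0, 0, 1): 0, (0, 1, 0): 1, (0, 1, 1): 0,
         (1, 0, 0): 0, (1, 0, 1): 0, (1, 1, 0): 1, (1, 1, 1): 0}

def dnf_clauses(table, names=("x", "y", "z")):
    """One AND-clause per row whose output is 1: take the variable
    itself where the row has a 1, and its negation where it has a 0."""
    clauses = []
    for row, out in sorted(table.items()):
        if out == 1:
            clauses.append(" and ".join(
                n if bit else f"not {n}" for n, bit in zip(names, row)))
    return clauses

# OR the clauses together into a single expression
formula = " or ".join(f"({c})" for c in dnf_clauses(TABLE))

# Sanity check: the formula computes exactly the original table
for (x, y, z), out in TABLE.items():
    assert int(eval(formula)) == out
```

Each clause evaluates to 1 on exactly one row, so the OR of all clauses reproduces the table.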
Once we have this expression, we can start manipulating it in various ways. This is one way to write the function as an expression, but if you actually look at it, you can see that we can start changing its format. For example, look at the first two clauses: one of them is not x and not y and not z, while the other is not x and y and not z. Notice that we have both possibilities for y, with exactly the same fixed values for x and z. So instead of these two clauses, we can combine them into one clause that does not ask about y, and only asks about not x and not z. We get an equivalent expression that's slightly shorter.
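To convince ourselves the merge is sound, here is a quick exhaustive Python check that OR-ing the two original clauses agrees with the merged clause on every input:

```python
from itertools import product

# The two clauses differ only in y, so OR-ing them should equal
# the merged clause that drops y entirely.
for x, y, z in product([0, 1], repeat=3):
    two_clauses = (not x and not y and not z) or (not x and y and not z)
    merged = not x and not z
    assert bool(two_clauses) == bool(merged)
```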
There are more manipulations we can do; let's not go into them, but we can actually write the same expression in many different ways. Some of them will be shorter than others, and some of them might be more efficient when we actually go to implement them in a computer. But the point is that, logically, they're all completely equivalent. Now, you might wonder how you actually find the shortest or most efficient formula equivalent to the one we've just derived. Well, that's not an easy problem in general. It's not easy for humans, nor is there any algorithm that can do it efficiently. In fact, finding the shortest expression equivalent to a given one is an NP-hard problem, and so is even verifying whether a given expression is just the constant 0 or 1.
What I really want to focus on at this point is that we've proved a really remarkable mathematical theorem: any Boolean function, no matter how many variables it has or what the function is, can be represented by an expression using only the operations AND, OR, and NOT. To see how remarkable that is, just think about functions on integers, the integer functions you learned in elementary school. Not every function on the integer numbers can be represented using, say, addition and multiplication; in fact, most functions cannot be represented by arithmetic operations alone. Yet, because of the finite world we're living in with Boolean algebra, every Boolean function can be represented with ANDs, ORs, and NOTs. And that is exactly what gives us the basic power to construct computers using only these possible gates, only these possible operations: AND, OR, NOT. But do we really need all of them?
Well, here's a better theorem, if you wish: we don't really need OR gates. Just with ANDs and NOTs, we can construct any Boolean function. How can we prove that? Well, we already know that if we have ORs as well, we can do everything; we've just seen that. So the only thing we need to show is that we can actually compute an OR with AND and NOT gates. And we know how to do that already: recall De Morgan's formula, which gives us exactly a formula for OR that uses only NOT and AND gates.
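To see De Morgan's identity at work, here is a quick exhaustive check in Python (the function name is mine) that not (not x and not y) computes OR:

```python
# De Morgan: x OR y == NOT(NOT x AND NOT y). Check all four inputs.
def or_from_and_not(x, y):
    return int(not (not x and not y))

for x in (0, 1):
    for y in (0, 1):
        assert or_from_and_not(x, y) == (x or y)
```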
So now we have an even more remarkable theorem: we only need these two basic operations to compute every Boolean function. Can we go even lower? Can we give up, let's say, the AND gate? Well, that doesn't make sense: NOT only takes one value to one value, so it doesn't even allow us to combine anything. Can we give up the NOT gate instead? Well, not really, because AND has the property that if you feed it only zeroes, its output is always zero, while there are Boolean functions that output 1 when you feed them 0. So AND by itself wouldn't suffice.
But it turns out that there is yet another operation that by itself does suffice to compute everything, so let me introduce the NAND function. Here is its truth table: it gives 0 only if both of its inputs are 1, and for every other input combination it gives 1. Logically, x NAND y is defined to be the negation of x AND y.
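As a quick sanity check, here is that definition and its truth table in a few lines of Python (a sketch, with names of my choosing):

```python
def nand(x, y):
    """x NAND y: the negation of x AND y."""
    return int(not (x and y))

# Truth table: 0 only when both inputs are 1, 1 everywhere else
assert [nand(x, y) for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 1, 1, 0]
```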
remarkable about this Boolean function? Well, the nice thing is that we can prove
the following theorem, that if you only have NAND gates, you can already compute
every Boolean function, you can already represent every Boolean function as
an expression using just these NAND gates. How do we prove that? Well, we know that if you can do NOT and
if you can do AND, you can do everything. So we just have to show how to do NOT
with NAND gates and how to do AND with NAND gates. So here's how you do NOT. If you just look what happened when you
feed x to both inputs of the NAND gate, you plug it into the truth
table in the previous slide and you can see that NOT x is
really represented by x NAND x. That's part one. The second part we need to show,
how do we do AND. Well, x AND
y turns out to be NOT of x NAND y. But how do we have NOT, well, we've just
seen that you can do NOT with NAND itself. So now we've basically reduced
the fact instead of using NOTs and ANDs, we're just using NAND gates. And we got an amazing theorem that
just if you have a NAND gate, you can compute every, everything. And this is exactly what is going to be
our approach when we actually go to build a computer. We will give you a basic,
a primitive NAND gate operation, and you will basically build the whole
computer, all the complicated logic that we'll actually ask you to build, using
just from these basic NAND operations. So at this point, we've just finished our abstract
So at this point, we've finished our abstract view of Boolean logic, and now we're making a really interesting shift of perspective: from the abstract logical operations to the actual gates from which we build computers. There's no real difference between the two in terms of what we do; the difference is only in the way we think about them. Until now, we asked you to think about everything as abstract Boolean logic. From now on, we're going to ask you to think about everything as actual little pieces in a computer that compute the functionality that we want.
==Logic Gates==
==Hardware Description Language==