Simplification can actually reduce the number of
logic gates you need. And Karnaugh maps are a good
simplification tool.
Last time, I introduced the ideas behind symbolic logic,
logic circuits, and so on. This month, I'll talk more about
minimizing logic equations; but before we start, I'd like to
make a couple of corrections.
Electronics Workbench
First, last month I mentioned the computer program Electronics Workbench. I wondered out loud
if the company that makes it was still around. A Google search
reveals that they are (http://www.electronicsworkbench.com/). Their
current product, MultiSim, seems to fill the niche formerly
occupied by their Windows product. Unfortunately, a sales rep
told me that they now concentrate only on industrial customers
and no longer sell individual licenses. For the record, I
invited their sales manager to write a paragraph explaining
their current product line, but got no reply. I guess that's
because I'm not a corporation. Should you be interested in tinkering with electronics sims, you'd better choose one of the others.
Second, I should explain where I'm going with this new
focus on electronics. Some of you may find electronics in
general and digital electronics in particular interesting, and
there's certainly nothing wrong with that. Some of us still
get that nostalgic glow when we smell rosin-core solder, so if
I pique your interest, that's fine. But it's not my intention
to turn you all into electronics wizards. Rather, my focus
here is on the use of symbolic logic. It's my view that a
little knowledge of the rules and techniques of symbolic logic
will be a nice tool to add to your toolbox and will help you
write better software.
NAND or NOR?
Finally, reader Hank Schultz e-mailed to
chide me for using the terms NAND and NOR too loosely. At
first, I thought Hank was being too critical. I distinctly
remembered using symbols for Fairchild's RTL logic gates as
both NAND and NOR gates.
Here's the deal. Last month, I gave you DeMorgan's theorem,
which I'll reproduce here for convenience:
$\overline{A \cdot B} = \bar{A} + \bar{B}, \qquad \overline{A + B} = \bar{A} \cdot \bar{B}$   (1)
From this, it's easy to see that a logic gate designed to
do the AND function can just as easily do the OR function;
it's simply a matter of which voltage, high or low, you
define as representing 1 (true) or 0 (false). I showed you
symbols for each usage, for both RTL and TTL logic families
(though the former is now mostly useful only to historians).
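By the way, if you'd rather have a machine confirm Equation 1 than take my word for it, a few lines of C check both forms of DeMorgan's theorem by exhaustive enumeration. This is just a sketch; the point is that exhaustive testing is cheap when there are only four input combinations.

/* A minimal sketch: verify both forms of DeMorgan's theorem
 * by exhaustive enumeration of the four input combinations. */
#include <assert.h>
#include <stdio.h>

int main(void)
{
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            /* NOT(a AND b) == NOT(a) OR NOT(b) */
            assert(!(a && b) == (!a || !b));
            /* NOT(a OR b) == NOT(a) AND NOT(b) */
            assert(!(a || b) == (!a && !b));
        }
    }
    printf("DeMorgan's theorem holds for all inputs.\n");
    return 0;
}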
By convention, any signal that has a little circle at its
connection is considered to be inverted. With TTL NAND gates,
the inputs work correctly if a high voltage is 1. The circle
is on the output. I showed the circles on the input for the
RTL gate.
Well, maybe I used the gates that way, but Fairchild never
did. The URL www.wps.com/archives/solid-state-datasheets/Datasheets/Fairchild-uL900-914/1.JPG
shows an original Fairchild data sheet, copyrighted 1966.
Their diagram clearly shows the μL914 as a NOR gate, not a
NAND. They were consistent in that depiction. Figure 1 shows
the symbol they used, which is perfectly consistent with
modern practice.

Figure 1: Equivalent logic
In my defense, consider Fairchild's own words: "The μL914
device is a dual two-input gate. Each gate performs the
NAND/NOR logical function using RTL circuitry."
Ok, maybe they didn't use the NAND symbol for the 914, but
they certainly agreed that one could use it as a NAND gate.
And I did.
Minimal is good
Let's get back to logic minimization, with
an example. X is a function of four variables, A
through D. Remember, in this notation, "+" stands for
"or," and "." stands for "and." Also remember, a bar over a
symbol or expression stands for "not."
(2)
Looks pretty messy, doesn't it? Nevertheless, this is often
the kind of equation one gets after considering what a certain
logical behavior must be. Let's see if we can simplify the
equation by factoring. We'll start by factoring out terms in
A and $\bar{A}$ (omitting the and-ing dots for simplicity):
(3)
Note that one term is repeated in the last set of parentheses. A term OR'd with
itself is redundant; we clearly only need one of the terms. We
write:
(4)
A little more factoring gives:
(5)
Now, C has to be either true or not true; it can't
be something else:
$C + \bar{C} = 1$   (6)
And our equation becomes:
(7)
It's not obvious how we can simplify further, but perhaps
we can. Let's try expanding the terms again, and factoring in
a different way. We get:
(8)
Now, DeMorgan's theorem says:
(9)
The first factored term in Equation 8 becomes:
(10)
Sometimes it helps to multiply a term by 1, in the form of
Equation 6:
(11)
Now we can combine the two terms that have D as a
factor, to get:
(12)
In the first term, we have a subexpression OR'd with its own complement. That's the same form as in
Equation 6, so we know how to handle it:
(13)
Equation 12 now becomes:
(14)
When all else fails, try expanding again:
(15)
Now look at the first and fourth terms. They factor to
give, simply, D:
(16)
At this point, we can see that we have one term involving
D alone, plus a lot of terms involving $\bar{D}$.
Time to factor again:
(17)
Inside the parentheses, we have one term involving A
alone, plus a lot of terms involving $\bar{A}$.
Factoring yet again gives:
(18)
There's that familiar form of Equation 6 again. Simplifying
from here gives:
$X = 1$   (19)
Are you kidding me? After all that work, the
equation reduces to nothing but a single constant?
Yep, that's what it does. You might have guessed it's
because I designed the equation that way. Even so, I wasn't
faking the simplification; I needed every step to get to the
final form.
Admittedly, this is an extreme case. But it serves to
illustrate a very important point: simplifying logic equations
can be important. If we were to implement Equation 2 directly,
using hardware logic gates, we'd need 27 gates. After
simplifying, we need none at all; only a single wire from the
high (plus, true, 1) supply to X. Perhaps, if X
feeds into some other equation, we'd get even further
simplifications there.
Now you can see why my friend's computer program that
simplified logic automatically was so valuable. Equation 2, as
complicated as it is, is only one equation with four inputs.
Real systems might have hundreds of such equations, with
dozens of inputs. If you're blessed with so much intuition
that you could look at Equation 2 and see immediately that its
simplification is trivial, you'll go far. Most of us have
trouble even finding the best simplification. I had trouble
doing this myself, and I'm the one who wrote down the equation
in the first place. One wrong turn in the process, and we
might easily have missed the key step.
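A cheap safety net for this kind of algebra is a brute-force comparison: evaluate the original expression and your candidate simplification for all 16 input combinations, and make sure they agree. The sketch below uses hypothetical stand-in functions, original() and simplified(); you'd type in Equation 2 and whatever reduced form you derived.

/* A sketch of a safety net for hand simplification: compare a
 * "before" and "after" expression over all 16 input combinations. */
#include <stdio.h>

static int original(int A, int B, int C, int D)
{
    (void)C; (void)D;
    /* hypothetical stand-in only: replace with the full Equation 2 */
    return (A && !B) || (!A && !B) || B;   /* always 1, like Eq. 2 */
}

static int simplified(int A, int B, int C, int D)
{
    (void)A; (void)B; (void)C; (void)D;
    return 1;                              /* Equation 19 */
}

int main(void)
{
    for (int i = 0; i < 16; i++) {
        int A = (i >> 3) & 1, B = (i >> 2) & 1;
        int C = (i >> 1) & 1, D = i & 1;
        if (original(A, B, C, D) != simplified(A, B, C, D)) {
            printf("Mismatch at A=%d B=%d C=%d D=%d\n", A, B, C, D);
            return 1;
        }
    }
    printf("Expressions agree on all 16 inputs.\n");
    return 0;
}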
I mentioned last month that there's more than one way to
represent a given logical relationship. The most fundamental
one is the truth table, in which you write down the desired
output for all possible sets of inputs. Another is the logic
equation, and a third is an electronic circuit that mechanizes the
relationship. In this particular example, the truth table
would have been the one to use. The truth table would have
shown 1s for every single set of inputs, so the logic's
trivial nature would have been obvious.
We don't, however, usually get to choose the form of the
problem as it's given to us. We can convert from one form to
another, but the advantage of one form over the other is
rarely so clear. What we need, in general, are techniques that
lead us to minimizations that aren't so obvious.
The Karnaugh map
One such technique is called the Karnaugh
map, and it's one of the neatest approaches you'll ever see. I
remember when I first saw it in a GE manual, I felt as though
I'd found the pot of gold at the end of the rainbow.
Here's how it works. First, for four inputs (the case where
the Karnaugh map works best), make a 4x4 array as in Figure 2.

Figure 2: The basic Karnaugh map
Two things are noteworthy about this map. First, we've
arranged the 16 possible values of the four inputs as a 4x4
array, with two bits encoding each row or column.
The second and key feature is the way we number the rows
and columns. They aren't in binary sequence, as you might
think. As you can see, they have the sequence 00, 01, 11, 10.
Some of you may recognize this as a Gray code.
Why this particular sequence? Because the codes associated
with any two adjacent rows or columns represent a change in
only one variable. In a true binary counting code, sometimes
several digits can change in a single step; for example, the
next step after binary 1111 is 10000. Five output signals must
change values simultaneously. In digital circuits, this can
cause glitches if one gate delay is a bit faster or slower
than another. The Gray code avoids the problem. It's commonly
used in optical encoders.
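If you ever need to generate Gray codes in software, the standard trick is g = i ^ (i >> 1), which by construction changes only one bit between consecutive values. Here's a minimal sketch that prints the 2-bit sequence used on the map's rows and columns:

/* A minimal sketch: the classic binary-to-Gray conversion,
 * g = i ^ (i >> 1). Adjacent codes differ in exactly one bit,
 * which is why the K-map rows and columns run 00, 01, 11, 10. */
#include <stdio.h>

int main(void)
{
    for (unsigned i = 0; i < 4; i++) {
        unsigned gray = i ^ (i >> 1);
        printf("%u -> %u%u\n", i, (gray >> 1) & 1, gray & 1);
    }
    return 0;
}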
Suppose the output value for two adjacent cells is the
same. Since only one input variable is changed between the two
cells, this tells us that the output doesn't depend on that
input. It's a "don't care" for that output.

Figure 3: Looking for patterns
Look at Figure 3. Group X is true for inputs ABCD =
0100 and 1100. That means that it doesn't depend on A,
and we can write:
$X = B \cdot \bar{C} \cdot \bar{D}$   (20)
Similarly, Group Y doesn't depend on B or C.
Its value is:
(21)
Note that the groupings don't necessarily have to be in
contiguous rows or columns. In Group Z, the group wraps around
the edges of the map.
If we can group cells by twos, we eliminate one input. By
fours, two inputs, and so on. If the cell associated with a
given output is isolated, it depends on all four inputs, and
no minimization is possible.
The Karnaugh map gives us a wonderful, graphical picture
that lets us group the cells in a near-optimal fashion. In
doing so, we minimize the representation. Neat, eh?

Figure 4: Equation 2, mapped
Now let's see how Equation 2 plots onto the Karnaugh map,
as shown in Figure 4. To make it easier to see, I'll assign a
different lowercase letter to each term of the equation:
(22)
In this case, the individual groups don't
matter much. All that counts is the largest group we can
identify, which is of course the entire array of 16 cells. The
output X is true for all values of the inputs, so all
four are don't-care variables.
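Incidentally, once the truth table is in software, printing it in this Gray-coded layout takes only a few lines. The sketch below is a hypothetical formatting choice, not any standard; the all-1s table it prints corresponds to Equation 2, as in Figure 4.

/* A sketch that prints any four-input truth table as a Karnaugh
 * map, with rows (AB) and columns (CD) in Gray-code order. The
 * table[] array is indexed by ABCD read as a 4-bit number. */
#include <stdio.h>

int main(void)
{
    static const int table[16] = {
        1,1,1,1, 1,1,1,1, 1,1,1,1, 1,1,1,1   /* X = 1 everywhere */
    };
    static const unsigned gray[4] = { 0, 1, 3, 2 };  /* 00 01 11 10 */

    printf("AB\\CD  00 01 11 10\n");
    for (int r = 0; r < 4; r++) {
        printf("  %u%u   ", (gray[r] >> 1) & 1, gray[r] & 1);
        for (int c = 0; c < 4; c++)
            printf(" %d ", table[(gray[r] << 2) | gray[c]]);
        printf("\n");
    }
    return 0;
}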
As you can see, using a Karnaugh map lets us see the
results and, usually, the optimal grouping at a glance. It
might seem that drawing a bunch of graphs is a tedious way to
minimize logic, but after a little practice, you get where you
can draw these simple arrays fast and see the best groupings
quickly. You have to admit, it's a better approach than
slogging through Equations 2 through 19.
The decade counter
Next, I'd like to show you a classic
problem that we used to think was really important—at least
until we got chips that did all the work for us. It's the
decade counter. In this problem, we have four inputs and four
outputs, which happen to be the next state of the four inputs.
I'll denote the current states by the upper case letters,
A through D, and the output (next value) states,
lowercase a through d. Table 1 gives the truth table.

Table 1: Decade counter

A B C D | a b c d
0 0 0 0 | 0 0 0 1
0 0 0 1 | 0 0 1 0
0 0 1 0 | 0 0 1 1
0 0 1 1 | 0 1 0 0
0 1 0 0 | 0 1 0 1
0 1 0 1 | 0 1 1 0
0 1 1 0 | 0 1 1 1
0 1 1 1 | 1 0 0 0
1 0 0 0 | 1 0 0 1
1 0 0 1 | 0 0 0 0
Note that, although a 4-bit number can encode 16 unique
values, there are only 10 rows in the truth table. The other
six rows aren't needed because the counter should never be in
those states. When we draw the Karnaugh maps for this truth
table, the unused cells will show up as six "don't-care"
states. Quite literally, we don't care what value is output
for those cases, since the inputs should never occur.
I should note in passing that if we truly implement this
design, we had danged well better make sure that the don't-
care states really never, ever occur. This may require some
care when things power up.

Figure 5: Looking for patterns in decade counters
For the decade counter, we'll need four Karnaugh maps, one
for each output value, as shown in Figure 5. In the figures,
note how I let the groupings spill over into the don't-care
areas when doing so lets me make larger groups.
The logic equations are:
(23)
These are by no means the only possible choices of terms,
but we have reasonable assurance that they are, if not
optimal, at least nearly so. That's because we used the
groupings of the Karnaugh map to get the largest groups we
could.
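If you want to sanity-check a counter design before committing it to gates, a small harness helps. In the sketch below, next_state_logic() is a hypothetical stand-in for your implementation of Equation 23; the reference behavior is simply (state + 1) mod 10, and the six unused states are skipped as don't-cares.

/* A sketch for checking a minimized next-state design against
 * Table 1. Replace next_state_logic() with the gate-level a..d
 * equations derived from the K-maps. */
#include <stdio.h>

static unsigned next_state_logic(unsigned s)
{
    /* hypothetical stand-in: substitute your Equation 23 here */
    return (s + 1) % 10;
}

int main(void)
{
    for (unsigned s = 0; s < 10; s++) {
        unsigned expect = (s + 1) % 10;
        unsigned got = next_state_logic(s);
        printf("%u -> %u %s\n", s, got, got == expect ? "ok" : "WRONG");
    }
    return 0;
}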
An exercise for the student
As another example, I'll give you a
problem and its solution. This one encodes the output of a
decade counter, to drive a seven-segment display.

Figure 6: The seven-segment display
The standard nomenclature for this display is shown in
Figure 6. It assigns the letters a through g to
the segments. Given any combination of 4-bit numbers, our goal
is to light the combination of the seven segments in such a
way as to display the proper decimal digit.
The truth table is shown in Table 2. Note that, as in Table
1, I'm using A as the most significant bit of the
binary number.
Table 2: Seven-segment decoder

A B C D | a b c d e f g
0 0 0 0 | 1 1 1 1 1 1 0
0 0 0 1 | 0 1 1 0 0 0 0
0 0 1 0 | 1 1 0 1 1 0 1
0 0 1 1 | 1 1 1 1 0 0 1
0 1 0 0 | 0 1 1 0 0 1 1
0 1 0 1 | 1 0 1 1 0 1 1
0 1 1 0 | 1 0 1 1 1 1 1
0 1 1 1 | 1 1 1 0 0 0 0
1 0 0 0 | 1 1 1 1 1 1 1
1 0 0 1 | 1 1 1 1 0 1 1
As I said, I was going to show you how to generate the
equations for the decoder. In retrospect, though, I think
it's much better to involve a little audience participation.
You have the truth table, you have the technology. You can use
the practice, so how about you solve the problem instead of
me?
You're going to need seven different Karnaugh maps, one for
each segment of the display. Remember, we're talking decade
counter here, so don't try to generate the remaining hex
digits. Because only 10 of the 16 states are going to occur,
you can use the remaining six as don't-care states, as I did
in Figures 5a through 5d.
One hint: look for similar patterns in each of the seven
maps. If you can minimize the logic the same way on more than
one map, it would lower the total gate count for a practical
implementation.
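To check your answer without wiring anything up, you might use a harness like the following sketch. Table 2 is packed into bit masks, with segment a in bit 6 down to g in bit 0. The decode() function is a hypothetical placeholder; substitute your K-map-derived equations for each segment, and the loop will flag any digit that disagrees with the table.

/* A sketch of a harness for the exercise: Table 2 encoded as
 * bit masks (segment a in bit 6 down to g in bit 0). */
#include <stdio.h>

static const unsigned table2[10] = {   /* abcdefg from Table 2 */
    0x7E, 0x30, 0x6D, 0x79, 0x33, 0x5B, 0x5F, 0x70, 0x7F, 0x7B
};

static unsigned decode(int A, int B, int C, int D)
{
    /* hypothetical placeholder: replace with your segment equations */
    unsigned v = 8*A + 4*B + 2*C + D;
    return table2[v];          /* makes the harness run as-is */
}

int main(void)
{
    for (unsigned v = 0; v < 10; v++) {
        int A = (v >> 3) & 1, B = (v >> 2) & 1;
        int C = (v >> 1) & 1, D = v & 1;
        printf("digit %u: %s\n", v,
               decode(A, B, C, D) == table2[v] ? "ok" : "mismatch");
    }
    return 0;
}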
Next month, I'll show you my solution and share a personal
tale about how I tried to build my own displays. I'll also
show you the more powerful Quine-McCluskey method.
See you then.
Jack Crenshaw is a senior software engineer at
Spectrum-Astro and the author of Math Toolkit for Real-Time
Programming, from CMP Books. He holds a PhD in physics
from Auburn University. E-mail him at jcrens@earthlink.net.
Reader response
K-maps are the most underutilized tools in the industry.
You make a brief reference to Gray codes and problems with
race conditions. It is worth pointing out that one of the most
powerful uses of K-maps is the ability to resolve race
conditions (particularly those caused by propagation delays in
the complement of a signal) by adding redundant logic terms
that "overlap" adjoining groups. It is almost impossible to
identify and resolve these types of glitches with other
reduction methods with which I'm familiar.
Pete Jungwirth
Firmware Engineer
Renaissance Learning
While Karnaugh maps (even multidimensional ones) were a good
tool, we let the synthesis tool take care of the logic
minimization for us these days.
Mike Murphree