Among mathematicians, Shannon was unusual in that he asked the
theoretical questions of greatest engineering significance.
His best-known work is Shannon and Weaver, The Mathematical Theory
of Communication, University of Illinois Press, 1949,
ISBN 0-252-72548-4. This short book is actually a reprint of
two articles. The first, by Weaver, originally appeared in
condensed form in the July 1949 issue of Scientific American,
and is called "Recent contributions to the mathematical theory
of communication". The article by Shannon is called
"The mathematical theory of communication" and appeared originally
in the Bell System Technical Journal, July and October 1948.
I have a PostScript copy of Shannon's article, and I also own the book.
Some of Shannon's better-known theorems include the Sampling Theorem,
which states that a band-limited signal can be reconstructed exactly
from its samples, provided the sampling rate is at least twice the
highest frequency present in the signal.
The method for perfect reconstruction of the signal from the samples
involves convolution with a sinc (sine cardinalis) kernel and is called
"Shannon reconstruction"; a small numerical sketch of this appears
further down. But of course, in the area of information theory,
Shannon's most far-reaching theorem concerns the limit on information
throughput over a band-limited transmission channel with white noise.
Essentially, the theorem states that there exist coding schemes
such that the transmission rate can approach arbitrarily closely
the value H0 = B log_2(1 + S/N), where B is the channel bandwidth,
S/N is the signal-to-noise ratio (as a power ratio, not in dB), and
log_2() is the base-2 logarithm. One could use this formula to derive
the theoretical limit for the capacity of a vinyl LP record, mentioned
a few threads ago.
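To put a rough number on the formula (a back-of-the-envelope sketch of
mine, with figures assumed purely for illustration): take something
like a telephone voice channel, B = 3000 Hz and S/N = 30 dB, i.e. a
power ratio of 1000. A few lines of Python give the limit:

  import math

  # Shannon limit H0 = B * log_2(1 + S/N) for a band-limited channel with white noise.
  # B and the S/N figure below are assumptions chosen only for illustration.
  B = 3000.0                       # channel bandwidth in Hz
  snr_db = 30.0                    # signal-to-noise ratio in dB
  snr = 10 ** (snr_db / 10.0)      # dB converted to a plain power ratio (= 1000)

  H0 = B * math.log2(1.0 + snr)
  print("limit = %.0f bit/s" % H0)   # roughly 30 kbit/s

That is in the same ballpark as what the better analog telephone
modems eventually reached.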
While Shannon's work may seem highly theoretical to the non-mathematically
inclined, it was essential for signal processing, information theory,
and control theory. Without it, for example, we would have no idea
what transmission speed is possible with a space probe, what rate is
acceptable on a 10BASE-T line with such and such cabling, or at what
point a geosynchronous satellite needs more transmitter power rather
than a larger dish on Earth.
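Here is the small sketch of Shannon reconstruction I promised above;
it is just a toy example of mine, with all numbers invented for
illustration. Sample a band-limited signal above twice its highest
frequency, then evaluate it between the sample instants by summing
sinc kernels weighted by the samples:

  import numpy as np

  # Toy Whittaker/Shannon ("sinc") reconstruction; all figures are made up.
  f_max = 100.0                 # highest frequency present in the signal, Hz
  fs = 2.5 * f_max              # sampling rate, comfortably above 2 * f_max
  T = 1.0 / fs                  # sampling interval

  n = np.arange(64)             # sample indices
  x_n = (np.sin(2 * np.pi * 60.0 * n * T)
         + 0.5 * np.cos(2 * np.pi * 25.0 * n * T))

  def reconstruct(t, samples, T):
      # x(t) = sum_k x[k] * sinc((t - k*T) / T), with np.sinc(u) = sin(pi*u)/(pi*u)
      k = np.arange(len(samples))
      return np.sum(samples * np.sinc((t - k * T) / T))

  t = 31.37 * T                 # an instant between two samples, near the middle
  exact = np.sin(2 * np.pi * 60.0 * t) + 0.5 * np.cos(2 * np.pi * 25.0 * t)
  print(reconstruct(t, x_n, T), exact)   # close; truncating the infinite sum leaves a small error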
I am including part of the introduction of The mathematical
theory of communication at the end of this message. It is on
topic since it deals with storing information in bits.
Carlos.
At 03:51 PM 2/27/01 -0800, you wrote:
What pops into my memory is the reference, common in digital audio,
to Shannon coding or limits.
http://www.itr.unisa.edu.au/~alex/shannon.html
Shannon, Claude Elwood (1916- )
Mathematician and pioneer of communication theory, born in Gaylord, MI. He
studied at Michigan and at the Massachusetts Institute of Technology, and
in 1938 published a seminal paper on the application of symbolic logic to
relay circuits, which helped transform circuit design from an art into a
science. He worked at Bell Telephone Labs (1941-72) in the area of
information theory, and wrote The Mathematical Theory of Communication
(1949) with Warren Weaver.
A Mathematical Theory of Communication
By C. E. SHANNON
INTRODUCTION
The recent development of various methods of modulation such as PCM
and PPM which exchange bandwidth for signal-to-noise ratio has
intensified the interest in a general theory of communication. A
basis for such a theory is contained in the important papers of
Nyquist[1] and Hartley[2] on this subject. In the present paper we will
extend the theory to include a number of new factors, in particular
the effect of noise in the channel, and the savings possible due to
the statistical structure of the original message and due to
the nature of the final destination of the information.
The fundamental problem of communication is that of reproducing at
one point either exactly or approximately a message selected at
another point. Frequently the messages have meaning; that is they
refer to or are correlated according to some system with certain
physical or conceptual entities. These semantic aspects of
communication are irrelevant to the engineering problem. The
significant aspect is that the actual message is one selected from a
set of possible messages. The system must be designed to operate for
each possible selection, not just the one which will actually be
chosen since this is unknown at the time of design.
If the number of messages in the set is finite then this number or
any monotonic function of this number can be regarded as a measure of
the information produced when one message is chosen from the set, all
choices being equally likely. As was pointed out by Hartley the most
natural choice is the logarithmic function. Although this definition
must be generalized considerably when we consider the influence of
the statistics of the message and when we have a continuous range of
messages, we will in all cases use an essentially logarithmic
measure.
The logarithmic measure is more convenient for various reasons:
1. It is practically more useful. Parameters of engineering
importance such as time, bandwidth, number of relays, etc., tend to
vary linearly with the logarithm of the number of possibilities. For
example, adding one relay to a group doubles the number of possible
states of the relays. It adds 1 to the base-2 logarithm of this
number. Doubling the time roughly squares the number of possible
messages, or doubles the logarithm, etc.
2. It is nearer to our
intuitive feeling as to the proper measure. This is closely related
to (1) since we intuitively measure entities by linear comparison
with common standards. One feels, for example, that two punched
cards should have twice the capacity of one for information storage,
and two identical channels twice the capacity of one for transmitting
information.
3. It is mathematically more suitable. Many of the limiting
operations are simple in terms of the logarithm but would require
clumsy restatement in terms of the number of possibilities.
The choice of a logarithmic base corresponds to the choice of a unit
for measuring information. If the base 2 is used the resulting units
may be called binary digits, or more briefly bits, a word suggested
by J. W. Tukey. A device with two stable positions, such as a
relay or a flip-flop circuit, can store one bit of information. N
such devices can store N bits, since the total number of possible
states is 2^N and log_2 2^N = N. If the base 10 is used the units may
be called decimal digits. Since
log_2 M = log_10 M / log_10 2 = 3.32 log_10 M,
a decimal digit is about 3.32 bits. A digit wheel on a desk
computing machine has ten stable positions and therefore has a storage
capacity of one decimal digit. In analytical work where integration
and differentiation are involved the base e is sometimes useful. The
resulting units of information will be called natural units. Change
from the base a to base b merely requires
multiplication by log_b a.
1. Nyquist, H., "Certain Factors Affecting Telegraph Speed," Bell
System Technical Journal, April 1924, p. 324; "Certain Topics in
Telegraph Transmission Theory," A.I.E.E. Trans., v. 47, April 1928,
p. 617.
2. Hartley, R. V. L., "Transmission of Information," Bell System
Technical Journal, July 1928, p. 535.
--end of excerpt
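P.S. A quick check of the base-conversion arithmetic in the excerpt
(this addition is mine, not part of Shannon's text):

  import math

  # One decimal digit carries log_2(10) bits, since log_2 M = log_10 M / log_10 2.
  print(math.log2(10))          # about 3.32

  # N two-state devices (relays, flip-flops) have 2^N states, i.e. N bits.
  N = 8
  print(math.log2(2 ** N))      # 8.0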