>> Well, this makes it impossible to implement long long, which
>> (loosely put) must have at least 64 bits of range.
> I think a double-precision type could be shoehorned [...].  Basically
> 96 bits of integer spread across 128 bits.
Could work, just as today some C implementations do long long by gluing
together two 32-bit words.
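As a sketch of what that gluing looks like (types and names here are illustrative, not from any particular compiler): a 64-bit value held as two 32-bit halves, with addition propagating the carry by hand.

```c
#include <stdint.h>

/* Hypothetical double-word integer built from two 32-bit halves, the way
 * some C implementations once synthesized long long on 32-bit hardware.
 * The struct and function names are invented for illustration. */
typedef struct {
    uint32_t lo;   /* least significant word */
    uint32_t hi;   /* most significant word */
} dword;

/* Add two double-words, propagating the carry out of the low half. */
static dword dword_add(dword a, dword b)
{
    dword r;
    r.lo = a.lo + b.lo;                  /* may wrap around */
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* wraparound implies a carry */
    return r;
}
```

Shifts, multiplies, and comparisons need similar word-by-word treatment, which is why long long was noticeably slower than int on such machines.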
> Consider this one - in the arrangement described, there's no
> particular reason that the exponent (protected padding bits, if you
> will) needs to be in the most significant bit position in a word.
Of course not.
You could exchange the positions of the exponent and
significand or
even scatter the exponent/padding bits among the significand bits.
> How do bit logical operations (especially shift) operate then?
With difficulty. Integer shift couldn't be just a shift operation
applied to the 64-bit representation; it would have to be more complex
than that. But it could certainly be done. If the padding is
scattered around sufficiently randomly it might be easiest to implement
shift operations as calls to a support function.
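Such a support function might look like this: gather the value bits (the ones under the mask) into a contiguous field, shift that, and scatter the result back, leaving the padding untouched.  The bit layout here is entirely made up for illustration; the gather/scatter loops are software versions of what PEXT/PDEP do in hardware.

```c
#include <stdint.h>

/* Hypothetical layout: value bits sit under VALUE_MASK, everything
 * else is protected padding (the "exponent" bits). */
#define VALUE_MASK 0x0FFFFFFF0FFFFFFFull

static uint64_t gather(uint64_t w)       /* collect masked bits (PEXT) */
{
    uint64_t out = 0, bit = 1;
    for (uint64_t m = VALUE_MASK; m; m &= m - 1) {
        if (w & m & -m)                  /* m & -m = lowest set mask bit */
            out |= bit;
        bit <<= 1;
    }
    return out;
}

static uint64_t scatter(uint64_t v)      /* spread bits back out (PDEP) */
{
    uint64_t out = 0, bit = 1;
    for (uint64_t m = VALUE_MASK; m; m &= m - 1) {
        if (v & bit)
            out |= m & -m;
        bit <<= 1;
    }
    return out;
}

/* Left shift of the integer value, preserving the padding bits. */
static uint64_t padded_shl(uint64_t w, unsigned n)
{
    uint64_t pad = w & ~VALUE_MASK;
    return pad | scatter(gather(w) << n);
}
```

Every shift becomes two mask walks instead of one instruction, which is exactly the cost the paragraph above alludes to.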
> One thing that I've never understood was the lack of a bit type in C.
Given its genesis - the era and the original target - I kind-of do.
> [...] bit-addressed architectures [...]
Yes. Not very C-friendly - or, to turn it around, C is not very
bit-addressable-hardware-friendly.
It certainly would be possible to design a C-ish language with support
for directly addressable bits.  I think it'd be an interesting
experiment.
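One shape such an experiment might take (purely a sketch; none of this exists in any real C extension): a "bit pointer" that carries a byte address plus a bit offset, with arithmetic that crosses byte boundaries.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical bit pointer: a byte address plus a bit offset,
 * LSB-first numbering assumed.  Illustrative only. */
typedef struct {
    uint8_t *byte;
    unsigned bit;      /* 0..7 */
} bitptr;

static int bit_load(bitptr p)
{
    return (*p.byte >> p.bit) & 1;
}

static void bit_store(bitptr p, int v)
{
    if (v)
        *p.byte |= (uint8_t)(1u << p.bit);
    else
        *p.byte &= (uint8_t)~(1u << p.bit);
}

/* Advance by n bits, crossing byte boundaries as needed -
 * the analogue of pointer arithmetic in a bit-addressed language. */
static bitptr bit_advance(bitptr p, size_t n)
{
    size_t total = p.bit + n;
    p.byte += total / 8;
    p.bit = total % 8;
    return p;
}
```

A real bit-addressed machine would do this in hardware; on byte-addressed hardware every access pays for the shift-and-mask, which is part of why C never grew the type.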
> Vector machines certainly use this capability, and C seems to be
> totally oblivious to it (cf. sparse and control vectors).
C is oblivious to a lot of things - though its influence, mostly
through Unix and then POSIX, is strong enough that many of them aren't
noticed nowadays (and I think the world is the poorer for it).
> The biggest problem of C to me is the existing code base that makes
> all sorts of assumptions about data types and structures.  Because
> this is The Way Things Are, it becomes more difficult to propose
> alternative architectures that might be more efficient.
Yes. I've felt that way about POSIX for some time now: that anything
that can't be fit into a POSIX framework semi-can't be done, producing
a positive feedback loop that only ensures the POSIX way becomes even
more entrenched.
I became very aware of this in 2002, when I was hired to take an
experimental encrypted distributed storage paradigm and make it
mountable as a filesystem on a Unixy system (NetBSD, specifically).
The impedance mismatch was severe, because the paradigm in question
couldn't really support a POSIXy write() directly - it had versioning
and a naive implementation would have created a new version of the file
on every write(). Since it didn't do delta compression (and really
couldn't, given the way it was encrypted), this would have meant that
small changes to large files would have flooded the system with similar
large files whose similarity could not be exploited to reduce storage
costs. I ended up introducing a `freeze' operation, so that write()
affected only a not-yet-stored copy-in-progress, with it getting pushed
to the tree of versions only upon freezing.
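The essential semantics - write() touching only a pending copy, with versions created only on an explicit freeze - can be sketched like this.  All the names and types below are invented for illustration; they are not the actual system's API.

```c
#include <string.h>

enum { MAXLEN = 256 };

/* Hypothetical versioned file: writes mutate only the pending copy;
 * the version tree grows only when the copy is frozen. */
typedef struct {
    char snapshot[MAXLEN];   /* contents of the last frozen version */
    char pending[MAXLEN];    /* the not-yet-stored copy-in-progress */
    int  nversions;          /* versions currently in the tree */
} vfile;

/* write() affects only the pending copy; no version is created. */
static void vwrite(vfile *f, const char *data)
{
    strncpy(f->pending, data, MAXLEN - 1);
    f->pending[MAXLEN - 1] = '\0';
}

/* freeze pushes the pending copy into the version tree as a single
 * version, however many writes preceded it. */
static void vfreeze(vfile *f)
{
    memcpy(f->snapshot, f->pending, MAXLEN);
    f->nversions++;
}
```

The point of the design is visible in the counters: a thousand small write()s still cost only one stored version.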
> I recall the moan from one of my project members when confronted with
> automatic optimization of C.  "A *&^%$ pointer can refer to
> ANYTHING!"
Yes - though modern C has aliasing rules that make life significantly
easier for optimizing compilers (but correspondingly harder for
programmers used to the "pointers are just memory addresses" mental
model).
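To make the tension concrete (function names are mine, for illustration): under C's effective-type rules the compiler may assume an int* and a float* never refer to the same object, so it can keep *pi cached across the store through *pf - exactly what the "pointers can refer to ANYTHING" model says it can't do.  The sanctioned escape hatch for reinterpreting bytes is memcpy (or a union).

```c
#include <stdint.h>
#include <string.h>

/* The optimizer may assume *pi and *pf don't alias, so it is entitled
 * to fold the comparison to 1 without re-reading *pi after the store. */
static int alias_unaware(int *pi, float *pf)
{
    int before = *pi;
    *pf = 0.0f;            /* not assumed to modify any int object */
    return before == *pi;
}

/* Well-defined type punning: copy the bytes instead of casting the
 * pointer. */
static uint32_t float_bits(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}
```

Calling alias_unaware with a cast so that both pointers hit the same memory is undefined behavior in modern C, even though it "obviously worked" on a flat-memory machine - that gap is precisely what bites programmers raised on the old model.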
> Ah well, time for me to get back to coding for my ternary-based
> machine that uses Murray code as its character set.
:-)
/~\ The ASCII                             Mouse
\ / Ribbon Campaign
 X  Against HTML                mouse at rodents-montreal.org
/ \ Email!           7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B