From: "Jim Battle" <frustum at
pacbell.net>
Dwight K. Elvey wrote:
...
Hi
The last place I worked, the processor was designed to
be able to optimize by doing out-of-order execution (HaL
Computer Systems, first SPARC64). They soon discovered the
problem when dealing with I/O. They, luckily, had a
sequential mode that they could switch to during I/O
operations that made the order predictable. You'd have
thought that someone in the design team might have realized
the problem.
Dwight
Dwight, I'm more than a little sceptical that the architects at HaL
didn't understand that reordering memory accesses would cause problems
with programmed I/O. I'm sure that they put in instruction
serialization and memory barrier instructions for precisely those
reasons. It wasn't a matter of "luckily" at all.
The sequential mode was for boot (and I/O). I suspect that
the original architects understood the need, but a whole team of
software fellows had no idea what was wrong. Trust me on this,
I was there and went to the debug meetings.
Even before OOO (out of order) execution at the instruction level
became practical in the 90s, there had been designs since the 60s that
reordered memory accesses, leading to some of the same issues.
As a side note, I believe the first company to attempt (though they
didn't execute) real out-of-order instruction execution was Metaflow.
Some have said that Metaflow's architects (Bruce Lightner primarily)
were ahead of their time, but the hallmark of good engineering is having
the judgement to specify something that can be built within the
constraints of practicality, not just specifying something with all the
cool ideas you can come up with.
It would look ahead to see if it could execute anything that wasn't
dependent on a pending calculation or on something that wasn't
already in cache. Because the I/O was memory mapped, it didn't treat
I/O accesses any differently than data.
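To make that failure mode concrete, here is a rough sketch of the read
side in plain C (again with made-up register names and addresses): a
polling loop on a memory-mapped status register followed by a data
read. If the machine treats those loads like ordinary data and reorders
them, the data word can effectively be fetched before the device was
ready -- the sort of bug those debug meetings would have been chasing.
The barrier is again the generic GCC/Clang builtin, standing in for
whatever serializing instruction the real machine provides.

#include <stdint.h>

/* Hypothetical status/data registers -- names and addresses made up. */
#define DEV_STATUS ((volatile uint32_t *)0x80000008u)
#define DEV_DATA   ((volatile uint32_t *)0x80000000u)
#define STATUS_RDY 0x1u

/* Programmed-I/O read: wait for "ready", then fetch the data word.
 * On hardware that reorders loads, the data load could be issued
 * before the status check succeeds; the barrier forces the order.
 * (If the registers were also cached like ordinary data, that would
 * need separate handling, e.g. mapping them uncached.) */
static uint32_t dev_read_word(void)
{
    while ((*DEV_STATUS & STATUS_RDY) == 0)
        ;                      /* spin until the device is ready */
    __sync_synchronize();      /* keep the data load after the status load */
    return *DEV_DATA;
}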
Dwight