On 30 Oct 2011 at 14:05, Toby Thain wrote:
> One increasingly important reason: parallel builds. A correct
> makefile gives you that for free.
We did it at the source level years ago, using a source library
program such as UPDATE. So many structures and procedures were
shared by routines in the OS that a full recompile was often about
as fast as a partial build.
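The parallel-build point quoted above can be sketched with a toy GNU
make example. This is a hypothetical makefile, not from any real
project; the "compiles" are just file copies, and it uses one-line
semicolon recipes so no tab characters are needed. Because the link
step declares both objects as prerequisites and the objects are
independent, `make -j` can build them concurrently with no extra work
from the author:

```shell
# Work in a scratch directory; names here are arbitrary.
mkdir -p /tmp/parbuild && cd /tmp/parbuild

cat > Makefile <<'EOF'
# The two objects have no dependency on each other, so make -j may
# build them in parallel; prog waits for both prerequisites.
prog: a.o b.o ; cat a.o b.o > prog
a.o: a.src ; cp a.src a.o
b.o: b.src ; cp b.src b.o
EOF

echo alpha > a.src
echo beta  > b.src

# -j2 allows the two independent object rules to run concurrently.
make -j2 prog
cat prog
```

With correct dependencies declared, the same makefile runs serially
under plain `make` and in parallel under `make -j N`; nothing in the
rules themselves has to change.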
An extreme case of compiles taking a long time was when I was doing
STAR OS development and the only (convenient) resource was a pair of
STAR-1B systems (the only extant ones), which ran at somewhere around
1/1000th the speed of the real thing. They were not intended as
standard line products, but rather as development platforms: they had
the same core (256K words) and attached to standard STAR SBUs and an
MCU, but were otherwise bit-serial, heavily microprogrammed
emulators.
A compile of the OS kernel was an overnight job, taking a bit more
than 8 hours. That's the kernel, not any of the system utilities.
More often than not, the hardware would develop problems and you'd
lose a day.
You quickly learned that desk-checking everything thoroughly would
save huge amounts of time, but it was a very frustrating way to work.
There were only two other options--beg some maintenance time from the
CEs at Livermore, or grab a flight to Minneapolis/St. Paul to use the
system at ADL in Arden Hills to recompile.
This was quite possibly the worst development environment that I'd
ever worked in.
--Chuck