It was thus said that the Great Toby Thain once stated:
> On 21/02/12 1:23 PM, Dave McGuire wrote:
> > On 02/21/2012 08:30 AM, Toby Thain wrote:
> > > > Not an embedded system, but at work we have access to a multicore
> > > > SPARC system (Sun, I don't recall the model since it's actually
> > > > stashed away in a data center) with 8 cores. Doing a parallel make
> > > > (it helps to have a properly written makefile; I went to the
> > > > trouble to do so for the part I'm responsible for) only takes
> > > > 1/10th the time of a non-parallel make.
> > >
> > > Impressive. I've yet to see a *super*linear speedup for parallel
> > > make, myself.
> > >
> > > --Toby (uses 8 cores at work)
> >
> > I've *only* ever seen a superlinear speedup on parallel builds. Are
> > your Makefiles ok?
>
> Yes, I know how to write correct Makefiles.

If a "make clean ; make -j" breaks, then the Makefile isn't correct 8-)

> By superlinear I mean, taking less time than the serialised process
> divided by the number of cores, which is what Sean is seeing. I am
> not sure I've ever seen this. Close to 1/N, sure. But 0.7/N?
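
To put numbers on that definition: with the 8 cores above, a serial
build of, say, 100 seconds would have to finish in under 100/8 = 12.5
seconds to count as superlinear; 1/10th of it is 10 seconds, which
clears that bar.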

The directory I posted figures for has 389 C++ files to compile. It
wouldn't surprise me if a few files take a long while to compile,
while in the same time other cores can each get through several files.
I'm sure there's an answer in queueing theory (one line of customers
feeding N tellers moves faster than the same line feeding a single
teller).
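
For anyone who wants to reproduce the comparison, the measurement is
just (assuming GNU make, with -j8 matching the 8 cores above):

	make clean && time make        # serial baseline
	make clean && time make -j8    # one job per core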

In another section of the codebase, there's a subsection that takes
around 1:30 (one minute, thirty seconds) to compile serially, and 1:15
in parallel, but that's because there's one file that's a few
megabytes in size [1].
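
The arithmetic bears that out: if the parallel build still takes 1:15,
that one file alone plausibly takes close to 75 seconds to compile,
and no number of cores working on the other files can get the build
under the time of its longest single compile.
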
-spc (I also avoid recursive makes)

[1] I can't be sure of certain data files being installed where the
program runs, so it's easier to just embed said data files into the
executable. This is purely for testing purposes, not release.
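
For the curious, one way to do that sort of embedding (a sketch, if
you don't already have a scheme) is xxd -i, which turns any file into
a compilable C array:

	# generates "unsigned char data_bin[]" and
	# "unsigned int data_bin_len" from data.bin
	data.c: data.bin
		xxd -i data.bin > data.c

and the program then refers to the data as extern unsigned char
data_bin[].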