On Oct 25, 2011, at 9:32 PM, TeoZ wrote:
> I have no idea what embedded systems are like these days; they used to be processor-,
> storage-, and RAM-starved, so they did need optimizing, but you also knew exactly what
> the resources were.
They still generally are processor, storage, and RAM starved these days, but the spectrum
is a lot larger at the top end. We had a customer who insisted that the CPU doing the
housekeeping for an FPGA board had to be an 800 MHz PowerPC with half a gig of RAM running
Linux. They insisted on this because they needed to keep up with about a 1 kHz interrupt
rate (one interrupt per millisecond) and couldn't be bothered to learn how to use a proper
embedded OS (or pseudo-RTOS), which would have done the same on a little embedded ARM with
a few hundred K of RAM (or perhaps even tens, though it's hard to fit an IP stack in
anything that small and still have room left for anything else).
A lot of these decisions are driven less by the engineers than by the project managers who
were promoted out of engineering so that they could do less damage (protip: they
don't). All the engineers on this project (including most of the ones from the
customer) fought this tooth and nail.
> Years ago, when Intel started using MMX instructions in the Pentium 1 line, were
> developers smart to use that optimization in their code, knowing that most (but not all)
> of the new machines coming out would have it? Things have changed quite a bit since the
> days of 8-bit computers, where you knew exactly what the resources and chipsets were AND
> your app would be the only thing running.
True, but MMX instructions (and their modern descendants, the SSE family) seldom make a
difference for general-purpose code. Most people for whom they will make a difference
(game and media programmers, particularly) are typically the types who will make use of
them anyway. On x86, at least, pretty much all hardware since 2005 supports at least SSE3,
except for the barest-bones stuff (Atom, for example, is a mixed bag).
- Dave