On 10/16/2011 06:58 AM, Jochen Kunz wrote:
>> Be careful with those "all CPUs" assertions; server processors are a
>> pretty big part of the market, and a GPU would be a very expensive
>> useless piece of silicon real estate there.
> Depending on what type of service a server provides, it may use the GPU
> to do extensive math, offload en/decryption, run complex database search
> algorithms, transcode audio/video content on the fly, ...
> GPUs will evolve into more general-purpose coprocessors.
Hmm yes, that's a good point. I've done some CUDA programming, which
is one way to take advantage of otherwise-useless GPUs. I can easily
see (say) OpenSSL using them. Databases, well, that'd be a lot of work.
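Just to make the idea concrete, here's a toy CUDA sketch (the buffer
size and the XOR "cipher" are made up purely for illustration; this is
not how OpenSSL actually offloads anything) that pushes a byte-wise
transform out to the GPU:

/* Toy sketch: offload a trivial XOR pass to the GPU with CUDA.  The
 * buffer contents and "key" are invented for illustration only. */
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void xor_stream(unsigned char *buf, const unsigned char *key,
                           size_t keylen, size_t n)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        buf[i] ^= key[i % keylen];        /* one thread per byte */
}

int main(void)
{
    const size_t n = 1 << 20;             /* 1 MB of fake "plaintext" */
    unsigned char key[16] = { 0xde, 0xad, 0xbe, 0xef };
    unsigned char *d_buf, *d_key;

    cudaMalloc(&d_buf, n);
    cudaMalloc(&d_key, sizeof key);
    cudaMemset(d_buf, 0x41, n);           /* fill the device buffer */
    cudaMemcpy(d_key, key, sizeof key, cudaMemcpyHostToDevice);

    xor_stream<<<(unsigned int)((n + 255) / 256), 256>>>(d_buf, d_key,
                                                         sizeof key, n);
    cudaDeviceSynchronize();

    cudaFree(d_buf);
    cudaFree(d_key);
    printf("done\n");
    return 0;
}

The point is just that once the data is on the card, thousands of
threads can chew on it in parallel while the host CPU goes off and does
something else.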
In order not to offend the sensibilities of overly anal people (such
as myself), there'd need to be some sort of standardized interface to an
OS- or library-provided "assists" system, like maybe hash generation,
regexp processing, or something like that, and the OS can provide those
functions as best it's able to in a given installation. Then the
OS/library side can be set up to send those functions to coprocessors
(which may be implemented via GPUs) to speed things up.
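Purely as an illustration of that idea (the assist_hash() name and the
whole interface below are invented here, not any existing OS or library
API), the dispatch layer might look something like this, falling back to
the host CPU when no coprocessor is around:

/* Hypothetical "assists" dispatch sketch.  assist_hash() is a made-up
 * entry point; the "hash" is a toy checksum standing in for real work. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <cuda_runtime.h>

__global__ void hash_kernel(const uint8_t *data, size_t n, uint32_t *out)
{
    uint32_t h = 0;
    for (size_t i = 0; i < n; i++)        /* single-thread toy "hash" */
        h = h * 31 + data[i];
    *out = h;
}

static uint32_t hash_cpu(const uint8_t *data, size_t n)
{
    uint32_t h = 0;
    for (size_t i = 0; i < n; i++)
        h = h * 31 + data[i];
    return h;
}

/* The only call the application ever sees; backend choice is hidden. */
uint32_t assist_hash(const uint8_t *data, size_t n)
{
    int devices = 0;
    if (cudaGetDeviceCount(&devices) != cudaSuccess || devices == 0)
        return hash_cpu(data, n);         /* no coprocessor: CPU fallback */

    uint8_t *d_data;
    uint32_t *d_out, h;
    cudaMalloc(&d_data, n);
    cudaMalloc(&d_out, sizeof h);
    cudaMemcpy(d_data, data, n, cudaMemcpyHostToDevice);
    hash_kernel<<<1, 1>>>(d_data, n, d_out);
    cudaMemcpy(&h, d_out, sizeof h, cudaMemcpyDeviceToHost);
    cudaFree(d_data);
    cudaFree(d_out);
    return h;
}

int main(void)
{
    const char *msg = "hello, coprocessor";
    printf("hash = %08x\n", assist_hash((const uint8_t *)msg, strlen(msg)));
    return 0;
}

The application never knows or cares whether the work ran on a GPU, a
crypto card, or the plain old CPU; that decision lives entirely inside
the library, which is exactly where it belongs.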
I use a hardware crypto coprocessor on my big machine here; that's
pretty nice.
I can see this coming full-circle. For a while there, the trend was
to dump EVERYTHING on the single slow-as-molasses-anyway x86 processor
in the box; clueless designers who couldn't see the big picture did
things like putting printer handling (all rasterization, etc.) and modem
DSP in the host processor. Remember "winprinters" and "winmodems"? Now
we're "inventing" distributed processing again. It's about time. ;)
-Dave
--
Dave McGuire
New Kensington, PA