SGI Indigo2 graphics options
Sean Caron
scaron at umich.edu
Tue Mar 31 10:24:08 CDT 2015
I work in medium-scale HPC (~3-4K cores, ~8 petabytes online) and we still
run NFS in async mode to this day for performance's sake (cognizant, of
course, of the potential risk involved). Honestly, on Linux, our experience
has not been bad. If we're talking about things SGI did that can be dicey, I
would focus more on XFS :O
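Re the async bit: on a Linux server it all comes down to one per-export
option. A hypothetical /etc/exports entry (paths made up for illustration)
looks something like:

    # "async" lets the server acknowledge writes before they reach disk;
    # "sync" (the default in current nfs-utils) is the safe setting
    /export/scratch   *(rw,async,no_subtree_check)
    /export/home      *(rw,sync,no_subtree_check)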
I don't mean the XFS comment as a burn to anyone involved; Dave Chinner and
team do a good job with the resources they have and the Linux kernel they
have to deal with, and XFS does fly, even if it is a little fragile sometimes...
Always had good luck with it on IRIX at home... but my duty cycle at home
is much, much, much lower than at work, LOL.
I would love to get my hands on an EISA multimode OC-3 card... I have a ton
of Fore PCA-200 cards stashed, I am still running OC-3 ATM in production at
home on a ForeRunner LE155 switch, and it would be great fun to tie in one of
my Indigo2s.
Anyway, more on-topic: congratulations, Jules, that's a fine system you have
found! My strongest Indigo2 is a bit of a Frankenstein :O Non-factory
config... an R10K at 175 MHz with Extreme graphics in green skins, a
non-Impact PSU, and 256 MB RAM, and it runs IRIX 6.5.x very well; graphics
performance would be horrible for anything textured, but for regular
interactive use, browsing the Web, and running all the demos, it's plenty
fast. I have a few more in my storage room: one of the rarer R8K systems, an
R4K/250 system (which is also pretty quick), and another R4K system and a
half in parts... I was obsessed with these things when I was in high school :O
IMO, Indigo2 systems are great machines; I have found them to be very robust
in the long term, although in my experience the Impact PSUs seem a bit more
prone to failure than the non-Impact PSUs (the failure mode seems to be
harmless to the rest of the system; it just one day decides it won't power
on, reminiscent of how I've seen the Nidec supplies go on the Indys). Parts
are pretty easy to find and fairly inexpensive as SGI equipment goes.
If you want to know the specifics, you have probably found this already,
but there are two SGI-specific commands you can run from IRIX, "hinv" and
"gfxinfo", which will tell you what's in there. You can also run "hinv" from
the ROM monitor.
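Off the top of my head, the usual invocations look something like this (the
gfxinfo path is from memory, so it may differ on your install):

    # from an IRIX shell
    hinv                  # hardware inventory: CPU, memory, disks, option boards
    /usr/gfx/gfxinfo      # detailed rundown of the installed graphics board set

    # from the PROM Command Monitor, before IRIX is even booted
    hinv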
Have fun!
Best,
Sean
On Tue, Mar 31, 2015 at 4:40 AM, David Brownlee <abs at absd.org> wrote:
> On 31 March 2015 at 07:05, Pontus Pihlgren <pontus at update.uu.se> wrote:
> > On Mon, Mar 30, 2015 at 11:23:21PM +0100, David Brownlee wrote:
> >>
> >> We had 640MB in quite a few R10k I2 boxes at Dreamworks. I seem to
> >> recall issues with the Fore systems ATM drivers beyond that. (Don't
> >> ask about Origin 2000 Fore ATM drivers and SGI 'lying sync' NFS
> >> servers... "For all your data loss needs...")
> >
> > What about the Fore ATM drivers and NFS? ;)
> >
> > I'm hoping to get my Onyx2 rack set up this year. I'm hoping to keep my
> > data.
>
> The Fore EISA drivers were not that bad. The PCI ones were... less stable.
> Of course the PCI based SGI machines came with Fast Ethernet or
> better, so it was less of an issue (except when you had a studio wide
> ATM network :)
>
> Regarding SGI NFS servers: typically NFS runs synchronously, where the
> client keeps a copy of the written data until the NFS server reports
> it has hit persistent storage. Hence the fancy battery-backed-up
> PRESTOserve-type hardware used to speed up servers.
>
> SGI took a different approach: they lied and reported the data as
> written to storage as soon as it hit RAM on the server. Unsurprisingly
> this led to significant performance wins. Also somewhat obvious was
> the effect of a server reboot (due to buggy drivers) on the couple of GB
> of unsynced data which clients believed had been written. Combining this
> with scene assets that were constantly being migrated around storage
> and to/from tape resulted in a fair amount of work needing to be
> redone.
>
> A certain degree of clown college was in effect...
>
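P.S. On the "lying sync" bit David describes: the protocol contract is that
the client keeps its own copy of written data until the server confirms it
is on stable storage. A toy sketch (nothing to do with real NFS code, all
names made up) of why a server that confirms too early silently eats data:

    # Toy model of a "lying sync" NFS server: the client drops its copy of
    # the data as soon as the server claims it is stable, so anything still
    # sitting in server RAM at crash time is simply gone.

    class ToyServer:
        def __init__(self, lies_about_sync):
            self.lies = lies_about_sync
            self.ram = []      # data buffered in server memory
            self.disk = []     # data actually on persistent storage

        def write(self, block):
            self.ram.append(block)
            if not self.lies:
                self.disk.append(self.ram.pop())  # honest: flush to disk first
            return "stable"                       # ...then (or anyway) claim stability

        def crash_and_reboot(self):
            self.ram = []      # unflushed data does not survive the reboot

    class ToyClient:
        def __init__(self, server):
            self.server = server
            self.unacked = []  # copies we must keep until the server says "stable"

        def write(self, block):
            self.unacked.append(block)
            if self.server.write(block) == "stable":
                self.unacked.remove(block)        # safe (we think) to forget it

    srv = ToyServer(lies_about_sync=True)
    cli = ToyClient(srv)
    for i in range(3):
        cli.write("frame-%d" % i)
    srv.crash_and_reboot()
    # Nothing on disk, nothing left in the client's hands: the data is just gone.
    print("on disk:", srv.disk, "client still holds:", cli.unacked)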