Like many, I used an HP 2000 TSB system in high school to first learn
about programming. Wanting to learn more than just BASIC, I soon
discovered there was a program on the system that let you write and run
FORTRAN programs. All these years later I can't remember anything more
than that, and I hadn't come across anything about it since, until recently.
While clearing out my storage space I came across an old binder with a
photocopy of a manual. Looking through it I soon realized it was from my
high school computer class and described the system for running FORTRAN.
So now I have a name at least, and a copy of a manual, but I haven't yet
found anything more, and I hope that someone here might be able to shed
some light on it and supply more information.
The manual says it was known as OSMI 2000 FORTRAN and was a "series of
programs written in the BASIC language which run short FORTRAN programs"
on an HP 2000 BASIC system. Anyone heard of this before?
Thanks.
David Williams
www.trailingedge.com
Hi Friends,
Microfiche scans of the PDP-11 XXDP listings are online now:
http://files.retrocmp.com/fichescanner/bitsavers/pdf/dec/pdp11/microfiche/D…
You can insert this into your bitsaver mirror tree with
$ cd <your-bitsavers-mirror-root>
$ wget --recursive --level 0 --no-host-directories --cut-dirs 2 \
    --no-parent -R 'index.htm?*' http://files.retrocmp.com/fichescanner/bitsavers/
You need about 130 GB space for 1600+ listings.
A Win10 version of wget is at
http://files.retrocmp.com/wget-1.21.2-win32.zip
In 2016 I posted a batch of listings, which was archived at
http://www.bitsavers.org/pdf/dec/pdp11/microfiche/ftp.j-hoppe.de/...
These were repacked and included in the above distribution.
So although I'm very pleased to see my name on bitsavers:
please discard the "ftp.j-hoppe.de" directory now!
For each listing there are 3 files:
- a "gray" pdf in archive quality.
- a highly compressed "bw" pdf, about 10x smaller.
- an ASCII *.dat with context and title strip data, prepared for
database import.
The pdfs contain pictures of their fiches as title pages.
The quality of the fiches is everything between "brilliant" and "awful".
DEC made every possible error while preparing them; the list is endless.
My favorite bug: Title strips glued to the wrong fiche (corrected here).
I even tried OCR, but the results were poor.
"ocrmypdf" (= "tesseract + pdf") seems a good tool, but
the fiches are too problematic for a fully automatic run.
You have to dive into Tesseract's training procedures.
See https://hub.docker.com/r/jbarlow83/ocrmypdf/
Some project links:
http://www.retrocmp.com/projects/scanning-micro-fiches
https://youtu.be/X22gr5THBRA
https://hackaday.com/2021/09/17/automatic-microfiche-scanner-digitizes-docs/
By the way: This project ate up lots of (physical and personal) resources.
I'll scan other document sets in the future, and maybe beg for a
donation then.
Enjoy!
Joerg
> From: Adrian Stoness
> Mirror everything, guys; make copies and stash them.
> From: Paul Koning
> The web can make things perpetual if they are stored redundantly ...
> But anything centralized is just as vulnerable as any centralized copy
> ever was, whether from risk of fire or flood, or abandonment.
I've been thinking about this issue for a while (although I tend to have a
long scope, e.g. looking forward to a time when everyone currently on this
list is dead; so I think things like 'failed states' need to be a concern
too), and I think history has a key lesson for us.
I've been reading up on the history of the Greek cities after the
Peloponnesian War, down through the War of the Successors (the Diadochi) after
Alexander the Great died. One book I read said that the only surviving source
for many major periods in this stretch was Diodorus (a Greek historian from
Syracuse in the first century BC); he wrote a history of the world in 40
volumes, only 15 of which survive today complete. The sad thing is that there
_was_ a complete set in the library at Constantinople, as late as 1453 (and we
know what happened then). So it survived the best part of 2K years, and was
then lost; the parts that _did_ survive, did so because there were copies in
other libraries.
So the lesson is clear: we need to _replicate_ stuff, in a geographically and
nationally distributed way.
The mirroring of Bitsavers is _very_ good news. However, even in the class of
stuff that it focuses on, e.g. old manufacturer documentation, some things
don't make it in there, but do exist in other online repositories (e.g.
Manx's collections). So one thing we need to do is come up with something
like Bitsavers, but with more curatorial work-sharing. Al has done an
_incredible_ job, for which we are all deeply in his debt - but it would be
good to come up with some way to help him.
(E.g. I've been adding links to online versions of manuals, in articles on
older DEC stuff I'm doing on the CHWiki, and I often find things which aren't in
Bitsavers. But sending Al an email saying 'hey, xxx is {here}, you might want
to upload it' is just putting all the load on him.)
Getting all this stuff into the replicated, mirrored system is a key priority.
> And in the case of digital data the added complication is the loss of
> the necessary technology.
Multiple independent copies will of course help with this (very real)
problem. The mirrors will likely be using different hardware, and will turn
it over at different times.
We could definitely use more mirrors, though - and geographically
distributed: it looks like there are current (non-US) ones in the UK, and
in Germany - more would be good. New Zealand? Australia? Maybe Japan and
India?
Individual volunteers aren't really what we need ('when everyone currently on
this list is dead'); it needs to be institutions.
> The Long Now Foundation has done some good thinking about this; some
> others have as well.
Jerry Saltzer thought about this, especially the 'generations of hardware',
and 'software formats' (e.g. old Word documents) issues. See:
"Technology, Networks, and the Library of the Year 2000"
http://web.mit.edu/Saltzer/www/publications/inria/inria.pdf
(particularly Section 4.3 "Persistence"), and also:
"Fault-Tolerance in Very Large Archival Systems"
http://web.mit.edu/Saltzer/www/publications/fault-tol/fault-tolerance.pdf
> I'd say more of us need to be more paranoid about mirroring stuff.
Yes. Don't just use a link, copy stuff down to a place _you_ control. (I.e.
not Google Drive. Nothing against Google, but their business might go
somewhere different, like Geocities, etc.) I have a large collection of
down-loaded stuff. Already I've run into cases where stuff has gone offline,
and without my local copy...
Noel
Hi,
Located in Toronto Canada, for shipping cost only:
- 3 x Hitachi DK516-15
- 2 x Computer Memories Inc (CMI) 6426-S
- Microscience HH-1060 (half height; marked bad)
- Tandon TM-502
Unknown working condition, but have been stored well.
First come, first served, etc.
--Toby
Hey everyone!
Has anyone been able to use a SCSI2SD setup where HVD is required? I
know by default that isn't supported, but given we can get custom kits
to solder, we could just change out one of the controller chips
(optimistically?)
Cheers!
--
-Jon
+44 7792 149029
On 10/1/21 1:00 PM, Chuck Guzis <cclist at sydex.com> wrote:
> Got a small batch (8) of Victor 9000 floppies, MSDOS ca. 1985. I
> really don't want to write a decoder for such a small batch--I've got
> other things on the burner right now. Anyone want to take a crack at
> transferring the data? (Funds available).
>
> --Chuck
I don't have a Victor (I looked for one for a while, and man, are they
heavy) because I have a couple of large-ish batches of disks here as well.
I read them and have "triangular," Chuck Peddle-esque images, but not
sure how to get something like mtools to understand a triangular image.
So I understand the motivation to just Kermit the files over to
something more sane. :-)
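For anyone attacking those images with their own tools: the Victor 9000 used zoned recording, so outer tracks carry more sectors than inner ones, and a raw dump comes out "triangular" rather than the rectangular geometry mtools expects. A minimal sketch of the offset arithmetic, with a purely hypothetical zone table (the real Victor zone boundaries and per-zone sector counts differ; these numbers are placeholders for illustration only):

```python
# Hypothetical zone table: (first_track, sectors_per_track) pairs.
# NOT the real Victor 9000 layout -- placeholder values only.
ZONES = [(0, 19), (16, 18), (32, 17), (48, 16), (64, 15)]
SECTOR_SIZE = 512

def sectors_on_track(track):
    """Sectors per track under the zone table above."""
    spt = ZONES[0][1]
    for first, count in ZONES:
        if track >= first:
            spt = count
    return spt

def offset_of(track, sector):
    """Byte offset of (track, sector) in a 'triangular' raw image,
    where each track is stored with its own sector count."""
    off = sum(sectors_on_track(t) for t in range(track)) * SECTOR_SIZE
    return off + sector * SECTOR_SIZE
```

With a table like this you can at least pull individual files out by hand, even if mtools never understands the image directly.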
- David
Ed writes:
If we ever get a way to read tapes for the 2000 and 3000 ...
Well, we can "read" tapes for the HP 3000, and restore the files from HP
3000 backup tapes ... via Allegro Consultants' "ROSETTA STORE" product (of
which I'm the primary author).
I'm happy to restore some files for fellow collectors/enthusiasts (as
time/energy permits) for free.
The problem breaks down into two parts:
1. reading the tape
Although Rosetta can read from a physical tape drive, that capability
hasn't been tested for a decade (because of loss of hardware).
Every user we know of uses Rosetta to restore files from tape images.
There are a number of formats of tape images ... quite a number.
Rosetta understands many tape image formats, including:
AWS / HET
STORE-to-disk
SIMH
Stromasys tape image
Tapecopy format (Data Conversion Resources)
(Oddly, I think it doesn't understand Allegro's own proprietary tape image
format, which records a lot more information than others (e.g., read-retry
information).)
If you need an HP 3000 'STORE' tape recovered, and it's in a different
format, let me know.
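For the curious, the SIMH tape image format in the list above is simple enough to parse by hand: each record is framed by a 32-bit little-endian byte count before and after the data, a zero count is a tape mark, 0xFFFFFFFF marks end of medium, and data is padded to an even byte length. A minimal reader sketch (it ignores the error flag SIMH can carry in the count's high bit, and is no substitute for Rosetta's format handling):

```python
import struct

def read_simh_tap(path):
    """Parse a SIMH-format .tap image into a list of records.

    Returns bytes objects for data records and None for tape marks;
    stops at end-of-medium (0xFFFFFFFF) or physical end of file.
    """
    records = []
    with open(path, "rb") as f:
        while True:
            hdr = f.read(4)
            if len(hdr) < 4:
                break                      # physical end of file
            count = struct.unpack("<I", hdr)[0]
            if count == 0:
                records.append(None)       # tape mark
                continue
            if count == 0xFFFFFFFF:
                break                      # end of medium
            data = f.read(count)
            if count % 2:
                f.read(1)                  # data padded to even length
            trailer = struct.unpack("<I", f.read(4))[0]
            if trailer != count:
                raise ValueError("framing error: counts differ")
            records.append(data)
    return records
```

The leading/trailing count framing is what lets tools skip backwards over records, which is why both copies are there.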
2. extracting files from the tape image
Rosetta can read Classic HP 3000 STORE tapes (aka "CM STORE") of various
versions, and MPE/iX STORE tapes (aka "NM STORE") of various versions
(although 'interleave' has been tested only very lightly).
By "read" I mean that it extracts the desired files, converts some (with
some controls), and creates either a hierarchical directory structure
matching the original, or a flattened one.
What about IMAGE databases?
On some platforms (Linux, HP-UX, Windows (?)), IMAGE databases can be
converted to Eloquence databases (Eloquence is a product of Marxmeier
software).
On all platforms, IMAGE databases can be converted to .csv or .xml files.
It can also handle SLT tapes, and provide some information on a few other
kinds of tapes one might see from an HP 3000 (e.g., dump tapes, Serial Disc
images), SPOOK tapes.
Rosetta runs on Mac, Linux, HP-UX, and Windows.
The HP-UX version can read older versions of ORBiT's Online Backup tapes
(before they changed the tape record header format).
TL;DR Ed: for the 3000, it's essentially a solved problem, and has been
for over 20 years!
Note: I also have a utility to restore files from (older?) Burroughs
mainframe (e.g., B6700) backup tapes.
Ken Gielow sold his Z80DIS (Z80 disassembler) for CP/M 80 as shareware ($20) thru his Butler, PA firm (SLR Systems), until the end of the 1980s.
I left Slippery Rock (just north of Butler) in the summer 1983 (about time of release).
Info World, October 24, 1983
Software Review by Steve Mann
https://books.google.com/books?id=rS8EAAAAMBAJ&pg=PA40&lpg=PA41&dq=Z80DIS&s…
greg
==
Date: Wed, 17 Nov 2021 17:23:06 -0800
From: Stan Sieler <sieler at allegro.com>
To: "General Discussion: On/Off-Topic Posts" <cctalk at classiccmp.org>
Subject: Ken Gielow passed away
Last week, Ken Gielow passed away.
He was the author of the Z80DIS disassembler, which was both interactive and used a form of "artificial intelligence" to cleverly disassemble Z80 code.
In my pile of DEC computer stuff I have a DEC qbus IBV11 IEEE-488
controller board (M7954) with cable (BN11-A) that connects to the GPIB bus.
It would be interesting to try this out, but I don't have the DEC
'Instrument Bus Subroutines' that work under RT-11. Does anyone have
this package? Or know where it can be found?
Doug