>From: "Cini, Richard" <RCini(a)congressfinancial.com>
>
>This example represents the block data using metatags...I guess along the
>"XML" part of the thread.
>
>I was thinking similarly to you but not using XML metadata:
>
>;Hardware descriptor
>MFGR
>MACHINE
>SUBTYPE
>DRIVETYPE (this of course defines what follows)
>;for floppy
>DRIVESIZE
>ENCODING
>TRACKS
>SECTORS
>SECTSIZE
>;HexData
>; Each record or group of records contains the related media data. The
>address record would be used for encoding the metadata
>00TTSSHH: (00-track-sector-head)
>
>I looked to Intel Hex (or Motorola) because it had built-in CRC facilities
>and it was human-readable ASCII. The drive and machine description could be
>encoded in special MOT records probably.
>
>XML is a more "current" technology, but I was trying to keep platform
>neutrality by sticking to text-only and not assuming the use of any
>other technology like XML.
>
>Rich
>
Hi
I like the use of OpenBoot-like languages better than more natural
languages like XML and such. My primary reason is that the Forth-like
languages are among the few whose syntax rules are simple enough
that anyone ( with some programming skills ) can implement an
interpreter. Also, the language is rich enough that one can even
include various converters, and even things like directory printouts
and file extractors, inside the archive file itself ( with minimal
overhead ). This is the concept of a PostScript file: the file
itself defines how it is to be printed, using only a few initial
primitives that directly correspond to printing. Without much else
to go on, one could take the actual printout of a PostScript file,
together with the file itself, and determine the general rules for
how to decode any PostScript file. Kind of like the Rosetta Stone
concept.
We are talking about the maximum information in the smallest
human-readable form.
Dwight
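Dwight's point that anyone can implement a Forth-style interpreter can be illustrated with a sketch (Python here for brevity; the word set is a made-up minimum, not from OpenBoot or any actual spec): whitespace-split tokens, a data stack, and a word dictionary are essentially the entire grammar.

```python
def run(program, out):
    """Minimal Forth-style interpreter: whitespace-split tokens,
    a data stack, and a small word dictionary -- that is the whole syntax."""
    stack = []
    words = {
        "+": lambda: stack.append(stack.pop() + stack.pop()),  # add top two
        "dup": lambda: stack.append(stack[-1]),                # duplicate top
        ".": lambda: out.append(stack.pop()),                  # pop and emit
    }
    for tok in program.split():
        if tok in words:
            words[tok]()          # execute a defined word
        else:
            stack.append(int(tok))  # anything else is a number literal

out = []
run("2 3 + dup + .", out)
print(out)  # [10]
```

An archive format built this way could ship its own extractor words at the top of the file, exactly as a PostScript file ships its own rendering procedures.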
>From: "Steve Thatcher" <melamy(a)earthlink.net>
>
>I agree with Sellam on the point about using it both for media re-creation
>and emulation. The trouble with the approach below of just using raw data
>on a track/sector basis is that now you have created a file that can only
>be used with an emulator that understands the physical format and OS access
>for the computer system you are emulating. My earlier point of separating
>the data and the format information allows a single file (that would not be
>much bigger than the one described below) to contain multiple platform
>specific files that can be "read" by a simple utility that does not require
>any knowledge of the OS or the platform.
>
>best regards, Steve Thatcher
Hi Steve
You seem to be assuming that the particular disk you are
archiving has a file structure. This is not always the case.
Dwight
no, I have only talked about data represented in XML ASCII, which has three distinct sections: an overall structure that contains author and other info; a data section that contains multiple data blocks, with subsections that are identified as files; and finally a third section which describes the physical arrangement of the data blocks on some type of media.
best regards, Steve Thatcher
-----Original Message-----
From: Vintage Computer Festival <vcf(a)siconic.com>
Sent: Aug 11, 2004 2:53 PM
To: Steve Thatcher <melamy(a)earthlink.net>,
"General Discussion: On-Topic and Off-Topic Posts" <cctalk(a)classiccmp.org>
Subject: RE: Let's develop an open-source media archive standard
On Wed, 11 Aug 2004, Steve Thatcher wrote:
> I know a three section approach that I was proposing is more
> complicated, but from a code standpoint allows total freedom of data
> access without having to create a target media let alone have the
> computer system to then read the media just to get at the data that was
> on a floppy disk. The beauty is that if you need to create a Northstar
> system diskette then you can, but if all you need is a copy of the
> dump.asm program then you can get that also without having to go any
> further than the file you started with.
What you're discussing here are binary images.
--
Sellam Ismail Vintage Computer Festival
------------------------------------------------------------------------------
International Man of Intrigue and Danger http://www.vintage.org
[ Old computing resources for business || Buy/Sell/Trade Vintage Computers ]
[ and academia at www.VintageTech.com || at http://marketplace.vintage.org ]
XML is platform neutral, ASCII, and provides a structure to information rather than just an INI-file type of dump - the start and end keywords let you define as many substructures as you need.
best regards, Steve Thatcher
-----Original Message-----
From: Vintage Computer Festival <vcf(a)siconic.com>
Sent: Aug 11, 2004 3:23 PM
To: "General Discussion: On-Topic and Off-Topic Posts" <cctalk(a)classiccmp.org>
Subject: RE: Let's develop an open-source media archive standard
On Wed, 11 Aug 2004, Cini, Richard wrote:
> This example represents the block data using metatags...I guess along the
> "XML" part of the thread.
>
> I was thinking similarly to you but not using XML metadata:
>
> ;Hardware descriptor
> MFGR
> MACHINE
> SUBTYPE
> DRIVETYPE (this of course defines what follows)
> ;for floppy
> DRIVESIZE
> ENCODING
> TRACKS
> SECTORS
> SECTSIZE
> ;HexData
> ; Each record or group of records contains the related media data. The
> address record would be used for encoding the metadata
> 00TTSSHH: (00-track-sector-head)
>
> I looked to Intel Hex (or Motorola) because it had built-in CRC facilities
> and it was human-readable ASCII. The drive and machine description could be
> encoded in special MOT records probably.
I like the XML style because it's more explicit; more human-readable.
> XML is a more "current" technology, but I was trying to keep platform
> neutrality by sticking to text-only and not assuming the use of any
> other technology like XML.
XML is platform neutral because it's basically ASCII, right?
--
Sellam Ismail Vintage Computer Festival
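The "built-in CRC facilities" Richard mentions are Intel HEX's per-record checksum byte: the two's complement of the sum of all the record's bytes (length, address, type, and data). A minimal sketch in Python, using the classic example record from the format description:

```python
def ihex_checksum(record_hex: str) -> int:
    """Checksum for an Intel HEX record, given the record body as hex
    (leading ':' and trailing checksum byte excluded): the two's
    complement of the sum of all bytes, truncated to 8 bits."""
    data = bytes.fromhex(record_hex)
    return (-sum(data)) & 0xFF

# Well-known example record ":10010000214601360121470136007EFE09D21901" + CS
print(f"{ihex_checksum('10010000214601360121470136007EFE09D21901'):02X}")  # 40
```

A de-archiver verifies a record by summing every byte including the checksum; a valid record sums to zero modulo 256.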
>From: "Jules Richardson" <julesrichardsonuk(a)yahoo.co.uk>
---snip---
>
>As for file size, if encoding as hex that at least doubles the size of
>your archive file compared to the original media (whatever it may be).
>That's assuming no padding between hex characters. Seems like a big
>waste to me :-(
>
---snip---
>Jules
>
>
Hi
It might seem a waste, but I would expect a proper archive
file to be 200% to 500% larger than the original data. Having the
data in something that can be printed directly on paper is a
must. This means something like hex values or binary digits or whatever.
Anything stored as non-printable bits is useless.
JMHO
Dwight
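Dwight's 200%-500% estimate is easy to sanity-check: hex encoding alone doubles the data, and line-wrapping plus any surrounding markup adds more. A quick illustration (the 64-character line width is just an example, not from any spec):

```python
raw = bytes(range(256))        # one hypothetical 256-byte sector
hexed = raw.hex().upper()      # two ASCII characters per byte
assert len(hexed) == 2 * len(raw)

# Wrapping at 64 hex chars per line adds newlines; tags would add more.
lines = [hexed[i:i + 64] for i in range(0, len(hexed), 64)]
body = "\n".join(lines)
print(len(body) / len(raw))    # 2.02734375 -- already ~2x before metadata
```

Track/sector tags, headers, and checksum fields on top of this push the ratio toward the upper end of the estimate.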
Have there been any sort of list problems? I noticed a couple days ago that
I'd not gotten any messages since Saturday, but hadn't had time to do
anything about it. Well, about 370 so far just showed up.
Zane
>From: "Vintage Computer Festival" <vcf(a)siconic.com>
>
>On Tue, 10 Aug 2004, John Foust wrote:
>
>> Astounding! Will that computer never die? And I say that
>> as someone who Believed, '85-92.
>
>The Amiga is still going strong in some circles. More power to them.
>
>> I'm tempted to say that we should leave copy protection
>> hacks out of the spec for now, but if it was extensible,
>> that would be great.
>
>Yes, copy protection will definitely be able to be described naturally in
>the specification I have in mind. The spec should be able to define
>several layers of bit storage: logical (files, directories, etc.), byte
>(e.g. tracks, sectors), and raw (bit streams). In this way, copy
>protection schemes can be preserved by storing the image in the raw
>format.
>
>This will of course have to be thought out, and it may not even be
>included in the first revision of the spec, but as I declared originally,
>the spec will be extensible.
>
>--
>
>Sellam Ismail Vintage Computer Festival
Hi Sellam
I can understand the need for both raw bit stream and extracted
data. I propose that it should always include both types of information.
The raw bits are needed to actually rebuild a particular format
but often the information in the data is all that one needs to extract.
In the case of the H8/89, we have working machines to read and write
the format. We just need the data that fills the sectors. In the case
of the H8/89, I've written a bootstrap that can be entered through
the monitor commands. In some cases, the machine has no monitor or
bootstrapping method. In these cases, it would be necessary to create
the disk on another machine. Having the raw bits of clock and data
would then be valuable.
I'm currently looking at using a DSP chip to extract raw data from
disk. The biggest problem so far is that one needs to do one of two
things: either recover the clock with something like a PLL, or
simply oversample the bit stream from the drive.
To do the PLL method, one needs to understand the disk format used
and have hardware to handle that particular format. The oversampling
has the advantage that one can capture all that is needed and post
process it to normalize the data. The disadvantage here is that it
takes a lot of data space. The DSP chip I'm looking at doesn't have
enough RAM space to capture an entire track. Capturing track fragments
has the issue that one needs to realign things later. Knowing when
the two fragments are properly connected is not easy. It looks like
the newer DSP chips do have enough speed to capture raw data with
little or no external hardware. This means that one can get one
of the manufacturer's development boards ( usually in the $50-$150
range ) and wire it up to the disk drive by connector.
The disadvantage here is that as newer chips come out, the older
development boards are obsoleted. Still, having the raw data means
that one can recreate the disk in the future, with some effort.
Dwight
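For the oversampling approach Dwight describes, post-processing typically starts by reducing the sample stream to the intervals between flux transitions, which a software PLL or histogram step can then bin into cell widths. A hedged sketch (the sample rate and edge polarity here are hypothetical, just to show the shape of the computation):

```python
def flux_intervals(samples, sample_rate_hz):
    """Reduce an oversampled read-data signal (sequence of 0/1 samples)
    to the times, in microseconds, between successive falling edges,
    treating each falling edge as a flux transition."""
    edges = [i for i in range(1, len(samples))
             if samples[i - 1] == 1 and samples[i] == 0]
    return [(b - a) / sample_rate_hz * 1e6
            for a, b in zip(edges, edges[1:])]

# Synthetic stream sampled at 1 MHz with pulses at samples 10, 14, and 22:
samples = [1] * 30
for i in (10, 14, 22):
    samples[i] = 0
print(flux_intervals(samples, 1e6))  # [4.0, 8.0]
```

Binning the resulting intervals into a histogram would reveal the nominal cell times (e.g. the 1T/1.5T/2T peaks of an MFM track) without needing to know the format in advance, which is the advantage over the hardware-PLL method.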
This was posted on oldcomputers.net comments page, can
anyone help?
Please email rosy thomas directly:
rosy.thomas(a)talkbackthames.tv
hello,
we are working on a comedy series over here at
talkback - basically a pastiche of the old British
technology programme "Tomorrow's World" specifically
circa 1980. A script is currently being written which
includes a fictitious military super computer -
interest has been expressed in the stylistic beauty of
the Osborne 1 and also the TRS-80 Model III... I
expect you would not be interested in hiring your
computers to our production but I wonder if you could
advise me of any easy way to get hold of such rare
computers for our temporary purposes. We do have full
insurance public liability and otherwise. We start
filming on the 13th September for 6 weeks. I will
attempt e-bay and also explore all your links but if
you could advise me in any way I would be very
grateful. If you are interested there is information
about our first series on the www.bbc.co.uk. This was
a slightly different format in that it was a pastiche
on the open university programmes of the late
seventies. Thanking you in anticipation for any help
you can give me.
I don't mean "machine address" but rather some sort of block address on the
media. I was thinking more along the lines of how data sectors on a hard
drive are numbered... not CHS numbering but "absolute sector" numbering
from 0 to something.
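The absolute-sector numbering Richard describes is the usual LBA-style mapping over the drive geometry; a sketch of the arithmetic (the geometry values in the example are illustrative, not tied to any particular drive):

```python
def absolute_sector(cyl, head, sec, heads, sectors_per_track):
    """Map a CHS address (sectors conventionally numbered from 1)
    to a 0-based absolute sector number."""
    return (cyl * heads + head) * sectors_per_track + (sec - 1)

# Example geometry: 2 heads, 9 sectors per track.
print(absolute_sector(0, 0, 1, heads=2, sectors_per_track=9))  # 0
print(absolute_sector(0, 1, 1, heads=2, sectors_per_track=9))  # 9
print(absolute_sector(1, 0, 1, heads=2, sectors_per_track=9))  # 18
```

Storing addresses this way keeps the archive's numbering independent of geometry quirks, at the cost of needing the geometry in the header to map back to physical tracks.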
-----Original Message-----
From: cctalk-bounces(a)classiccmp.org
[mailto:cctalk-bounces@classiccmp.org]On Behalf Of Vintage Computer
Festival
Sent: Wednesday, August 11, 2004 2:00 PM
To: General Discussion: On-Topic and Off-Topic Posts
Subject: RE: Let's develop an open-source media archive standard
On Wed, 11 Aug 2004, Cini, Richard wrote:
> I tend to forget about the Motorola format (call me an Intel snob).
> The 16MB would be enough for many systems, and I would hope that 4GB would
> be enough, at least for now, to represent the largest of the media types
> we want to represent.
The data should be structured in a way where address size does not even
come into consideration. Why would we encode platform specific
information into a media archive?
--
Sellam Ismail                                  Vintage Computer Festival
I agree with Sellam on the point about using it both for media re-creation and emulation. The trouble with the approach below of just using raw data on a track/sector basis is that now you have created a file that can only be used with an emulator that understands the physical format and OS access for the computer system you are emulating. My earlier point of separating the data and the format information allows a single file (that would not be much bigger than the one described below) to contain multiple platform specific files that can be "read" by a simple utility that does not require any knowledge of the OS or the platform.
best regards, Steve Thatcher
-----Original Message-----
From: Vintage Computer Festival <vcf(a)siconic.com>
Sent: Aug 11, 2004 10:56 AM
To: "General Discussion: On-Topic and Off-Topic Posts" <cctalk(a)classiccmp.org>
Subject: RE: Let's develop an open-source media archive standard
On Wed, 11 Aug 2004, Cini, Richard wrote:
> I might have missed what the ultimate use of this archive would be. Will the
> archive be used to (1) re-generate original media; (2) operate with
> emulators; (3) both?
Both. Emulators will certainly be able to make use of the archive by
having parsers built-in that can translate the archive data into
something the emulator can use. So instead of pointing the emulator to a
binary disk image, you would point it to an archive file and it would
translate the file back into tracks/sectors, or punch cards, or whatever.
> To ensure integrity of the data I would propose recording the data in the
> Intel Hex format -- it's text-based and has built-in CRC. Now, we'd have to
> modify the standard format a bit to accommodate a larger address space and
> to add some sort of standardized header (a "Hardware Descriptor"). This data
> would be used by the de-archiver to interpret the stream of data read from
> the data area (the "Hex Block").
I think you're thinking of this in terms of a large binary file encoded as
ASCII hex. If so, this is not what's being proposed. What is being
discussed is a format which actually describes the physical medium. For
example, on floppy:
<MEDIA TYPE=FLOPPY SIZE=5.25 SIDES=1 DENSITY=SINGLE FORMAT=GCR TRACKS=35
SECTORS=16 SECTORSIZE=256>
<VOLUME>Apple ][ System Disk</VOLUME>
</MEDIA>
<DATA>
<TRACK 0><SECTOR 0>
HERE WOULD BE THE ASCII HEX DATA FOR TRACK 0, SECTOR 0
</SECTOR></TRACK>
...
<TRACK 34><SECTOR 15>
HERE WOULD BE THE ASCII HEX DATA FOR TRACK 34, SECTOR 15
</SECTOR></TRACK>
</DATA>
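One nice property of a layout like the sketch above is that a few lines of code can pull the data back out, with no knowledge of the guest OS. A toy parser (Python; the tag shapes are taken from the sketch, which is not a finalized spec):

```python
import re

def tracks_to_bytes(archive_text):
    """Toy extractor for the sketched <TRACK n><SECTOR m> ... </SECTOR>
    </TRACK> layout: returns {(track, sector): bytes}, ignoring any
    whitespace inside the hex data."""
    pat = re.compile(
        r"<TRACK (\d+)><SECTOR (\d+)>\s*([0-9A-Fa-f\s]*?)\s*</SECTOR></TRACK>",
        re.S)
    return {(int(t), int(s)): bytes.fromhex("".join(h.split()))
            for t, s, h in pat.findall(archive_text)}

sample = "<DATA>\n<TRACK 0><SECTOR 0>\nA5A5 FF00\n</SECTOR></TRACK>\n</DATA>"
print(tracks_to_bytes(sample))  # {(0, 0): b'\xa5\xa5\xff\x00'}
```

A real implementation would use a proper XML parser and validate against the MEDIA header, but the point stands: the archive is readable with nothing beyond a text decoder.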
> I think that we should start compiling a list of the various media we want
> represented and how that media is organized natively. I don't mean "well, it
> has blocks and sectors" either. We should examine the exact format down to
> the actual numbers (i.e., "2048 blocks of 256-bytes recorded twice"). Seeing
> how the various data stores are organized should bring some clarity to how
> we should represent it.
I agree. This would be useful. Does someone want to volunteer to do
this?
--
Sellam Ismail Vintage Computer Festival