On 12/07/2018 11:38 AM, Rod G8DGR via cctalk wrote:
> Oh good how do you set them to 110 baud?
>
Oh, WOW! Good catch: it only goes down to 300 baud! Major
screwup; ought to be reported to the developers.
Jon
Will be having some extra brochures for plated memory from Memory Systems Inc. in El Segundo, Calif. Available soon. Appears to be from month 4 of 1973, and 3 different sheets, both sides.

I know about core memory, but this is something I never used.

These may be out there already somewhere?

Ed#
Hi All,
I have a PDP-8/e that's missing the knob on the front panel.
Does anyone have a spare for sale, or know of a compatible part?
Looking up the DEC parts numbers has turned up nothing but the
engineering drawings...
I've never seen another one in person so I can't tell if the knob is meant
to attach to a shaft on the rotary switch, or if the knob itself is meant
to have a shaft. Either way, I'm lacking both, so have been making do with
a screw wrapped in tape.
Regards,
-Tom
mosst at sdf.lonestar.org
SDF Public Access UNIX System - http://sdf.lonestar.org
Teraterm on Windows definitely goes to 110 baud. I use it all the time...
Rob.
On 12/7/2018 10:38 AM, Rod G8DGR via cctalk wrote:
> Oh good how do you set them to 110 baud?
> Rod
>
>
> Sent from Mail for Windows 10
Listed these on eBay a few times. No takers.
Being offered here for the price of USPS Media Mail cost. Total of 52 lbs of
books in 2 boxes. I estimate shipping at $137.
Price will be actual shipping cost payable by PayPal.
See books at http://www.myimagecollection.com/ITBooks/
Slides pause for 5 seconds each or you can click the Pause button.
No pressure, but they hit the trashcan 12/14/2018. :-)
> From: Paul Birkel
>> I thought RL0x drives use an IBM 5440 type pack (as used on the IBM
>> System/3 .... DEC may have used their own format (and servo track
>> stuff), I don't know much about the 5440.
> Sounds to me like it was different, but in a good way?
I took a look, and found a manual for a 5440:
http://bitsavers.org/pdf/ibm/system3/GA33-3002-0_5444_5440_ComponentsDescr_…
and the details (format, etc) are indeed different. The packs are physically
compatible, but that's as far as it goes.
Noel
The MAME folks have the 68K versions of the terminals mostly working in simulation
now, and are wondering if anyone could dump the firmware from the 88K model, which
has a similar hardware design.
tip is the standard BSD program for calling other unix systems. It's a fine
terminal program. 'tip -110 com1' is all you'd need to do in this case :).
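Where tip (or cat) isn't available, the pacing itself is trivial to do by hand. Here's a minimal sketch of feeding a tape image at ASR33 reader speed; pyserial and the port name are assumptions for illustration, not anything from this thread:

```python
import time

# 110 baud with start bit + 8 data bits + 2 stop bits = 11 bits
# per character, so a real ASR33 feeds roughly 10 chars/second.
CHAR_DELAY = 11 / 110.0  # seconds per character at 110 baud

def feed_tape(data, write, delay=CHAR_DELAY, sleep=time.sleep):
    """Send a paper-tape image one byte at a time, paced like an
    ASR33 reader, through the supplied write() callable."""
    for byte in data:
        write(bytes([byte]))
        sleep(delay)
    return len(data)

# Typical use (pyserial assumed installed; device name is a guess):
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 110, bytesize=8,
#                        stopbits=serial.STOPBITS_TWO)
#   with open("focal.bin", "rb") as f:
#       feed_tape(f.read(), port.write)
```

Taking the write and sleep functions as parameters keeps the pacing logic separate from the hardware, so the same loop works with any serial library.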
Warner
On Fri, Dec 7, 2018 at 10:39 AM Rod G8DGR <rodsmallwood52 at btinternet.com>
wrote:
> Er, what's tip?
>
> Sent from Mail <https://go.microsoft.com/fwlink/?LinkId=550986> for
> Windows 10
>
> *From: *Warner Losh via cctalk <cctalk at classiccmp.org>
> *Sent: *07 December 2018 17:36
> *To: *systems_glitch <systems.glitch at gmail.com>; General Discussion:
> On-Topic and Off-Topic Posts <cctalk at classiccmp.org>
> *Subject: *Re: PDP-8/e
>
> These days I just use tip.
>
> Warner
>
> On Fri, Dec 7, 2018, 10:25 AM systems_glitch via cctalk <cctalk at classiccmp.org> wrote:
>
> > Indeed, unless you need character pacing.
> >
> > Thanks,
> > Jonathan
> >
> > On Fri, Dec 7, 2018 at 12:13 PM Guy Sotomayor Jr via cctalk <cctalk at classiccmp.org> wrote:
> >
> > > I just use 'cat'. Seems to work fine. ;-)
> > >
> > > TTFN - Guy
> > >
> > > > On Dec 7, 2018, at 4:57 AM, Pete Turnbull via cctalk <cctalk at classiccmp.org> wrote:
> > > >
> > > > On 07/12/2018 09:59, Rod G8DGR via cctalk wrote:
> > > >
> > > >> OK now I need a little help.
> > > >> Does anybody know of a terminal emulation program that will simulate the reader on an ASR33?
> > > >> I know about RIM and BIN loaders but how and what to feed them I have long forgotten
> > > >
> > > > For a Unix or Linux machine, there's send and rsend, and several other utilities, that you can find at Kevin McQuiggin's web page:
> > > > http://highgate.comm.sfu.ca/pdp8/
> > > > and on mine:
> > > > http://www.dunnington.info/public/PDP-8/
> > > >
> > > > --
> > > > Pete
> > > > Pete Turnbull
>
>
> On Thu, Dec 6, 2018 at 4:39 AM Liam Proven via cctalk <
> cctalk at classiccmp.org>
> wrote:
>
> > On Thu, 6 Dec 2018 at 12:44, Tony Duell <ard.p850ug1 at gmail.com> wrote:
> > >
> > > I don't think anyone is questioning that it's a workstation, and that
> it
> > was made by Sun.
> > >
> > > I think the problem is over 'first' and that a Sun-2 is not going to be
> > the 'first' model.
> >
> > Ah! Excellent point. I have to admit, I was totally unfamiliar with
> > the very early Sun products. I was happy with my little ZX Spectrum
> > back then, and being about 14, wasn't paying much attention to the
> > world of academic Unix usage. :-)
> >
> > Looking up the SUN-1, I see that it lacked a graphics adapter, and was
> > a text-only machine. I didn't know that. That alone means that it's
> > not really what I think of when I think of a Sun workstation: no
> > windowing system means that for me it's not really a workstation.
>
> The Sun-1 absolutely had a framebuffer and a display and was not a
> text-only machine, it did 1024x800 at 1bpp, had a mouse, the whole deal.
>
> See the picture in this article, for example:
> https://www.britannica.com/topic/Sun-Microsystems-Inc
I can 100% confirm this. I have a Sun 1/100 that runs just fine... and it
fires up Suntools with mouse and windows. Windowing is pretty much the same
as on any other Sun running circa SunOS 3.2. It came standard with a B/W
framebuffer. I also have the color framebuffer option (not currently
installed... I don't have a monitor that works with that). The base system
has a monitor that does what looks like the standard Sun 1152x900 resolution
(I've not confirmed that, but it sure looks the same as my other early
Suns...)
Earl
I thought folk might enjoy this short-ish (~12 min) YouTube video
showing startup of arguably the first ever Sun workstation, from a
contemporaneous SunOS... I did.
Permission obtained before x-posting, naturally.
--
Liam Proven - Profile: https://about.me/liamproven
Email: lproven at cix.co.uk - Google Mail/Hangouts/Plus: lproven at gmail.com
Twitter/Facebook/Flickr: lproven - Skype/LinkedIn: liamproven
UK: +44 7939-087884 - ČR (+ WhatsApp/Telegram/Signal): +420 702 829 053
---------- Forwarded message ---------
From: Walter Belgers <walter+rescue at belgers.com>
Date: Sat, 1 Dec 2018 at 12:34
Subject: Re: [rescue] Sun2/120 SunOS 3.2 suntools movie (was: advise
on Sun2 disk install)
To: The Rescue List <rescue at sunhelp.org>
Hi,
Another update in case you are interested:
I rescued a keyboard and mouse to go with the Sun2. I also installed SunOS 3.2
on disk. I took a different route: I installed FreeBSD, installed tme on top
of that and using the information at
https://people.csail.mit.edu/fredette/tme/,
http://www.heeltoe.com/index.php?n=Retro.Sun2 and
http://typewritten.org/Projects/Sun/8-4841.html I installed SunOS 3.2 from
virtual tapes onto a virtual hard drive. I then copied the virtual drive to a
real drive and hooked it up. I could then boot SunOS 3.2!
I then took the one TTL monitor I have (for the 2/50) and hooked it up to a
bwtwo. At first it did not work; apparently it must be in a specific slot. I
added 1MB as well, so the cage is fully populated. That extra MB is used by
the bwtwo. The monitor still worked and I was able to run the graphical
windowing system.
I had the system on the internet for a couple of hours yesterday, some people
logged in remotely and it still felt surprisingly fast. Only when you start
hammering the disk is it slow (SCSI-1 is slower than ESDI drives, I read).
I made a movie of the box, it can be viewed here: https://youtu.be/CoAYs0Uc7As
Cheers,
Walter.
_______________________________________________
rescue list - http://www.sunhelp.org/mailman/listinfo/rescue
What are people doing for early Sun monitor replacements? I've got a Sun
3/60 that I'd like to hook up to a modern monitor, but am unaware of any
means of doing so.
Thanks!
Kyle
On 12/07/2018 03:59 AM, Rod G8DGR via cctalk wrote:
>
> Does anybody know of a terminal emulation program that will simulate the reader on an ASR33?
> I know about RIM and BIN loaders but how and what to feed them I have long forgotten
> My PDP-8 course completion certificate is dated November 1975.
>
> Rod Smallwood
>
>
>
> Sent from Mail for Windows 10
>
>
I use minicom on Linux, but don't know if a Windows version
is available. It has allowed me to connect to a bunch of
older devices and send data back and forth.
Jon
Several years ago when I restored my 8/M, I whipped up
a quick and dirty program that uses TCL/Tk to make a
little graphical interface for selecting, reading, and punching
paper tape images. When running, it looks something
like this:
https://www.cs.drexel.edu/~bls96/museum/asrscreen.jpg
You need the P9P (Plan9 from user space) libraries installed
to build it, but I could whip up a binary for you if you'd like
to try it out. I typically run it in a shell script that looks like:
#!/bin/sh
# Wrap the asr33 tool in a styled xterm; pass any arguments through.
xterm -vb -sb -geom +180+10 -fg '#D0D0FF' -bg black -e asr33 "$@"
BLS
--------------------------------------------
On Fri, 12/7/18, Rod G8DGR via cctalk <cctalk at classiccmp.org> wrote:
Subject: PDP-8/e
To: "General Discussion: On-Topic and Off-Topic Posts" <cctalk at classiccmp.org>
Date: Friday, December 7, 2018, 4:59 AM
Hi All
Seasons Greetings..
My PDP-8/e was long due for a major overhaul.
1. So everything out
2. Big Hoover job on the Omnibus
3. Bring up on Variac - No smoke
4. Check PSU volts. - All OK
5. Power off
6. Install minimal System - Front Panel, Three CPU cards, RFI shield, 4k Core and Bus term.
7. Yup, all looks in right order
8. Power on
9. Toggle in standard AC count-up program
10. Clear + Cont
11. And they are racing at Rockingham!!
12. Yup, counts up just like it should.
13. Let it run for a while.
14. All stop.
15. PSU off
16. Insert Async Card (It's 110 baud only)
17. Fire up VT100. Beep - yup, it's alive.
18. Toggle in keyboard echo test.
19. Clear + Cont - Program runs
20. And.. yes, keyboard gets echoed back.
OK, now I need a little help.
Does anybody know of a terminal emulation program that will simulate the reader on an ASR33?
I know about RIM and BIN loaders but how and what to feed them I have long forgotten.
My PDP-8 course completion certificate is dated November 1975.
Rod Smallwood
Sent from Mail for Windows 10
Congrats!!
On Fri, Dec 7, 2018 at 3:59 AM Rod G8DGR via cctalk <cctalk at classiccmp.org>
wrote:
> [Rod's message quoted in full; trimmed]
Hi All
Seasons Greetings..
My PDP-8/e was long due for a major overhaul.
1. So everything out
2. Big Hoover job on the Omnibus
3. Bring up on Variac - No smoke
4. Check PSU volts. - All OK
5. Power off
6. Install minimal System - Front Panel, Three CPU cards, RFI shield, 4k Core and Bus term.
7. Yup all looks in right order
8. Power on
9. Toggle in standard AC count up program
10. Clear + Cont
11. And they are racing at Rockingham!!
12. Yup counts up just like it should.
13. Let it run for a while.
14. All stop.
15. PSU off
16. Insert Async Card (It's 110 baud only)
17. Fire up VT100. Beep - yup, it's alive.
18. Toggle in keyboard echo test.
19. Clear + Cont - Program runs
20. And .. yes keyboard gets echoed back.
OK now I need a little help.
Does anybody know of a terminal emulation program that will simulate the reader on an ASR33?
I know about RIM and BIN loaders but how and what to feed them I have long forgotten
My PDP-8 course completion certificate is dated November 1975.
Rod Smallwood
Sent from Mail for Windows 10
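On "what to feed them": a BIN-format tape is just a leader of 0x80 frames, an origin frame pair, each 12-bit word split into two 6-bit frames, a 12-bit checksum, and a trailer. A sketch of building one in software, from memory; treat the framing and checksum details as assumptions to verify against a real BIN loader listing, not gospel:

```python
def bin_encode(origin, words, leader=16):
    """Build a PDP-8 BIN-loader paper-tape image: a leader of 0x80
    frames, an origin frame pair (0x40 flag on the high frame), each
    12-bit word as two 6-bit frames, a 12-bit checksum over the
    origin/data frames, then a trailer. Format details are from
    memory -- verify against a real BIN tape before relying on it."""
    frames = [0x40 | ((origin >> 6) & 0x3F), origin & 0x3F]
    for w in words:
        frames += [(w >> 6) & 0x3F, w & 0x3F]
    checksum = sum(frames) & 0o7777        # 12-bit sum of frames
    tape = bytearray([0x80] * leader)      # leader
    tape += bytes(frames)                  # origin + data frames
    tape += bytes([(checksum >> 6) & 0x3F, checksum & 0x3F])
    tape += bytes([0x80] * leader)         # trailer
    return bytes(tape)

# e.g. one word, 7300 (CLA CLL), loaded at 0200:
image = bin_encode(0o200, [0o7300])
```

An image built this way is exactly what you'd then squirt down the line at 110 baud once the BIN loader is toggled in.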
I bought this and a line clock module on eBay and it turns out the person I got it for
only needed the clock, so it's available for $50 plus shipping
You need this if you're going to try to run Unix on an 11/35 or 40 and they are pretty
tough to find.
Hello David
I saw your posting on the cctalk mailing list regarding RSX180.
It is Hector Peraza who has been tinkering with this. He intends to make the
full source code available via SourceForge or GitHub but is still working
on preliminary web pages, documentation, etc. No doubt he will provide you
with more details.
I've been tinkering with a Z280 system designed by Bill Shen (the Z280RC on
the RetroBrew web site at
https://www.retrobrewcomputers.org/doku.php?id=builderpages:plasmo:z280rc )
and have contacted Hector about porting it to the Z280.
A Z180 system is also on my hobbyist "to-do" list. Should you decide to
produce another run I'd be interested in one. Most likely I'd use a
CompactFlash-on-IDE interface and a GoTek-style floppy emulator with it.
--
Tony Nicholson <tony.nicholson at computer.org>
I don't know who did it, but here's a video of a P112 running RSX:
https://www.youtube.com/watch?v=5s6IOCCk3Uw
If the creator of this thing is reading, I'd very much like to get my
hands on RSX-180 and put it up on the P112 page at Sourceforge, Gitlab,
et al.
--
David Griffith
dave at 661.org
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
The re-work of that Dallas nvram chip is just beautiful. It makes me
ashamed of myself. (I just chopped into the epoxy with a pocket knife,
soldered two leads, and velcroed the new batteries somewhere inside the
machine I installed it in.)
I salute you sir.
Jeff
At 12:24 PM 4/12/2018 -0800, you wrote:
>On Mon, Dec 3, 2018 at 5:09 AM Christian Corti via cctalk <
>cctalk at classiccmp.org> wrote:
>
>> Hi,
>> how does one open a RL02 disk pack? A couple of packs need cleaning but I
>> can't figure out how to open them...
>>
>>
>I was curious to see if there would be any replies to this. I have just shy
>of 40 RL02 packs, and a few of them had bad scratches rendering them
>useless. Therefore, I attempted to open them in a non-destructive way, just
>to see if it was possible. So far, I haven't had much luck. Also, I noted
>that while all the packs I attempted were DEC (not clones), they did have
>slightly different construction and mechanics, probably based on production
>date.
>
>- Earl
Ha, this made me realize I don't know either. Despite that, I now have some RL02K-DC
packs, and one RL02 drive.
Dug one out. After a few moments of being stumped, found the trick. Here's how:
On the blue handle, top center, look on the section that has the pivot pins.
There is a flat plastic 'button' on which one end is slightly concave.
With the handle DOWN (flat), put finger on that concave end of the button, and
push sideways, till the button reaches the end of travel.
With it still at end of travel, lift the handle up to vertical. At about 45 degrees
(half way) you'll feel a resistance, then hear a thump.
Once the handle is vertical, lift the pack up by the handle. The lower cover is
separated and the disk is exposed.
But it's still mostly covered; only a slot for the heads is open.
You could lever open the several latches that hold the bottom inner cover on if you wanted.
Guy
Since RS6K systems have been mentioned recently, I thought I should ask
for advice. I have a Powerserver 320H with 32MB of RAM, an 8-port async
EIA-232 adapter, a SCSI adapter and a 400MB HD. No framebuffer or
keyboard; no LAN card. Because of the last issue, I haven't tried to do
much with it. I tried getting it to talk on the serial console (Serial
1 connector in the back), following all the advice I found on the net:
the pinout of the MODU serial connector, the null modem cable with full
handshake (also driving the DCD line in the 320H). I turn it on in
service mode, and it spits out a lot of LED codes, finds the HD, spins it
up and apparently loads something (I suppose AIX) from it. But nothing
is ever sent out on the Serial 1 port, or any other serial port. I
believe that during the POST it fails to initialize the Serial 1 and 2
ports, because the 320H's DTR and RTS lines are never asserted (the
ports on the async RS-232 card do assert these on power up, but they are
equally silent). I made sure that the CTS, DSR and DCD inputs of the
320H are being driven by the external terminal.
I made a video of the LED codes during POST and found some problems;
here are the codes and their meaning:
120 BIST starting a CRC check on the 8752 EPROM.
122 BIST started a CRC check on the first 32K bytes of the OCS EPROM.
124 BIST started a CRC check on the OCS area of NVRAM.
130 BIST presence test started.
101 BIST started following reset.
153 BIST started ACLST test code.
154 BIST started AST test code.
100 BIST completed successfully; control was passed to IPL ROS.
211 IPL ROM CRC comparison error (irrecoverable). !!!!!!!
214 Power status register failed (irrecoverable). !!!!!!!
218 RAM POST is looking for good memory.
219 RAM POST bit map is being generated.
290 IOCC POST error (irrecoverable). !!!!!!!
291 Standard I/O POST running.
252 Attempting a Service mode IPL from 7012 DBA disk-attached
    devices specified in IPL ROM Default Device List.
253 Attempting a Service mode IPL from SCSI-attached devices
    specified in the IPL ROM Default Device List.
299 IPL ROM passed control to the loaded program code.
814 NVRAM being identified or configured.
538 The configuration manager is going to invoke a configuration
    method.
813 Battery for time-of-day, NVRAM, and so on being identified or
    configured, or system I/O control logic being identified or
    configured.
538 The configuration manager is going to invoke a configuration
    method.
520 Bus configuration running.
538 The configuration manager is going to invoke a configuration
    method.
869 SCSI adapter being identified or configured.
538 The configuration manager is going to invoke a configuration
    method.
954 400MB SCSI disk drive being identified or configured.
538 The configuration manager is going to invoke a configuration
    method.
539 The configuration method has terminated, and control has
    returned to the configuration manager.
551 IPL varyon is running.
553 IPL phase 1 is complete.
The code 290 above is particularly worrisome, I think. The NVRAM
battery reads 2.85 volts even after all these years. I reseated all of
the chips that are in sockets, all of the cards, and connectors; there was
no change. Any ideas on how to proceed?
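For comparing another 320H's POST against this sequence, the trace reduces to a trivial lookup. A throwaway sketch; the table simply transcribes the irrecoverable codes called out in the list above:

```python
# Irrecoverable POST LED codes, transcribed from the list above.
IRRECOVERABLE = {
    "211": "IPL ROM CRC comparison error",
    "214": "Power status register failed",
    "290": "IOCC POST error",
}

def fatal_codes(trace):
    """Return (code, meaning) pairs for irrecoverable codes, in order."""
    return [(c, IRRECOVERABLE[c]) for c in trace if c in IRRECOVERABLE]

# The BIST/IPL portion of the trace observed on this machine:
trace = ["120", "122", "124", "130", "101", "153", "154", "100",
         "211", "214", "218", "219", "290", "291", "252", "253", "299"]
```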
carlos.
Hi all --
I picked up a ZAX ICD-178 in-circuit debugger in the hopes of using it
to help debug / reverse-engineer a couple of 68k-based machines I have.
This unit can work with 68000-, 68010-, and 68008-based machines; however,
a different emulation CPU module is used for the 68008 vs. the
68000/68010. Unfortunately, mine came with only the 68008 CPU module.
Since this is a fairly uncommon device, I figure it's unlikely, but just
in case someone's sitting on a pile of parts somewhere, if you have the
68000/68010 Emulation CPU module ("CPU S-813") please drop me a line.
Thanks as always,
Josh
>
> Date: Sun, 2 Dec 2018 16:49:37 -0800
> From: Alan Perry <aperry at snowmoose.com>
> Subject: Re: sun model 47. code 4/40 does it have the nvram with
> battery?
>
> The RDI Britelite (laptop) is a SPARCstation IPX system board in a
> laptop chassis.
>
I have IPC, IPX, and LX versions of the RDI Britelite.
--
Michael Thompson
I'm trying to ID this system I just rescued last month. It is an Eclipse of some type.
Chip date codes are 1976-1977. The front panel is white with blue paddle switches. The rear panel ID plate says it's an 8461, SN 4802-142-157. It has Options 4192, 4010, 4042. It's a 16-slot backplane and was part of an EMI CAT scan system.
There are 16 boards in this, 9 are DG and the rest may be EMI scanner boards.
Not sure what sales model it is? ie C330 or C130 or ???
The front panel is trashed, so what are the differences between the front panels from other models?
Are there any manuals for this out there?
The backplane has 2 damaged chips. One is a 74S133; the other has "20a", if I read it right. Looks to be a hex inverter of some type??
Any help would be appreciated
Thanks, Jerry
Ouch, what was I thinking? Mentioning a project I fundamentally can't talk in detail about yet; not very smart.
Thus spawning a thread guaranteed to go chaotic. Sorrrry!
Also I've changed the title, since it's disrespectful to drag a deceased person's name along with this.
I've been busy a couple of days, didn't have time to follow the thread. Still busy, but briefly with extracts:
@ Keelan Lightfoot
> Our problem isn't ASCII or Unicode, our problem is how we use computers.
> Markup languages are a kludge, relying on plain text to describe higher level concepts.
[snip lots]
Nice post, and I agree with all of it. This is the type of thinking needed, and in general much like my approach. Except I'm a software and hardware designer, synthesist, and pursue practical results. Or at least _try_ to.
Funny you mention keyboards, as that's one of the project's bootstrapping steps. First a simulated keyboard (html & js initially) to allow free experimentation, later an open hardware design suitable for makers, 3D printing, etc. The crappiness of commercial keyboards is a bugbear of mine. Keyboards should be MUCH better than they are. And last forever.
@ Grant Taylor & Toby Thain
>> - bold
>> - italic
>> - overline
>> - strike through
>> - underline
>> - superscript exclusive or subscript
>> - uppercase exclusive or lowercase
>> - opposing case
>> - normal (none of the above)
>This covers only a small fraction of the Latin-centric typographic
>palette - much of which has existed for 500 years in print (non-Latin
>much older). Computerisation has only impoverished that palette, and
>this is how it happens: Checklists instead of research.
>Work with typographers when trying to represent typography in a
>computer. The late Hermann Zapf was Knuth's close friend. That's the
>kind of expertise you need on your team.
More generally, an encoding standard needs to allow for ANY kind of present and future characters, fonts and modifiers.
But even more critically, it has to allow for such things without reference to 'central standards groups'. Enforced centralism is poison. For instance Unicode, and that vast table of symbols - that still doesn't include decent arrows (and many other needs.) What's required is a way for any bunch of people to be able to define their own character sets, fonts, adornments, etc, create definition files for them, and use those among themselves. Either embedded in documents or used as referenced defaults - both must be possible. It is easy enough to define a base encoding that allows this. And in which legacy coding (ASCII, Unicode, etc) is one of the available defaults.
The point with embedding such capabilities in the base coding scheme, and then building the superstructure of computing language and OS on top of that, is to achieve a scheme in which human language and typesetting freedom is available through the entire structure.
@ Cameron Kaiser
>> Surely a Chinese or Japanese based programming language could be
>> developed.
>The Tomy Pyuuta has a very limited BASIC variant called G-BASIC which has
>Japanese keywords and is programmed with katakana characters (such as "kake" ...
Exactly, except it should be possible for any group (eg who speak whatever language) to modify existing computer language to their own human dialect. With compilers and assemblers this is not trivial, but with dictionary-based interpreters it's much easier. The keywords and operators are all just looked up in tables to achieve effects, and what characters or ideograms serve as the keywords are entirely flexible.
Then imagine one interpreted scripting language that serves multiple functions: document layout, user apps and OS scripting. And that scripting language can be phrased in any human language, AND includes full typesetting of itself.
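The "keywords are just table lookups" point is easy to demonstrate. A toy sketch: 'kake' (print) is borrowed from the G-BASIC example above, while 'oke' and everything else here is invented purely for illustration:

```python
# Toy dictionary-driven interpreter: the operations are fixed; the
# keyword table that names them is swappable per human language.
def op_print(args, env):
    env["out"].append(" ".join(args))

def op_set(args, env):
    env[args[0]] = args[1]

OPS_EN = {"print": op_print, "set": op_set}
OPS_JA = {"kake": op_print, "oke": op_set}   # 'oke' is made up

def run(program, keywords):
    """Interpret a line-oriented script using the given keyword table."""
    env = {"out": []}
    for line in program.splitlines():
        if line.strip():
            keyword, *args = line.split()
            keywords[keyword](args, env)
    return env["out"]
```

Relocalising the language is just handing run() a different table; the operations themselves never change.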
@ Liam Proven
> There are a wider panoply of options to consider.
...
> Try to collapse all these into one and you're doomed.
Lots of great references, thanks! As for doomed... well we'll see. I think the trick is to merely provide a mechanism for including extensible classes of 'stuff' in the base coding. Because being rigid about the mechanics of the higher level capabilities really is fatal. Fortunately, 'flexible extensibility' isn't so hard to do. Especially when you have a bunch of disused legacy control codes to work with.
At 02:34 PM 28/11/2018 -0700, Jim Manley wrote:
>Some computing economics history:
>
>I'm an engineer and scientist by both education and experience,
[snip]
>A theoretically "superior" encoding may
>not see practical use by a significant number of people because of legacy
>inertia that often makes no sense, but is rooted in cultural, sociological,
>emotional, and other factors, including economics.
Yep. I'm intensely aware of the economics and inertia factors. Points:
1. The ASCII-replacement coding is just a part of a wider project.
2. It's all a private project, for fun.
3. And yet there's a convergence of developments suggesting an opportunity in near future.
MS/Intel are bastardizing, backdooring and box-closing the Wintel platform into something so evil even non-technical people are getting sick of it. This will continue, due to political agenda of MS/Intel.
Simultaneously the competing Linux world is fragmenting into churn-chaos. (Complex but irreversible reasons.)
Apple is... Apple. Becoming a platform based mostly on virtue signalling, and increasingly as bad as Wintel.
4. If it ever is released, it will be freeware, open hardware and copylefted. DRM specifically banned from the platform. With many quite appealing wow-factors, several of which will be totally killer. It is not politically possible for MS/Intel/Apple to follow this path.
[snip]
>Logic and reasoning are
>simply nowhere near enough to create the conditions necessary for
>widespread adoption - sometimes it's just good luck in timing (or, bad
>luck, as the case may be).
Absolutely. It's mostly about politics and meme-crafting. Ref: Marx, L Ron Hubbard,
Mao, various religions, etc. Odd isn't it - so few instances of memetic weavers who
used their skills for the benefit of humankind. As opposed to those guys above, who
were all arseholes with pretty twisted objectives. Did you know L Ron Hubbard created
Scientology to win a drunken bet in a bar? Someone said "I bet you can't create a
religion!" And L Ron said "I bet I can!"
>ASCII was developed in an age when Teletypes ...
Yep.
>You can't blame the ASCII developers for lack of foresight when no one in
>their right mind back then would have ever predicted we could have upwards
>of a trillion bytes of memory in our pockets ...
Absolutely. ASCII was a godsend at the time and I take pains to make this clear in the proposal docs. This is a _hindsight_ refactoring.
>Someone thinking that they're going to make oodles of money from some
>supposedly new-and-improved proprietary encoding "standard" that discards
>five-plus decades of legacy intellectual and economic investment, is
>pursuing a fool's errand.
Ha ha, I don't intend to even try to make any money from this. Other objectives.
Though, I'd probably set up a donations channel. Just in case people like it.
> Even companies with resources at the level of
>Apple, Google, Microsoft, etc., aren't that arrogant, and they've
>demonstrated some pretty heavy-duty chutzpah over time. BTW, you won't be
>able to patent what apparently amounts to a lookup table, and even if you
>copyright it,
Patents and copyright are poisons that are crippling intellectual and technological progress. The original concepts were OK, but got over-extended by greed (and still getting worse.) Patents in particular have become a tool for big corporate suppression of any potential competition, while copyright is used to destroy free expression. The entire DRM/copyright legal framework should be nullified.
This project will be intentionally copyright and patent excluding. Freeware, published, open source, open hardware, etc. Just a conformance symbol, which certifies (among other things) that _nothing_ in the systems & software is under any kind of DRM restriction. People buy or build such a system, they own it entirely.
This is why I can't mention details or coined terminology now.
>True standards are open nowadays - the days of proprietary "standards" are
Except that by 'open' they usually mean you can pay a lot of money for a copy of the standard doc.
That's not what I call 'open.'
>a couple of decades behind us - even Microsoft has been publishing the
>binary structure of their Office document file formats. The specification
>for Word, that includes everything going back to v 1.0, is humongous, and
>even they were having fits trying to maintain the total spec, which is
>reportedly why they went with XML to create the .docx, .xlsx, .pptx, etc.,
>formats. That also happened to make it possible to placate governments
>(not to mention customers) that are looking for any hint of
>anti-competitive behavior, and thus also made it easier for projects such
>as OpenOffice and LibreOffice to flourish.
>
>Typographical bigots, who are more interested in style than content, were
>safely fenced off in the back rooms of publishing houses and printing
>plants until Apple released the hounds on an unsuspecting public. I'm
>actually surprised that the style purists haven't forced Smell-o-Vision
>technology on The Rest of Us to ensure that the musty smell of old books is
>part of every reading "experience" (I can't stand the current common use of
>that word). At least I have the software chops to transform the visual
>trash that passes for "style" these days into something pleasing to _my_
>eyes (see what I did there with "severely-flawed" ASCII? Here's how you
>can do /italics/ and !bold! BTW.).
Oh yes, tell me about it. 'Do it this way' bigots of all kinds. Pick any possible thing that can be done more than one way, and there will be camps of fanatics insisting their one way is the true way and all others are crazy.
Finding such artificial dichotomies (or n-way splits) has been a very rich source of inspiration for holistic rethinking.
Btw, again I'll emphasize that when I say ASCII is severely flawed, I mean this in the context of what we know now about information coding requirements, and creating extensible systems. It wasn't 'severely flawed' back when it was created.
>Nothing frosts me more than reading text that can't be resized and
>auto-reflowed, especially on mobile devices with extremely limited display
>real estate. I'm fully able-bodied and I'm perturbed by such bad design,
>so, I'm pretty sure that pages that prevent pinch-zooming, and that don't
>allow for direct on-display text resizing/auto-reflow, violate the spirit
>completely, if not virtually all of the letters, of the Americans with
>Disabilities Act (and similar legislation outside the U.S., I imagine).
Well, there's more than that one requirement. If one wanted to capture a historical document, the absolute image of the page(s) is a core aspect, and can't be 'reflowed'. But otoh, the text content should be accessible as a searchable and reflowing character stream. A decent coding scheme will support both objectives simultaneously.
Btw I'm constantly amazed by how badly tech docs are being 'digitized' even now. Service manuals with fold out schematics, screened tonal multi-colour illustrations etc... just endless awful digital copy fails. Meanwhile the original paper copies get rarer and rarer, because idiots think 'those are all online now, paper copies are obsolete', and throw them out.
@ Keelan Lightfoot
>from a usability standpoint, control codes are
problematic. Either the user needs to memorize them, or software needs
to inject them at the appropriate times.
You're thinking of 'control codes' as something you type by holding down CTRL and some other key. Yes, these are a pain and I personally hate UI's that depend on memorising lots of them.
But strictly speaking 'control codes' are the byte codes 0x00 to 0x1F, in the ASCII table. Most of which are now little used apart from in hardware protocols. How those would be brought into use in an ASCII-replacement and new UI, is another topic. Sadly, part of the area I won't talk about. Just bear in mind that this system includes new keyboard designs, and 'things that have to be memorised' are fine for some people but not for others (including me.)
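To make that range concrete, here's a small illustrative Python sketch (not from the original post) enumerating the 32 control codes by their standard X3.4 abbreviations:

```python
# The 32 ASCII control codes occupy 0x00-0x1F (plus DEL at 0x7F).
# Names follow the X3.4-1967 table.
CONTROL_NAMES = [
    "NUL", "SOH", "STX", "ETX", "EOT", "ENQ", "ACK", "BEL",
    "BS",  "HT",  "LF",  "VT",  "FF",  "CR",  "SO",  "SI",
    "DLE", "DC1", "DC2", "DC3", "DC4", "NAK", "SYN", "ETB",
    "CAN", "EM",  "SUB", "ESC", "FS",  "GS",  "RS",  "US",
]

def is_control(byte_value: int) -> bool:
    """True for the ASCII control range 0x00-0x1F, and DEL (0x7F)."""
    return byte_value < 0x20 or byte_value == 0x7F

# Tab, line feed and carriage return are about the only control codes
# ordinary text files still use; the rest are the "little used" ones.
common = [name for code, name in enumerate(CONTROL_NAMES) if chr(code) in "\t\n\r"]
```

Most of the others survive only in hardware and wire protocols, which is exactly the "wasted resource" point made later in this thread.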
Ha ha, even ctrl-C and ctrl-V for cut and paste are a pain, not because they must be memorised, but because the ergonomics of distorting the fingers to type them, is horrible for such a common action. Stuff like this...
Oh, and if you are wondering if I'm imagining some huge keyboard with even more keys, no. Personally I use a short ('10-keyless') keyboard, and don't want to ever have to go back to stupidly big keyboards.
>In addition to crusty old computers, I also enjoy the company of three
crusty old Linotypes. In fact, that's what got me thinking about this
stuff in the first place.
Ah, I am intensely jealous! I wish I could find an old but working linotype. And someone to teach me how to use it. Hot lead, yeah! (I used to cast things in lead as a child, have done bronze casting and intend to do more.)
I have some exposure to typesetting & printing; enough to know how much I don't know. Some articles on related topics are in-progress, but not yet posted.
Anyway, back on topic (classic computing.) Here's an ASCII chart with some control codes highlighted.
http://everist.org/ASCII/ascii_reuse_legend.png
I'm collecting all I can find on past (and present) uses of the control codes. Especially the ones highlighted in orange. Not having a lot of success in finding detailed explanations, beyond very brief summaries in old textbooks.
Note that I'm mostly interested in code interpretations in communications protocols. Their use in local file encodings not so much, since those are the domain of legacy application software and wouldn't clash with redefinition of what the codes do, in future applications.
And now, back to machining a lock pick for a PDP-8/S front panel cylinder lock.
http://everist.org/NobLog/20181104_PDP-8S.htm#locks
Guy
Hi folks,
In my long ongoing quest to image and otherwise copy the hard sectored floppies with my Exidy Sorcerer I'm trying to find other floppy drives I can use with it, since I don't like relying on just one set of drives. I have a Cumana dual drive set that came with my TRS-80 Model 1 that I thought might be jumperable to 300rpm; indeed I can see drive activity if I try and boot.
Does anyone know where I might find the/a manual for the drives? They're marked as Intertec 5002040 so I've been all over Superbrain docs and PDFs on bitsavers but haven't found anything so far.
Cheers!
--
adrian/witchy
Owner of Binary Dinosaurs, the UK's biggest private home computer collection
t: @binarydinosaurs f: facebook.com/binarydinosaurs
w: www.binarydinosaurs.co.uk
Firstly, my goal: to run MazeWar on something other than a NeXT.
I thought this would be fairly straightforward, starting with getting SunOS
4.1.3 booting with QEMU. Turns out, I've not had much luck. I get different
error messages depending on what machine type I'm emulating. I can start
booting from the .iso running this command:
$ qemu-system-sparc -bios ss5.bin -M SS-5 -m 64M \
    -drive file=sunos413.img,if=scsi,bus=0,unit=3,media=disk \
    -drive file=SunOS_4.1.3_sparc.iso,format=raw,if=scsi,bus=0,unit=6,media=cdrom,readonly=on \
    -boot d
and get a near immediate panic:
machine type 0x80 in NVRAM
panic: No known machine types configured in!
Data Access Exception
ok
Okay, how about trying the default BIOS and SS-20? It definitely gets
further, but no dice...
Boot: vmunix
Size: 843776+2315672+64016 bytes
SuperSPARC/SuperCache: PAC ENABLED
SunOS Release 4.1.3 (MUNIX) #3: Mon Jul 27 16:47:33 PDT 1992
Copyright (c) 1983-1992, Sun Microsystems, Inc.
cpu = SUNW,SPARCstation-20
mod0 = TI,TMS390Z55 (mid = 8)
mem = 49020K (0x2fdf000)
avail mem = 44707840
Ethernet address = 52:54:0:12:34:56
espdma0 at SBus slot f 0x400000
esp0 at SBus slot f 0x800000 pri 4 (onboard)
sd2: non-CCS device found at target 2 lun 0 on esp0
sd2 at esp0 target 2 lun 0
sd2: <QEMU 0 blocks>
sd2: Vendor 'QEMU', product 'QEMU', (unknown capacity)
sd3: non-CCS device found at target 0 lun 0 on esp0
sd3 at esp0 target 0 lun 0
sd3: corrupt label - wrong magic number
sd3: Vendor 'QEMU', product 'QEMU', (unknown capacity)
ledma0 at SBus slot f 0x400010
le0 at SBus slot f 0xc00000 pri 6 (onboard)
zs0 at obio 0x100000 pri 12 (onboard)
zs1 at obio 0x0 pri 12 (onboard)
SUNW,fdtwo0 at obio 0x700000 pri 11 (onboard)
BAD TRAP: cpu=0 type=29 rp=f00daba4 addr=0 mmu_fsr=0 rw=0
MMU sfsr=0: No Error
regs at f00daba4:
psr=40400cc7 pc=f00a0968 npc=f00a096c
y: 20000 g1: f00c1e78 g2: 40900ce6 g3: fb005ff0
g4: 2c g5: f00db000 g6: 0 g7: 30000000
o0: 1 o1: 8 o2: f00dac00 o3: f0076e50
o4: 0 o5: 0 sp: f00dabf0 ra: f1000000
(unknown): bad trap = 41
rp=0xf00daba4, pc=0xf00a0968, sp=0xf00dabf0, psr=0x40400cc7, context=0x0
g1-g7: f00c1e78, 40900ce6, fb005ff0, 2c, f00db000, 0, 30000000
Begin traceback... sp = f00dabf0
Called from f00c1eb8, fp=f00dac58, args=ff009000 f00dacbc 0 f0314a70 1000
1000
Called from f00a7d34, fp=f00dacc0, args=ff009000 0 ff009000 fb002098
f0314a70 ff009000
Called from f00a7708, fp=f00dad20, args=1080000 d f0102d50 f0102db3 0 2
Called from f00a74e0, fp=f00dad80, args=f0305bd4 f0102d50 fb001000 fb001050
0 0
Called from f00a5028, fp=f00dade0, args=f00fc000 fefe0014 0 0 f0102d50
f0305bd4
Called from f00ac084, fp=f00dae40, args=72 1000 1 1 86 800000
Called from f0015f7c, fp=f00daef8, args=800000 100000 fb000000 2fdd 2000 2
Called from f000539c, fp=f00daf58, args=f00dafb4 f00076c0 10801522 821020ff
200 f00ce600
Called from 403f0c, fp=0, args=4000 3ffd60 1 235598 4000 0
End traceback...
panic: trap
rebooting...
Then I thought, why not use The Machine Emulator to emulate a Sun 3 and
play with something even older? I can't get that to build using clang under
OS X 10.9. I've changed a few lines of source already to get it further
along in the compilation process, but now I'm stuck:
In file included from module.c:48:0:
module.c: In function 'tme_module_init':
module.c:93:3: error: 'lt_preloaded_symbols' undeclared (first use in this
function); did you mean 'lt_dlloader_remove'?
LTDL_SET_PRELOADED_SYMBOLS();
^
Okay, now I'm tired of trying to emulate it (actually, I still would like
to play with QEMU or TME...), so I pulled a SS-20 off the shelf and threw a
SCSI2SD card in it. I didn't have a means of burning a CD, so I used the
SCSI2SD to also emulate a CDROM drive at device 6, and unplugged the
existing CDROM drive. I can boot off of it just fine, and I get now even
further along the process of installation, and am able to format the hard
drive. Right when I think things are going well, I get this:
esp0: Target 6.0 reverting to async. mode
sr0: SCSI transport failed: reason 'data_ovr': giving up
m partition number 3
fastread: can't read label on /dev/rsr0:I/O error
ERROR while loading miniroot disk: /dev/rsd0b
#
Any ideas?
Thanks,
Kyle
I'm not sure how many of you who are on this list are on the vcfed.org
forum, but just for those who aren't, with the help of Dave and Monty from
there, I have recently restored a 4051 I bought a couple years ago to
working condition. Last night with their guidance I connected it to a
Tektronix development system called the Board Bucket, also a 6800 driven
machine that Tek engineers/employees could buy from Tek (I think in parts)
that I purchased previously.
With the 4051 in terminal mode, we were able to demonstrate that the BASIC
in ROM in the Board Bucket can drive graphics on the Tek terminal. This was
pretty much clear after I dumped the ROMs and Dave had a close look at them,
but it was still very cool to see the two working together nonetheless. I
feel very privileged to have both one of the products of Tek's computer
development efforts and the development machine used to help create it
(and/or others) in my possession.
Anyway for those interested, I posted a 4 min video here:
https://www.youtube.com/watch?v=SSkHRzx5Bno
Brad
At 09:49 PM 26/11/2018 -0700, Grant wrote:
>On 11/26/18 7:21 AM, Guy Dunphy wrote:
>> Oh yes, tell me about the html 'there is no such thing
>> as hard formatting and you can't have any even when
>> you want it' concept. Thank you Tim Berners Lee.
>
>I've not delved too deeply into the lack of hard formatting in HTML.
It was a core of the underlying philosophy, that html would NOT allow
any kind of fixed formatting. The reasoning was that it could be displayed
on any kind of system, so had to be free-format and quite abstract.
Which is great, until you actually want to represent a real printed page,
or book. Like Postscript can. Thus html was doomed to be inadequate for
capture of printed works. That was a disaster. There wasn't any real reason
it could not be both. Just an academic's insistence on enforcing his ideology.
Then of course, over time html has morphed to include SOME forms of absolute
layout, because there was a real demand for that. But the result is a hodge-podge.
>
>I've also always considered HTML to be what you want displayed, with
>minimal information about how you want it displayed. IMHO CSS helps
>significantly with the latter part.
Yes, it should be capable of that. But not enforce 'only that way'.
By 'html' I mean the kludge of html-css-js. The three-cat herd. (Ignoring all the _other_ web cats.)
Now it's way too late to fix it properly with patches.
>> Except that 'non-breaking space' is mostly about inhibiting line wrap at
>> that word gap.
>
>I wouldn't have thought "mostly" or "inhibiting line wrap". I view the
>non-breaking space as a way to glue two parts of text together and treat
>them as one unit, particularly for display and partially for selection.
>Granted, much of the breaking is done when the text can not continue (in
>it's natural direction), frequently needing to start anew on the next line.
And that's why in html that character is written " "
You just rephrased my 1.2 lines as 5 lines.
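For reference, that entity resolves to the single Unicode character U+00A0, which a pure 7-bit ASCII stream cannot even represent; a minimal Python illustration (not from the original post):

```python
import html

# The HTML entity &nbsp; decodes to the one character U+00A0,
# the non-breaking space.
nbsp = html.unescape("&nbsp;")

# It has no representation in 7-bit ASCII at all:
try:
    nbsp.encode("ascii")
    ascii_ok = True
except UnicodeEncodeError:
    ascii_ok = False
```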
>> But anyway, there's little point trying to psychoanalyze the writers of
>> that software. Probably involved pointy-headed bosses.
>
>I like to understand why things have been done the way they were.
>Hopefully I can learn from the reasons.
We already established that they thought it a good idea to insert fancy 'no-break'
coding if the user typed two spaces. They thought they were adding a useful feature.
I meant there's no point trying to determine why they were so deluded, and failed to
recognise that maybe some users (Ed) would want to just type two spaces.
>
>> Of course not. It was for American English only. This is one of the
>> major points of failure in the history of information processing.
>
>Looking backwards, (I think) I can understand why you say that. But
>based on my (possibly limited) understanding of the time, I think that
>ASCII was one of the primordial building blocks that was necessary.
YES! I'm not arguing ASCII was _bad_. It was a great advance. There was
no way they could have included the experience of 50 more years of comp-sci.
And now 'we' (the world) are stuck with it for legacy compatibility reasons.
Any extensions have to be retro-compatible.
[snip]
>> Containing extended Unicode character sets via UTF-8, doesn't make it a
>> non-hard-formatted medium. In ASCII a space is a space, and multi-spaces
>> DON'T collapse. White space collapse is a feature of html, and whether
>> an email is html or not is determined by the sending utility.
>
>Having read the rest of your email and now replying, I feel that we may
>be talking about two different things. One being ASCII's standard
>definition of how to represent different letters / glyphs in a
>consistent binary pattern.
That's what you are talking about.
> The other being how information is stored in an (un)structured sequence
> of ASCII characters.
What I'm talking about is not that. It's about how to create a coding scheme
that serves ALL the needs we are now aware of. (Just one of which is for old
ASCII files to still make sense.) This involves both re-definition of some
of the ASCII control codes, AND defining sequential structure standards.
For eg UTF-8 is a sequential structure. So are all the html and css codings,
all programming languages, etc. There's a continuum of encoding...structure...syntax.
The ASCII standard didn't really consider that continuum.
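As a sketch of what "sequential structure layered on a byte encoding" means in the UTF-8 case (illustrative Python, not from the original post):

```python
# UTF-8 is a sequential structure on top of a byte stream: the high
# bits of each byte say whether it starts or continues a character.
text = "A\u00e9\u20ac"          # 'A', 'é', '€'
encoded = text.encode("utf-8")

# 'A' is one byte (ASCII-compatible), 'é' two bytes, '€' three bytes:
lengths = [len(ch.encode("utf-8")) for ch in text]

# Every continuation byte carries the bit pattern 10xxxxxx:
continuations = [b for b in encoded if b & 0xC0 == 0x80]
```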
[snip] ACK - ACK.
>> ----------
[snip]
>> Human development of computing science (including information coding
>> schemes) has been effectively a 'first time effort', since we kept on
>> developing new stuff built on top of earlier work. We almost never went
>> back to the roots and rebuilt everything, applying insights gained from
>> the many mistakes made.
>
>With few notable (partial) exceptions, I largely agree.
Which exceptions would those be? (That weren't built on top of ASCII!)
[big snip]
>> This is a scan from the 'Recommended USA Standard Code for Information
>> Interchange (USASCII) X3.4 - 1967' The Hex A-F on rows 10-15, added
>> here. Hexadecimal notation was not commonly in use in the 1960s. Fig. ___
>> The original ASCII definition table.
>>
>> ASCII's limitations were so severe that even the text (ie ASCII) program
>> code source files used by programmers to develop literally everything
>> else in computing science, had major shortcomings and inconveniences.
>
>I don't think I'm willing to accept that at face value.
I assume you're thinking that ASCII serves just fine for program source code?
This is a bandwagon/normalcy bias effect. "Everyone does it that way and always has,
so it must be good."
Sigh. Well, I can't go into that without revealing more than I wish to atm.
>> A few specific examples of ASCII's flaws:
>>
>> - Missing concept of control vs data channel separation. And so we
>> needed the "< >" syntax of html, etc.
>
>I don't buy that, at all.
>
>ASCII has control codes that I think could be (but aren't) used for
>some of this. Start of Text (STX) & End of Text (ETX), or Shift Out
>(SO) & Shift In (SI), or Device Control 1 - 4 (DC1 - DC4), or File /
>Group / Record / Unit Separators (FS / GS / RS / US) all come to mind.
You're making my point for me. Of course there are many ways to interpret
existing codes to achieve this effect. Some use control codes, others
overload functionality on printable characters. eg html with < and >.
My point is the base coding scheme doesn't allocate a SPECIFIC mechanism
for doing this. The result is a briar-patch of competing ad-hoc methods.
Hence the 'babel' I'm referring to, in every matter where ASCII didn't
define needed functionality.
>Either you're going to need two parallel byte streams, one for data and
>another for control (I'm ignoring timing between them), -or- you're
>going to need a way to indicate the need to switch between byte
>(sub)streams in the overall byte (super)streams. Much of what I've seen
>is the latter.
By definition, in a single baseband data stream it's ALWAYS the case that
time-interleaving is the only way to achieve command/data separation.
>It just happens that different languages have decided to use different
>(sequences of) characters / bytes to do this. HTML (possibly all XML)
>use "<" and ">". ASP uses "<%" and "%>". PHP uses "<?(php)" and ">?".
>Apache HTTPD SSI uses "<!--#" and "-->". I can't readily think of
>others, but I know there are a plethora. These are all signals to
>indicate the switch between data and control stream.
Exactly. Because ASCII does not provide a specific coding. It didn't
occur to those drafting the standard. Same as with all the other...
>
>> - Inability to embed meta-data about the text in standard programmatically
>> accessible form.
>
>I'll agree that there's no distinction of data, meta, or otherwise, in a
>string of ASCII bytes. But I don't expect there to be.
And so every different devel project that needed it, added some kludge on top.
This is what I'm saying: ASCII has no facility for this, but we need a basic
coding scheme that does (and is still ASCII-compatible.)
>Is there any distinction in the Roman alphabet (or any other alphabet in
>this thread) to differentiate the sequence of bytes that makes up the
>quote verses the metadata that is the name of the person that said the
>quote? Or what about the date that it was originally said?
Doesn't matter. The English alphabet (like any other human writing system) naturally
has no protocols to concisely represent data types. That's no reason not to
build such things into the character coding scheme used in computational
machinery. In a way we can read.
Like, for instance written decimal numbers, sci-notation, units, etc.
The written form is much more compact than the spoken forms.
>This is about the time that I really started to feel that you were
>talking about a file format (for lack of a better description) than how
>the bytes were actually encoded, ASCII or EBCDIC or otherwise.
The project consists of several parts. One is to define an extension of ASCII
(with a different name, that I'm not going to mention for fear of pre-emptive
copyright bullshit.) Other parts relate to other areas in comp-sci, in the same
manner of 'see what happens if one starts from scratch.'
It's a fun hobby project. That text I quoted is a small part of one chapter of the docs.
Atm the whole thing is undergoing _another_ major refactoring, due to seeing a better way
to do some parts of it.
>> - Absence of anything related to text adornments, ie italics, underline
>> and bold. The most basic essentials of expressive text, completely
>> ignored.
>
>Again, alphabets don't have italics or underline or bold or other. They
>have to depend on people reading them, and inferring the metadata, and
>using tonal inflection to convey that metadata.
And yet written texts do have adornments (which can be of different forms
in different languages.) So, you're saying a text encoding scheme should not have
any way to represent such things? Why not?
The ASCII printable character set does not have adornments, BECAUSE it is purely a
representation of the alphabet and other symbols. That's one of its failings, since
all 'extras' have to be implemented by ad-hoc improvisations.
>> - Absence of any provision for creative typography. No awareness of
>> fonts, type sizes, kerning, etc.
>
>I don't believe that's anywhere close to ASCII's responsibility.
I'm pretty sure you've missed the whole point. The ASCII definition 'avoided responsibility'
thus making itself inadequate. Html, postscript, and other typographic conventions layer
that stuff on top, messily and often in proprietary ways.
>
>> - Lack of logical 'new line', 'new paragraph' and 'new page' codes.
>
>I personally have issues with the concept of what a line is, or when to
>start a new one. (Aside: I'm a HUGE fan of format=flowed text.)
Then you never tried to represent a series of printed pages in html.
Can be sort-of done but is a pain.
ASCII doesn't understand 'lines' either. It understands physical head printers.
Hence 'carriage return' and 'line feed'. Resulting in the CR/CR-LF/LF wars for
text files where a 'new line' was needed.
Even in format-flowed text there is a typographic need for 'new line'.
It means 'no matter where the current line ends, drop down one line and start
at the left.'
Like I'm typing here.
A paragraph otoh is like that, but with extra vertical space separating from above.
Because ASCII does not have these _absolutely_fundamental_ codes, is why html
has to have <br> and <p>. Not to get into the whole </p> argument.
Note that including facility for real newline and paragraph symbols in the basic
coding scheme, doesn't _force_ the text to be hard formatted. That's a display mode
option.
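The CR/CR-LF/LF wars mentioned above are easy to demonstrate; a small Python sketch (purely illustrative):

```python
# The same two 'lines' in the three historical line-ending conventions.
unix_style = "line one\nline two"      # LF     (Unix)
dos_style  = "line one\r\nline two"    # CR+LF  (DOS/Windows, most wire protocols)
mac_style  = "line one\rline two"      # CR     (classic Mac OS)

# Python's splitlines() has to know about all three, which is itself
# a measure of the cleanup the ambiguity forces on every text consumer:
all_same = (
    unix_style.splitlines() == dos_style.splitlines() == mac_style.splitlines()
)
```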
>
>We do have conventions for indicating a new paragraph, specifically two
>new lines.
Sigh. Like two spaces in succession being interpreted to do something special?
You know in type layout there are typically special things that happen for
paragraphs but not for newlines? You don't see any problem with overloading
a pair of codes of one type, to mean something else?
>Is there an opportunity to streamline that? Probably.
Factors to consider:
- Ergonomics of typing. It _should_ be possible to directly type reasonably typographically
formatted text, with minimal keystrokes. One can type html, but it's far from optimal.
There are many other conventions. None arising from ASCII, because it lacks _everything_ necessary.
- Efficiency of the file/stream encoding. Allowing for infinitely extensible character sets,
embedded specifications of glyph appearances (fonts), layout, and dynamic elements.
- Efficiency and complexity of code to deal with constructing, processing and displaying texts.
>
>I also have unresolved issues of what a page is. (Think reactive web
>pages that gracefully adjust themselves as you dynamically resize the
>window.)
Sure. Now you think of trying to construct a digital representation of a
printed work with historical significance. So it MUST NOT dynamically reformat.
Otoh it might be a total simulation of a physical object/book, page turn physics and all.
[snip]
>> - Inadequate support of basic formatting elements such as tabular
>> columns, text blocks, etc.
>
>ASCII has a very well defined tab character. Both for horizontal and
>vertical. (Though I can't remember ever seeing vertical tab being used.)
Ha ha... consider how the Tab function works in typewriters. What does
pressing a Tab key actually do?
ASCII has a Tab code, yes. It does NOT have other things required for actual use
of tabular columns. So, the Tab functionality is completely broken in ASCII.
That was actually a really bad error on their part. They didn't need foresight,
they just goofed. Typewriters have had a working Tab function since 1897.
>I think there is some use for File / Group / Record / Unit Separators
>(FS / GS / RS / US) for some of these uses, particularly for columns and
>text blocks.
Not the same thing.
>> - Even the extremely fundamental and essential concept of 'tab
>> columns' is improperly implemented in ASCII, hence almost completely
>> dysfunctional.
>
>Why do you say it's improperly implemented?
Specifically, ASCII does not provide any explicit means to set and clear an array of
tabular positions (whether absolute or proportional.)
Hence html has to implement tables, grid systems, etc. But it SHOULD be possible to
type columnar text (with tabs) exactly and as ergonomically as one would on a typewriter.
>It sounds as if you are commenting about what programs do when
>confronting a tab, not the actual binary pattern that represents the tab
>character.
Why would I be talking of the binary code of the tab character?
>What would you like to see done differently?
Sigh. You'll have to wait.
>> - No concept of general extensible-typed functional blocks within text,
>> with the necessary opening and closing delimiters.
>
>Now I think you're asking too much of a character encoding scheme.
ASCII is not solely a 'character encoding scheme', since it also has the control codes.
But those implement far less functionality than we need.
>I do think that you can ask that of file formats.
Now tell me why you think the fundamental coding standard, should not be the same as
used in file formats. You're used to those being different things (since ASCII is missing so much),
but it doesn't have to be so.
>> - Missing symmetry of quote characters. (A consequence of the absence
>> of typed functional blocks.)
>
>I think that ASCII accurately represents what the general American
>populous was taught in elementary school. Specifically that there is
>functionally a single quote and a double quote. Sure, there are opening
>and closing quotes, both single and double, but that is effectively
>styling and doesn't change the semantic meaning of the text.
There you go again, assuming 'styling' has no place in the base coding scheme.
>> - No provision for code commenting. Hence the gaggle of comment
>> delimiting styles in every coding language since. (Another consequence
>> of the absence of typed functional blocks.)
>
>How is that the responsibility of the standard used to encode characters
>in a binary pattern?
You keep assuming that a basic coding scheme should contain nothing but the
common printable characters. Despite ASCII already containing more than that.
Also tell me why there should not be a printable character specifically meaning
"Start of comment" (and variants, line or block comments, terminators, etc.)
You are just used to doing it a traditional way, and not wondering if there
might be better ways.
>That REALLY sounds like it's the responsibility of the thing that uses
>the underlying standard characters.
You think that, because all your life you've been typing /* comment */ or whatever.
In truth, the ASCII committee just forgot.
>> - No awareness of programmatic operations such as Inclusion, Variable
>> substitution, Macros, Indirection, Introspection, Linking, Selection, etc.
>
>I see zero way that is the binary encoding format's responsibility.
Oh well.
>I see every way that is the responsibility of the higher layer that is
>using the underlying binary encoding.
>
>> - No facility for embedding of multi-byte character and binary code
>> sequences.
>
>I can see how ASCII doesn't (can't?) encode multi-byte characters. Some
>can argue that ASCII can't even encode a full 8 bit byte character.
a) ASCII is 7 bits.
b) UTF-8
This is getting a bit pointless.
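Points (a) and (b) fit together precisely because UTF-8 was designed around ASCII's 7-bit space; a quick Python check (illustrative, not from the original post):

```python
# Every 7-bit ASCII byte is already a valid one-byte UTF-8 sequence,
# so legacy ASCII files decode unchanged:
ascii_bytes = bytes(range(128))
decoded = ascii_bytes.decode("utf-8")   # never raises

# A lone high-bit byte, by contrast, is NOT valid UTF-8 on its own;
# the upper 128 codes were kept free for the multi-byte machinery:
try:
    b"\xa9".decode("utf-8")
    lone_ok = True
except UnicodeDecodeError:
    lone_ok = False
```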
>But from the standpoint of storing / sending / retrieving (multiples of
>8-bit) bytes, how is this ASCII's problem?
>
>IMHO this really jumps the shark (as if we hadn't already) from an
>encoding scheme to a file format.
>
>> - Missing an informational equivalent to the pure 'zero' symbol of
>> number systems. A specific "There is no information here" symbol. (The
>> NUL symbol has other meanings.) This lack has very profound implications.
>
>You're going to need to work to convince me of that.
You're going to need to wait a few years, till you see the end product.
That bit of text I quoted is a very, very brief points list. Detailed discussion
of all this stuff is elsewhere, and I _can't_ post it now, since that would
seriously damage the project's practical potential. (Economic reasons.)
>Mathematics has zero, 0, for a really long time. (Yes, there was a time
>before we had 0.) But there is no numerical difference between 0 and 00
>and 0000. So, why do we need the latter two?
Column multiplier significance. That's a different thing from the nature of '0'
as a symbol. At present there is no symbol meaning 'this is not information.'
Nevermind, it's difficult to grasp without a discussion of the implications for
very large mass storage device structure. And I'm not going there now.
>> - No facility to embed multiple data object types within text streams.
>
>How is this ASCII's problem?
It wasn't then, but the lack of it is our problem now.
>How do you represent other data object types if you aren't using ASCII?
>Sure, there's raw binary, but that just means that you're using your own
>encoding scheme which is even less of a common / well known standard
>than ASCII.
UTF-8 is multi-byte binary, of a specific type. Just ONE type. No extensibility.
>We have all sorts of ways to encode other data objects in ASCII and then
>include it in streams of bytes.
??? Are you deliberately being obtuse? The point is to attempt to formulate
a new standard that allows all this, in one well defined, extensible way that
permits all future potential cases. We do know how to do this now.
>Again, encoding verses file format.
>
>> - No facility to correlate coded text elements to associated visual
>> typographical elements within digital images, AV files, and other
>> representational constructs. This has crippled efforts to digitize the
>> cultural heritage of humankind.
>
>Now I think you're lamenting the lack of computer friendly bytes
>representing the text that is in the picture of a sign. Functionally
>what the ALT attribute of HTML's <IMG> tag is.
No. People who do scan captures of documents will understand that. They face the
choice: keep the document as page images (can't text search), or OCR'd text
(losing the page's visual soul.) But it should be possible to do BOTH, in
one file structure - if there was a defined way to link elements in the symbolic
text to words and characters in the images.
You'll say 'this is file format territory.' True at the moment, but only because
the basic coding scheme lacks any such capability.
>IMHO this is so far beyond a standard meant to make sure that people
>represent A the same way on multiple computers.
You realise ASCII doesn't do that?
>> - Non-configurable geometry of text flow, when representing the text
>> in 2D planes. (Or 3D space for that matter.)
>
>What is a page ^W 2D plane? ;-)
Something got lost there. "^W' ??
Surely you understand that point. English: left to right, secondary flow: downwards.
Many other cultural variants exist.
>I don't think oral text has the geometry of text flow or a page either.
>Again, IMHO, not ASCII's fault, or even it's wheelhouse.
Huh? This is pretty random.
It's a common response syndrome when someone discusses deviating from the common paradigm.
If I'm being silly enough to try discussing this in fragmentary form, I expect a lot of it.
>> - Many of the 32 'control codes' (characters 0x00 to 0x1F) were allocated
>> to hardware-specific uses that have since become obsolete and fallen
>> into disuse. Leaving those codes as a wasted resource.
>
>Fair point.
>
>I sometimes lament that the control codes aren't used more.
>
>> - ASCII defined only a 7-bit (128 codes) space, rather than the full
>> 8-bit (256 codes) space available with byte sized architectures. This
>> left the 'upper' 128 code page open to multiple chaotic, conflicting
>> usage interpretations. For example the IBM PC code page symbol sets
>> (multiple languages and graphics symbols, in pre-Unicode days) and the
>> UTF-8 character bit-size extensions.
>
>I wonder what character sets looked like for other computers with
>different word lengths. How many more, or fewer, characters were encoded?
There are many old codings.
>Did it really make a difference?
Not after ASCII became a standard - unless you were using a language that needed more
or different characters. ie most of the world's population.
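To make the "chaotic, conflicting interpretations" of the upper 128 codes concrete, here's one byte under two pre-Unicode code pages (illustrative Python, not from the original post):

```python
# Pre-Unicode, the upper 128 codes meant whatever the local code page said.
# The same byte, 0xE9, under two common interpretations:
byte = b"\xe9"
as_latin1 = byte.decode("latin-1")   # ISO 8859-1 (Western European): 'é'
as_cp437  = byte.decode("cp437")     # IBM PC code page 437: 'Θ'

# Same bits on the wire, different text, and nothing in an 8-bit
# stream to say which was meant.
```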
>Would it make any real difference if words were 32-bits long?
Hah. In fact, the ability to represent unlimited-length numeric objects
is one of the essentials of an adequate coding scheme. ASCII doesn't have it.
The whole 'x-bits long words' is one of the hangups of computing architectures too.
But that's another story.
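For what it's worth, one long-established way to get self-delimiting, unlimited-length integers into a byte-oriented coding is the LEB128/varint scheme (as used by DWARF and Protocol Buffers). This is just an illustration of the general idea, not the scheme under discussion:

```python
def varint_encode(n):
    """LEB128-style: 7 payload bits per byte, high bit set on every
    byte except the last, so values of any size self-delimit."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def varint_decode(data):
    """Inverse of varint_encode; stops at the first byte whose
    high bit is clear."""
    n, shift = 0, 0
    for byte in data:
        n |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return n

# Works equally for small and astronomically large values:
assert varint_decode(varint_encode(300)) == 300
assert varint_decode(varint_encode(10**100)) == 10**100
```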
>What if we moved to dictionary words represented by encoding schemes
>instead of individual characters?
You're describing Chinese-language programming, though you didn't realise it.
And yes... :) A capable encoding scheme, and computing architecture built
on it, would allow such a thing.
>Or maybe we should move to encoding concepts instead of words. That way
>we might have some loose translation of the words for mother / father /
>son / daughter between languages. Maybe. I'm sure there would still be
>issues. Gender and tense not withstanding.
Point? Not practical.
The coding scheme has to be compatible with the existing cultural schemes
and existing literature. (All of them.)
[snip]
>> * Inability to create files which encapsulate the entirety of the visual
>> appearance of the physical object or text which the file represents,
>> without dependence on any external information. Even plain ASCII text
>> files depend on the external definition of the character glyphs that the
>> character codes represent. This can be a problem if files are intended
>> to serve as long term historical records, potentially for geological
>> timescales. This problem became much worse with the advent of the vast
>> Unicode glyph set, and typeset formats such as PDF.
>
>Now even more than ever, it sounds like you're talking about a file
>format and not ASCII as a scheme meant to consistently encode characters.
Hmmm... well, this is what happens when I post a short snippet from a larger text.
Short, because I have to carefully read anything I cut-n-paste here to be sure I didn't
include stuff I don't want to expose yet. Anyway, here's a bit more that may
make things clearer.
----------------
Starting Over
What began as my general interest in the evolution of information encoding schemes, gained focus as more and more instances of early mistakes became apparent. Eventually it spawned a deliberate project to evaluate 'starting over.' What would be the result of trying?
Like this:
* Revisit the development history of computing science, identifying points at which, in hindsight, major conceptual shortcomings became cemented into foundations upon which today's practices rest.
* Evaluate how those conceptual pitfalls could have been avoided, given understandings arrived at later in computing science.
* Integrate all those improvements holistically, creating a virtual 'alternate timeline' of computing evolution, as if Computing Science had evolved with prescience of future conceptual advances and practical needs. The aim is to arrive at the information processing and computing architecture we'd already have now, had we known what we were doing from the start.
The resulting computing environment's major components are the ****** coding scheme, the ***** operating system and hardware platform, the ***** scripting language, and the ***** file system.
----------------
>> The PDF 'archival' format (in which all referenced fonts must be defined
>> in the file) is a step in the right direction - except that format
>> standard is still proprietary and not available for free.
>
>Don't get me started on PDF. IMHO PDF is where information goes to die.
Hey, we totally agree on something! I *HATE* PDF, and the Adobe DRM-flyblown horse it rode in on.
When I scan tech documents, for lack of anything more acceptable I structure the
page images in html and wrap as a RAR-book.
Unfortunately few know of this method.
>Once data is in a PDF, the only reliable way to get the data back out to
>be consumed by something else is through something like human eyes.
>(Sure it may be possible to deconstruct the PDF, but it's fraught with
>so many problems.)
There *was* at one point a freeware utility for deconstructing PDF files and analysing their structure.
I forget the name just now. It apparently was Borged by the forces of evil, and no longer can be found.
Anyone have a copy?
Photoshop is able to extract original images from PDFs, but it's a nightmare process.
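The first step of any such deconstructor is just walking the `N 0 obj ... endobj` structure. A stdlib-Python sketch over a toy, uncompressed fragment (real-world PDFs add stream compression, xref tables and encryption, which is where the pain starts):

```python
import re

# A toy, truncated, uncompressed PDF fragment for demonstration.
data = b"""%PDF-1.4
1 0 obj
<< /Type /Catalog /Pages 2 0 R >>
endobj
2 0 obj
<< /Type /Pages /Kids [3 0 R] /Count 1 >>
endobj
3 0 obj
<< /Type /Page /Parent 2 0 R >>
endobj
"""

def list_objects(pdf_bytes):
    """Return (object number, /Type) pairs found in an uncompressed PDF body."""
    objs = []
    for m in re.finditer(rb'(\d+)\s+0\s+obj(.*?)endobj', pdf_bytes, re.S):
        num = int(m.group(1))
        t = re.search(rb'/Type\s*/(\w+)', m.group(2))
        objs.append((num, t.group(1).decode() if t else None))
    return objs

print(list_objects(data))  # [(1, 'Catalog'), (2, 'Pages'), (3, 'Page')]
```

The hard part the old utility presumably handled - inflating compressed streams and chasing indirect references - starts after this.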
>> ----------
>>
>> Sorry to be a tease.
>
>Tease is not how I'd describe it. I feel like it was more of a bait
>(talking about shortcomings with ASCII) and switch (talking about
>shortcomings with file formats).
No, they are not intrinsically different things. It just seems that way from the viewpoint of convention
because ASCII lacks so many structural features that file (and stream) formats have to implement on their own.
(And so everyone does them differently.)
>That being said, I do think you made some extremely salient points about
>file formats.
Ha, wait till (eventually - if ever) you see the real thing.
I'm having lots of fun with it. Result is like 'alien tech.'
>> Soon I'd like to have a discussion about the functional evolution of
>> the various ASCII control codes, and how they are used (or disused) now.
>> But am a bit too busy atm to give it adequate attention.
>
>I think that would be an interesting discussion.
Soon. Few weeks. Got to get some stuff out of the way first. I have way too many projects.
Guy
> On a whim, I tried searching for '"pdp-11" "pdp-11"' (i.e. just
> repeated the keyword), and this time it _did_ turn it up! Very odd.
> I wonder why that made a difference?
So I have a new theory about this. Searching for 'pdp-11' causes eBay to
automagically limit the search to the 'Vintage Computing' category. They
must have a keyword->category database.
Anyway, if I manually then select 'All' categories, I get the same results
for searches for both 'pdp-11' and 'pdp-11 pdp-11'. So my theory is that
'pdp-11 pdp-11' _doesn't_ hit their database, and so it goes to 'All' -
thereby producing different results.
So I just have to hit 'All' every time I do a search...
Noel
For some actual content about classic computers (instead of flaming about
various ideas for improving existing systems), I think I've worked out
why the BA11-C and BA11-E mounting boxes have out of sequence variant codes.
It's obvious the variants were not assigned in creation order (the /44 and /24
use the -A variant box); the -C and -E designations (the earliest variants, it
seems) apparently come from the fact that the former holds the CPU and
console (for the /20), while the latter is an Expansion box.
And speaking of the -C/-E, somewhat to my surprise, I've discovered that their
H720 Power Supply is actually a switching supply. Ironically, its manual gives
a _far_ better explanation of the EI conversion concept than the later H742
one (which we discussed here at some length, after it confused me no end).
Speaking of BA11 variants, I've seen mention of a BA11-B on Web sites, but only a
single ref in a DEC manual (the DH11 Maint Man); does anyone have a pointer
to a location where it's discussed at more length? If so, thanks!
Noel
Card Edge Connectors are PC Board to Wire Connectors. The 34-pin version was popular for the control board on 5-1/4" floppy drives in the 1980s.
https://en.m.wikipedia.org/wiki/Edge_connector
I have not seen commercial PC board widgets used as an interconnect.
gb
==
Date: Tue, 27 Nov 2018 14:56:18 -0500
From: "William Sudbrink" <wh.sudbrink at verizon.net>
To: cctalk
Subject: 34 pin card edge male to male biscuit (wafer? adapter?)
Hi,
Before I go to the bother of making up a gerber, and putting in a cheap Chinese PCB order, does anyone know of any place that has them for sale?
Hello,
I have added a Unibus CH11 Chaosnet interface to SIMH. I have tested it
with 4.1BSD running on the vax780 simulator, and MINITS running on the
pdp11 simulator.
Hi Bob,
On 11/28/2018 11:30 AM, Robert Feldman wrote:
> FYI, your symbols do not make it through to the list digest -- they just
> come through as question marks, the same as Ed Sharp's extra spaces.
Thank you for letting me know.
Here's a screen shot from the copy I received from the mailing list:
Link - grant-symbols
- https://dotfiles.tnetconsulting.net/images/grant-symbols.png
I know that they did come through the normal non-digest messages from
the mailing list.
Would you mind forwarding me a copy (directly) of the raw digest? I'm
curious what happened to them and I don't currently subscribe to the
digested feed. Depending on what I see, I may bring the issue up on the
Mailman mailing list.
--
Grant. . . .
unix || die
When I bought that Sparcstation 4/330 at Computer Parts Barn, the 48T02
was one of the problems with it. The chip looks like a piggyback ROM
encapsulated in epoxy.
I was not reinventing the wheel at the time, I think, because it was
the year 2000 or so, but I looked for a replacement and found them hard
to come by. So, knowing the battery was most likely the fault, I went
about fixing that bit.
The battery accounts for the high profile. You do not have to cut the
entire doggone battery off; the terminals are at one side, iirc, the
right-hand side if the notch is to your left. It is high on the epoxy,
so all you need do is cut down an eighth of an inch in that region,
just shave that top edge until you expose the battery terminals. I
forget how I determined the polarity of them, perhaps I plugged it into
the board after and tested the terminals for power, but all you do once
you've exposed the terminals is solder a power and a ground wire to
them and attach a 3-volt battery. I used a pack with two AAs, in a
case so they are user-replaceable. They are probably STILL keeping
time in that machine, wherever DHS took it and my MEGA ST4 and DG
MV4000/dc... That's another story.
So refurbishing these chips is a cakewalk; it takes 15 minutes (the second
time 'round) and will work 'til doomsday.
Best regards,
Jeff
I thought cctalk was supposed to be a complete superset of cctech, but
looking at the cctech archives, I see a lot of posts that didn't make it
to cctalk. Does one need to do both to see everything?
Noel
Hello group:
Has anyone used a FixMeStick to fix computer issues like virus problems and keystroke-logging hacks? Does it really fix such problems, or create new ones?
Thanx for any comments that are posted.
Ed
Hi,
Before I go to the bother of making up a gerber, and
putting in a cheap Chinese PCB order, does anyone
know of any place that has them for sale?
I bought a stack of them for about a buck a piece, a
number of years ago, but I can't seem to find them
for sale anymore.
They are very useful when you want to preserve the
original cables on machines where the CPU and drive
chassis are "permanently" joined but you want the
ease of being able to separate them. Ohio Scientific
machines are a good example of this.
They are also useful to provide test points in the CPU
to drive signals.
Over the years, I've lost a few and given a few away
and now I need some more.
Thanks,
Bill S.