Int 13h buffer 64K boundaries (was: 8085 Dissasembly?)

Fred Cisin cisin at xenosoft.com
Wed Apr 18 23:20:29 CDT 2018


>> I always found it amusing that many programs (even FORMAT!) would fail
>> with the wrong error message if their internal DMA buffers happened to
>> straddle a 64K block boundary.  THAT was a direct result of failure to
>> adequately integrate, or at least ERROR-CHECK!, the segment-offset kludge
>> bag.  Different device drivers and TSRs could affect, at 16-byte 
>> intervals, where the segment of a program ended up loading.
>> It was NOT hard to normalize the Segment:Offset address and MOVE the
>> buffer to another location if it happened to be straddling.

On Wed, 18 Apr 2018, Charles Anthony wrote:
> Huh. I would guess that this is the source of a DOS bug that I found back
> in the day, reported to MS, and never heard back.
> . . . 
> A buffer boundary straddling error certainly sounds like the issue I was
> seeing; it feels very odd to see a plausible explanation 35 years later.

I'm learning a lot these days that would have been handy back then!

Segment:Offset hides the straddle until you normalize the resulting address.
IIRC, Int 13h should return a code of 09h if the DMA straddles a 64K 
boundary.
But, not all code checks for that, or knows what to do when it happens.
Checking the value of ES:BX can work (a sketch of the check is below), or, 
if it happens, swap your DMA buffer with one that is not used for DMA (and 
that doesn't happen to be 64K away :-)  In my code, I happened to have 
buffers for several purposes, so that was easy to do.
If operating above Int 13h (DOS calls), then you are dependent on DOS 
error checking.  "Can you trust THAT?"
If operating below Int 13h, then be careful where your DMA ends up, work 
without DMA, or simply watch for the error to occur.
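
Here is a minimal sketch of that normalize-and-check test (the function 
name and the sample addresses are mine, just for illustration): shift the 
segment left four bits, add the offset, and see whether the transfer 
would run past a 64K physical boundary.

#include <stdio.h>

/* 1 if a transfer of len bytes starting at seg:off crosses a 64K boundary */
int straddles_64k(unsigned seg, unsigned off, unsigned long len)
{
    unsigned long phys = ((unsigned long)seg << 4) + off;   /* normalize */
    return ((phys & 0xFFFFUL) + len) > 0x10000UL;
}

int main(void)
{
    /* a 512-byte sector buffer that happened to end up at 1FE0:0100 */
    printf("%d\n", straddles_64k(0x1FE0, 0x0100, 512));  /* 1: straddles */
    printf("%d\n", straddles_64k(0x2000, 0x0000, 512));  /* 0: safe for DMA */
    return 0;
}

If it reports a straddle, use the other buffer (or move the data) before 
issuing the Int 13h call.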

And, of course, a lot of C code can't tell the difference between end of 
file and a disk error.
#define EOF (-1)    /* depending on implementation */
while ((*ptr2++ = fgetc(fp2)) != EOF)
    ;   /* does not differentiate between error and end of file */
fgets() returns a null pointer for EITHER end-of-file OR error!
Code that checks only the return value therefore assumes total 
reliability; any failure to read is treated as EOF.
IFF available, feof(fp2) is much better.
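
For what it's worth, ferror() is the standard companion to feof(); here is 
a minimal, self-contained sketch of telling the two cases apart (the file 
name is made up):

#include <stdio.h>

int main(void)
{
    FILE *fp2 = fopen("input.dat", "rb");   /* example file name */
    int c;

    if (fp2 == NULL)
        return 1;

    while ((c = fgetc(fp2)) != EOF)
        putchar(c);

    if (ferror(fp2))
        fprintf(stderr, "read error\n");    /* a real I/O failure */
    else if (feof(fp2))
        fprintf(stderr, "end of file\n");   /* the normal case */

    fclose(fp2);
    return 0;
}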


You certainly did the right thing, narrowing it down to load address.  The 
final step would have been to systematically try many/all load addresses, 
see whether the failure was consistent for given ones, and find out what 
the failing ones had in common.

Yes, the "solution" for the extraneous FORMAT failure was "add or remove 
TSRs and device drivers"!

When I first hit it, I used a P.O.S.T. card, and put in minimal code to 
output values until I realized that DS was the key, and that I had 
mishandled error #9.  Eventually I realized that even for code not my 
own, I needed to write a TSR intercepting Int 13H calls.
(For example, the critical error handler in certain early versions of 
PC-Tools was more concerned with protecting its pretty display than with 
the success of writes!)


Microsoft's response to error reporting was amusing.

I was in the Windows 3.10 Beta, and encountered the SMARTDRV write 
caching problem.  There was apparently a flaw on one of my drives that 
neither SPINRITE nor SSTOR could find.  But, during Windoze installation, 
a write would fail, and with write caching ON (Windoze installation did 
NOT give you a choice), there was no way to recover from a write error!
(SMARTDRV had already told SETUP that it had been successful, so now, when 
the error occurred, there was no way to (I)gnore the error (figure out 
which file copy had failed, rename the failed copy "BADSECS", and go back 
later to copy that one manually).  All you could do was (R)etry, which 
didn't work, or (A)bort, which cancelled the entire setup before it ever 
wrote the directory entries for the files that had worked.)  By loading a 
bunch of space-filler files onto the disk, I was able to get the 
installation to land in a working area.
Once I finally determined WHERE the bad track was, I put in a filler file 
to keep it from being used.  (SPINRITE tried to return it to use when I 
just marked it as BAD!)

Microsoft's response was, "YOU have a HARDWARE problem.  NOT OUR PROBLEM."
I was unable to convince them either that a CORRECT response to a hardware 
problem was a responsibility of the OS, or that SMARTDRV with 
write-caching was going to cause a lot of data losses that they would get 
blamed for, even if those losses were never narrowed down to SMARTDRV, and 
that it would end up costing them a lot.

Sho'nuff, COMPRESSION got blamed for the data losses.

DOS 6.2x had to be put out for FREE to fix "the problems with 
compression".
The "problems with compression" were fixed by having SMARTDRV NOT default 
to write caching ON, have SMARTDRV NOT rearrange writes for efficiency (it 
wasn't writing DIRectory sectors until later), and having SMARTDRV NOT 
returning a DOS prompt until its buffers were emptied.
(One of the common losses was that people would save a file, and turn off 
the computer as soon as the word processor came back to the DOS prompt. 
SMARTDRV had not finished writing their file!  When my girlfriend went 
back to school for some classes, she would stand with her coat on, 
pulling on the paper as her homework printed, then hit ^KD? and turn off 
the computer.)

I was not invited to be in the 3.11, nor WIN95, Betas.
They wanted cheerleaders, not testers, anyway.

--
Grumpy Ol' Fred     		cisin at xenosoft.com


