>The idea is nice, using the net like a bubble
>memory, but capacity
>is rather small, since any delay-loop storage is determined
>by bandwidth and trip time.
No, capacity is limited by the total latency,
isn't it? If we arranged
a long chain of computers that did nothing but relay packets, the
latency would go up. A packet might take an hour to return for
"regeneration". Yes, write speed is limited by your outgoing
connection. Read speed depends on the round-trip time.
Data capacity of the entire chain is linked to the transmission
speeds, but to other factors as well. Do I have this right?
Yes and no - maybe we should clarify terms.
a) round trip time (rt) - the time needed until one piece of
   information (packet) is returned - for calculation it
   is enough to use a (hypothetical) average time
b) access time (at) - the time needed
   to access a specific data unit within the storage - the
   access time has no influence on storage capacity.
c) storage capacity (s) - the maximum amount of data that can be
   stored at a given moment in time
d) bandwidth (f) - the amount of data that can be sent/received
   within a given time (at this definition level bandwidth
   and storage capacity are equal, but to help calculation
   we should fix the given time to 1 s in all calculations)
These terms should be combined as follows:
I)  s = rt * f - since you can only put a continuous data stream
    of f data units into the loop for one period of rt before you
    need _all_ of f for refresh
II) at(min) = 0
    at(avg) = rt/2
    at(max) = rt
    or 0 <= at <= rt
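As a quick numerical check, relation I) and the access-time bounds of II) can be sketched in a few lines of Python. The 8 s round trip and the 1 Mbit/s (125,000 bytes/s) bandwidth are just example figures chosen to match the numbers quoted elsewhere in this thread, not measurements:

```python
# Delay-loop storage model from above: s = rt * f.
# The concrete figures are hypothetical example values.

def capacity_bytes(rt_s, f_bytes_per_s):
    """Data 'in flight' in the loop at any moment (relation I)."""
    return rt_s * f_bytes_per_s

def access_time_bounds(rt_s):
    """(min, avg, max) time to reach a given data unit (relation II)."""
    return 0.0, rt_s / 2.0, rt_s

rt = 8.0        # round trip time in seconds (example value)
f = 125_000     # 1 Mbit/s outgoing bandwidth, in bytes per second

print(capacity_bytes(rt, f))     # 1000000.0 bytes, i.e. about 1 Megabyte
print(access_time_bounds(rt))    # (0.0, 4.0, 8.0)
```

So at 1 Mbit/s, stretching the loop from 1 s to 8 s multiplies the storage by eight - which is exactly why the round trip time matters so much below.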
Since rt and f are given (external) values, they have to be assumed.
Looking at these numbers, you might come to the conclusion that
s is the only number needed to describe the system, and that s is
strictly dependent on rt and f.
(please excuse my shaky English on mathematical terms).
Following this, we may insert your names (as I have understood them):
latency     ^= round trip time (rt)
write speed ^= bandwidth (f)
read speed  ^= access time (at)
See the confusion?
>If we now add a delay time for a round trip of eight seconds (pretty
>high - 1 s would be more realistic), we just get along with 1 Megabyte
>of storage.
You're estimating based on 'ping' or
'traceroute' times, but I'm
talking about deliberately maximizing the delay between computers.
That raises the storage with every link, no?
Even then, 8 seconds is _pretty_tough_. Given an average switching
time of 0.2 seconds, it needs 40 hops - transmission time can be
ignored at this stage, since we can't name the data amount - and
transmission time is amount-dependent.
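The hop arithmetic can be checked the same way (0.2 s per hop is the average switching time assumed above):

```python
# Hops needed to accumulate a target loop delay from per-hop switching time.
target_delay_s = 8.0   # desired round trip, from the example above
per_hop_s = 0.2        # assumed average switching time per hop

hops = round(target_delay_s / per_hop_s)
print(hops)            # 40
```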
As soon as you want longer delay times, you _need_ explicit
storage in between (remember, even a satellite link is just 0.2 seconds
round trip) - and you won't have a lot of those on your chain - satellite
links are a no-no for (modern) high-speed interactive connections.
Regards
H.
And for the naysayers: I assume this is on topic, since it concerns
the basics of delay-dependent storage.
--
The head is just an outgrowth, like the little toe.
H.Achternbusch