>> Ideas there are. (One: zoom in with a snap-to grid corresponding
>> to whatever precision is needed;

> First idea: I'm sorry, but I'd spend all day adjusting my zoom
> level. Picture routing lines all over a board.

Oh, it doesn't have to be done with the usual "tablet" zoom interface.
For example, suppose that a touch in a particular corner gave you
instant 10X zoom (of just the work area, not the rest of the UI),
centred on the other touch point. (There are frills: what if you touch
there while not touching elsewhere? what about releasing the zoom
corner before the other touch? etc.)
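
To sketch what I mean in code (TypeScript, picked just for
compactness; every name and number in it - ZOOM_FACTOR, the corner
hit area, all of it - is invented for illustration, not any real
toolkit's API):

    // Rough sketch of the corner-zoom idea; nothing here describes
    // a real toolkit.
    type Point = { x: number; y: number };

    const ZOOM_FACTOR = 10;
    const CORNER_SIZE = 60; // pixels: hit area of the zoom corner

    class WorkArea {
      private zoomCenter: Point | null = null; // set while corner held

      constructor(private height: number) {}

      // The zoom corner stays in screen coordinates: only the work
      // area magnifies, not the rest of the UI.
      private inZoomCorner(p: Point): boolean {
        return p.x < CORNER_SIZE && p.y > this.height - CORNER_SIZE;
      }

      // Map a screen touch to a work-area point, honouring the zoom.
      screenToWork(p: Point): Point {
        if (this.zoomCenter === null) return p; // 1X: identity
        // At 10X about zoomCenter, a work point w shows on screen at
        // c + (w - c) * ZOOM_FACTOR; invert that to recover w.
        return {
          x: this.zoomCenter.x + (p.x - this.zoomCenter.x) / ZOOM_FACTOR,
          y: this.zoomCenter.y + (p.y - this.zoomCenter.y) / ZOOM_FACTOR,
        };
      }

      // Called with the full set of current touches on every change.
      touchesChanged(touches: Point[]): void {
        const corner = touches.some((t) => this.inZoomCorner(t));
        const work = touches.find((t) => !this.inZoomCorner(t));
        if (corner && work && this.zoomCenter === null) {
          this.zoomCenter = work; // engage, centred on the other touch
        } else if (!corner) {
          this.zoomCenter = null; // corner released: back to 1X
        }
        // The frills (corner touched alone, corner released first,
        // etc.) are deliberately left unhandled here too.
      }
    }
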
>> two: displace the touch point from the active point so that the
>> user can see precisely what is being addressed;

> Second idea: That sounds like it might have potential. Let's see
> if someone (you?) adds it to a windowing system somewhere. If it
> appears, I will try it.

It doesn't sound to me like something to be added to a windowing
system, but rather something to be added to the application in
question. You don't _always_ want it, after all.
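
For concreteness, here's an application-level sketch of that
displaced-point idea - same disclaimers as above: the names, the
40-pixel offset, everything, invented for the example:

    // Displaced touch point, done in the application rather than
    // the windowing system.
    type Point = { x: number; y: number };

    // Put the active point a little above the finger, where it can
    // be seen. Per-application - and toggleable, since you don't
    // always want it.
    const CURSOR_OFFSET = { x: 0, y: -40 }; // pixels, arbitrary
    let offsetEnabled = true;

    function activePoint(touch: Point): Point {
      if (!offsetEnabled) return touch;
      return {
        x: touch.x + CURSOR_OFFSET.x,
        y: touch.y + CURSOR_OFFSET.y,
      };
    }

    function onTouch(touch: Point): void {
      const p = activePoint(touch);
      drawCrosshair(p);  // the user sees precisely what is addressed
      selectObjectAt(p); // hit-testing uses the displaced point,
      // not the finger position, which the finger itself covers.
    }

    // Stubs so the sketch stands alone; a real application has these.
    function drawCrosshair(p: Point): void {}
    function selectObjectAt(p: Point): void {}
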

>>>> Progress currently being made? What reason do you have to think
>>>> you would necessarily be aware of any such progress?

>>> Because this is my profession.

>> For all you know I could be making such progress right now.

> Of course. But if you keep it a secret, well, that helps nobody.

Well, sure. My point is just that progress may be being made right
now, just being kept secret, or at least not being touted, until it can
be turned into a product (if corporate) or until the author thinks it's
ready for release (if not).

>> What I really think it will take is for someone to completely
>> redesign the software - schematic capture and board layout, in
>> your example - for a touchscreen interface, rather than trying to
>> perver^Wadapt mouse-based software for touchscreen use.

> That's been talked about for years, and has actually been done a
> few times. We keep going back to mice for a reason. ;)

Yeah - most computers have 'em.
There are UI tasks keyboards and mice suck at, but people put up with
them because that's what they've got. (Indeed, I've seen cases where
people use mice when a keyboard would work better, or vice versa, for
reasons I can only guess at.) Back in the '80s, I had the good fortune
to get to use a tablet input device - basically, think touchscreen, but
input-only, and with a (wired) pen-like device rather than a fingertip
selecting the active input point; alternatively, think laptop touchpad
"mouse" grown up to a size on the order of one foot square, with the
pen-style device needed because at that scale you can't really help
resting your hand/arm on the thing. I've sometimes wanted to get
myself one for the tasks it handles better than a mouse and keyboard.

I've also got an electronic piano which can be turned into a MIDI
keyboard (basically, the keyboard and synth can be decoupled); I've
considered how I might be able to use it instead of, or in addition to,
my existing input devices, and what sorts of tasks it might work better
than them at - besides music input, of course.
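
To give the flavour of the sort of thing I've been considering: in a
browser, the Web MIDI API really does hand you the note events, and
mapping them onto commands is then trivial. (The note-to-command
table below is invented, of course - I haven't built this.)

    // Sketch: a MIDI keyboard as a general input device. The Web
    // MIDI calls are real browser API; the mapping is invented.
    const NOTE_ON = 0x90; // note-on status nibble

    // Hypothetical mapping of keys to non-musical commands.
    const commands: Record<number, () => void> = {
      60: () => console.log("undo"),    // middle C
      62: () => console.log("redo"),
      64: () => console.log("zoom in"),
    };

    navigator.requestMIDIAccess().then((midi) => {
      midi.inputs.forEach((input) => {
        input.onmidimessage = (ev) => {
          const data = ev.data;
          if (!data) return;
          const [status, note, velocity] = data;
          // Note-on with nonzero velocity = a key actually went down.
          if ((status & 0xf0) === NOTE_ON && velocity > 0) {
            commands[note]?.();
            // Velocity is an input axis a QWERTY keyboard hasn't
            // got; it could scale whatever the command does, say.
          }
        };
      });
    });
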
As for touchscreens, I don't know. Like most computer people, I've
become so heavily accustomed to the mouse-and-keyboard paradigm that I
have trouble breaking out of that mold. Perhaps I should try to find
documented touchscreen hardware and play with it for a bit, see what I
can come up with for input paradigms - part of the issue, I suspect,
with redesigning things like schematic capture for touchscreen use is
that API designers' minds, and thus the APIs, are still shaped by
a mouse-and-keyboard mindset, making it difficult to truly exploit a
touchscreen's capabilities.
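
A small example of what I mean by the mindset problem - and this is
me guessing, not reporting anything real: a mouse-shaped API hands
the application one pointer, while the touch-era DOM pointer events
hand it several, and code written in the one-pointer mindset simply
never looks past the first:

    // The pointer events here are real DOM API; what's done with
    // them is illustration, and "workarea" is a hypothetical
    // element. Each contact arrives with its own pointerId.
    const contacts = new Map<number, { x: number; y: number }>();
    const surface = document.getElementById("workarea")!;

    surface.addEventListener("pointerdown", (ev) => {
      contacts.set(ev.pointerId, { x: ev.clientX, y: ev.clientY });
    });

    surface.addEventListener("pointermove", (ev) => {
      if (!contacts.has(ev.pointerId)) return;
      contacts.set(ev.pointerId, { x: ev.clientX, y: ev.clientY });
      // With two or more contacts down, gestures a mouse can't
      // express become natural: one finger anchors a part while
      // another drags a wire - no modes, no modifier keys.
      if (contacts.size >= 2) {
        console.log([...contacts.values()]);
      }
    });

    surface.addEventListener("pointerup", (ev) => {
      contacts.delete(ev.pointerId);
    });
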
I could also be wrong. Except for that tablet I mentioned above, I
haven't played with alternative input devices enough to really have
much of a feel for what can be done with them....

/~\ The ASCII                             Mouse
\ / Ribbon Campaign
 X  Against HTML                mouse@rodents-montreal.org
/ \ Email!           7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B