On 12/16/2011 10:55 AM, Mouse wrote:
I -will- add that, due to Unix's shell design (namely the decision
that the shell expands wildcards on behalf of the tools it executes),
it is impossible (at least in a consistent manner) to enable some
operations that I think *would* be useful in the general case.
For a basic example, if "rm" could know it was passed "*" as an
argument, it could [...] protect the user [...]
I think it's no coincidence that
most Lisp systems provide a way to
write not only functions that eval their arguments, but `functions'
that get passed their arguments unevaled.
Some of the reasons for this have no analog here. But some are fairly
closely analogous.
If we could invent some way for a program to indicate, pre-exec, that
it wants unglobbed (`unevaled') arguments, this might be doable.
Or, perhaps, programs always get globbed arguments, but there's some
way for them to obtain the arglist in its pre-globbing form.
main(argc,argv,envp,preglobargv)?
(I know, I know -- Unix being friendly, what am I thinking, restrain
yourselves.)
I know, I know...but there's some truth here. Two different truths,
in fact. One truth is that there is strong pressure against "friendly" in
Unix. The other is that, as consistent and predictable as the Unix
mechanisms are, and as valuable as that consistency is, they do have
downsides.
I think there are two major parts to the pressure against "friendly" -
which, as it is usually used, really means "novice-friendly".
One is that novice-friendly correlates remarkably well with
expert-hostile. This is somewhat inevitable, since part of being
novice-friendly is preventing self-foot-shooting - but expert-friendly
means not preventing stupid things because that also prevents clever
things. Not surprisingly, the experts who would have to implement
novice-friendliness are rather, um, hostile to expert-hostile things.
The other is that novice-friendly also means logically inconsistent
special cases. I'm not sure why this is; I speculate that it's because
humans are not logical creatures, so working the way a human, at least
a naïve human, expects involves accurately matching a bunch of messy
heuristics. This means that it's difficult to do at all, _extremely_
difficult to get right, and possibly even completely impossible to get
right for more than one person at a time.
This is why I'm not looking for a way to "fix" rm. I'm looking for a
generalized facility that can be used to "fix" rm as a special case.
It is much more in keeping with the Unix philosophy to introduce a new
pattern than to introduce a special case.
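Before I get to where we disagree: that proposal is concrete enough
to sketch. Here's a minimal C sketch of the idea, under a purely
hypothetical convention (no real shell does this) where a cooperating
shell exports the pre-globbing command line in a SHELL_RAW_ARGS
environment variable before the exec:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        /* Hypothetical convention: a cooperating shell exports the
         * command line as typed, pre-globbing, before exec'ing us. */
        const char *raw = getenv("SHELL_RAW_ARGS");

        if (raw != NULL && strchr(raw, '*') != NULL) {
            /* an rm built this way could prompt here */
            fprintf(stderr, "note: wildcard on the original command line\n");
        }

        /* The globbed arguments arrive exactly as they always have. */
        for (int i = 1; i < argc; i++)
            printf("arg %d: %s\n", i, argv[i]);
        return 0;
    }

A real design would want something less spoofable than the
environment, but it shows the shape of the thing.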
I'm not sure I agree with your two points here. Specifically, I don't
agree that making a commandline user friendly necessarily makes it
"expert-hostile" as you put it, or makes it impossible for said
commandline to be consistent. I'm not saying it's *easy* to make a
tool satisfy both worlds, but I know it's possible -- it just takes
careful design.
Let's start with the "rm *" example. "rm *" is a particular invocation
of rm that isn't commonly used -- it's the exceptional case where someone
really wants to delete all files in a particular directory. Is it
annoying to anyone to either answer "y" to an "Are you sure?" prompt or
type in "rm -y *" to autoconfirm in these few cases? Even if it is
slightly more work, think of the tradeoff that's being made: you're
saving every Unix user the pain of accidentally screwing themselves via
a typo -- and, as evidenced by responses on this list, *everyone*
(well, mostly everyone) has managed to do this at least once or knows
someone who has.
That's one example of the kind of thing I mean when I talk about being
more friendly -- not hand-holding, but preventing screw cases.
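To sketch the shape of it (hypothetical code, and only a heuristic:
a post-glob rm can't actually see the "*", so the closest it can get
is noticing that its argument list covers the whole directory):

    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>

    /* Count the non-hidden entries in the current directory. */
    static int dir_entries(void)
    {
        DIR *d = opendir(".");
        struct dirent *e;
        int n = 0;

        if (d == NULL)
            return -1;
        while ((e = readdir(d)) != NULL)
            if (e->d_name[0] != '.')
                n++;
        closedir(d);
        return n;
    }

    int main(int argc, char **argv)
    {
        int yes = argc > 1 && strcmp(argv[1], "-y") == 0;
        int nfiles = argc - 1 - yes;

        /* If the argument list spans the whole directory, the user
         * probably typed "rm *" -- ask first, unless -y was given. */
        if (!yes && nfiles > 0 && nfiles >= dir_entries()) {
            char buf[8];
            fprintf(stderr, "remove all %d files? (y/n) ", nfiles);
            if (fgets(buf, sizeof buf, stdin) == NULL || buf[0] != 'y')
                return 1;
        }
        /* ...unlink the remaining arguments here, as plain rm does... */
        return 0;
    }

The -y handling is deliberately crude; the tradeoff is the point
here, not the parsing.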
Another example, from Symbolics Genera (which I've been spending a fair
amount of time with). The Command line in the Lisp Listener has a few
wonderful features which manage to be both powerful *and* helpful at the
same time. (It also has a few warts, but I believe the ideas present
in it are worth bringing back.)
First of all, commands are verbose and self-descriptive -- for example,
the command to copy a file is:
:Copy File
That's a lot of typing (especially for lengthier commands like "Show
Machine Configuration"). Fortunately this has been dealt with by only
requiring the user to type as much of a command as is needed to
disambiguate it -- "S Mac" works for the latter case, "Cop F" is enough
for the former.
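The matching rule itself is simple enough to sketch in C, against a
made-up fixed command table (the real completer is of course much
richer):

    #include <stdio.h>

    static const char *commands[] = {
        "Copy File", "Show File", "Show Machine Configuration", NULL
    };

    /* Expand an abbreviation like "S Mac": each word of the
     * abbreviation must be a prefix of the corresponding word of
     * the command.  NULL means no match, or an ambiguous one. */
    static const char *expand(const char *abbrev)
    {
        const char *match = NULL;

        for (int i = 0; commands[i] != NULL; i++) {
            const char *a = abbrev, *c = commands[i];

            while (*a != '\0') {
                if (*a == ' ') {
                    /* word boundary: advance to the next word of c */
                    while (*c != '\0' && *c != ' ')
                        c++;
                    if (*c == '\0')
                        break;
                } else if (*a != *c) {
                    break;
                }
                a++;
                c++;
            }
            if (*a == '\0') {           /* whole abbreviation matched */
                if (match != NULL)
                    return NULL;        /* ambiguous */
                match = commands[i];
            }
        }
        return match;
    }

    int main(void)
    {
        printf("%s\n", expand("S Mac"));  /* Show Machine Configuration */
        printf("%s\n", expand("Cop F"));  /* Copy File */
        return 0;
    }

A real implementation would list the ambiguous candidates rather than
just giving up.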
Once you've finished typing "Copy File" the listener immediately appends
some simple context-sensitive information hinting at the arguments, so
you end up with the below on your screen:
:Copy File (pathnames of files) _
What if you've forgotten the name of the file you want to copy? Hit
Ctrl-/ and it'll give you a list of files in the most recently accessed
directory, or will show files that start with a path you've partially
completed. You can then either type one in from the list and hit the
"Complete" key or click on one of the filenames listed. (The
commandline is mouse-sensitive as well.)
At this point the command prompt helps you again with the second
argument (hinting that you need to type in the "to" portion of the
copy), and you have:
:Copy File (pathnames of files) FOO:>bar>baz.lisp;4 (to [default <most
recently used path>]) _
And you type in the destination, and again are prompted:
:Copy File (pathnames of files) FOO:>bar>baz.lisp;4 (to [default <most
recently used path>]) FOO:>bar>quux.lisp (keywords) _
It's now hinting that "Copy File" takes optional keywords. If you don't
need them, hit "Return" to execute the command, otherwise you can hit
"Ctrl-/" (consistency!) to see a list of possible keyword arguments and
a brief documentation string for each, as seen (excerpted) below:
These are the possible keyword arguments:
:Byte Size           Byte size in which to do copy operation
:Copy Properties     Properties of the file to be duplicated
:Create Directories  What to do if a destination directory does not exist
:If Exists           What to do if asked to copy to an existing file
.
.
.
:Copy File (pathnames of files) FOO:>bar>baz.lisp;4 (to [default <most
recently used path>]) FOO:>bar>quux.lisp (keywords) _
Like the commands themselves, only what's needed to disambiguate the
arguments from each other needs to be typed. And like the commands,
"Ctrl-/" will show all possible values for the argument, again, with
short documentation.
And I'll stress that the above does not get in your way; it's fast and
unobtrusive (once you get used to the idea of the commandline inserting
text for you) and it means that you rarely need to consult the *actual*
online documentation (which is excellent, btw :)). For commonly used
commands you can speed right through them using just a few characters
(cop f foo:>bar>baz.lisp;4 foo:>bar>quux.lisp :b s 8 :co p :au) and for
commands you haven't used in months, the commandline itself helps you
remember how to use them.
And of course, this being a Lisp machine, if the above commands are just
too easy for you, you can always directly invoke Lisp forms from the
same commandline :).
(And in the "rm *" case here -- Genera has both a versioned filesystem
(so you can recover clobbered files) and a system wherein deleted files
are merely marked as deleted -- they are not actually cleaned up until
either someone manually "expunges" them or the auto-expunge time for the
directory expires. So you have to try -really- hard to accidentally
delete a file :). )
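That mark-then-expunge idea ports to Unix pretty naturally, by the
way. A minimal sketch, using a made-up ".deleted" directory
convention -- "expunging" is then just emptying that directory:

    #include <stdio.h>
    #include <sys/stat.h>

    /* "Delete" by renaming into .deleted/; nothing is destroyed
     * until someone expunges (actually empties) that directory.
     * Assumes plain names in the current directory. */
    static int soft_delete(const char *name)
    {
        char dest[4096];

        mkdir(".deleted", 0700);        /* fine if it already exists */
        snprintf(dest, sizeof dest, ".deleted/%s", name);
        return rename(name, dest);
    }

    int main(int argc, char **argv)
    {
        for (int i = 1; i < argc; i++)
            if (soft_delete(argv[i]) != 0)
                perror(argv[i]);
        return 0;
    }

No version numbers, obviously, but the unclobberability half of it is
cheap.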
Really, if you have the means, I recommend playing with Genera,
even if you aren't a Lisp guy. It was eye-opening for me. (Open Genera
is out there if you look, and Brad Parker wrote a usable Linux version a
few years back for running it on amd64 systems (or you can run it on an
Alpha box on Tru64...))
One other example of power + user friendliness is from Windows
PowerShell (which again is far from perfect but contains some
interesting ideas). All commands (called "cmdlets") that affect system
state (deleting files, reconfiguring network adapters, restarting
services, etc) take a "-whatif" parameter which effectively does a dry
run of the command, telling you what the command *would* do if it were
to run without actually doing it.
This is useful when writing scripts -- for example imagine you wanted to
write a script that would rename a subset of files (sounds familiar...)
but wanted to make sure you got it right before running it on an actual
set of files.
This is a feature which helps out both novices and experts without
getting in the way of either.
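The pattern is also easy to graft onto any tool you write: route
every state change through one choke point that honors a dry-run
flag. A minimal C sketch (a made-up pairwise-rename tool, not
PowerShell's actual machinery):

    #include <stdio.h>
    #include <string.h>

    /* Every state change goes through here; with whatif set we
     * report what would happen instead of doing it. */
    static int do_rename(const char *from, const char *to, int whatif)
    {
        if (whatif) {
            printf("What if: rename \"%s\" -> \"%s\"\n", from, to);
            return 0;
        }
        return rename(from, to);
    }

    int main(int argc, char **argv)
    {
        int whatif = argc > 1 && strcmp(argv[1], "-whatif") == 0;

        /* arguments are old/new name pairs */
        for (int i = 1 + whatif; i + 1 < argc; i += 2)
            do_rename(argv[i], argv[i + 1], whatif);
        return 0;
    }

Run it once with -whatif to read the plan, then again without to do
the renames.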
I realize this goes against another, unwritten, Unix philosophy:
"Unix makes it easy to screw yourself to the wall, and that's a good
thing because I'm l33t."
That's not the Unix philosophy. That's the philosophy of a wannabe
who doesn't yet understand. The Unix philosophy leading to many of the
same observed effects as that one is actually "don't prevent stupid
things because doing so also prevents clever things". To the extent
that stupid things can be prevented without also preventing clever
things, it's often done, and rarely objected to.
Never said it was "the" Unix philosophy, just one I've picked up on over
the years. It's really more of an attitude than anything else. I've
literally had people tell me that "oh, accidentally deleting your files
is a 'rite of passage'". I cannot express the level to which I disagree
with this idea :).
Remember that the next time you type "rm * .o" instead of "rm *.o" by
mistake. :)
I don't think I've ever done that. I _have_ done "rm *>o" (actually,
I'm not sure I've done it with rm; I've certainly done the same thing,
mutatis mutandis of course, with other commands).
Exactly :).
- Josh