I -will- add that Unix's shell design (namely, the decision that the
shell expands wildcards on behalf of the tools it executes) does make
it impossible, at least in any consistent manner, to support some
operations that I think *would* be useful in the general case.
For a basic example, if "rm" could know it was passed "*" as an
argument, it could [] protect the user [...]
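A quick way to see why rm can't know: the shell expands the "*" before
exec, so the invoked program receives only the resulting file names.
The sketch below uses printf as a harmless stand-in for rm; the temp
directory and file names are just for illustration.

```shell
# printf stands in for rm here; it prints one argument per line.
# The program never sees the literal "*" the user typed -- only
# the file names the shell expanded it into.
dir=$(mktemp -d)
cd "$dir"
touch a.o b.o keep.txt
printf 'arg: %s\n' *
cd / && rm -r "$dir"
```

Run in that directory, the printf sees exactly "a.o", "b.o", and
"keep.txt" -- indistinguishable from the user having typed those three
names by hand.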
I think it's no coincidence that most Lisp systems provide a way to
write not only functions that eval their arguments, but `functions'
that get passed their arguments unevaled.
Some of the reasons for this have no analog here. But some are fairly
closely analogous.
If we could invent some way for a program to indicate, pre-exec, that
it wants unglobbed (`unevaled') arguments, this might be doable.
Or, perhaps, programs always get globbed arguments, but there's some
way for them to obtain the arglist in its pre-globbing form.
main(argc,argv,envp,preglobargv)?
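No such pre-exec "give me unglobbed arguments" protocol exists today.
The nearest existing Unix idiom is for the caller to quote the pattern
so the shell passes it through literally, and for the tool to do its
own matching -- the way find(1) treats -name. A minimal sketch (the
temp directory and file names are only illustrative):

```shell
# Quoting keeps the shell's hands off the pattern: find receives the
# literal string "*.o" as data and performs the matching itself.
dir=$(mktemp -d)
cd "$dir"
touch a.o b.o keep.txt
find . -name '*.o'
cd / && rm -r "$dir"
```

This is per-tool convention rather than a general mechanism, which is
exactly the inconsistency being complained about here.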
(I know, I know -- Unix being friendly, what am I thinking, restrain
yourselves.)
I know, I know...but there's some truth here. Two different truths, in
fact. One truth is that there is strong pressure against "friendly" in
Unix. The other is that, as consistent and predictable as the Unix
mechanisms are, and as valuable as that consistency is, they do have
downsides.
I think there are two major parts to the pressure against "friendly" -
which, as it is usually used, really means "novice-friendly".
One is that novice-friendly correlates remarkably well with
expert-hostile. This is somewhat inevitable, since part of being
novice-friendly is preventing self-foot-shooting - but expert-friendly
means not preventing stupid things because that also prevents clever
things. Not surprisingly, the experts who would have to implement
novice-friendliness are rather, um, hostile to expert-hostile things.
The other is that novice-friendly also means logically inconsistent
special cases. I'm not sure why this is; I speculate that it's because
humans are not logical creatures, so working the way a human, at least
a naïve human, expects involves accurately matching a bunch of messy
heuristics. This means that it's difficult to do at all, _extremely_
difficult to get right, and possibly even completely impossible to get
right for more than one person at a time.
This is why I'm not looking for a way to "fix" rm. I'm looking for a
generalized facility that can be used to "fix" rm as a special case.
It is much more in keeping with the Unix philosophy to introduce a new
pattern than to introduce a special case.
I realize this goes against another, unwritten, Unix
philosophy:
"Unix makes it easy to screw yourself to the wall, and that's a good
thing because I'm l33t."
That's not the Unix philosophy. That's the philosophy of a wannabe
who doesn't yet understand. The Unix philosophy that leads to many of
the same observed effects is actually "don't prevent stupid things,
because doing so also prevents clever things". To the extent
that stupid things can be prevented without also preventing clever
things, it's often done, and rarely objected to.
Remember that the next time you type "rm * .o" instead of "rm *.o" by
mistake. :)
I don't think I've ever done that. I _have_ done "rm *>o"
(actually,
I'm not sure I've done it with rm; I've certainly done the same thing,
mutatis mutandis of course, with other commands).
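A harmless way to see what either slip would have handed to rm:
prefix the command with echo, so the shell still expands everything
but nothing gets deleted. (The file names below are just for
illustration.)

```shell
# echo shows the post-expansion argument list without running rm.
dir=$(mktemp -d)
cd "$dir"
touch a.o b.o precious
echo rm *.o     # rm a.o b.o             -- the intended command
echo rm * .o    # rm a.o b.o precious .o -- everything, plus a bogus ".o"
cd / && rm -r "$dir"
```

The stray space turns "*.o" into "*" followed by a nonexistent file
".o": every name in the directory gets expanded in, and the ".o" just
produces a complaint after the damage is done.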
(Yes I know Unix gurus never make typos, it's
purely hypothetical.)
Har. Anyone who thinks Unix gurus never make typos is clearly not a
Unix guru.
/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTML                mouse at rodents-montreal.org
/ \ Email!             7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B