Ray Arachelian wrote:
Don wrote:
And you have to ensure that there is *no* way the
user can
execute code *before* your interpreter/virtual machine/etc.
gains control of the CPU. I.e., at the very least, you
need physical control over the machine. This isn't possible
in all cases (e.g., a consumer device!)
That's all fine and good, but in the end, it only addresses a very small
class of security flaws.
I'm not claiming that it does anything *more* than that! :>
Rather, I am claiming that <insert your favorite CPU+OS>
is NOT "hackerproof" or "crashproof" in an environment
where an unskilled user (e.g. "consumer") can install
(intentionally or accidentally) unproven software
(hostile or just buggy) at will.
The approach I have begun to take locks devices up tight.
If a device doesn't start up properly, it's *broken*. No, you
can't reinstall the software -- return it to the factory
(or wait for the FedEx man to bring you a replacement
tomorrow morning).
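
By way of illustration, here's a minimal sketch of that boot-time
gate. The image header layout and the Fletcher-style sum are just
placeholders for whatever a real device would use (e.g. a signed
cryptographic digest kept in OTP/ROM) -- the point is only that
nothing runs unless the image verifies:

#include <stdint.h>
#include <stddef.h>

/* Placeholder image descriptor -- a real device would keep the
 * expected digest in write-protected storage, set at the factory. */
struct image_header {
    uint32_t length;     /* bytes of code+data covered by the check */
    uint32_t checksum;   /* expected value, written at the factory  */
};

/* Simple Fletcher-style sum standing in for a real cryptographic
 * hash / signature verification.                                   */
static uint32_t image_sum(const uint8_t *p, size_t n)
{
    uint32_t a = 0, b = 0;
    while (n--) {
        a = (a + *p++) % 65535u;
        b = (b + a)    % 65535u;
    }
    return (b << 16) | a;
}

/* Runs before *any* application code.  If the image doesn't verify,
 * the CPU is never handed over -- the unit is simply "broken" and
 * goes back to the factory.                                         */
static void boot(const struct image_header *hdr, const uint8_t *image,
                 void (*application_entry)(void))
{
    if (image_sum(image, hdr->length) != hdr->checksum) {
        for (;;)
            ;              /* halt: no reinstall, no recovery shell  */
    }
    application_entry();   /* only reached with a pristine image     */
}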
It is much easier to design to minimize the chance of that
"doesn't start up properly" case than to deal with
corrupted systems (code, data, etc.) after the fact.
(In fact, "doesn't start up properly" almost *never*
happens, so this is an easy tradeoff.)