Don wrote:
> And you have to ensure that there is *no* way the user can
> execute code *before* your interpreter/virtual machine/etc.
> gains control of the CPU. I.e., at the very least, you
> need physical control over the machine. This isn't possible
> in all cases (e.g., a consumer device!)
That's all well and good, but in the end it only addresses a very small
class of security flaws. Running untrusted code in a VM (or a chroot
jail, etc.) is a good way to isolate it, but the core issue is that
said code is untrusted.
You can limit the extent of the damage a few types of exploits can
cause, but that's about it. If the code you're running has
security flaws, or back doors, you'll still be affected by them, just to
a lesser degree, since anything that allows an opponent to gain remote
control of that code is locked down to the sandbox.
A VM is not a total security fix. Remember that the VM or emulator is a
universal Turing machine: by definition, a machine that can do
everything some other machine can do. This includes executing the
very same security holes you're trying to prevent. No matter how
many layers of abstraction you wrap around something, a security flaw is
still a security flaw, and there are no easy fixes.
You can fix and prevent buffer overflows, and you can fix and prevent
stack smashes, without VMs.
VMs, however, are good at isolating data and rolling back state. You
can use one to run a web browser and prevent that browser from
accessing your financial files. If something takes over your web
browser, you can undo all the changes by shutting it down, reverting
the data from a backup, and restarting the VM. By using a VM to
encapsulate untrusted code, you haven't fixed any security hole; what
you've done is turn a remote access exploit into a denial of service
exploit. A good trade, IMHO.
What you gain with a VM is a bit more than what separate computers
(known as an air gap) gain you, unless there are flaws in the VM that
allow a remote access exploit to escape it.
Going back to the web browser example: if you use a VM to surf the web,
happen to download an interesting program or an update that you trust
but that is actually malware, and run it outside the VM, you haven't
gained anything by using the VM. This is a policy issue, because at
that point you've violated the constraints of the air gap.
VMs are not magic bullets, although they're very useful. In the end,
there's only the question of what your threat model is, and what's an
acceptable risk.