Linux also (by default) happily overcommits memory: malloc() can hand you a perfectly valid pointer with no guarantee that the memory behind it actually exists. The kernel gives out virtual address space freely and only commits physical pages when you first write to them, so you don't find out whether the memory is really there until you try to use it.
Interestingly, there is a justification for this. On Linux, the only way to start a new process is to fork() your existing one, creating a complete copy of it, which you then replace with whatever you actually wanted to run via exec(). (OK, there's vfork(), but its man page contains the wonderful gem "[don't use] vfork() since a following exec() might fail, and then what happens is undefined".)
So back in ye olden days when you only had 8MB of memory, your 5MB emacs process would fork itself. Both emacs instances now come to a total of 10MB which is 2MB more than you have, but that's OK because the second process hasn't changed anything and so shares the physical memory of the existing process (via copy-on-write semantics). The second one then gets replaced by your 1MB shell or whatever, taking the total down to 6MB. But if the second process actually wants exclusive use of the entire 5MB, then you've got a problem. And that's "solved" by the out-of-memory killer.
It's a perfectly sensible way to work around the insanity of the fork()/exec() model, except that computers today have crazy amounts of memory and don't actually need this workaround. And the workaround would never have been needed if there were an actual "create new process" syscall in the first place (posix_spawn() exists, but on Linux it's a library function implemented on top of fork()/exec() anyway). Remind me again why Linux is better?
This rant brought to you by three servers getting broken in various ways due to the out-of-memory killer nuking about a half-dozen processes per server. Hope you didn't actually need Tomcat. Or MySQL. Or cron.
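If you'd rather your servers not play this game, there are knobs: a rough sketch of the relevant ones, all of which need root (and `mysqld` below is just an example victim; substitute whatever daemon you care about).

```shell
# Strict accounting: refuse allocations the kernel can't actually back.
# vm.overcommit_memory: 0 = heuristic (default), 1 = always, 2 = never overcommit
sysctl vm.overcommit_memory=2
sysctl vm.overcommit_ratio=80   # commit limit = swap + 80% of RAM

# Or at least tell the OOM killer to pick on someone else:
# oom_score_adj ranges from -1000 (never kill) to 1000 (kill first)
echo -1000 > /proc/$(pidof mysqld)/oom_score_adj
```

The catch with mode 2 is that plenty of software assumes overcommit and sprays out huge speculative mappings, so strict accounting can make allocations fail on a box that is nowhere near out of physical memory. Pick your poison.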