
The glorious out-of-memory killer [Friday 1st April 2011 at 9:44 pm]

[Playing |Sector Intro - Make Your Mind ~ Chris Geehan & Dan Byrne McCul/Iji Soundtrack]

Linux has this glorious thing known as the out-of-memory killer. The documentation claims that when the system runs out of memory it carefully works out which process is responsible, and kills it. This is of course completely false. What it really does when the system runs out of memory is select the most mission-critical process on the system and kill that instead. It then kills a few more processes for good measure.

Linux also (by default) happily overcommits memory. This means that you've got no guarantee that any memory you've malloc'd is actually backed by physical pages until you try to write to it — and if the kernel can't find a page at that point, the out-of-memory killer wakes up.

Interestingly there is a justification for this. On Linux, the only way to start a new process is to fork your existing process, creating a complete copy of it. You then replace the copy of your process with whatever you actually want to run (ok, there's vfork(), but the man page for that contains the wonderful gem "[don't use] vfork() since a following exec() might fail, and then what happens is undefined").

So back in ye olden days when you only had 8MB of memory, your 5MB emacs process would fork itself. Both emacs instances now come to a total of 10MB which is 2MB more than you have, but that's OK because the second process hasn't changed anything and so shares the physical memory of the existing process (via copy-on-write semantics). The second one then gets replaced by your 1MB shell or whatever, taking the total down to 6MB. But if the second process actually wants exclusive use of the entire 5MB, then you've got a problem. And that's "solved" by the out-of-memory killer.

It's a perfectly sensible way to work around the insanity of the fork()/exec() model, except computers today have crazy amounts of memory and so don't actually need this workaround. And this workaround would never have been needed if there was an actual "create new process" syscall. Remind me again why Linux is better?

This rant brought to you by three servers getting broken in various ways due to the out-of-memory killer nuking about a half-dozen processes per server. Hope you didn't actually need Tomcat. Or MySQL. Or cron.

From: olego
Monday 4th April 2011 at 9:21 pm (UTC)

Really, Linux?

After reading this post, I just read the documentation on fork() and exec()... Seems a bit heavy-handed to have to fork the *entire* process to simply have it be replaced (??) by a different process. Hmm.