The fine art of allocating memory

Serial Killer

Article from ADMIN 10/2012
As RAM runs out, the OOM Killer springs into action.

In issue 9, I talked about swap memory and how it should be used as a buffer to protect your servers from running out of memory [1]. Swapping is nearly always detrimental to performance, but its presence provides the system with a last chance for a soft landing before more drastic action is taken. This month, I examine the darker side of the picture: Swap hits 100%, and hard out-of-memory errors are appearing in your logs in the form of killed processes (Figure 1). In most cases, the performance degradation and furious disk thrashing caused by highly active swap areas will alert you well in advance of your logs.

Figure 1: The consequences of swap hitting 100%.

A system that has no swap space configured can still page out to disk – the filesystem cache, shared libraries, and program text are all file-backed and can be evicted as memory pressure mounts – it simply has fewer options for reclaiming memory. The Linux kernel's defaults allow memory to be overcommitted at allocation time: only pages actually written to ("dirty" pages) must be backed by physical RAM. Consequently, the program shown in Listing 1 will have no trouble allocating 3GB of memory on any current machine, almost irrespective of actual system capacity, because the memory is only being

...