Managing Linux Memory

Memory Hogs

Measuring Results

Measurements are shown in Figure 4. Restricting the size of the page cache shows the expected behavior: The applications hardly slow down, even with massively parallel I/O. The kernel patch by Mel Gorman behaves similarly: It is already available in SLES 11 SP 3 and is slated for inclusion in RHEL 7, and it delivers good, consistent application performance.

Figure 4: Measurements for evaluating kernel adjustments (statistical mean values for several test cycles are shown).

Setting swappiness = 0 on SLES 11 SP 2 and SP 3 also seems to protect the applications adequately. Surprisingly, Red Hat Enterprise Linux 6.4 behaves differently: The implementation appears to differ substantially and does not protect applications against aggressive swapping; quite the contrary.

The different values for swappiness do not show any clear trend. Although application performance clearly deteriorates with increasing I/O, it is difficult to distinguish systematically between smaller values such as 10 or 30 and the default of 60. The crucial question seems to be whether or not swappiness is set to 0; intermediate values have hardly any effect.
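For completeness, swappiness can be adjusted at run time by writing to the proc filesystem; the same effect is achieved with sysctl vm.swappiness=0 or a persistent entry in /etc/sysctl.conf. The following minimal C sketch is illustrative only and assumes root privileges:

```c
/* swappiness.c - read vm.swappiness and set it to 0 via procfs.
 * Minimal sketch; requires root. Equivalent to
 * "sysctl vm.swappiness=0" or an entry in /etc/sysctl.conf. */
#include <stdio.h>
#include <stdlib.h>

#define SWAPPINESS "/proc/sys/vm/swappiness"

int main(void)
{
    FILE *f = fopen(SWAPPINESS, "r");
    int current;

    if (!f || fscanf(f, "%d", &current) != 1) {
        perror("reading " SWAPPINESS);
        return EXIT_FAILURE;
    }
    fclose(f);
    printf("current vm.swappiness = %d\n", current);

    f = fopen(SWAPPINESS, "w");            /* requires root */
    if (!f || fprintf(f, "0\n") < 0) {
        perror("writing " SWAPPINESS);
        return EXIT_FAILURE;
    }
    fclose(f);
    return EXIT_SUCCESS;
}
```

As noted above, whether a value of 0 actually protects applications depends on the distribution's kernel.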

Table 1 summarizes all the approaches and gives an overview of their advantages and disadvantages, as well as the measured results. Currently, no solution eliminates all aspects of the displacement problem. The next best thing still seems to be a correct implementation of swappiness and, possibly in the future, the approach by Mel Gorman. Even with these two methods, however, administrators of systems with large amounts of main memory will not be able to avoid keeping a watchful eye on the memory usage of their applications.

Table 1: Strategy Overview

Approach | Tool | Advantage | Disadvantage

Focusing on Your Own Application
Pinning pages | mlock() (see the sketch after the table) | Does the job | Requires code change; massive intervention; inflexible
Huge pages | – | Performance improvement | Requires code change; massive intervention; inflexible; administrative overhead
Reducing size | mmap() | Elegant access; performance improvement | Requires code change; only postpones swapping

Focusing on Third-Party Applications
Setting resource limits | setrlimit | – | Requires code change in third-party application; massive interference with the application, up to and including termination; does not prevent the page cache from growing
Control groups | cgroups | Flexible; works without code changes | Settings unclear; does not prevent the page cache from growing

Focusing on the Kernel
Keeping the page cache small | Direct I/O (see the sketch after the table) | Performance boost possible for the application because of its own caching mechanism | Requires code change in third-party application; requires its own cache management; does not prevent the page cache from growing; a single application without Direct I/O can negate all benefits
Restricting the page cache | Kernel patch | Works (demonstrated by other Unix systems); no intervention in applications required | No general support; sluggish behavior under massive I/O
Small swap space | Admin tools | Little swapout | Not a solution for normal systems; risk of OOM scenarios
Configuring swapping | Swappiness | No intervention in applications required; works for a value of 0 | Not functional on all distributions; no gradual adjustment
Modifying kswapd | Kernel patch | Does the job; no intervention in applications required; very few side effects | Officially available only as of kernel 3.11; possible problems with explicitly parallel I/O ("hot memory")
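The first approach in the table, pinning pages, boils down to a single system call. The sketch below is purely illustrative (the buffer size is arbitrary): it locks a working buffer into RAM with mlock() so it cannot be swapped out. Note that RLIMIT_MEMLOCK must permit the requested amount, or the process needs CAP_IPC_LOCK.

```c
/* pin.c - keep a critical buffer resident with mlock().
 * Illustrative sketch; buffer size is arbitrary. RLIMIT_MEMLOCK
 * must be large enough, or the process needs CAP_IPC_LOCK. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64 * 1024 * 1024;            /* 64MB working set */
    void *buf = malloc(len);

    if (!buf)
        return EXIT_FAILURE;
    memset(buf, 0, len);                      /* fault the pages in */

    if (mlock(buf, len) != 0) {               /* pin: never swapped out */
        perror("mlock");
        free(buf);
        return EXIT_FAILURE;
    }

    /* ... work on buf; the pages stay in RAM even under memory pressure ... */

    munlock(buf, len);
    free(buf);
    return EXIT_SUCCESS;
}
```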
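Similarly, the Direct I/O entry refers to opening files with the O_DIRECT flag so that transfers bypass the page cache. The sketch below is illustrative only: the file name is a placeholder, and O_DIRECT requires the buffer, file offset, and transfer size to be suitably aligned (typically to the logical block size).

```c
/* direct_read.c - bypass the page cache with O_DIRECT.
 * Sketch only: the file name is a placeholder, and the assumed
 * block size of 4096 bytes must match the device's alignment rules. */
#define _GNU_SOURCE                    /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK 4096                     /* assumed logical block size */

int main(void)
{
    void *buf;
    int fd = open("/data/bigfile", O_RDONLY | O_DIRECT);  /* placeholder path */

    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }
    if (posix_memalign(&buf, BLOCK, BLOCK) != 0) {   /* aligned buffer */
        close(fd);
        return EXIT_FAILURE;
    }
    ssize_t n = read(fd, buf, BLOCK);  /* data does not enter the page cache */
    printf("read %zd bytes without touching the page cache\n", n);

    free(buf);
    close(fd);
    return EXIT_SUCCESS;
}
```

The application then has to provide its own caching, which is exactly the trade-off listed in the table.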

Conclusions

Displacement of applications from RAM still proves to be a problem, even on very well equipped systems. Although memory shortage should no longer be an issue on such systems, the fundamentally sensible, intensive use of memory by the Linux kernel can lead to significant performance problems in applications.

Linux is in quite good shape compared with other operating systems, but it can be useful – given the variety of approaches that is typical of Linux – to investigate the behavior of the kernel and the applications you use, so you can operate large systems with consistently high performance.

For more details on experience with these systems and the tests used in this article, check out the Test Drive provided by SAP LinuxLab on the SAP Community Network (SCN) [14].

Infos

  1. SAP HANA Enterprise Platform 1.0 Product Availability Matrix: http://www.saphana.com/docs/DOC-4611
  2. Silberschatz, A., G. Gagne, and P.B. Galvin. Operating System Concepts. Wiley, 2005.
  3. Magenheimer, D., C. Mason, D. McCracken, and K. Hackel. "Transcendent memory and Linux" in Proceedings of the Linux Symposium 2009, pp. 191-200: http://oss.oracle.com/projects/tmem/dist/documentation/papers/tmemLS09.pdf
  4. AMD Inc. "AMD64 Architecture Programmer's Manual Volume 2: System Programming," Section 5.1: http://support.amd.com/TechDocs/24593.pdf
  5. Love, R. Linux Kernel Development. Addison-Wesley, 2010.
  6. Man page for mlock: http://linux.die.net/man/2/mlock
  7. "Huge Pages" by M. Gorman, Linux Weekly News: http://lwn.net/Articles/374424/
  8. Man page for mmap: http://linux.die.net/man/2/mmap
  9. Stevens, W.R., and S.A. Rago. Advanced Programming in the Unix Environment. Addison-Wesley, 2008.
  10. Cgroups documentation: https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt
  11. Man page for open: http://linux.die.net/man/2/open
  12. "2.6 swapping behavior" by J. Corbet: http://lwn.net/Articles/83588/
  13. "Reduce system disruption due to kswapd" by M. Gorman: http://lwn.net/Articles/551643/; Patchset under: https://lkml.org/lkml/2013/3/17/50
  14. SAP LinuxLab, miniSAP: http://www.sap.com/minisap

The Author

Alexander Hass, who has been with SAP LinuxLab since 2002, collaborates with Linux distributors and provides support for customers' systems to, among other things, reduce the effect of the Linux page cache on production operations.

Willi Nüßer is the Heinz-Nixdorf Foundation Professor for Computer Science at the Fachhochschule der Wirtschaft (FHDW) University of Applied Sciences in Paderborn, Germany, where he develops and directs large and small R&D projects. He previously worked for SAP AG for six years, where, as a developer at SAP LinuxLab, he was responsible for, among other things, porting SAP memory management to Linux and supporting various hardware platforms.
