IBM Support

FAQ1: How much Paging Space Rule of Thumb?

How To


Summary

Frequently Asked Questions from 2013 that are still asked today.

Objective


Steps

The answer is "it depends" and there is no Rule Of Thumb - IMHO.

The "2 times RAM" rule of thumb is very old school and dates from small servers, where a memory demand peak could exhaust RAM quickly and hurt.

  • The worst I saw was 6 times RAM, but that was because the OS team, the RDBMS team, and the application team each said "2 times" and the System Admin then added them up, not realising they were all talking about the same 2 times!  Basically bonkers - in the end they used 2%.

On machines with large memory, say starting at 64 GB, memory demand peaks are neither as high nor as rapid relative to the total. On a small machine with, say, 1 GB of RAM and a handful of users, demand can suddenly double: three times the usual number of users arrive, or a few large SQL statements eat up memory. But on a 64 GB machine it is very unlikely that hundreds of users log in within the same minute, or that thousands of extra SQL statements suddenly start running, so demand rarely doubles. It is just maths: the larger the resource, the slower its rate of change and the lower the peaks above the average. The same fact explains why large server consolidation projects that put hundreds of LPARs on one big box do not see the urgent peaks of dozens of smaller machines, where one box runs out of RAM while there is spare capacity in all the others.

If a large Oracle SGA or DB2 buffer cache dominates RAM, say 30% to 70% of it, that memory is never swapped out, so it does not affect the paging space sizing. Nor do the memory-mapped files that many applications use these days.

The only rule is: never, ever run out of paging space, because absolute mayhem is guaranteed.  The UNIX kernel has to kill a seemingly random process to carry on - in fact it is the process demanding RAM that cannot be supplied.  The AIX kernel is pretty robust: it pauses processes demanding RAM and waits 30 and then 60 seconds to see whether the shortage is transitory before killing a process.  The "mayhem" warning is an old quote from the original UNIX manuals - the manuals written by Ken Thompson and Dennis Ritchie, and they should know.
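As a quick check (an illustrative example added here, not part of the original text), the standard AIX commands lsps and vmstat report paging space usage and paging activity:

    # List every paging space with its size and percentage used
    lsps -a
    # One-line summary of total paging space and percentage used
    lsps -s
    # Watch paging activity every 5 seconds; the pi/po columns show pages paged in/out per second
    vmstat 5

If lsps shows the "%Used" figure climbing towards 100%, act before the kernel has to start killing processes.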

I would always start at 4 GB (not much on modern disks) on larger LPARs, and make it four equal pieces on different disks at the back end to spread the I/O, so that if we do need to page it is done quickly.
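For example (an illustrative sketch - the volume group name, hdisk names, and a 128 MB physical partition size are assumptions, so adjust for your own storage layout), four 1 GB paging spaces on four different disks could be created like this:

    # 8 logical partitions of 128 MB = 1 GB on each disk;
    # -n activates the paging space now, -a activates it at every reboot
    mkps -a -n -s 8 rootvg hdisk1
    mkps -a -n -s 8 rootvg hdisk2
    mkps -a -n -s 8 rootvg hdisk3
    mkps -a -n -s 8 rootvg hdisk4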

Then monitor it for 3 months with nmon, of course.
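For instance (an illustrative recording schedule, not one prescribed by the article), a daily nmon recording taken every five minutes captures memory and paging statistics that can be graphed later:

    # Record to a file: one snapshot every 300 seconds, 288 snapshots = 24 hours
    nmon -f -s 300 -c 288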

For an LPAR with massive RAM, say 1 TB or more, I might start with a larger paging space, for example 64 GB or 128 GB.
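If monitoring shows the initial size was too small, an existing paging space can be grown online with chps (an illustrative example; hd6 is the usual name of the default paging space and the size is given in logical partitions):

    # Add 32 logical partitions to the default paging space hd6
    chps -s 32 hd6
    # Remove 16 logical partitions (shrinking is also supported on current AIX levels)
    chps -d 16 hd6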

Additional Information


Other places to find content from Nigel Griffiths IBM (retired)

Document Location

Worldwide

[{"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SWG10","label":"AIX"},"Component":"","Platform":[{"code":"PF002","label":"AIX"}],"Version":"All Versions","Edition":"","Line of Business":{"code":"LOB08","label":"Cognitive Systems"}},{"Business Unit":{"code":"BU054","label":"Systems w\/TPS"},"Product":{"code":"HW1W1","label":"Power -\u003EPowerLinux"},"Component":"","Platform":[{"code":"PF016","label":"Linux"}],"Version":"All Versions","Edition":"","Line of Business":{"code":"","label":""}},{"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SWG60","label":"IBM i"},"Component":"","Platform":[{"code":"PF012","label":"IBM i"}],"Version":"All Versions","Edition":"","Line of Business":{"code":"LOB57","label":"Power"}}]

Document Information

Modified date:
14 June 2023

UID

ibm11117581