I received a few pieces of (not)fan-mail about my latest post in the Crabby Series. One reader took offense at the fact that I bother to blog about hugepages because, in his words:
…you insult the intelligence of your readers. You know full well everyone uses hugepages
Is that why Metalink Note 749851.1 goes to the trouble of advising DBAs that the default database setup from Database Configuration Assistant (DBCA) configures Automatic Memory Management which does not use hugepages?
I assure you, not everyone uses hugepages, and part of that is because it can be difficult to set up if you have several databases—especially if your databases have a mix of heavy PGA usage and heavy SGA usage. Also, if your calculations are off and there are insufficient hugepages to cover the SGA, Oracle will go ahead and allocate with a shmget() that doesn't pass in SHM_HUGETLB. The effect of that little twist is you'll be "missing" the memory that was carved out for hugepages and the SGA will reside in other, non-hugepages memory. So, for instance, if you calculate your SGA to be 1GB and allocate 513 hugepages (1GB + 1 page for wiggle room), but your SGA turns out to be 1073758208 bytes (1GB + 16KB) and the shared memory segment Oracle actually requests runs tens of megabytes larger than the SGA itself (as we'll see below), you'll get a non-hugepages SGA and eventually there will be roughly 2GB tied up: the unused hugepages pool plus the SGA sitting in ordinary pages. I think it is an important topic.
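To make that failure mode concrete, here's a back-of-the-envelope sketch. It assumes 2MB hugepages, and the 37MB of segment overhead is purely illustrative (roughly what my system shows later in this post):

```shell
# Back-of-the-envelope hugepage sizing (2MB hugepages assumed).
HUGEPAGE_BYTES=$((2 * 1024 * 1024))
POOL_PAGES=513                          # what we configured: 1GB + 1 page of wiggle room
SGA_BYTES=$((1073741824 + 16384))       # the SGA came out to 1GB + 16KB
OVERHEAD=$((37 * 1024 * 1024))          # illustrative: the segment exceeds the SGA
SEGMENT_BYTES=$(( SGA_BYTES + OVERHEAD ))

# Pages needed = ceiling(segment size / hugepage size)
NEEDED=$(( (SEGMENT_BYTES + HUGEPAGE_BYTES - 1) / HUGEPAGE_BYTES ))
echo "pool: $POOL_PAGES pages, needed: $NEEDED pages"
if [ "$NEEDED" -gt "$POOL_PAGES" ]; then
  echo "shmget() will not use SHM_HUGETLB; SGA lands in ordinary pages"
fi
```

One page of wiggle room is no help once the segment outgrows the pool by even a single page; the kernel either backs the whole segment with hugepages or none of it.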
Oracle Support offers a script to assist DBAs in calculating the hugepages requirement. With all your instances up, run the script and it will calculate a setting for you. The note is entitled Shell Script to Calculate Values Recommended HugePages / HugeTLB Configuration.
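The heart of that calculation is simple: sum the SysV segment sizes and convert to 2048KB pages, with a little slack. Here's a stripped-down sketch of the idea (not the MOS script itself; the slack-page choices here are my assumption, though they happen to reproduce the recommendation my system gets below):

```shell
# Sketch of the idea behind the MOS script (not the script itself).
# Takes segment sizes in bytes, e.g. the "bytes" column of `ipcs -m`.
pages_for_segments() {
  local hpg_kb=2048     # Hugepagesize from /proc/meminfo, assumed 2MB
  local total=1         # start with one page of slack
  local b
  for b in "$@"; do
    total=$(( total + b / (hpg_kb * 1024) + 1 ))   # one extra page per segment
  done
  echo "$total"
}

pages_for_segments 8390705152    # the segment from my system below; prints 4003
```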
There is a small nit regarding this note (the procedure it involves, actually). In order for the script to give you a recommendation, you have to revert from AMM first, then boot your instances with manual memory management so it can peek at what SysV IPC segments are being allocated for the instances. So, it's a multi-step process. I suppose with a lot of extra thought the same thing could be calculated by tallying up all the "granule files" found in /dev/shm under AMM, but no matter. This is fairly simple.
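For the curious, that /dev/shm tally could be sketched along these lines. Matching on "ora" in the listing is my assumption; the granule file naming varies by release:

```shell
# Rough tally of AMM granule files in /dev/shm (sketch only).
# The ora filename match is an assumption; naming varies by release.
TOTAL=$(ls -l /dev/shm 2>/dev/null | awk '/ora/ {sum += $5} END {print sum + 0}')
echo "approximate bytes to cover: $TOTAL"
```

Note that granule files can be sparse, so this overstates memory actually touched, but for sizing a hugepages pool the full allocation is what you'd want anyway.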
Let’s look at my system. Here’s what we’ll see:
First, we’ll see how large the SGA really is.
Next, we’ll see how large of an IPC segment the instance called for. In my case it is about 37MB larger than the actual SGA. That’s fine.
Finally we’ll see the output of the hugepages_settings.sh script to see what it advises.
SQL*Plus: Release 11.X.0.X.0 Production on Mon Jul 27 13:35:41 2009

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.X.0.X.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show sga

Total System Global Area 8351150080 bytes
Fixed Size                  2214808 bytes
Variable Size            1543505000 bytes
Database Buffers         6777995264 bytes
Redo Buffers               27435008 bytes
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.X.0.X.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

$ ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x522d5fd4 327681     oracle     660        8390705152 53

$ sh ./hugepages_settings.sh

Recommended setting: vm.nr_hugepages = 4003

$ grep Huge /proc/meminfo
HugePages_Total:  5000
HugePages_Free:    999
HugePages_Rsvd:      0
Hugepagesize:     2048 kB
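As a sanity check, the numbers above tie out: the IPC segment is exactly 4001 2MB pages, which matches HugePages_Total minus HugePages_Free.

```shell
# Sanity check on the figures above.
SEG=8390705152                      # bytes, from ipcs -m
PAGE=$((2048 * 1024))               # Hugepagesize: 2048 kB
echo "segment pages: $(( SEG / PAGE ))"    # 4001
echo "pages in use:  $(( 5000 - 999 ))"    # HugePages_Total - HugePages_Free = 4001
```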
So it looks like the script is accurate and even allows a little wiggle room. That's good. I think this helpful script, combined with a healthy fear of the nastiness of large-SGA, heavy dedicated-connection deployments without hugepages, should get us all one step closer to insisting on hugepages backing for our SGAs.