Allocating hugepages for Oracle Database on Linux can be tricky. The following is a short list of some of the common problems associated with faulty attempts to get things properly configured:
- Insufficient hugepages. You can be short just a single 2MB hugepage at instance startup and Oracle will silently fall back to conventional pages. For instance, if an instance needs 10,000 hugepages but only 9,999 are available at startup, Oracle creates non-hugepage IPC shared memory and the 9,999 allocated hugepages (x 2MB) are simply wasted memory.
- Partial hugepages backing under NUMA. Insufficient hugepages is an even more difficult situation when booting with _enable_NUMA_support=TRUE, because partial hugepages backing is possible: some NUMA segments get hugepages while others do not.
- Improper permissions. Both the limits.conf(5) memlock setting and the shell ulimit -l must accommodate the desired amount of locked memory.
In general, the first list item above has historically been the most difficult to deal with, especially on systems hosting several instances of Oracle. Since there is no way to determine whether an existing segment of shared memory is backed with hugepages, diagnostics are in short supply.
Oracle Database 11g Release 2 (11.2.0.2)
The fix for Oracle bugs 9195408 (unpublished) and 9931916 (published) is available in 11.2.0.2. In a sort of fast-forward to the past, the Linux port now supports an initialization parameter to force the instance to use hugepages for all segments or fail to boot. I recall initialization parameters on Unix ports back in the early 1990s that did just that. The initialization parameter is called use_large_pages, and setting it to "only" produces the all-or-none behavior. This, by the way, also addresses the second list item above. That is, setting use_large_pages=only ensures an instance will not have some NUMA segments backed with hugepages and others without.
Consider the following example. Here we see that use_large_pages is set to "only" and yet the system has only a very small number of hugepages allocated (800, roughly 1.6GB). First I'll boot the instance using an init.ora file that does not force hugepages and then move on to using the one that does. Note, this is 11.2.0.2.
$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Sep 28 08:10:36 2010

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> !grep -i huge /proc/meminfo
HugePages_Total:     800
HugePages_Free:      800
HugePages_Rsvd:        0
Hugepagesize:       2048 kB

SQL> !grep large_pages y.ora x.ora
use_large_pages=only

SQL> startup force pfile=./x.ora
ORACLE instance started.

Total System Global Area 4.4363E+10 bytes
Fixed Size                  2242440 bytes
Variable Size            1406199928 bytes
Database Buffers         4.2950E+10 bytes
Redo Buffers                4427776 bytes
Database mounted.
Database opened.
SQL> HOST date
Tue Sep 28 08:13:23 PDT 2010

SQL> startup force pfile=./y.ora
ORA-27102: out of memory
Linux-x86_64 Error: 12: Cannot allocate memory
The user feedback is a trite ORA-27102. So the question is, which memory cannot be allocated? Let’s take a look at the alert log:
Tue Sep 28 08:16:05 2010
Starting ORACLE instance (normal)
****************** Huge Pages Information *****************
Huge Pages memory pool detected (total: 800 free: 800)
DFLT Huge Pages allocation successful (allocated: 512)
Huge Pages allocation failed (free: 288 required: 10432)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
NUMA Huge Pages allocation on node (1) (allocated: 3)
Huge Pages allocation failed (free: 285 required: 10368)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
Huge Pages allocation failed (free: 285 required: 10368)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
NUMA Huge Pages allocation on node (1) (allocated: 192)
NUMA Huge Pages allocation on node (1) (allocated: 64)
That is good diagnostic information. It informs us that the variable portion of the SGA was successfully allocated and backed with hugepages. It just so happens that my variable SGA component is precisely sized to 1GB. That much is simple to understand. After creating the segment for the variable SGA component Oracle moves on to create the NUMA buffer pool segments. This is a 2-socket Nehalem EP system and Oracle allocates from the Nth NUMA node and works back to node 0. In this case the first buffer pool creation attempt is for node 1 (socket 1). However, there were insufficient hugepages as indicated in the alert log. In the following example I allocated another arbitrarily insufficient number of hugepages and tried to start an instance with use_large_pages=only. This particular insufficient hugepages scenario allows us to see more interesting diagnostics:
SQL> !grep -i huge /proc/meminfo
HugePages_Total:   12000
HugePages_Free:    12000
HugePages_Rsvd:        0
Hugepagesize:       2048 kB

SQL> startup force pfile=./y.ora
ORA-27102: out of memory
Linux-x86_64 Error: 12: Cannot allocate memory
…and, the alert log:
Starting ORACLE instance (normal)
****************** Huge Pages Information *****************
Huge Pages memory pool detected (total: 12000 free: 12000)
DFLT Huge Pages allocation successful (allocated: 512)
NUMA Huge Pages allocation on node (1) (allocated: 10432)
Huge Pages allocation failed (free: 1056 required: 10368)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
Huge Pages allocation failed (free: 1056 required: 10368)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
Huge Pages allocation failed (free: 1056 required: 5184)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
NUMA Huge Pages allocation on node (0) (allocated: 704)
NUMA Huge Pages allocation on node (0) (allocated: 320)
In this example we see that 12,000 hugepages was sufficient to back the variable SGA component and only one of the NUMA buffer pools: 512 pages (variable SGA) plus 10,432 pages (node 1 buffer pool) consumed 10,944, leaving 1,056 free against the 10,368 required for the node 0 pool (remember, this is Nehalem EP with the OS boot string numa=on).
Summary
In my opinion, this is a must-set parameter if you need hugepages. With initialization parameters like use_large_pages, configuring hugepages for Oracle Database is getting a lot simpler.
Next In Series
- “[…] if you need hugepages”
- More on hugepages and NUMA
- Any pitfalls I find.
More Hugepages Articles
Link to Part II in this series: Configuring Linux Hugepages for Oracle Database Is Just Too Difficult! Isn’t It? Part – II.
Link to Part III in this series: Configuring Linux Hugepages for Oracle Database is Just Too Difficult! Isn’t It? Part – III.
And more:
Quantifying Hugepages Memory Savings with Oracle Database 11g
Little Things Doth Crabby Make – Part X. Posts About Linux Hugepages Makes Some Crabby It Seems. Also, Words About Sizing Hugepages.
Little Things Doth Crabby Make – Part IX. Sometimes You Have To Really, Really Want Your Hugepages Support For Oracle Database 11g.
Little Things Doth Crabby Make – Part VIII. Hugepage Support for Oracle Database 11g Sometimes Means Using The ipcrm Command. Ugh.
Oracle Database 11g Automatic Memory Management – Part I.
Great post, Kevin! Looking forward to follow-up.
“Since there is no way to determine whether an existing segment of shared memory is backed with hugepages, diagnostics are in short supply”
This is why I looooove AIX:
vmstat -l 5
shows me exactly how many large/huge pages are available and in use, at any given time, every 5 seconds. Piece of cake to have it running in another window while starting up Oracle, multi-instance or not.
Another one that is extremely useful is:
vmstat -p ALL 5
just in case I have 4K, 64K and 16M page size pools in the same system!
😉
Kevin
Having just recently gone through the pain of trying to get Hugepages to work correctly, I couldn’t agree more with you.
It is however one of those things where the payoff afterwards is more than outweighed by the headache of doing it.
It would be interesting to know what percentage of systems out there are actually using Hugepages though.
John
Hi John,
I’m afraid the percentage is quite low as the levels of difficulty vary as you go from most-modern software stack (Oracle + Linux) to older stacks. I’ll blog on that point soon.
Hi Kevin,
Actually it is difficult, judging by how many times I’ve seen it misconfigured. I embrace this new parameter; however, it still doesn’t protect against some scenarios that happen quite often.
At some point the Linux kernel had the great idea of not allocating hugepages that had not been touched; those remained counted as free. Unfortunately, if you try to start up a few databases in that manner, you get weird errors from Oracle once the pages actually do get allocated.
Then the Linux kernel introduced a modification that allows “reserved” pages: pages that are “claimed” but not yet used. Your true free pages are “free - reserved”. This partially solved the problem, except for the case where you start Oracle with hugepages and then later de-allocate some of them.
That still causes problems.
So ideally, I would like to have a parameter where on startup Oracle would “walk” the entire SGA in order to ensure all pages are allocated and reserved.
Chris,
isn’t PRE_PAGE_SGA meant for just that purpose? I’m not saying it is OK to run it in every production scenario, but what about, for example, Oracle with long-lived JDBC connection pools from application servers? It should also be OK for building test cases (?).
Kevin, your thoughts on PRE_PAGE_SGA + Huge Pages ?
Actually Huge/Large Pages is complicated ^H^H^H^H^H interesting topic no matter what you write on this great blog (LOCK_SGA/mlock() + PRE_PAGE_SGA + ulimit + /etc/security/limits.conf@Linux + + AIX/Linux/Solaris + ISM or DISM today? + 11.1 or you want 11.2 or perhaps 9.2? + memory_target + various OS kernel versions + some capabilities (AIX/HP-UX anyone?) + some sysctl entries + etc + now the hardest one: explaining it to the management for allowing Change Controls AKA “but we were told that Oracle is self-tuning” :))
Jakob,
Yes, PRE_PAGE_SGA will fault in and validate the pages and should take them off reserved status. If you have a lot of time on a 2S Nehalem EX box with a 512GB SGA (like I do), you can watch the boot process while the paint dries.
I love that run-on blurb ending in self-tuning. Funny!
Kevin, I’m just a little curious: what is so special about what Oracle does when touching the memory? It has always interested me. (Here’s a slightly oversimplified benchmark, just to demonstrate that Linux – 2.6.26 here – really is lying about memory; no IPC, but memory is still mapped directly into the process, and of course no hugepages here either.)
vnull@xeno:~$ grep -i free /proc/meminfo
MemFree: 1064244 kB
SwapFree: 787092 kB
HugePages_Free: 0
vnull@xeno:~$ cat t.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/mman.h>

long long int mydiff(struct timeval *tod1, struct timeval *tod2)
{
    long long t1, t2;
    t1 = tod1->tv_sec * 1000000 + tod1->tv_usec;
    t2 = tod2->tv_sec * 1000000 + tod2->tv_usec;
    return t1 - t2;
}

int main(int ac, char **av)
{
    int i, j;
    char *ptr;
    struct timeval start, end;
    long long int SZ;

    if (ac != 2) {
        printf("usage: ./t <MB>\n\n");
        exit(2);
    }
    SZ = atoi(av[1]) * 1024 * 1024;
    for (j = 1; j <= 3; j++) {
        printf("[j=%d] *****************\n", j);
        gettimeofday(&start, NULL);
        if ((ptr = (char *)malloc(SZ)) == NULL) {
            exit(1);
        }
        gettimeofday(&end, NULL);
        printf("malloc(%lld MB) took %lld usecs\n", SZ/1024/1024, mydiff(&end, &start));
        gettimeofday(&start, NULL);
        mlock(ptr, SZ);
        gettimeofday(&end, NULL);
        printf("mlock(%lld MB) took %lld usecs\n", SZ/1024/1024, mydiff(&end, &start));
        for (i = 1; i < 8; i++) {
            char c = 'A' + i;
            gettimeofday(&start, NULL);
            memset((void *)ptr, c, SZ);
            gettimeofday(&end, NULL);
            printf("[%d - char %c] memset(%lld MB) took %lld usecs\n",
                   i, c, SZ/1024/1024, mydiff(&end, &start));
        }
        gettimeofday(&start, NULL);
        free(ptr);
        gettimeofday(&end, NULL);
        printf("free(%lld MB) took %lld usecs\n", SZ/1024/1024, mydiff(&end, &start));
    }
    exit(0);
}
vnull@xeno:~$ !gcc
gcc -Wall t.c -o t
vnull@xeno:~$ ./t 1024
[j=1] *****************
malloc(1024 MB) took 36 usecs
mlock(1024 MB) took 2 usecs
[1 - char B] memset(1024 MB) took 751333 usecs
[2 - char C] memset(1024 MB) took 159287 usecs
[3 - char D] memset(1024 MB) took 159952 usecs
[4 - char E] memset(1024 MB) took 161558 usecs
[5 - char F] memset(1024 MB) took 161029 usecs
[6 - char G] memset(1024 MB) took 161020 usecs
[7 - char H] memset(1024 MB) took 160822 usecs
free(1024 MB) took 39230 usecs
[j=2] *****************
malloc(1024 MB) took 21 usecs
mlock(1024 MB) took 1 usecs
[1 - char B] memset(1024 MB) took 761451 usecs
[2 - char C] memset(1024 MB) took 160113 usecs
[3 - char D] memset(1024 MB) took 161321 usecs
[4 - char E] memset(1024 MB) took 159104 usecs
[5 - char F] memset(1024 MB) took 160305 usecs
[6 - char G] memset(1024 MB) took 159395 usecs
[7 - char H] memset(1024 MB) took 160017 usecs
free(1024 MB) took 38600 usecs
[j=3] *****************
malloc(1024 MB) took 22 usecs
mlock(1024 MB) took 1 usecs
[1 - char B] memset(1024 MB) took 762322 usecs
[2 - char C] memset(1024 MB) took 160131 usecs
[3 - char D] memset(1024 MB) took 159381 usecs
[4 - char E] memset(1024 MB) took 159975 usecs
[5 - char F] memset(1024 MB) took 159163 usecs
[6 - char G] memset(1024 MB) took 160019 usecs
[7 - char H] memset(1024 MB) took 160556 usecs
free(1024 MB) took 39562 usecs
vnull@xeno:~$
PRE_PAGE_SGA will re-read the entire SGA every time you connect to the database, not just at startup. So it's not really the same thing.
Christo,
That is not what PRE_PAGE_SGA does, unless a bug has been introduced that I can’t see in the code. Do you have evidence of this?
Those were my observations, the last time I tested in 10g.
But according to 11.2 docs (reference manual), this is still how it works:
PRE_PAGE_SGA can increase the process startup duration, because every process that starts must access every page in the SGA. The cost of this strategy is fixed; however, you might simply determine that 20,000 pages must be touched every time a process starts. This approach can be useful with some applications, but not with all applications. Overhead can be significant if your system frequently creates and destroys processes by, for example, continually logging on and logging off.
Well, Christo, nobody is perfect. Perhaps my 19yr knowledge of what PRE_PAGE is all about and my speed reading of ksm.c is leading me astray. I think this is a doc bug, but thanks for the pointer. I’ll see if I can figure out what’s up.
Christo,
You have jarred my memory. The Linux port is a bit strange with its implementation of PRE_PAGE_SGA, and PRE_PAGE_SGA is a totally port-specific ordeal. There have been bugs where PRE_PAGE_SGA caused every process to poke every page, as I blogged in the following reference. I’m now checking into whether that bug became a feature 🙂
https://kevinclosson.wordpress.com/2009/05/08/oracle-database-11g-automatic-memory-management-part-iv-dont-use-pre_page_sga-ok/
Well, that feature is not very useful, is it? 🙂 It looks like a leftover from an attempt to keep the SGA freshly touched so it would remain in memory.
Ideally, convert this to do one pass at startup on Linux, to make sure all hugepages are touched. That way I won’t have to write such a tool myself.
Note, it needs to work for ASM instance as well.
Christo,
You are partially correct about the historical purpose of PRE_PAGE_SGA. Before lockable SHM, the way to make SGA pages safe from paging out was to touch them more than once, as many Linux derivatives in the old days treated multiply-referenced SHM pages as locked without any API extension to shmget(). That was then, this is now, and this is a bug. The nature of the bug is that only the foreground that boots the instance need perform the page touching that happens when PRE_PAGE_SGA=TRUE. I’m escalating that bug this morning on the following grounds:
1. Without PRE_PAGE_SGA we cannot promote hugepages reserved to hugepages used at instance start time
2. Instances that don’t use hugepages run the risk of not being able to boot at all
SQL> startup pfile=./z.ora
SQL> ORA-00445: background process "PMON" did not start after 120 seconds
3. Every foreground pays about 10x additional connect time overhead as per my tests this morning.
As for your assertions about ASM instance relevance to this problem, I will disagree on the following grounds:
1. ASM instances function just fine with Automatic Memory Management. Leave them that way.
2. ASM instances do not need PRE_PAGE_SGA. Leave them that way.
Finally, the moral of the story (my lesson) is to revisit the code for such OSDs on occasion, and not to speed-read while doing so.
Kevin,
Thanks for the quick response to this. This will solve a lot of issues!
However, do reconsider your grounds on ASM instance. If you DO want to use hugepages for ASM (by using sga_target=sga_max_size instead of memory_target), then you need this functionality.
Otherwise, in a system where both ASM and DB are using hugepages, the DB will be protected against “stolen” hugepages, but ASM will not be, thus an imminent crash is likely, should hugepages be stolen.
Christo,
Leave ASM as an AMM instance. Please.
As for the page touching on Linux with PRE_PAGE_SGA=TRUE, watch 877032.
>Leave ASM as an AMM instance. Please.
Could you elaborate on this?
Wouldn’t it be better to have ASM covered by hugepages as well? And be guaranteed to be locked in memory and allocated from non-swappable memory?
Is there a difference for the ASM instance between:
memory_target = X
vs
sga_target= X
?
Christo,
If you have a system that is so overloaded that ASM instance processes are being selected for swap-out to clear a low-memory condition, you have a situation on your hands that cannot be mitigated with hugepages. The heap and stack of ASM instance processes are still swappable, as are the page tables (albeit the page tables are smaller when using hugepages, but no matter). You are scrutinizing the swap-proofing of only a portion of these processes.
The point is if these processes get selected for swap-out, you’re in deep trouble with or without hugepages. Why are we talking about swap risks in the year 2010? That is an important question. Don’t swap, ever. And don’t tune for swap pain mitigation, ever.
And, yes, there is a difference between AMM and ASMM. Surely you know. No?
I never really experimented with fully automatic memory management, as it doesn’t support hugepages and thus, in my opinion, is not an option for large systems.
From what I understand is that memory_target manages both SGA and PGA. Since PGA (talking PGA_AGGREGATE_TARGET) is not really something you would adjust or touch on an ASM instance, I don’t fully understand the purpose.
If you have a good article to point me to I will gladly re-evaluate my opinion.
As for your point that if I am trying to protect ASM’s SGA from swapping I am in much worse trouble to begin with: it’s quite valid. However, there’s always the possibility of a rarely used portion being swapped out. That can always be solved with LOCK_SGA, so it’s not really an argument.
Nevertheless, in my mind, treating ASM instance and Oracle instance’s SGA allocation equally makes much more sense, than having separate configs.
Plus why waste PTE cache entries, when you can avoid it? Impact would be minimal, but I see no reason not to do it.
Christo,
How can protecting the SGA pages of an ASM instance from swap out be so much more important than the stack, heap and page tables of those processes? Just don’t swap.
I totally agree. It’s not. You cannot allow a production system to swap. Not arguing about that at all.
Christo,
Hang in there. The solution to this madness is mmap(,,,MAP_HUGETLB) sans hugetlbfs! Stay tuned.
Oh! That one I like. Looks promising. Especially if all the PGA goes there as well. But until then, we’re stuck with the current setup.
I really hope the PRE_PAGE_SGA is changed in a patch release.
Christo,
The heap in 11g is mmap()ed so MAP_HUGETLB would be a very good thing. These are just my words, not those of Oracle.
Hi Kevin,
This is an interesting discussion and very educating for someone like me who does not have a whole lot of exposure to Linux.
… The fix for Oracle bugs 9195408 (unpublished) and 9931916 (published) is available in 11.2.0.2….
I am not able to see bug 9931916 on MOS. Is it visible to general public?
Thanks
Amir
Amir,
As always, nice to see you stopping by!
One more thought on PRE_PAGE_SGA
Perhaps have 3 settings
false, startup, always
With true and always been the same thing, and “startup” been default with hugepages only?
That way all cases can be covered. No need to remove something that could have some use, be it purely experimental (for example, demonstrate PTE usage for 1000 connections with and without hugepages).
Christo,
Don’t confuse this with a feature-request site 🙂
I’d highly recommend using 1GB pages if you are on a modern system. Far better than 2MB pages.
Yes, David, you are right. I’m sure Oracle will take advantage of that feature when the time is right. Thanks for stopping by.
I agree that it’s a little difficult, so I won’t configure hugepages anymore!
Hi Kevin,
You already know that we are a Solaris/SPARC shop. We are in the planning phase of migrating our large Oracle ERP systems, which currently run on large Sun/Oracle E20k servers, to Linux (RHEL and NOT OEL). On our development and QA servers, each of which is a 40-core E20k domain, we currently run around 40 databases per server, and 95% of those are ERP systems. With Solaris we have never had to worry about configuring large pages for shared memory segments; the OS automatically uses 4M pages for the SGA segments. But it seems that on Linux some planning needs to be done to achieve the same, or else there could be trouble. At this time all of our ERP systems are based on 10.2.0.4, so the 11gR2-related tips will not apply to us. Are there any general guidelines for configuring huge pages on Linux systems that will run a large number of Oracle databases?
Thanks
Amir
Hi Amir,
Thanks for stopping by, as always. Have you read my blog posts that defer to MOS notes?
https://kevinclosson.wordpress.com/kevin-closson-index/2010/03/18/you-buy-a-numa-system-oracle-says-disable-numa-what-gives-part-i/
https://kevinclosson.wordpress.com/kevin-closson-index/2009/05/14/you-buy-a-numa-system-oracle-says-disable-numa-what-gives-part-ii/
https://kevinclosson.wordpress.com/kevin-closson-index/2010/03/31/you-buy-a-numa-system-oracle-says-disable-numa-what-gives-part-iii/
You are right that Solaris large page support is much easier to deal with than Linux. However, regardless of how difficult hugepages might seem (and I’ve blogged about how the Oracle-specific diagnostics are getting better), you really need to do the research or you will lose an awful lot of memory to page tables.
Amir,
Although it looks complicated, it is quite easy if done properly. And once setup, it’s very reliable.
Solaris – you only use hugepages if they are available, i.e. if the memory is not fragmented. If your memory is fragmented, Solaris will try its best and use a mixture of various page sizes (usually 64K, 4M and 64M) for your SGA.
The one thing that makes Solaris much easier to “deal” with is ISM, which shares the page table memory, so the sizing is not such a big deal in terms of consumed memory.
CPU PTE cache misses are still possible.
Hi Kevin,
I have read your posts (mentioned above) in the past, and I just went through them again to refresh my memory. I have started doing some research as well. In one of the posts, it seems that you advised Mark Babok to disable NUMA at the HW level as well. Does that advice hold for large x86 servers too?
Chris,
Thank you for your comment as well.
Thanks
Amir
Amir,
Yes. For 2-socket Nehalem EP! SUMA is fine for EP.
I spoke on that topic at OOW 2010. I aim to do a quick recording of that presentation again soon and make it available.
Just wanted to give an example of how I determine whether the SGA is mapped to HugePages or normal Linux memory. Our SGA sizes run from 10GB to 400GB, so HugePages is a fun fact of life. The /dev/shm option in 11g just did not work well for our OLTP databases.
The following works for me on RHEL 4.5 and 5.3 with x86-64.
To determine if the SGA is in HugePages, you can use the Linux pmap on a background process and review the output for certain phrases associated to the memory allocated for the SGA.
The output of the pmap command will show the SGA as relatively large memory chunks, depending on the value of shmmax. Those mapped to HugePages appear with “(deleted)” in the object-id column, whereas those not mapped to HugePages, i.e. traditional Linux memory, will have “shmid=” in the object-id column. For SGAs using AMM, the memory will be listed as /dev/shm “files”.
Quick and dirty Linux cmd:
pmap `ps -ef | egrep 'asm_smon|ora_smon' | grep -v grep | awk '{print $2}'` | egrep 'asm_smon|ora_smon|deleted|shm'
You can compare the output with ipcs to verify that these are indeed associated to the SGA.
Using the above, we have audit scripts that run periodically to make sure that databases are in fact using HugePages completely before a system slips off the edge.
Ugh. I’m trying to configure hugepages on a server running in an 11g R2 clusterware, 10gR2 database environment. I am also using separate OS accounts for the clusterware and the databases. I CAN get hugepages to be used when interactively using srvctl or sqlplus to start the database. However, when the server is rebooted and the cluster brings up the database automatically, hugepages are not used. I have set soft and hard memlock parameters in /etc/security/limits.conf for the various OS accounts for the clusterware, database, as well as root.
Any ideas why hugepages would not be used in this scenario?
Allen,
You are probably hitting the following problem: 11gR2 Grid Infrastructure Does not Use ULIMIT Setting Appropriately [ID 983715.1]
Yup, I posted too soon. I found that within hours of this post. Modifying the script per the MOS article resolved the issue. Now if I could only get the TAF failover to work for my 10gR2 DB that is running under 11gR2 Clusterware….
Thanks for the heads up anyway!
Hi Kevin, Oracle support got back to me saying that Linux Huge pages and AMM are still incompatible with 11.2.0.2.. Do you have any comment on this?
====
As per Doc id 1134002.1, AMM is not supported with Hugepages.
The article which you have mentioned is external to Oracle, and Oracle does not support that.
The Automatic Memory Management (AMM) and HugePages are not compatible. With AMM the entire SGA memory is allocated by creating files under /dev/shm. When Oracle Database allocates SGA that way HugePages are not reserved. You must disable AMM on Oracle Database 11g to use HugePages.
Kindly refer Doc id 1134002.1 for more information.
===
Jay,
I’m not sure what your comment aims to do. Is this a recap of all my writings on the matter? The whole point of my posts on AMM is that AMM != hugepages, because it is plain mmap() and not mmap() with MAP_HUGETLB, nor mmap() on hugetlbfs.
Kevin,
No it was not a recap. I came across your article from the following link which had a reference to your blog about AMM and Huge pages support with 11.2.0.2..
http://abiliusta.blogspot.com/2011/04/huge-pages-and-amm-is-possible-and.html
I was trying to follow up in this respect.
Hi Jay,
OK, well, my only comment about AMM is don’t use it… unless, perhaps, you are thinking of a very low-end SE or SE1 setup.