Configuring Linux Hugepages for Oracle Database Is Just Too Difficult! Isn’t It? Part – I.

Allocating hugepages for Oracle Database on Linux can be tricky. The following is a short list of some of the common problems associated with faulty attempts to get things properly configured:

  1. Insufficient Hugepages. You can be short just a single 2MB hugepage at instance startup and Oracle will silently fall back to no hugepages. For instance, if an instance needs 10,000 hugepages but only 9,999 are available at startup, Oracle will create non-hugepages IPC shared memory and the 9,999 (x 2MB) is just wasted memory.
    1. Insufficient hugepages is an even more difficult situation when booting with _enable_NUMA_support=TRUE, as partial hugepages backing is possible.
  2. Improper Permissions. Both the limits.conf(5) memlock setting and the shell ulimit -l must accommodate the desired amount of locked memory.
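Both failure modes can be checked before instance startup. The following is a minimal sketch, assuming 2MB hugepages; SGA_BYTES is an illustrative 44GB value you must adjust for your instance:

```shell
#!/bin/sh
# Pre-flight sketch for the two problems above. Assumes 2MB hugepages.
# SGA_BYTES is illustrative only; set it to your instance's SGA size.
SGA_BYTES=$((44 * 1024 * 1024 * 1024))

HPAGE_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
FREE=$(awk '/HugePages_Free/ {print $2}' /proc/meminfo)

# Item 1: pages needed, rounded up. Even one page short at startup
# means a silent fallback to small pages.
NEED=$(( (SGA_BYTES + HPAGE_KB * 1024 - 1) / (HPAGE_KB * 1024) ))
echo "hugepages needed: $NEED, currently free: $FREE"

# Item 2: memlock (ulimit -l reports KB) must cover the locked SGA.
LIMIT=$(ulimit -l)
if [ "$LIMIT" != "unlimited" ] && [ "$LIMIT" -lt $((SGA_BYTES / 1024)) ]; then
    echo "memlock limit ($LIMIT KB) is too small for this SGA"
fi
```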

In general, list item 1 above has historically been the most difficult to deal with, especially on systems hosting several instances of Oracle. Since there is no way to determine whether an existing segment of shared memory is backed with hugepages, diagnostics are in short supply.

Oracle Database 11g Release 2 (11.2.0.2)

The fix for Oracle bugs 9195408 (unpublished) and 9931916 (published) is available in 11.2.0.2. In a sort of fast forward to the past, the Linux port now supports an initialization parameter to force the instance to use hugepages for all segments or fail to boot. I recall initialization parameters on Unix ports back in the early 1990s that did just that. The initialization parameter is called use_large_pages and setting it to “only” results in the all-or-none scenario. This, by the way, addresses list item 1.1 above. That is, setting use_large_pages=only ensures an instance will not have some NUMA segments backed with hugepages and others without.

Consider the following example. Here we see that use_large_pages is set to “only” and yet the system has only a very small number of hugepages allocated (800 == ~1.6GB). First I’ll boot the instance using an init.ora file that does not force hugepages and then move on to using the one that does. Note, this is 11.2.0.2.

$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Sep 28 08:10:36 2010

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to an idle instance.

SQL>
SQL> !grep -i huge /proc/meminfo
HugePages_Total:   800
HugePages_Free:    800
HugePages_Rsvd:      0
Hugepagesize:     2048 kB
SQL>
SQL> !grep large_pages y.ora x.ora
use_large_pages=only
SQL>
SQL> startup force pfile=./x.ora
ORACLE instance started.

Total System Global Area 4.4363E+10 bytes
Fixed Size                  2242440 bytes
Variable Size            1406199928 bytes
Database Buffers         4.2950E+10 bytes
Redo Buffers                4427776 bytes
Database mounted.
Database opened.
SQL> HOST date
Tue Sep 28 08:13:23 PDT 2010

SQL>  startup force pfile=./y.ora
ORA-27102: out of memory
Linux-x86_64 Error: 12: Cannot allocate memory

The user feedback is a trite ORA-27102. So the question is, which memory cannot be allocated? Let’s take a look at the alert log:

Tue Sep 28 08:16:05 2010
Starting ORACLE instance (normal)
****************** Huge Pages Information *****************
Huge Pages memory pool detected (total: 800 free: 800)
DFLT Huge Pages allocation successful (allocated: 512)
Huge Pages allocation failed (free: 288 required: 10432)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
NUMA Huge Pages allocation on node (1) (allocated: 3)
Huge Pages allocation failed (free: 285 required: 10368)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
Huge Pages allocation failed (free: 285 required: 10368)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
NUMA Huge Pages allocation on node (1) (allocated: 192)
NUMA Huge Pages allocation on node (1) (allocated: 64)

That is good diagnostic information. It informs us that the variable portion of the SGA was successfully allocated and backed with hugepages. It just so happens that my variable SGA component is precisely sized to 1GB (512 x 2MB). That much is simple to understand. After creating the segment for the variable SGA component, Oracle moves on to create the NUMA buffer pool segments. This is a 2-socket Nehalem EP system and Oracle allocates from the Nth NUMA node and works back to node 0. In this case the first buffer pool creation attempt is for node 1 (socket 1). However, there were insufficient hugepages, as indicated in the alert log.

In the following example I allocated another arbitrarily insufficient number of hugepages and tried to start an instance with use_large_pages=only. This particular insufficient-hugepages scenario allows us to see more interesting diagnostics:

SQL>  !grep -i huge /proc/meminfo
HugePages_Total: 12000
HugePages_Free:  12000
HugePages_Rsvd:      0
Hugepagesize:     2048 kB

SQL> startup force pfile=./y.ora
ORA-27102: out of memory
Linux-x86_64 Error: 12: Cannot allocate memory

…and, the alert log:

Starting ORACLE instance (normal)
****************** Huge Pages Information *****************
Huge Pages memory pool detected (total: 12000 free: 12000)
DFLT Huge Pages allocation successful (allocated: 512)
NUMA Huge Pages allocation on node (1) (allocated: 10432)
Huge Pages allocation failed (free: 1056 required: 10368)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
Huge Pages allocation failed (free: 1056 required: 10368)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
Huge Pages allocation failed (free: 1056 required: 5184)
Startup will fail as use_large_pages is set to "ONLY"
******************************************************
NUMA Huge Pages allocation on node (0) (allocated: 704)
NUMA Huge Pages allocation on node (0) (allocated: 320)

In this example we see that 12,000 hugepages were sufficient to back the variable SGA component and only one of the NUMA buffer pools (remember, this is Nehalem EP with the OS boot string numa=on).
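The page counts in these alert log excerpts can be cross-checked with quick arithmetic; the counts below are taken from the logs above, and each 2MB page contributes 2MB:

```shell
#!/bin/sh
# Cross-check the alert log page counts above (2MB hugepages).
echo "DFLT segment: $((512 * 2)) MB"        # 512 pages = the 1GB variable SGA
echo "per-node pool: $((10432 * 2)) MB"     # 10432 pages = one ~20GB NUMA buffer pool
```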

Summary

In my opinion, this is a must-set parameter if you need hugepages. With initialization parameters like use_large_pages, configuring hugepages for Oracle Database is getting a lot simpler.
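If you do need hugepages, one rough way to size them is to sum the IPC shared memory segments of all running instances and convert bytes to pages. The sketch below assumes 2MB pages and that every segment reported by ipcs -m should be hugepage-backed, which may overcount on a mixed system; treat the result as a starting point, not a final answer:

```shell
#!/bin/sh
# Sketch: estimate vm.nr_hugepages from existing shared memory segments.
# Run while all instances are up. Rounds each segment up to whole pages.
HPAGE_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
PAGES=0
for BYTES in $(ipcs -m | awk '$5 ~ /^[0-9]+$/ {print $5}'); do
    PAGES=$(( PAGES + (BYTES + HPAGE_KB * 1024 - 1) / (HPAGE_KB * 1024) ))
done
echo "vm.nr_hugepages = $PAGES"
```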

Next In Series

  1. “[…] if you need hugepages”
  2. More on hugepages and NUMA
  3. Any pitfalls I find.

More Hugepages Articles

Link to Part II in this series: Configuring Linux Hugepages for Oracle Database Is Just Too Difficult! Isn’t It? Part – II.
Link to Part III in this series: Configuring Linux Hugepages for Oracle Database is Just Too Difficult! Isn’t It? Part – III.
And more:

  1. Quantifying Hugepages Memory Savings with Oracle Database 11g
  2. Little Things Doth Crabby Make – Part X. Posts About Linux Hugepages Makes Some Crabby It Seems. Also, Words About Sizing Hugepages.
  3. Little Things Doth Crabby Make – Part IX. Sometimes You Have To Really, Really Want Your Hugepages Support For Oracle Database 11g.
  4. Little Things Doth Crabby Make – Part VIII. Hugepage Support for Oracle Database 11g Sometimes Means Using The ipcrm Command. Ugh.
  5. Oracle Database 11g Automatic Memory Management – Part I.

49 Responses to “Configuring Linux Hugepages for Oracle Database Is Just Too Difficult! Isn’t It? Part – I.”


  1. 1 Noons September 29, 2010 at 6:38 am

    Great post, Kevin! Looking forward to follow-up.

    “Since there is no way to determine whether an existing segment of shared memory is backed with hugepages, diagnostics are in short supply”

    This is why I looooove AIX:
    vmstat -l 5
    shows me exactly how many large/huge pages are available and in use, at any given time, every 5 seconds. Piece of cake to have it running in another window while starting up Oracle, multi-instance or not.
    Another one that is extremely useful is:
    vmstat -p ALL 5
    just in case I have 4K, 64K and 16M page size pools in the same system!
    😉

  2. 2 John Scott September 29, 2010 at 10:55 am

    Kevin

    Having just recently gone through the pain of trying to get Hugepages to work correctly, I couldn’t agree more with you.

    It is however one of those things where the payoff afterwards is more than outweighed by the headache of doing it.

    It would be interesting to know what percentage of systems out there are actually using Hugepages though.

    John

    • 3 kevinclosson September 29, 2010 at 2:46 pm

      Hi John,

      I’m afraid the percentage is quite low as the levels of difficulty vary as you go from most-modern software stack (Oracle + Linux) to older stacks. I’ll blog on that point soon.

  3. 4 Christo Kutrovsky September 29, 2010 at 12:50 pm

    Hi Kevin,

    Actually it is difficult, judging by how many times I’ve seen it misconfigured. I embrace this new parameter; however, it still doesn’t protect against some scenarios that happen quite often.

    At some point the Linux kernel had the great idea of not allocating hugepages that have not been touched; those remained free. Unfortunately, if you tried to start up a few databases in that manner, once those pages got allocated you would get weird errors from Oracle.

    Then the Linux kernel introduced a modification that allowed for “reserved” pages. These are pages that are “claimed” but not yet used. Your true free pages are “free - reserved”. This partially solved the problem, except for the case where you start Oracle with hugepages and then later de-allocate some of them.

    That still causes problems.

    So ideally, I would like to have a parameter where on startup Oracle would “walk” the entire SGA in order to ensure all pages are allocated and reserved.

    • 5 Jakub Wartak September 29, 2010 at 6:10 pm

      Chris,

      isn’t the PRE_PAGE_SGA just for that purpose? I’m not saying it is OK to run it on every production scenario, but for example Oracle with long-lived JDBC connection pools from application servers? … also for building test-cases it should be OK too (?).

      Kevin, your thoughts on PRE_PAGE_SGA + Huge Pages ?

      Actually Huge/Large Pages is a complicated ^H^H^H^H^H interesting topic no matter what you write on this great blog (LOCK_SGA/mlock() + PRE_PAGE_SGA + ulimit + /etc/security/limits.conf@Linux + AIX/Linux/Solaris + ISM or DISM today? + 11.1 or you want 11.2 or perhaps 9.2? + memory_target + various OS kernel versions + some capabilities (AIX/HP-UX anyone?) + some sysctl entries + etc + now the hardest one: explaining it to the management for allowing Change Controls AKA “but we were told that Oracle is self-tuning” :))

      • 6 kevinclosson September 29, 2010 at 6:48 pm

        Jakob,

        Yes, PRE_PAGE_SGA will fault in and validate the pages and should take them off reserved status. If you have a lot of time on a 2S Nehalem EX box with a 512GB SGA (like I do), you can watch the boot process while the paint dries.

        I love that run-on blurb ending in self-tuning. Funny!

      • 7 Jakub Wartak September 29, 2010 at 8:10 pm

        Kevin, I’m just a little curious: what is so special about what Oracle does when touching the memory? It has always interested me. (A little oversimplified benchmark, just to demonstrate that Linux, 2.6.26 here, really is lying about memory. No IPC, but the memory is still mapped directly into the process; of course no hugepages here either.)

        vnull@xeno:~$ grep -i free /proc/meminfo
        MemFree: 1064244 kB
        SwapFree: 787092 kB
        HugePages_Free: 0
        vnull@xeno:~$ cat t.c
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/time.h>
        #include <sys/mman.h>

        long long int mydiff(struct timeval *tod1, struct timeval *tod2)
        {
            long long t1, t2;
            t1 = tod1->tv_sec * 1000000 + tod1->tv_usec;
            t2 = tod2->tv_sec * 1000000 + tod2->tv_usec;
            return t1 - t2;
        }

        int main(int ac, char **av)
        {
            int i, j;
            char *ptr;
            struct timeval start, end;
            long long int SZ;

            if (ac != 2) {
                printf("usage: ./t <MB>\n\n");
                exit(2);
            }
            SZ = (long long)atoi(av[1]) * 1024 * 1024;

            for (j = 1; j <= 3; j++) {
                printf("[j=%d] *****************\n", j);
                gettimeofday(&start, NULL);
                if ((ptr = (char *)malloc(SZ)) == NULL) {
                    exit(1);
                }
                gettimeofday(&end, NULL);
                printf("malloc(%lld MB) took %lld usecs\n", SZ/1024/1024, mydiff(&end, &start));

                gettimeofday(&start, NULL);
                mlock(ptr, SZ);
                gettimeofday(&end, NULL);
                printf("mlock(%lld MB) took %lld usecs\n", SZ/1024/1024, mydiff(&end, &start));

                for (i = 1; i < 8; i++) {
                    char c = 'A' + i;
                    gettimeofday(&start, NULL);
                    memset((void *)ptr, c, SZ);
                    gettimeofday(&end, NULL);
                    printf("[%d - char %c] memset(%lld MB) took %lld usecs\n",
                        i, c, SZ/1024/1024, mydiff(&end, &start));
                }

                gettimeofday(&start, NULL);
                free(ptr);
                gettimeofday(&end, NULL);
                printf("free(%lld MB) took %lld usecs\n", SZ/1024/1024, mydiff(&end, &start));
            }

            exit(0);
        }

        vnull@xeno:~$ !gcc
        gcc -Wall t.c -o t
        vnull@xeno:~$ ./t 1024
        [j=1] *****************
        malloc(1024 MB) took 36 usecs
        mlock(1024 MB) took 2 usecs
        [1 - char B] memset(1024 MB) took 751333 usecs
        [2 - char C] memset(1024 MB) took 159287 usecs
        [3 - char D] memset(1024 MB) took 159952 usecs
        [4 - char E] memset(1024 MB) took 161558 usecs
        [5 - char F] memset(1024 MB) took 161029 usecs
        [6 - char G] memset(1024 MB) took 161020 usecs
        [7 - char H] memset(1024 MB) took 160822 usecs
        free(1024 MB) took 39230 usecs
        [j=2] *****************
        malloc(1024 MB) took 21 usecs
        mlock(1024 MB) took 1 usecs
        [1 - char B] memset(1024 MB) took 761451 usecs
        [2 - char C] memset(1024 MB) took 160113 usecs
        [3 - char D] memset(1024 MB) took 161321 usecs
        [4 - char E] memset(1024 MB) took 159104 usecs
        [5 - char F] memset(1024 MB) took 160305 usecs
        [6 - char G] memset(1024 MB) took 159395 usecs
        [7 - char H] memset(1024 MB) took 160017 usecs
        free(1024 MB) took 38600 usecs
        [j=3] *****************
        malloc(1024 MB) took 22 usecs
        mlock(1024 MB) took 1 usecs
        [1 - char B] memset(1024 MB) took 762322 usecs
        [2 - char C] memset(1024 MB) took 160131 usecs
        [3 - char D] memset(1024 MB) took 159381 usecs
        [4 - char E] memset(1024 MB) took 159975 usecs
        [5 - char F] memset(1024 MB) took 159163 usecs
        [6 - char G] memset(1024 MB) took 160019 usecs
        [7 - char H] memset(1024 MB) took 160556 usecs
        free(1024 MB) took 39562 usecs
        vnull@xeno:~$

  4. 8 Christo Kutrovsky September 29, 2010 at 10:20 pm

    PRE_PAGE_SGA will re-read the entire SGA every time you connect to the database, not just at startup. So not really the same.

    • 9 kevinclosson September 29, 2010 at 10:56 pm

      Christo,

      That is not what PRE_PAGE_SGA does, Christo, unless a bug has been introduced that I can’t see in the code. Do you have evidence of this?

      • 10 Christo Kutrovsky September 29, 2010 at 11:08 pm

        Those were my observations, last I tested in 10g.

        But according to 11.2 docs (reference manual), this is still how it works:

        PRE_PAGE_SGA can increase the process startup duration, because every process that starts must access every page in the SGA. The cost of this strategy is fixed; however, you might simply determine that 20,000 pages must be touched every time a process starts. This approach can be useful with some applications, but not with all applications. Overhead can be significant if your system frequently creates and destroys processes by, for example, continually logging on and logging off.

  5. 13 Christo Kutrovsky September 30, 2010 at 2:41 pm

    Well, that feature is not very useful, is it? 🙂 It looks like a leftover from an attempt to keep the SGA touched so that it remains in memory.

    Ideally, convert this to one pass on Linux, to make sure all hugepages are touched. That way I won’t have to write such a tool myself.
    Note, it needs to work for the ASM instance as well.

    • 14 kevinclosson September 30, 2010 at 4:18 pm

      Christo,

      You are partially correct about the historical purpose of PRE_PAGE_SGA. Before lockable SHM, the way to make SGA pages safe from paging out was to touch them more than once, as many Linux derivations in the old days treated multiply-referenced SHM pages as locked without any API extension to shmget(). That was then, this is now, and this is a bug. The nature of the bug is that only the foreground that boots the instance need perform the page touching that happens when PRE_PAGE_SGA=TRUE. I’m escalating that bug this morning on the following grounds:

      1. Without PRE_PAGE_SGA we cannot promote hugepages reserved to hugepages used at instance start time
      2. Instances that don’t use hugepages run the risk of not being able to boot at all
      SQL> startup pfile=./z.ora
      SQL> ORA-00445: background process "PMON" did not start after 120 seconds
      3. Every foreground pays about 10x additional connect time overhead as per my tests this morning.

      As for your assertions about ASM instance relevance to this problem, I will disagree on the following grounds:

      1. ASM instances function just fine with Automatic Memory Management. Leave them that way.
      2. ASM instances do not need PRE_PAGE_SGA. Leave them that way.

      Finally, the moral of the story (my lesson) is to revisit the code for such OSDs on occasion and not to speed-read while doing so.

  6. 15 Christo Kutrovsky September 30, 2010 at 4:39 pm

    Kevin,

    Thanks for the quick response to this. This will solve a lot of issues!

    However, do reconsider your grounds on ASM instance. If you DO want to use hugepages for ASM (by using sga_target=sga_max_size instead of memory_target), then you need this functionality.

    Otherwise, in a system where both ASM and DB are using hugepages, the DB will be protected against “stolen” hugepages, but ASM will not be, thus an imminent crash is likely, should hugepages be stolen.

    • 16 kevinclosson September 30, 2010 at 4:43 pm

      Christo,

      Leave ASM as an AMM instance. Please.

      As for the page touching on Linux with PRE_PAGE_SGA=TRUE, watch 877032.

      • 17 Christo Kutrovsky September 30, 2010 at 4:46 pm

        >Leave ASM as an AMM instance. Please.

        Could you elaborate on this?

        Wouldn’t it be better to have ASM covered by hugepages as well? And be guaranteed to be locked in memory and allocated from non-swappable memory?

        Is there a difference for the ASM instance between:
        memory_target = X
        vs
        sga_target= X

        ?

        • 18 kevinclosson September 30, 2010 at 6:21 pm

          Christo,

          If you have a system that is so overloaded that ASM instance processes are being selected for swap-out to clear a low-memory condition, you have a situation on your hands that cannot be mitigated with hugepages. The heap and stack of ASM instance processes are still swappable, as are the page tables (albeit the page tables are smaller when using hugepages, but no matter). You are scrutinizing the swap-proofing of only a portion of these processes.

          The point is if these processes get selected for swap-out, you’re in deep trouble with or without hugepages. Why are we talking about swap risks in the year 2010? That is an important question. Don’t swap, ever. And don’t tune for swap pain mitigation, ever.

          And, yes, there is a difference between AMM and ASMM. Surely you know. No?

  7. 19 Christo Kutrovsky September 30, 2010 at 9:41 pm

    I never really experimented with fully automatic memory management as it doesn’t support hugepages and thus, in my opinion, is not an option for large systems.

    From what I understand, memory_target manages both SGA and PGA. Since PGA (talking PGA_AGGREGATE_TARGET) is not really something you would adjust or touch on an ASM instance, I don’t fully understand the purpose.

    If you have a good article to point me to I will gladly re-evaluate my opinion.

    As per your point that if I am trying to protect ASM’s SGA from swapping I am in much more trouble to begin with, it’s quite valid. However there’s always the possibility of a rarely used portion to be swapped out. That can always be solved with LOCK_SGA so it’s not really an argument.

    Nevertheless, in my mind, treating ASM instance and Oracle instance’s SGA allocation equally makes much more sense, than having separate configs.

    Plus why waste PTE cache entries, when you can avoid it? Impact would be minimal, but I see no reason not to do it.
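Christo’s page-table point can be quantified with back-of-the-envelope arithmetic. Assuming 8 bytes per page table entry and the ~44GB SGA from the post (both assumptions, and real kernel overhead varies), the per-process cost of mapping the SGA looks roughly like this:

```shell
#!/bin/sh
# Rough per-process page table cost of mapping a 44GB SGA,
# assuming 8-byte PTEs (a simplification).
SGA=$((44 * 1024 * 1024 * 1024))
echo "4KB pages: $((SGA / 4096 * 8 / 1024 / 1024)) MB of PTEs per process"
echo "2MB pages: $((SGA / 2097152 * 8 / 1024)) KB of PTEs per process"
```

That works out to roughly 88MB of page tables per attached process with 4KB pages versus about 176KB with 2MB pages, which is why hugepage-backed SGAs save so much memory on systems with many dedicated connections.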

  8. 25 Amir Hameed September 30, 2010 at 11:35 pm

    Hi Kevin,
    This is an interesting discussion and very educating for someone like me who does not have a whole lot of exposure to Linux.

    … The fix for Oracle bugs 9195408 (unpublished) and 9931916 (published) is available in 11.2.0.2….

    I am not able to see bug 9931916 on MOS. Is it visible to general public?

    Thanks
    Amir

  9. 27 Christo Kutrovsky October 1, 2010 at 1:47 pm

    One more thought on PRE_PAGE_SGA

    Perhaps have 3 settings

    false, startup, always

    With “true” and “always” being the same thing, and “startup” being the default with hugepages only?

    That way all cases can be covered. No need to remove something that could have some use, be it purely experimental (for example, demonstrate PTE usage for 1000 connections with and without hugepages).

  10. 29 David Kanter October 1, 2010 at 6:55 pm

    I’d highly recommend using 1GB pages if you are on a modern system. Far better than 2MB pages.

  11. 31 Maclean October 2, 2010 at 4:15 pm

    I agree that it’s a little difficult, so I won’t configure hugepages anymore!

  12. 32 Amir Hameed October 4, 2010 at 7:27 pm

    Hi Kevin,
    You already know that we are a Solaris/SPARC shop. We are in the planning phase of migrating our large Oracle ERP systems that currently run on the large Sun/Oracle E20k servers to Linux (RHL and NOT OEL). On our development and QA servers, each of which is a 40-core E20k domain, we currently run around 40 databases per server and 95% of those are ERP systems. With Solaris, we have never had to worry about configuring large pages for shared memory segments; the OS automatically uses 4M pages for the SGA segments. But it seems that for Linux, some planning needs to be done to achieve the same, or else there could be trouble. At this time, all of our ERP systems are based on 10.2.0.4, so the 11gR2-related tips will not apply to us. Are there any general guidelines for Linux on what to consider when configuring huge pages on systems that will run a large number of Oracle databases?

    Thanks
    Amir

  13. 35 Amir Hameed October 6, 2010 at 1:29 am

    Hi Kevin,
    I have read your posts (mentioned above) in the past and I just went through them again to refresh my memory. I have started doing some research as well. In one of the posts, it seems that you advised Mark Babok to disable NUMA at the HW level as well. Does that advice hold for large x86 servers too?

    Chris,
    Thank you for your comment as well.

    Thanks
    Amir

    • 36 kevinclosson October 6, 2010 at 3:57 am

      Amir,

      Yes. For 2-socket Nehalem EP! SUMA is fine for EP.

      I spoke on that topic at OOW 2010. I aim to do a quick recording of that presentation again soon and make it available.

  14. 37 Robert December 14, 2010 at 3:12 pm

    Just wanted to give an example of how I determine if the SGA is mapped to HugePages or normal Linux memory. Our SGA sizes run from 10 GB to 400 GB, so HugePages is a fun fact of life. The /dev/shm option in 11g just did not work well for our OLTP databases.

    The following works for me on RHEL 4.5 and 5.3 with x86-64.

    To determine if the SGA is in HugePages, you can use the Linux pmap on a background process and review the output for certain phrases associated to the memory allocated for the SGA.

    The output of the pmap command will show the SGA as relatively large memory chunks, depending on the value of shmmax. Those mapped to HugePages appear to have “(deleted)” in the object ID column, whereas those not mapped to HugePages, i.e. traditional Linux memory, will have “shmid=” in the object ID column. For those SGAs using AMM, the memory will be listed as /dev/shm “files”.

    Quick and dirty Linux cmd:

    pmap `ps -ef | egrep 'asm_smon|ora_smon' | grep -v grep | awk '{print $2}'` | egrep 'asm_smon|ora_smon|deleted|shm'

    You can compare the output with ipcs to verify that these are indeed associated to the SGA.

    Using the above, we have audit scripts that run periodically to make sure that databases are in fact using HugePages completely before a system slips off the edge.

  15. 38 Allen January 20, 2011 at 9:15 pm

    Ugh. I’m trying to configure hugepages on a server running in an 11g R2 clusterware, 10gR2 database environment. I am also using separate OS accounts for the clusterware and the databases. I CAN get hugepages to be used when interactively using srvctl or sqlplus to start the database. However, when the server is rebooted and the cluster brings up the database automatically, hugepages are not used. I have set soft and hard memlock parameters in /etc/security/limits.conf for the various OS accounts for the clusterware, database, as well as root.

    Any ideas why hugepages would not be used in this scenario?

  16. 39 Freek January 24, 2011 at 10:47 am

    Allen,

    You are probably hitting the following problem: 11gR2 Grid Infrastructure Does not Use ULIMIT Setting Appropriately [ID 983715.1]

  17. 40 Allen January 24, 2011 at 9:37 pm

    Yup, I posted too soon. I found that within hours of this post. Modifying the script per the MOS article resolved the issue. Now if I could only get the TAF failover to work for my 10gR2 DB that is running under 11gR2 Clusterware….

    Thanks for the heads up anyway!

  18. 41 Jay June 22, 2011 at 5:21 pm

    Hi Kevin, Oracle support got back to me saying that Linux hugepages and AMM are still incompatible with 11.2.0.2. Do you have any comment on this?
    ====
    As per Doc id 1134002.1, AMM is not supported with Hugepages.

    The article which you have mentioned is external to Oracle, and Oracle does not support that.

    The Automatic Memory Management (AMM) and HugePages are not compatible. With AMM the entire SGA memory is allocated by creating files under /dev/shm. When Oracle Database allocates SGA that way HugePages are not reserved. You must disable AMM on Oracle Database 11g to use HugePages.

    Kindly refer Doc id 1134002.1 for more information.

    ===

    • 42 kevinclosson June 22, 2011 at 5:35 pm

      Jay,

      I’m not sure what your comment aims to do. Is this a recap of all my writings on the matter? The whole point of my posts on AMM is that AMM != hugepages, because it is plain mmap() rather than mmap() with MAP_HUGETLB or mmaps on hugetlbfs.

  19. 43 Jay June 22, 2011 at 5:41 pm

    Kevin,
    No it was not a recap. I came across your article from the following link which had a reference to your blog about AMM and Huge pages support with 11.2.0.2..
    http://abiliusta.blogspot.com/2011/04/huge-pages-and-amm-is-possible-and.html
    I was trying to follow up in this respect.



DISCLAIMER

I work for Amazon Web Services. The opinions I share in this blog are my own. I'm *not* communicating as a spokesperson for Amazon. In other words, I work at Amazon, but this is my own opinion.

Copyright

All content is © Kevin Closson and "Kevin Closson's Blog: Platforms, Databases, and Storage", 2006-2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Kevin Closson and Kevin Closson's Blog: Platforms, Databases, and Storage with appropriate and specific direction to the original content.
