Sun Oracle Database Machine Cache Hierarchies and Capacities – Part 0.I

BLOG CORRECTION: Well, nobody is perfect. I need to point out that I must be too Exadata-minded these days. Exadata Smart Scan returns uncompressed data when performing a Smart Scan of a table stored in Hybrid Columnar Compression form. However, it was short-sighted of me to state categorically that the cited 400 GB of DRAM available as cache in the Sun Oracle Database Machine can only be used for uncompressed data. It turns out that the model the company has in mind for this cache is to buffer data not returned by Smart Scan but instead returned in simple block form and cached in the block buffer pool of the SGA on each of the 8 database servers. So, I was both right and wrong. The Sun Oracle Database Machine is a feature-rich product and I was too Exadata-centric with the information in this post. I have been properly spanked and I’m as contrite as I can possibly be. The original post follows:

…goofy title, I know…but hey, the Roman numeral system had no zero. This is a pre-“Deep Dive Series” post, if you will.

I see my colleague Jean-Pierre Dijcks has a blog entry covering the Sun Oracle Database Machine features. It is a good overview but I need to point out a minor correction.

The piece suggests there is 400 GB of aggregate DRAM cache in the database grid. There is indeed 576 GB of aggregate DRAM available across the 8 database hosts, so there should be no problem dedicating 70% of that to caching by the new Oracle Database 11g Release 2 parallel query caching feature. That particular feature came into play in the recent 1 TB scale record TPC-H result that I blogged about here. The nit I pick with the post is how it cites 400 GB raw DRAM capacity and up to 4 TB user data DRAM capacity (presuming the commonly achievable Hybrid Columnar Compression ratio of roughly 10 to 1).

Unfortunately, I haven’t had the time to produce one of my typical “Deep Dive Series” webcasts covering such matters as data flow and plumbing in the Sun Oracle Database Machine (but I’ll get to it). In the meantime I need to point out that data flows from the intelligent storage grid into the database grid in uncompressed form when Oracle Exadata Storage Server cells are scanning (Smart Scan) Hybrid Columnar Compression tables. So, the DRAM cache capacity of the database grid is an aggregate 400 GB. *Note: Please see the blog correction above regarding how this DRAM cache is populated to achieve the advertised, effective 10 TB capacity.
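For anyone who wants to check the arithmetic behind those DRAM figures, here is a quick sketch. The even 72 GB-per-host split is my assumption; the post only gives the 576 GB aggregate and the 70% caching share:

```python
# Sanity-check the DRAM cache arithmetic cited in the post.
DB_HOSTS = 8
DRAM_PER_HOST_GB = 72      # assumption: 576 GB aggregate split evenly across 8 hosts
CACHE_FRACTION = 0.70      # share usable by the 11gR2 parallel query caching feature
HCC_RATIO = 10             # commonly achievable Hybrid Columnar Compression ratio

aggregate_dram_gb = DB_HOSTS * DRAM_PER_HOST_GB          # 576 GB aggregate
cache_gb = aggregate_dram_gb * CACHE_FRACTION            # ~403 GB, cited as 400 GB
effective_user_data_tb = cache_gb * HCC_RATIO / 1000     # ~4 TB of user data at 10:1

print(aggregate_dram_gb, round(cache_gb), round(effective_user_data_tb, 1))
```

The ~403 GB result is where the rounded "400 GB" figure comes from, and multiplying by the 10:1 compression ratio yields the "up to 4 TB user data" number.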

Now, having said that, the Exadata Smart FLASH Cache does indeed cache data in its Hybrid Columnar Compression form. So the effective cache capacity in that tier is 50 TB, presuming a 10 to 1 compression ratio. Data flies off of FLASH at an aggregate rate of 50 GB/s. Thank heavens there are 16 Xeon 5500 (Nehalem) processor threads in each cell to uncompress the data and perform filtration and column compression.

By the way, the 50 GB/s is actually a safe, conservative number as it represents roughly 900 MB/s per FLASH card, each of which has a dedicated 1 GB/s lane into memory.
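Both flash-tier figures fall out of the same per-card arithmetic. A rough sketch follows; the full-rack configuration of 14 storage cells with four 96 GB flash cards per cell is my assumption, not something stated above (only the ~900 MB/s per-card rate is from the post):

```python
# Sanity-check the flash-tier numbers: effective capacity and aggregate scan rate.
# Assumptions (not stated in the post): 14 storage cells per full rack,
# 4 flash cards per cell, 96 GB per card.
CELLS = 14
CARDS_PER_CELL = 4
GB_PER_CARD = 96
CARD_SCAN_MB_S = 900       # ~900 MB/s scan rate per card (from the post)
HCC_RATIO = 10             # assumed Hybrid Columnar Compression ratio

cards = CELLS * CARDS_PER_CELL                       # 56 cards in the storage grid
raw_flash_tb = cards * GB_PER_CARD / 1000            # ~5.4 TB of raw flash
effective_tb = raw_flash_tb * HCC_RATIO              # ~54 TB; "50 TB" is conservative
aggregate_gb_s = cards * CARD_SCAN_MB_S / 1000       # ~50.4 GB/s aggregate scan rate

print(cards, round(raw_flash_tb, 1), round(effective_tb), round(aggregate_gb_s, 1))
```

Under those assumptions the aggregate comes out to about 50.4 GB/s, which is why quoting 50 GB/s leaves headroom, and each card's 900 MB/s stays comfortably under its dedicated 1 GB/s lane.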

10 Responses to “Sun Oracle Database Machine Cache Hierarchies and Capacities – Part 0.I”

  1. Christo Kutrovsky September 24, 2009 at 7:35 pm

    So many numbers, so many speeds, compressed, uncompressed and etc.

    A nice one page graphic with some tabular data will prevent any confusion.

  2. accidentalSQL September 24, 2009 at 10:37 pm

    “Thank heavens there are 16 Xeon 5500 (Nehalem) processor threads in each cell to uncompress the data and perform filtration and column compression.”

    Cool, sounds like hyper-threading is enabled and is providing a benefit.

  3. salem September 25, 2009 at 7:25 am

    that is 400GB SGA and not PGA, no?


    • kevinclosson September 25, 2009 at 2:27 pm

      Hi Ghassan,

      Yeah, actually I just pulled mention of which virtual address space region does the caching because that is sort of irrelevant. The point of the thread is DRAM cache capacity; whether it is a shared or a private cache doesn’t matter when talking about capacity. Good point though.

  4. George September 28, 2009 at 6:53 pm

    Sitting back, thinking back just under a year, it was mind blowing back then the performance we got out of a single cabinet, reading and then rereading these numbers related to the new machine, as was said on the launch, this is ludicrous.

    I can’t imagine that anyone would even have been thinking of these performance numbers a while back.


  5. Yavor October 2, 2009 at 5:55 am

    You say “Exadata Smart Scan returns uncompressed data when performing a Smart Scan of a table stored in Hybrid Columnar Compression form”. However, I still cannot find any specific information what happens when we have predicate offloading to the storage cell. My experiments (on Exadata 1) show the data is *usually* transferred in some compressed form indeed. But it is not documented nor blogged anywhere. Can you reveal some more information in the aspect?


All content is © Kevin Closson and "Kevin Closson's Blog: Platforms, Databases, and Storage", 2006-2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Kevin Closson and Kevin Closson's Blog: Platforms, Databases, and Storage with appropriate and specific direction to the original content.