How Many Non-Exadata RAC Licenses Do You Need To Match Exadata Performance?

This post is about exaggeration.

The Oracle Database running in the database grid of an Exadata Database Machine is the same software you can run on any Linux x64 server. Depending on the workload (OLTP/ERP/DW/BI/analytics), there is the variable of storage offload processing freeing up some cycles on the RAC grid when running Exadata. Yes, that is true.

We all know the only thing that really costs Oracle IT shops is Oracle’s licensing, and Oracle’s license model is per-processor.

So the big question is whether spending a significant amount of money for Exadata storage actually reduces the Oracle Database licensing cost due to offload processing. Or, in other words, does the magic of Exadata offload processing save you money?

That’s an interesting topic, but before I even blog about it I have to wonder how a company like Oracle aims to improve its bottom line by undercutting its high-margin product space (i.e., RAC licenses) just to push low-margin storage products (products based entirely on commodity x64 componentry) like Exadata? Oh well, who knows? Actually, I can answer that. The investors who think Oracle is a hardware company (as a result of its buying Sun Microsystems in 2010) want to see some tin hitting the shipping dock. Really? Swapping high-margin for low-margin? Perhaps it’s a buy-high, sell-low play where the goal is to make up for it with volume. Hah. I call that ExaMath(tm).

I have heard ridiculous claims concerning how many non-Exadata Linux x64 cores one requires to match the same number of licensed database server cores in an Exadata environment. And when I say ridiculous, I really mean absurd. But it all comes down to how much you pay for the cores in Exadata storage and what percentage of work is offloaded from the RAC grid to the storage grid. Indeed, if, for instance, 90% of the RDBMS effort is offloaded from the RAC grid to the storage grid in Exadata, you’d need 90% fewer excruciatingly expensive RAC licenses to service an application than you would without Exadata storage. That’s an interesting idea, and if it helps Oracle sales folks clinch a deal or two I’m sure everyone is all the merrier. As the person cutting the purchase order for the software, aren’t you overjoyed? No? Please, read on.

How Much Offload Processing Will Occur With Your Application?
That depends. However, if you are buying the solution, the onus is upon you to figure that out before you spend the money. If there is not a significant amount of offload processing for your application, then you will have paid for a lot of processors that do nearly nothing to improve your application performance.
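To put numbers on the claim, here is a hypothetical sketch. The function name, core counts, and offload percentages are all illustrative assumptions on my part, not published Oracle pricing; the point is only how the license count would scale if a given percentage of the work really were offloaded:

```python
# Hypothetical sketch only: per-processor license counts scale with the
# cores remaining on the database grid. All figures are illustrative
# assumptions, not published Oracle pricing.

def licensed_cores_needed(baseline_cores: int, offload_pct: int) -> int:
    """Database-grid cores (hence per-processor licenses) still needed
    if `offload_pct` percent of the RDBMS work moves to storage."""
    if not 0 <= offload_pct < 100:
        raise ValueError("offload_pct must be in [0, 100)")
    remaining = baseline_cores * (100 - offload_pct)
    return -(-remaining // 100)  # ceiling division, exact integer math

# A 90% offload claim would shrink a 100-core requirement to 10 cores;
# a more modest 20% offload still leaves 80 cores to license.
print(licensed_cores_needed(100, 90))  # 10
print(licensed_cores_needed(100, 20))  # 80
```

Either way, the savings hinge entirely on the offload percentage your workload actually achieves, which is exactly the figure you must establish before buying.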

Just for fun’s sake, please participate in this poll. Your answer may reflect what your Oracle sales team is telling you, or it may reflect your perception from Oracle marketing. Either way, let’s see how this goes:

Exaggeration Poll

21 Responses to “How Many Non-Exadata RAC Licenses Do You Need To Match Exadata Performance?”


  1. Enrique Aviles February 13, 2012 at 9:39 am

    I didn’t participate in the poll because the correct answer is missing: “it depends.” I would assume Oracle’s sales team would pick “more than double,” but that would not be true in all scenarios.

    • kevinclosson February 13, 2012 at 9:53 am

      Enrique, you are correct. “It depends” is the right answer. I assert, however, that there is a floor and a ceiling bounding the degree of “it depends.” Thanks for your comment.

  2. Alex February 13, 2012 at 11:20 am

    The book you were a reviewer of, “Expert Oracle Exadata,” actually explains very, very well what Exadata can and cannot do, I think. On the bottlenecks: sure, nothing is perfect, and no system will be. Can you bottleneck Exadata? Absolutely. But I am sure the same can be done to Netezza, Vertica, and Greenplum; just the way of doing it will be different. Now, all the sales and marketing guys from all those companies will do wonders to tell you that their mousetrap is the best; they have to make the sale (“eat what you kill,” or whatever the saying was). I think people who know databases, and deploy apps for a living against those databases, can look past that and decide based on technology, technology policy, etc. what they need – Exadata/Netezza/Greenplum, or the whole Hadoop thing (they say it is “the magic,” you know), or “flash” it with Fusion-io or Violin Memory (or the next flash startup from last Friday), and what not 🙂. For sure, nowadays there are more technologies than users in this space, I sometimes feel :-).

  3. Matt February 13, 2012 at 5:04 pm

    “It depends” works for me.

    The essence of the question – How many RAC licenses do you need to match performance of Exadata? 0 (RAC (SMP Scaling) does not MPP make)

    To date I have not used more than 80 SMP cores. With or without RAC.

    As cool as Exadata is –
    If you can deliver a solution that provides 150K 8K IOPS (reads), based on SLOB, with a dual-socket Supermicro server, and reduce CPU stall time by almost 40%, do you need the cost/expense/complexity of an Exadata?

    Probably the most telling –
    Beating Exadata (matching performance) X2-2 Full Rack

    Server Platform – a single Cisco C460 (4 sockets).
    How? updating the stats on the objects and enabling indexes that were marked invisible. Love it/Hate it – but use the CBO to your advantage.

    The fastest performance comes from not doing the work at all.
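For a sense of scale on the IOPS figure in the comment above, a quick back-of-envelope conversion (the 150K 8K-read figure is the commenter’s claim, not a measurement of mine):

```python
# Sanity math on the comment above: 150K random 8KB reads per second from
# a single dual-socket server corresponds to roughly 1.2 GB/s of
# random-read bandwidth. The IOPS figure is the commenter's claim.

IOPS = 150_000
BLOCK_BYTES = 8 * 1024  # 8KB Oracle blocks

bytes_per_sec = IOPS * BLOCK_BYTES
print(bytes_per_sec)                  # 1228800000
print(round(bytes_per_sec / 1e9, 2))  # 1.23 (GB/s)
```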

  4. Matt February 14, 2012 at 8:03 pm

    Heard something today from an Oracle rep. that I thought was funny. (I should not have laughed out loud.)

    @Kevin
    He stated that RAC was not required for Exadata. While accurate,
    I find it hard to believe a client would be “allowed” to acquire an appliance without the superfluous RAC licenses….

    For fun –
    How much I/O bandwidth, ergo performance, can a single X4170 M2 drive when attached to 14 storage cells?

    • kevinclosson February 14, 2012 at 8:52 pm

      Actually, Matt, I see nothing wrong with what the rep said. A single Exadata database grid host can drive a tremendous amount of storage throughput, but it can only ingest 3.2GB/s since there is but a single active 40Gb HCA port on each host. A single host can drive the storage grid nearly to saturation via Smart Scan, but as soon as the data flowing back to the host approaches 3.2GB/s the Smart Scan will start to throttle. In fact, a single session (non-Parallel Query) can drive Smart Scan to well over 10GB/s in a full rack, but in that case you’d have a single foreground process on a single core of WSM-EP, so there wouldn’t be sufficient bandwidth to ingest much data; about 250MB/s can flow into a single session performing a Smart Scan. So the hypothetical there would be Smart Scan churning through, let’s say, 10GB/s while whittling down the payload by about 9.75GB/s through filtration and projection. Those are very close to realistic numbers I’ve just cited, but I haven’t measured those sorts of “atomics” in a year, so I’m going by memory. Let’s say give or take 5% on my numbers.
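The figures in the reply above can be sketched as a small model. All constants are the reply’s own estimates (give or take the stated 5%), not independent measurements:

```python
# Back-of-envelope model of the figures in the reply above: a single
# database host has one active 40Gb/s IB port (~3.2 GB/s usable), a full
# rack of cells can Smart Scan ~10 GB/s, and a single non-PQ session can
# ingest roughly 0.25 GB/s. All numbers are the reply's estimates.

SCAN_RATE_GBPS = 10.0        # storage grid Smart Scan throughput
HOST_INGEST_GBPS = 3.2       # one active 40Gb HCA port per host
SESSION_INGEST_GBPS = 0.25   # single foreground process on one core

def payload_reduction(scan_gbps: float, ingest_gbps: float) -> float:
    """Fraction of scanned data that filtration/projection must remove
    for the scan to run unthrottled at the given ingest rate."""
    return 1.0 - ingest_gbps / scan_gbps

# Single session: ~97.5% of the scanned bytes must be filtered away.
print(round(payload_reduction(SCAN_RATE_GBPS, SESSION_INGEST_GBPS), 3))  # 0.975
# Whole host: anything less selective than ~68% reduction throttles the scan.
print(round(payload_reduction(SCAN_RATE_GBPS, HOST_INGEST_GBPS), 2))  # 0.68
```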

  5. Alex February 15, 2012 at 4:01 am

    @Matt
    “He stated that RAC was not required for Exadata. While accurate, I find it hard to believe a client would be ‘allowed’ to acquire an appliance without the superfluous RAC licenses….” – Huh? Why not? I strongly recommend the book “Expert Oracle Exadata” (I get no money, or Exadatas, or anything from sales of it, just to be clear 🙂). It could fill in the blanks. Even though Kevin and I disagree on the whole AMPP vs. SMPP idea, I strongly recommend his Exadata “series” of posts from back in the day when he was at Oracle; it will help you understand for sure how the system works. Also read Greg Rahn’s analysis of the analyses some Oracle competitors made of Exadata. All that, of course, if you really want to know how the thing works.

    @Kevin
    “EMC Thunder (trust me, you need to ready yourselves for Thunder)” – Actually, that will be fun. VFCache (Lightning, I think it was called, before it became VFCache) opened quite a few possibilities out there, I think. I have read/heard only rumors about “Thunder,” and it seems quite interesting.

    Regards,
    Alex

  6. Matt February 15, 2012 at 11:23 am

    @Alex –
    My laughing out loud was about selling the solution without RAC, not about whether it works or not.

    What customers have purchased the appliance (1/4, half, or full rack) without the RAC licenses?
    Exadata software/HW and just Oracle Enterprise Edition?
    Or, since we are potentially talking about single-node integration, Oracle Standard Edition?

  7. Craig February 16, 2012 at 6:56 am

    Hey Kevin,

    It’s refreshing to hear the truth once in a while. I was wondering if you could shed some light on the following. I attended an Oracle marketing event a week or two ago. During that event, one Oracle rep mentioned that in certain POCs he has created tablespaces in Flash because the requirements of the POC warranted it. However, he would never suggest doing it in real life. I guess this is basically just the same thing as pinning tables in cache. Can you expound on why he “would never do this in real life”? Isn’t it cheating, and doesn’t it show a lack of integrity, to do such a thing in a POC? He had me all excited about Exadata before that, but that was quite a turn-off.

    Ciao,

    Craig

    • Matt February 16, 2012 at 9:29 am

      Kevin will have more detail –

      The flash, by default, is not protected. However, it is storage.
      It can be configured as LUNs to create a disk group.

      Any amount of flash used for storage is no longer available to serve as cache: 5.3TB becomes 5TB, becomes 4.7TB, and so on as you carve more out (and mirroring doubles the raw flash consumed).

      Additionally, you may experience write performance issues with the F20 cards, as they are not implemented in Exadata as a primary storage device (wear leveling, garbage collection, etc.).

      Each storage cell might experience higher CPU load while it is managing the additional I/O to/from the flash.

      There are possibly other operational concerns as well – rolling patches and recycling of storage nodes.
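A rough sketch of the capacity trade-off described in the comment above. The 5.3TB figure and the carve-out size are approximations taken from the comment, and I am assuming ASM normal redundancy means two copies:

```python
# Rough capacity arithmetic behind the comment above: carving flash out of
# the cache for permanent storage costs double after ASM normal redundancy
# (two mirrored copies), and whatever is carved out no longer serves as
# cache. The total is the comment's approximate full-rack figure.

TOTAL_FLASH_TB = 5.3  # approximate full-rack flash, per the comment

def remaining_cache_tb(total_tb: float, usable_store_tb: float,
                       mirror_copies: int = 2) -> float:
    """Flash left to act as cache after provisioning `usable_store_tb`
    of mirrored flash storage (normal redundancy = 2 copies)."""
    raw_consumed = usable_store_tb * mirror_copies
    if raw_consumed > total_tb:
        raise ValueError("not enough flash to provision that much storage")
    return total_tb - raw_consumed

# Provisioning 0.3 TB of usable mirrored flash storage consumes 0.6 TB raw,
# leaving ~4.7 TB of flash to serve as cache.
print(round(remaining_cache_tb(TOTAL_FLASH_TB, 0.3), 1))  # 4.7
```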

    • kevinclosson February 16, 2012 at 12:06 pm

      Hi Craig,

      I need to address the topic of Exadata static flash provisioning properly soon, it seems, but in short, what this Oracle rep was proving to you is that Exadata Smart Flash Cache is a misnomer. It isn’t all that smart, because it can only think about reads.

      Exadata cannot scale random writes.

      There is a full-rack capacity for only 25,000 (normal ASM redundancy) random writes per second. So, unlike dynamic, comprehensive, optimized auto-tiering technology optimized for reads and writes like EMC FAST, the Exadata admin has to carve up the flash assets in the cells and statically provision disk groups (enter redundancy) for write-intensive objects. Well, we all know that “write-intensive objects” are not cast in stone. So admins will have to constantly monitor and move stuff around. That’s a Y2K approach to solving a problem that is, in my assessment, no longer a problem.

      Some more on the matter: https://twitter.com/#!/kevinclosson/status/170236030312644608

      Oracle’s whitepaper proving they have to do the same thing your rep did in those PoCs: http://bit.ly/zL5MBD
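The write ceiling cited in the reply above works out to a modest per-cell rate. A quick illustration, where the 25,000 writes/sec figure is the reply’s, the 14-cell count is a full rack, and two-copy mirroring is my assumption for ASM normal redundancy:

```python
# Illustrative arithmetic on the write ceiling cited above: ~25,000 random
# writes/sec for a full rack at ASM normal redundancy. With 14 storage
# cells and each logical write landing on two cells (mirroring), the
# per-cell physical write rate is modest. Numbers are the reply's estimates.

FULL_RACK_WRITE_IOPS = 25_000  # logical random writes/sec, normal redundancy
CELLS = 14
MIRROR_COPIES = 2

physical_iops = FULL_RACK_WRITE_IOPS * MIRROR_COPIES  # 50,000 physical writes
per_cell = physical_iops / CELLS
print(round(per_cell))  # ~3571 physical writes/sec per cell
```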

  8. Vishal Gupta May 3, 2012 at 7:45 am

    Kevin,

    In your opinion, which other product out of Netezza/Teradata/Greenplum/HP Vertica/etc. can meet Exadata in terms of DW performance in a single full-rack configuration?

    • kevinclosson May 3, 2012 at 8:49 am

      >meet Exadata in terms of performance of DW

      Hi Vishal,

      That is an excellent question. The unfortunate answer is “it depends.” Each of these products has strengths and weaknesses. If only we could succinctly itemize all that the term DW encapsulates, but we simply cannot. Since Exadata is nothing more than a RAC database with fast storage, it has 30 years of Oracle Database features on board. There are a lot of DW processing elements that benefit from that long-in-the-tooth pedigree.

      I’ve seen queries where a half-rack EMC Data Computing Appliance with Greenplum blows the doors off of Exadata. It should also not surprise you that I’ve seen certain queries where Exadata comes in much faster (e.g., 3x) than Greenplum. In neither case, however, did one or the other prove faster in all aspects of the PoC. So customers need to put weight on their criteria.

      I think, of the vendors you cite in your question, only Oracle would have the unabashed hubris to suggest they are faster than any other solution under any condition.

      One thing I would like to point out is the fact that Greenplum is just software. EMC has a platform option (the DCA) but allows customers to build their own. Why? Well, to specifically address your question. How arrogant would EMC have to be to suggest that the DCA is the perfect embodiment of a Greenplum platform for all cases, all the time? If the customer workload demands a totally different memory-to-processor ratio, fine; it’s just software. One customer I’m familiar with has an 18-node cluster of 4-way E7 boxes, each with large memory. That’s because 720 Xeon E7 cores fit their needs. And, gasp, they also feed that cluster with Fibre Channel. Egad, how can they possibly survive without the magic of InfiniBand? Well, they compress their data and it is columnar, which serves as a very effective multiplier for data flow. As you know from viewing Critical Thinking Meets Exadata (http://wp.me/p21zc-Xf), InfiniBand is critical (specifically) to Exadata because the architecture has a brick wall between filtration and compute-intensive SQL processing (e.g., join, aggregate, etc.), so Exadata demands that level of bandwidth. I’d take memory speed over any network speed any day of the week, however.

      So, what did I just say? Well, on a rack-for-rack comparison Exadata is faster than Greenplum – for some tasks. How could that not be the case? Likewise, on a rack-for-rack basis Teradata is faster than Netezza. How could that not be the case, for at least some tasks? Moreover, Greenplum is faster than Exadata on a rack-for-rack basis for some tasks, and I ask once again, how could that not be the case?

      Believe me when I say this: If one has invested in Exadata they are not fools–unless, that is, they did so because they were too frightened to find something better.

  9. Arun October 18, 2012 at 6:17 pm

    Hey, I work in the telco industry. We have a large data set and a complex EDW model; we bought it from Teradata, and it was good for a few years. Then the data exploded and, as you know, expansion in Teradata is a bit expensive, so we went on to evaluate EMC Greenplum, Oracle Exadata, HP Vertica, and IBM Netezza.

    We chose Vertica for its hardware-independent pricing model and lower pricing, and we are extremely happy with the performance. Now all 40+ users are happy to run reports without waiting, and it fits on low-end HP DL380 servers. It’s very good for the OLAP/EDW use case.

    In speed of reporting across 20 reports, the ranking was: 1. Vertica, 2. Netezza, then Greenplum, then Oracle.
    Compression ratio: Vertica has a natural advantage; among the others, IBM is good. Worst are EMC and Oracle, as always expected, since both want to sell a ton of storage and hardware.
    Scalability: all scale well; no issues, all are good.
    Loading time: EMC is no. 1, then Teradata, Vertica, Oracle, and IBM are all good too, but EMC is faster.
    Concurrent user queries: Vertica, EMC Greenplum, then IBM. Oracle is comparatively slow in any type of query, but much better than its OLAP on 10g.

    Note: compare apples to apples – same number of cores, RAM, data volume, and reports.

    But this is only for the EDW/analytics/OLAP case. I am still an Oracle fanboy for all OLTP, rich PL/SQL, and connectivity to any hardware or system. Exadata, not so much.

    • kevinclosson October 22, 2012 at 8:07 am

      Arun,

      Thanks for sharing your observations. I do have a question, however. You mention Vertica’s software-only value proposition. I’d like to know if you considered Greenplum in the same manner, as Greenplum does offer the product in software-only form. In my opinion that is a huge advantage for workloads that (for whatever reason) cannot be serviced with the appliance approach. That is, I’m grossly in favor of open systems.

      • Arun November 11, 2012 at 7:54 am

        Yes, we heard there are two ways EMC offers Greenplum: one as software, the other as an appliance. I forget the differences between them, but the pricing was very competitive, etc.

        But when choosing we did some TCO analysis and thought about the amount of hardware needed to do the job now and in the longer term. In that respect, the compression ratios of both Vertica and Greenplum were a key factor. We also found some features of Vertica’s architecture more compelling: there is no index concept, and the tuning methods are easier. Greenplum’s architecture is based on master and slave nodes, whereas in Vertica all nodes act as masters, so the concurrency is better. Overall it’s a more next-generation approach to solving many problems, and it sits and fits well with our needs surrounding OLAP.

  10. Joe Smith February 9, 2013 at 3:26 pm

    There are 2,000+ Exadata customers globally, and the number is increasing every day. How many Greenplum customers are there – 10, 20? The Exadata FUD is no longer valid; Exadata has revolutionized the IT industry.

    I’m also hearing rumors that EMC is letting Greenplum staff go, some of the same people they coerced into leaving Oracle. Sad, very sad.

    • kevinclosson February 11, 2013 at 11:03 am

      Hi Joe Smith,

      It would take a lot to convince us that 2,000 individual named-accounts have adopted Exadata. By this point, however, it seems reasonable that there are 2,000 Exadata systems deployed of varying size. I don’t know the number of Greenplum named accounts nor units shipped.

      Exadata has not revolutionized the IT industry because there is other software running in most IT shops besides Oracle Database. Yes, imagine that!

      My literature is not FUD. It is technically sound, accurate “knowledgeware.” Since you are commenting from Redwood Shores (according to your IP address), I suspect you *know* that I know what I’m talking about. However, there appears to be no shortage of people who work for Oracle who know little to nothing about Exadata, and that includes the people who market it and sell it. They routinely misrepresent facts, and that is not me talking; that would be the ASRC: http://www.bbb.org/blog/2012/07/oracle-pulls-national-ad-campaign-at-bbbs-request/

      The rest of your comment is, sadly, FUD.

      There is now a sufficiently large cadre of deep-thought types who question Oracle’s exaggerations and raise the issues so that Oracle’s customers can think, and act, in their own best interest. I’ll be doing less of that heavy lifting after a near-future blog post I have planned on the matter.

      Oracle crying FUD is some ridiculous form of dark-humor. What irony. There was a day when best of breed Unix systems vendors joined forces with Oracle field personnel who, together, walked in to true-blue IBM shops and spoke the heresy that was *any alternative* to a *pure IBM solution* (sounds a little like “Red Stack”). Those open systems pioneers were fingered as spreading FUD. I don’t know, however, if these open systems pioneers harbored the rational fear of ending up floating face-down in a shallow pool as do those of us marked, incorrectly, as FUD-spreading heretics.

      Just play fair and earn your customers. It’s just that simple.






DISCLAIMER

I work for Amazon Web Services, but all of the words on this blog are purely my own and are not to be mistaken as originating from any Amazon spokesperson. This is not an official Amazon information outlet; every word on this blog reflects my own opinions and findings.


Copyright

All content is © Kevin Closson and "Kevin Closson's Blog: Platforms, Databases, and Storage", 2006-2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Kevin Closson and Kevin Closson's Blog: Platforms, Databases, and Storage with appropriate and specific direction to the original content.
