My Interview In The Latest Quarterly Journal Of Northern California Oracle Users Group

This is just a quick blog entry to invite you to get a copy of the latest quarterly journal of the Northern California Oracle Users Group (NoCOUG). The editor, Iggy Fernandez, interviewed me on a wide range of topics. The article begins on page 4.

Please click on the following link:

The following is a screenshot of the magazine cover, followed by a list of the questions posed to me in the interview.

Interview questions:

Is hardware the new software? Why bother with indexing, clustering, partitioning, and sharding (not to mention application design) if hardware keeps getting bigger, faster, and cheaper every day? Can we just “load and go”?

You were an architect in Oracle’s Exadata development organization. With all the excitement about Exadata in the Oracle community, what motivated you to leave?

You’ve been referring to it as “Exaggerdata.” Why so harsh? You’ve got to admit that there’s plenty of secret sauce under the covers.

“Do-it-yourself Exadata-level performance?” Really? We’re all ears.

Are TPC benchmarks relevant today?

Where do you stand in the “Battle Against Any Raid Five” (BAARF)?

Can NAS match the performance of SAN? Can SAN match the performance of locally attached storage?

As a long-time NetApp fan, I like to stay in my comfort zone but perhaps I’m biased. Does NetApp still offer anything that the other NAS vendors (such as EMC) don’t?

Do we need yet another “silly little Oracle benchmark”?

Do you recommend that I use ASM? I worry that a sleepy system administrator will unplug my disks someday because “there were no files on them.” I do like to have my files where I can count them every day!

Do you have an opinion on the NoSQL movement? Will the relational mothership ever catch up with the speedboats?

My management is pushing me to take my databases virtual. I can appreciate the advantages (such as elasticity) of virtualization, but I’m not so sure about performance. Also, I cannot rely on guest O/S statistics. Is it just the Luddite in me? Can you help me push back on my management?

I like the Oracle Database Appliance (ODA) because RAC raises the bar higher than most IT organizations can handle, and ODA lowers it. What would you improve about the ODA if you had a wizard wand?

Is NoCOUG a dinosaur, a venerable relic of the pre-information age that has outlasted its usefulness?

25 Responses to “My Interview In The Latest Quarterly Journal Of Northern California Oracle Users Group”

  1. Noons August 3, 2012 at 12:14 am

    Dang! I was going to spend this weekend on my hobbies; now I need to read and digest this instead! 🙂
    That BAARF thing was always a bit of a disappointment to me – I never signed up for it. What can I say other than: I never found RAID5 to be the cause of any problem with the dbs I was looking after at the time.
    In 1990, yes. 2000? No.

    • kevinclosson August 3, 2012 at 9:32 am

      Hi Noons,

      I signed the BAARF initiative back when it made sense to do so. Like I said in the article, it’s about shelf-life of what we “know.” Thanks as always for stopping by.

  2. flashdba August 3, 2012 at 10:16 am

    This is the sentence that I keep reading over and over:

    “Idle processors do not speed up database processing! In database processing, the main purpose of DRAM is to drive up processor utilization—removing waits for high-latency storage accesses.”

    Ok, two sentences. Would you believe I worked on a POC recently where the customer wanted to include a success criterion that CPU utilisation shouldn’t exceed 80%?

    • kevinclosson August 3, 2012 at 12:45 pm

      @flashdba : Yes, I would believe that. It’s the over-configure paranoia rooted in a 1990s mentality. If one is deploying fresh kit in 2012 on a platform that isn’t elastic enough to respond to CPU requirements on demand, they are simply missing out. I heard a lot of that madness when I was still in the Exadata organization: customers deploying a ~14kW, 42U rack of gear and basking in the “safety” of spare CPU cycles. That’s also one reason I am so opposed to the notion of deploying Oracle on a full-rack Exadata system, since the 168 cores in the storage grid are mostly idle yet gross power draw, cooling, and airflow requirements are essentially the same as running full bore.

      Let’s think of the absurdity of such arbitrary “safety net” approaches. Consider the fact that a simple E5 2s16c32t system at 80% utilization can be 16 cores magically hovering at 80% or 13 cores glued to the wall with 3 cores idle. CPUs are either running something or running nothing. It’s time slicing and voluntary yields that create the appearance of “80% utilization”.
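      The arithmetic behind that point can be sketched in a few lines of Python. This is a hypothetical illustration (the function name and numbers are mine, not from the comment): two very different load shapes on a 16-core box report essentially the same aggregate "utilization".

```python
# Aggregate CPU utilization hides the per-core distribution.
# A 16-core system "at 80%" could be evenly loaded, or it could be
# 13 cores pinned at 100% with 3 cores completely idle.

def aggregate_utilization(per_core):
    """Mean utilization across cores, as a percentage."""
    return sum(per_core) / len(per_core)

even_load = [80.0] * 16                  # 16 cores each hovering at 80%
skewed_load = [100.0] * 13 + [0.0] * 3   # 13 cores glued to the wall, 3 idle

print(aggregate_utilization(even_load))    # 80.0
print(aggregate_utilization(skewed_load))  # 81.25
```

      Both shapes would satisfy an "under 80-ish percent" success criterion, yet the second one has no headroom at all on 13 of its cores.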

      • Noons August 29, 2012 at 8:45 pm

        This reminded me of a recent episode when I was told to “look at that db” because one of the LUNs it was stored on had 100% disk utilization. When I asked what the disk queue length was, I was told it was 0.
        “OK, so the queue length is 0, the usage is 100%. And you want me to tune exactly WHAT?”
        Some moron then accused me of not being a “team player”. That’s when I just switched off and left the room.
        They come in all shapes and sizes…
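        The numbers in that story are less contradictory than they sound: device “utilization” and queue length measure different things. A hypothetical sketch of simplified iostat-style accounting (function names and figures are mine, for illustration only) shows a device that is busy every millisecond of the sample yet never has a request waiting behind another:

```python
# "% utilization" is the fraction of the sample interval the device was busy.
# Queue length is the time-averaged number of requests waiting.
# A device steadily servicing one request at a time is 100% busy with a
# queue of zero -- saturated-looking, but with nothing queued to tune away.

def percent_util(busy_ms, interval_ms):
    """Portion of the interval the device was servicing I/O, as a percentage."""
    return 100.0 * busy_ms / interval_ms

def avg_queue_length(total_queued_ms, interval_ms):
    """Time-averaged number of requests sitting in the wait queue."""
    return total_queued_ms / interval_ms

interval = 1000.0   # a 1-second sample
busy = 1000.0       # device busy the whole second...
queued = 0.0        # ...but no request ever had to wait behind another

print(percent_util(busy, interval))         # 100.0
print(avg_queue_length(queued, interval))   # 0.0
```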

  3. George August 7, 2012 at 5:39 am

    Always enjoy reading your thoughts… analysing your thinking patterns behind the comments.

  4. stelladba August 29, 2012 at 6:52 am

    I really enjoyed reading this interview. I can understand Oracle’s desire to create engineered systems that only utilize their hardware for a couple of reasons: 1) I’ve seen enough companies fail at building a two-node Linux RAC cluster to know that companies trying to roll their own Exadata would not work in most situations. 2) If point 1 is true, why not optimize and deliver on one platform and save the porting costs?
    Where I think this logic really falls apart for Oracle is the requirement that HCC only run on Exadata, Pillar, or the ZFS appliance. Clearly HCC is a hardware-agnostic solution; proof of this can be found in the patch that “fixes” Oracle to only allow HCC if the underlying storage is sold by Oracle. Purposely siloing a database feature that is 100% software to only run on certain hardware seems short-sighted and a little desperate to me.

    • kevinclosson August 29, 2012 at 9:50 am

      @stelladba : It concerns me deeply that RAC is so utterly miserable to install and configure that folks are willing to suffer total vendor lock-in as a tonic. If RAC is the poison that motivates folks to buy all those quarter-rack Exadata configurations (mostly for “OLTP” or ERP) then I think it behooves corporate IT to question whether it would be better to go with a larger single system. And, yes, I have heard all the arguments that aim to promote RAC as a high-availability tool. I’ve never bought into those arguments though. RAC is for scale out. That’s why it fits nicely into Exadata architecture because that particular platform requires scale out to compete with non-Oracle DW/BI/Analytics offerings in the marketplace.

      I’m extremely bullish on general purpose pre-configured systems like VCE Vblock and, yes, IBM PureFlex. This is the future, not one application dictating to you the hardware you require. In fact, it’s time for folks to stop thinking so much about hardware. It’s time to virtualize for all the benefits that brings.

      It’s all controversial, I know, but that is just my 2 cents.

      • stelladba August 30, 2012 at 8:51 am

        I couldn’t agree more with your opinion that RAC is not an HA solution. I can count on one hand the number of times I’ve actually seen a node go down without a major disruption in service. I just did some extremely light (Wikipedia) reading on VCE and PureSystems. I may be missing something, but how is buying IBM’s engineered system any less of a vendor lock-in than buying Oracle’s? I guess you do have to buy the software for the storage cells with Exadata, and that really only runs on Exadata, but everything else is just database licenses. If you don’t like Exadata, move your licenses to different hardware. I’m not trying to come off like an Oracle apologist; I just don’t see a huge distinction here. To me, the biggest impediment to moving to more virtualized “cloud” offerings is the rigid licensing models of most software vendors. I do think virtualized compute is real and it is coming, and the larger software vendors that can figure out how to license for that model are going to win in the long run.

        • kevinclosson August 30, 2012 at 11:03 am

          @stelladba : My point about general-purpose “engineered” systems is simple. Neither IBM nor the VCE consortium are aggressively dumping PureFlex-lock-in or Vblock-lock-in code into the Oracle server. Oracle, on the contrary, is rapidly closing the system in just that manner.

          Most Exadata customers are quarter-rack adopters for non-DW/BI use, in spite of the fact that there is no secret sauce for OLTP. In its defense, Exadata offers the astounding value-add of a pre-configured, RAC-ready cluster. I’m just saying that it is in the best interest of IT shops to mitigate the pains of building a RAC-ready cluster by any means other than adopting lock-in features. Hybrid Columnar Compression comes to mind. Weave that into your workflow and you’re locked in. Work out your SLAs with Oracle on a true open platform (RAC-ready from the factory) and thus keep *your* best interests in mind.

          How much of a heretic am I to remind people that vendor lock-in has a downside – eventually? Isn’t there a little, um, prior art on that matter?

      • stelladba August 30, 2012 at 2:55 pm

        I think what this really boils down to today is the “secret sauce” of the Exadata storage cells and HCC. I’m not aware of any other software/hardware lock-in in Oracle’s product line. I 100% agree with you on HCC and think that Oracle has made a short-sighted decision in an attempt to push their storage hardware. I would recommend that companies who are not already “all in” with Oracle avoid that particular feature, or at least understand the ramifications of what they are entering into. (BTW, how long before someone writes a little hack into Openfiler to send back the “yes, I’m a ZFS appliance!” flag when dNFS asks?) I really hope Oracle sees the light on HCC. They are cutting their own throats with this strategy. I suspect they will in time, but we’ll see.
        I think Oracle has a legitimate case for hardware lock-in in regards to Exadata, if only for the reason that they’ve seen too many people configure RAC with a crossover cable on a 100Mb interface and then complain that “RAC sucks and doesn’t scale.” Just make sure to buy the hardware and secret-sauce licenses on a separate CSI than the database licenses and you should be OK lock-in-wise. 🙂

  5. Noons August 29, 2012 at 8:55 pm

    RAC is for scale-out and not HA? Oooops, now you blaspheme, Kevin! 😀
    Surely the mor^H^H^Hexperts who have been trying to flog geographically dispersed RAC to us as an HA/DR solution cannot be wrong! After all, they’re “recognized experts” while I’m just an old fart dba1.0.
    If only they would L-I-S-T-E-N. Ah well, I’ll simply have to continue to beat them around the ears with a very big stick…

    • kevinclosson August 30, 2012 at 9:00 am

      Hi Noons: There really are only 4 members of the RAC-is-meant-for-HA camp in my assessment: 1) the folks who are commissioned to sell it, 2) the folks who are contracted to suffer through installing and configuring it, 3) the folks who are not aware that x64 servers are huge and scalable and bigger than 90+% of the needs of production Oracle databases and 4) the folks who sort of bought it to “be safe” and are protecting their decision.

      On paper, RAC has high-availability attributes, and those paper value props have played out for some types of failures, but modern deployment options for protection against server failure offer more uptime and significantly lower planned annual downtime than RAC.

      Server failures (as in immediate crashes) just don’t happen as much as some seem to imagine. The complexities of properly responding to any failure other than a simple server crash are what cut into the ability of RAC to deliver on its promise.

      • Simon Haslam August 30, 2012 at 10:33 am

        @kevin Your comment beat me to it! I’m leaning to that view too – modern x86_64 is so darn fast that the horizontal scalability offered by RAC is not a requirement for many/most systems. I only have one customer who has a sort of HA-driven requirement for RAC: they have a lot of application servers and, with RAC, if they lose a db instance they only impact a proportion of their connections (which can then recover). Completely restarting this tier takes a long time. The appeal of RAC to them is that the database “never” goes down (we could debate black/brown-outs during a node recovery, of course), which would not be possible with an active/passive cluster (excepting perhaps something like VMware Fault Tolerance, but I don’t imagine there are many Oracle databases on that yet). Now, whether that is a good example (an intolerant app), and whether real single-instance downtime statistics would justify the complexity of RAC, I don’t really know.

        Maybe over the next 5 years we’ll see a polarisation towards either RAC+Exadata or single instance+virtual servers. I for one would welcome some database simplification – ignoring scripts/automation, how much longer does it take to build, say, a two-node RAC cluster as compared to a single (non-ASM) instance? 10:1? (i.e. there are all sorts of other things, and teams, to worry about with GI.)

        Note this isn’t a “RAC bashing” comment – it’s just observing that, firstly, things have moved on since 9i (hardware scalability and virtualisation) and, secondly, I’m really not interested in complex solutions – I want relatively simple, cleanly defined layers that I have a vague chance to be able to understand, manage and troubleshoot.

        • kevinclosson August 30, 2012 at 10:52 am

          > Note this isn’t a “RAC bashing” comment – it’s just observing that, firstly, things have moved on since 9i (hardware scalability and virtualisation) and, secondly, I’m really not interested in complex solutions – I want relatively simple, cleanly defined layers that I have a vague chance to be able to understand, manage and troubleshoot.

          …neither you nor I are “RAC-bashing.” Both of us are simply willing to accept that the world doesn’t remain flat. The things we know about IT have a shelf-life.

      • Noons August 30, 2012 at 5:05 pm

        Absolutely! Every single time we suffer the Oracle marketeers trying to sell us on RAC for HA, I always end up asking if they have checked our actual, real availability figures.
        Call me recalcitrant, but a system with zero (0) – not 0.0001. Z-E-R-O! – unscheduled downtime in 4 years out of 5, and with only a 5-minute outage in one single VIO partition in those 5 years, is NOT a system that needs to be improved with RAC!
        But try and make Oracle sales reps and con-sultant “experts” understand they are barking up the WRONG tree? Now, that is what I call an impossible task!

  6. George August 30, 2012 at 9:19 pm

    Hi Kevin, sorry to be the one asking this, but how is the Exadata feature set a lock-in? It’s functionality that makes things faster in the database, and when deployed on non-Exadata, nothing stops you from copying the database as-is off the Exadata platform onto another Linux platform.
    As for HCC, hmm, that is probably more of a potential lock-in, since you have to first uncompress on the target platform – but then again, it’s a simple alter table move command.
    Lock-in for me implies I cannot run that database on anything else, as-is. For me that would be a true appliance, where the software/platform is not available from any other vendor and it is a data migration – note, not a database migration. These are the true DW appliances… my 1 cent, can’t afford 5… 🙂


    • George August 30, 2012 at 9:25 pm

      OK, to comment on my own comment: HCC is a feature of Exadata, but the use of it (or non-use) is a user option. The user chooses to have the benefit, understanding the implications should they want to copy the database somewhere else.

  1. Oracle Announces the World’s Second OLTP Machine. Public Disclosure Of Exadata Futures With Write-Back Flash Cache. That’s a Sneak Peek At OOW 2012 Big News. « Kevin Closson's Blog: Platforms, Databases and Storage Trackback on August 13, 2012 at 5:17 pm
  2. NoCOUG Referral « Database Fog Blog Trackback on August 23, 2012 at 1:06 pm
  3. What’s next « Oracle Scratchpad Trackback on August 27, 2012 at 10:05 am
  4. Exadata X3 – Sound The Trumpets « flashdba Trackback on September 21, 2012 at 9:55 am
  5. Performance: It’s All About Balance… | flashdba Trackback on April 2, 2013 at 9:57 am



I work for Amazon Web Services. The opinions I share in this blog are my own. I'm *not* communicating as a spokesperson for Amazon. In other words, I work at Amazon, but this is my own opinion.


All content is © Kevin Closson and "Kevin Closson's Blog: Platforms, Databases, and Storage", 2006-2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Kevin Closson and Kevin Closson's Blog: Platforms, Databases, and Storage with appropriate and specific direction to the original content.
