Oracle Drops Exadata In Favor of Sun FlashFire Based OLTP Database Machine?

I’ve had numerous emails questioning where Exadata plays in the yet-to-be-announced OLTP Database Machine. The link I offered in my previous post does not, in fact, mention Exadata, so I understand the emails.

Here is another link regarding the announcement, one in which a depiction of Exadata is prominently featured. Folks, it is Exadata.

Announcing the World’s First OLTP Database Machine with Sun FlashFire Technology

41 Responses to “Oracle Drops Exadata In Favor of Sun FlashFire Based OLTP Database Machine?”


  1. 1 Brett Schroeder September 14, 2009 at 6:56 pm

    FYI – The link is broken 😦

  2. 3 Ligarius September 14, 2009 at 7:20 pm

    Thanks Kevin

    Great news… but Exadata is not really a great machine (my thinking)

    Regards
    Hector

  3. 5 Amir Hameed September 14, 2009 at 8:55 pm

    Kevin,
    I admit that I have not done enough reading on Exadata, but based on the little that I have read, this product lacks disaster recovery features in the sense that there is no easy way to replicate data to a DR site. Is this a correct statement?

  4. 7 accidentalSQL September 14, 2009 at 9:09 pm

    “Folks, it is Exadata”

    Yes, but isn’t Exadata an Oracle/HP thing, or maybe that just applied to The Oracle Database Machine? That’s where it gets a little confusing for me. The link says Oracle/Sun, not Oracle/Sun/HP.

    • 8 kevinclosson September 14, 2009 at 9:31 pm

      The software component is Exadata Storage Server. Until now it has been available only as the HP Oracle Exadata Storage Server. After it is launched there will be a Sun Oracle Exadata Storage Server. The Oracle Database Machine is a pre-configured, balanced offering based on Exadata Storage Server and Oracle Database 11g. To date there has been only the HP Oracle Database Machine. It is only confusing to folks who aren’t aware of the software versus hardware components.

  5. 11 Darryl September 15, 2009 at 12:01 am

    Any guesses on the commodity hardware components going in this Exadata?
    We were thinking X4150s would make a nice Sun Exadata.

  6. 14 David Aldridge September 15, 2009 at 6:18 am

    You, sir, are living in interesting times. And the rest of us are lucky to even get a sniff of these exciting developments 😦

  7. 15 Alex Gorbachev September 15, 2009 at 12:24 pm

    I guess Larry read my post and decided to make it public. Eh? 😉

    The latest debate was whether SPARC makes it into that platform.
    http://www.pythian.com/news/3981/larry-ellison-to-announce-oltp-database-machine-on-sun-hardware

  8. 17 Chris Adkin September 15, 2009 at 5:49 pm

    Kevin,

    Is it possible for you to elaborate on what makes this machine more suitable for OLTP workloads, as opposed to the HP-based Exadata, which is more targeted at data warehouses?

    Chris

  9. 18 Chris Adkin September 15, 2009 at 6:12 pm

    Kevin,

    I should have said: other than the fact that SSDs handle random I/O better than conventional disks, and that the Sun F5100 (which people seem to think this machine is based on) is limited to 4 TB, no good for VLDBs, what makes this machine something that is targeted at the OLTP market?

    Chris

    • 19 kevinclosson September 15, 2009 at 6:16 pm

      Chris,

      Why would anyone believe the next Exadata product is based on the F5100? Larry Ellison is going to speak about what it actually is about 1.5 hours from now…

    • 20 Lyman September 16, 2009 at 6:19 pm

      The F5100 doesn’t have any “smarts” (there is no general-purpose CPU), so it won’t work so well as an Exadata software host. 🙂

      If there ever were an “Oracle TimesTen, real-time, DB Machine,” it would make more sense for the F5100 to be coupled to it. TimesTen is an in-memory database, so 4 TB is likely an OK upper bound for a single storage device. The size of the DB is going to be bounded by the memory in the DB host node, so you would end up with a more affordable F5100 configuration than the fully populated one. The real-time OLTP would be done in memory, but the smaller volume of persistent writes would get the speedup from the faster bandwidth to storage. That would improve the real-time response window for the whole system.

      TimesTen as a read/write cache has more significant overlap with this new system, though, since the response times and latencies have dropped (when you happen to hit the flash cache). For example, a TimesTen cache in front of a CRM system might be able to get by with an OLTP-focused DB machine, since you very likely have a mixed load (some OLTP and some more “info-aggregating” queries from the same system, with humans as the primary consumers of answers). However, if you have hard, small-window real-time response constraints (and other computers as the consumers of answers) rather than many TBs of data, the F5100 may still make more sense.

      If the problem doesn’t have hard real-time constraints and the data is over 4 TB, then it will be a very hard push against DB Machine Version 2. You are also going to get more fault tolerance built into the single “system.”

      The F5100 may “RAC” better once the Sun Unified Storage heads play better in an InfiniBand storage setup, but it won’t be a core DB Machine storage node.

      • 21 kevinclosson September 16, 2009 at 6:53 pm

        Lyman,

        That comment looks more like a blog post than a comment and I honestly can’t figure out what you are saying. Are you the same Lyman that used to work for Oracle?

      • 22 Lyman September 16, 2009 at 9:16 pm

        Kevin,

        Yes, I used to work at Oracle. I apologize for the length of the reply I posted. It was aimed more at comments by Chris and some others than at your comments.

        In short, I thought an example of hard real-time OLTP might better highlight both where the F5100 would be more effective and, by short implicit inference, the differences between Version 1 and Version 2 of Exadata (removing latencies by removing/reducing spinning drives in the complete system).
        Two birds with one very big stone; sometimes the stone is too big.

        For some deployments, TimesTen is placed in front of the Oracle DB to speed up response/latency. Not an exact equivalent, but it is similar to ASM/Exadata/flash being placed “behind” the Oracle DB to get similar results when you hit the flash cache.

        However, you’re right that it doesn’t line up with the primary focus of your blog, so sorry about that.

        The reference to clustering and the F5100 at the end I should have just left off (that’s even more explaining about issues that weren’t present yet).

        The F5100 is a direct-attached storage (non-clustered environment) solution. That was the other point I was trying to get at.

  10. 23 Other Brother Darryl September 15, 2009 at 11:52 pm

    Is the HP version going away? Quotes from Reuters…

    http://www.reuters.com/article/technologyNews/idUSTRE58E80D20090915

    “An Oracle spokeswoman said that the Exadata machine built in partnership with HP is no longer available for sale.”

    “Officials at Hewlett-Packard could not be reached for comment.”

  11. 24 Darryl September 16, 2009 at 2:06 am

    The prices quoted in the speech and on the PDF price list do not include software costs – I assume that’s intentional. The total cost will be 2-3x the quoted prices. Correct? Do the comparisons to Teradata and such still hold true?

    • 25 kevinclosson September 16, 2009 at 2:07 pm

      Darryl,

      I don’t know anything about price.

    • 26 Lyman September 16, 2009 at 8:19 pm

      Seriously strange that the marketing team didn’t put the presentation’s slide deck up for download in the same places where they put the FAQ and other docs. Larry was clicking through the slides pretty fast in the webcast.

      However, it seems as though someone has posted snapshots here:
      http://oracleexadata.blogspot.com/2009/09/exadata-story-in-pictures.html

      The HW and SW prices are on a slide there.

      Remember, the Exadata Storage Server software prices are per spinning disk, so the Flash Cache appears to be just a hardware cost. (That is good, because flash costs a lot all by itself compared to spinning drives. 😉 ) There are 12 spinning disks per storage unit in the newer Sun system, so it is still about a $10K/disk software charge. (Pretty sure that was the same per-disk cost for Version 1; only there were fewer disks, 8, so the software cost per physical server was/is smaller.) Your minimal RAC DB Machine system cost (3 storage servers) is going to be higher now, but you have more raw disk space too. Pragmatically, that’s about “an 8 TB kind of problem” to be tall enough to get on that ride. Luckily there is now a single-node base unit for those who need smaller dev systems.
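
      To make the per-disk arithmetic concrete, here is a minimal sketch; the $10K/disk figure and the disk counts are the ones reported above, not official Oracle pricing:

          # Back-of-the-envelope Exadata Storage Server software cost.
          # All figures are the ones cited in this comment thread
          # (assumed, not official list prices).
          PER_DISK_SW_PRICE = 10_000    # ~$10K software charge per spinning disk
          DISKS_V1 = 8                  # disks per HP (V1) storage server
          DISKS_V2 = 12                 # disks per Sun (V2) storage server
          MIN_RAC_SERVERS = 3           # minimal RAC-capable storage server count

          v1_per_server = PER_DISK_SW_PRICE * DISKS_V1      # $80,000
          v2_per_server = PER_DISK_SW_PRICE * DISKS_V2      # $120,000
          min_rac_total = v2_per_server * MIN_RAC_SERVERS   # $360,000

          print(f"V1 software per storage server: ${v1_per_server:,}")
          print(f"V2 software per storage server: ${v2_per_server:,}")
          print(f"Minimal 3-server V2 software:   ${min_rac_total:,}")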

  12. 27 Fairlie Rego September 16, 2009 at 8:27 am

    Hi Kevin,

    A customer of mine is already looking seriously at this.
    Quick question: is the db block size of the OLTP Database Machine 32K?

    Ta
    Fairlie

  13. 29 Martin M September 16, 2009 at 9:08 am

    Personally, I find the development of Exadata very interesting.

    As Kevin has explained, the Exadata Storage Server has two components. The first is Oracle-developed software that enables smart scans and other functionality by using CPUs placed in the storage server, thus reducing the amount of data that has to be passed from the storage to the RDBMS; calculations can be made in the storage layer.

    The second component is the hardware that enables this, which HP has provided (Intel CPUs in HP servers), with Oracle’s software working on those CPUs.
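
    As a rough illustration of the smart-scan idea (a toy sketch only, not Exadata’s actual implementation), filtering and projecting on the storage side means only matching rows, and only the requested columns, ever have to cross the wire to the database tier:

        # Toy model of predicate/projection offload to storage CPUs.
        # Hypothetical data and function names, for illustration only.
        def storage_side_scan(blocks, predicate, columns):
            """Runs on the storage server; ships back only what matches."""
            for block in blocks:
                for row in block:
                    if predicate(row):                      # filter in the storage layer
                        yield {c: row[c] for c in columns}  # project only needed columns

        blocks = [[{"id": 1, "amount": 50}, {"id": 2, "amount": 900}],
                  [{"id": 3, "amount": 1200}]]
        # Without offload, every row would be shipped to the RDBMS and filtered there.
        print(list(storage_side_scan(blocks, lambda r: r["amount"] > 800, ["id"])))
        # -> [{'id': 2}, {'id': 3}]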

    I think the choice of HP as the provider for the first Database Machine came out of a long collaboration; it could have been another hardware provider as well (as we now see with Sun).

    It will be interesting to see how the Solid State Disks perform in an OLTP environment.

    What’s next? 🙂

    Martin

  14. 31 Ben Werther September 17, 2009 at 8:27 pm

    Larry is touting the OLTP credentials of Exadata, but that is Oracle’s bread and butter. The irony is that it is exactly the architecture that makes Oracle good at transactional workloads that is its Achilles heel at scaling to large data volumes and handling big parallel analytical queries. This whole shared-disk model of Exadata + ASM (where every node can see all the data — and you have to manually partition to avoid disk contention) is the antithesis of the shared-nothing MPP vendors like Teradata, Greenplum and Netezza that scale linearly to much larger volumes. More details at [ edited KC]

    Case in point — Greenplum’s 6.5 Petabyte database across 96 nodes at eBay that supports a wide class of users and queries with little fanfare or tuning required.
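
    As a toy sketch of the shared-nothing idea (illustrative only, not any vendor’s implementation): each row hashes to exactly one node, each node scans only its own slice, and no node ever reads another node’s data:

        # Hash partitioning across a shared-nothing cluster (toy model).
        NUM_NODES = 4

        def owner(key: int) -> int:
            return hash(key) % NUM_NODES   # each key belongs to exactly one node

        partitions = {n: [] for n in range(NUM_NODES)}
        for row_key in range(20):
            partitions[owner(row_key)].append(row_key)

        # A parallel query fans out: every node computes a partial result
        # over its own partition, and the results are merged afterwards.
        partial_counts = {n: len(rows) for n, rows in partitions.items()}
        print(partial_counts, "total:", sum(partial_counts.values()))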

    • 32 kevinclosson September 17, 2009 at 9:52 pm

      What is “manually partition?”

      I’m not going to debate architecture. At this stage in the game it really, truly does not matter.

      P.S., Ben, you can post here and trackback to here, but please don’t solicit your purely competitive posts with links.

      • 33 Ben Werther September 17, 2009 at 10:40 pm

        Thanks Kevin — no problem.

      • 34 Rob February 21, 2010 at 3:43 pm

        I do not work for any of Oracle’s competitors. I am a data warehouse consultant, and I can honestly say that in my experience, Oracle is much more about marketing hype than actually delivering on its promises.

        Case in point: Exadata on HP. It was the Grand Solution one year ago. Where is it now? Look at all the changes they’ve had in their data warehouse offering. They buy a company, toss its software into the “suite,” and then dump it a year or two later.

        How can anyone actually trust this company?

  15. 35 John Hurley September 19, 2009 at 10:54 pm

    Looks like HP is out and Sun is in for the hardware side of the Exadata machine.

    Understandable enough with the pending Sun deal proceeding, I guess, but man, that’s a short marriage.

    I wonder what the customers who actually bought one of the HP machines are thinking right now. Maybe they can trade it in for a Sun one?

  16. 36 Sfiso October 8, 2009 at 6:58 am

    Anyone have a comparison of the key differences between Exadata V1 and V2? More specifically, what makes one better than the other?

  17. 37 Rob February 21, 2010 at 3:34 pm

    There is “substantial” optimization required for Exadata. Because it compresses each column, many queries will require data to be decompressed, thereby reducing performance. Optimizing its memory usage is no trivial task.

    This is a huge downfall of its technology, and one that is not often talked about. Appliances like Netezza and Teradata are far less prone to the “gaps” in performance that are evident in Exadata.

    • 38 kevinclosson February 22, 2010 at 3:18 pm

      “Because it compresses each column, many queries will require data to be decompressed, thereby reducing performance.”

      Rob,

      Can you elaborate on this please? Which products allow you to compress only portions of a row? As for decompression, yes, it is expensive, but Oracle is able to filter on data in its compressed form so I’m not sure I understand your point in this matter either.
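
      As a rough sketch of the general technique (dictionary encoding, not Oracle’s specific compression format), a predicate can be translated into the compressed code space once and then evaluated without decompressing any rows:

          # Filtering dictionary-encoded column data in its compressed form.
          # Hypothetical column values, for illustration only.
          values = ["CA", "NY", "CA", "TX", "CA", "NY"]   # logical column
          dictionary = {"CA": 0, "NY": 1, "TX": 2}        # built at load time
          codes = [dictionary[v] for v in values]         # the stored, compressed form

          # Predicate: state == 'CA'. Translate it into code space once...
          target = dictionary["CA"]
          # ...then compare small integer codes; nothing is decompressed.
          matching_rows = [i for i, code in enumerate(codes) if code == target]
          print(matching_rows)   # -> [0, 2, 4]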


