I’ve had numerous emails asking where Exadata plays in the yet-to-be-announced OLTP Database Machine. The link I offered in my previous post does not, in fact, mention Exadata, so I understand the emails.
The following is another link regarding the announcement where a depiction of Exadata is prominently featured. Folks, it is Exadata.
Announcing the World’s First OLTP Database Machine with Sun FlashFire Technology
FYI – The link is broken 😦
oops…fixed now
Thanks Kevin
Great news… but Exadata is not really a great machine (my thinking)
Regards
Hector
Hello Hector,
Free thinking is appreciated here. Please fill us in on the reasons behind your anti-Exadata stance.
Kevin,
I admit that I have not done enough reading on Exadata, but based on the little that I have read, this product lacks disaster recovery features in the sense that there is no easy way to replicate data to the DR site. Is this a correct statement?
Hi Amir,
Your assumption is incorrect. Data Guard is fully supported when using Exadata Storage Server.
“Folks, it is Exadata”
Yes, but isn’t Exadata an Oracle/HP thing, or maybe that just applied to The Oracle Database Machine? That’s where it gets a little confusing for me. The link says Oracle/Sun, not Oracle/Sun/HP.
The software component is Exadata Storage Server. Until now it has been available only as the HP Oracle Exadata Storage Server. After it is launched there will be a Sun Oracle Exadata Storage Server. The Oracle Database Machine is a pre-configured, balanced offering based on Exadata Storage Server and Oracle Database 11g. To date there has only been the HP Oracle Database Machine. It is only confusing to folks who aren’t aware of the software versus hardware components.
Fair enough. I should have read more carefully. I get it now.
AccidentalSQL,
No problem.
Any guesses on the commodity hardware components going in this Exadata?
We were thinking X4150s would make a nice Sun Exadata.
No guesses here because I’m currently doing performance work on 3 of these configs 🙂
Larry Ellison or the Sun guest speaker will likely give such details tomorrow…
A little Google search says: the F5100 flash disk array and T5440 SPARC T2 servers.
Not my cup of tea – but open to new ideas.
You, sir, are living in interesting times. And the rest of us are lucky to even get a sniff of these exciting developments 😦
I guess Larry read my post and decided to make it public. Eh? 😉
The latest debate was whether SPARC makes it into that platform.
http://www.pythian.com/news/3981/larry-ellison-to-announce-oltp-database-machine-on-sun-hardware
Hi Alex,
You might be right about some of that, you might be wrong about some of that 🙂
You didn’t ping back to me on that post so I missed it originally.
Kevin,
Is it possible for you to elaborate on what makes this machine more suitable for OLTP workloads, as opposed to the HP-based Exadata, which is more targeted at data warehouses?
Chris
Kevin,
I should have said: other than the fact that SSD is better at handling random I/O than conventional disks, and that the Sun F5100 (which people seem to think this machine is based on) is limited to 4 TB, no good for VLDBs, what makes this machine something that is targeted at the OLTP market?
Chris
Chris,
Why would anyone believe the next Exadata product is based on the F5100? Larry Ellison is going to speak about what it actually is in about 1.5 hours…
The F5100 doesn’t have any “smarts” (there is no general-purpose CPU), so it won’t work so well as an Exadata software host. 🙂
If there ever were an “Oracle TimesTen real-time DB Machine,” then it would make more sense for the F5100 to be coupled to it. TimesTen is an in-memory database, so 4 TB is likely an acceptable upper bound for a single storage device. The size of the DB is going to be bound by the memory in the DB host node, so you would end up with a more affordable F5100 configuration than the fully populated one. The real-time OLTP would be done in memory, but the smaller volume of persistent writes would get a speed-up from the faster bandwidth to storage. That would improve the real-time response window for the whole system.
TimesTen as a read/write cache has more significant overlap with this new system, though, since the response times and latencies have dropped (when you happen to hit the flash cache). For example, a TimesTen cache in front of a CRM system might be able to get by with an OLTP-focused DB Machine, since there is very likely a mixed load (some OLTP and some more “info-aggregating” queries from the same system, with humans as the primary consumers of the answers). However, if there are hard, small-window real-time response constraints (and other computers as the consumers of the answers) rather than many TBs of data, it may still make more sense.
If the problem doesn’t have hard real-time constraints and the data is over 4 TB, then it will be a very hard push against DB Machine Version 2. You are going to get more fault tolerance built into that single “system.”
The F5100 may “RAC” better once the Sun unified storage heads play better in an InfiniBand storage setup, but it won’t be a core DB Machine storage node.
Lyman,
That comment looks more like a blog post than a comment and I honestly can’t figure out what you are saying. Are you the same Lyman that used to work for Oracle?
Kevin,
Yes, I used to work at Oracle. I apologize for the length of the reply I posted. It was aimed more at comments by Chris and some others than at your comments.
In short, I thought an example of hard real-time OLTP might better highlight both where the F5100 would be more effective and also, by short implicit inference, the differences between Version 1 and Version 2 of Exadata (removing latencies by removing/reducing spinning drives from the complete system).
Two birds with one very big stone is sometimes too big.
For some deployments, TimesTen is placed in front of the Oracle DB to speed up response/latency. Not an exact equivalent, but similar to ASM/Exadata/Flash being placed “behind” the Oracle DB to get similar results when you hit the flash cache.
However, you’re right that it doesn’t line up with the primary focus of your blog, so sorry about that.
The reference to clustering and the F5100 at the end, I should have just left off (that’s even more explaining about issues that weren’t present yet.)
The F5100 is a direct-attached storage (non-clustered environment) solution. That was the other point I was trying to get at.
Is the HP version going away? Quotes from Reuters…
http://www.reuters.com/article/technologyNews/idUSTRE58E80D20090915
“An Oracle spokeswoman said that the Exadata machine built in partnership with HP is no longer available for sale.”
“Officials at Hewlett-Packard could not be reached for comment.”
The price quoted in the speech and on the PDF price list does not include software costs – I assume that’s intentional. The total cost will be 2-3x the quoted price. Correct? Do the comparisons to Teradata and such still hold true?
Darryl,
I don’t know anything about price.
Seriously strange that the marketing team didn’t put the presentation’s slide deck up for download in the same places where they put the FAQ and other docs. Larry was clicking through the slides pretty fast in the webcast.
However, it seems as though someone has taken snapshots here.
http://oracleexadata.blogspot.com/2009/09/exadata-story-in-pictures.html
The HW and SW prices are on a slide there.
Remember, the Exadata Storage Server software is priced per spinning disk, so the Flash Cache appears to be just a hardware cost. (That is good, because flash costs a lot all by itself compared to spinning drives. 😉 ) There are 12 spinning disks per storage unit in the newer Sun system, and it is still about a $10K/disk software charge. (Pretty sure that was the same per-disk cost for Version 1; only there were fewer disks, 8, so the software cost per physical server was/is smaller.) Your minimal RAC DB Machine system cost (3 storage servers) is going to be higher now, but you have more raw disk space too. Pragmatically, that’s about “an 8 TB kind of problem” to be tall enough to get on that ride. Luckily there is now a single-node base unit for those who need smaller dev systems.
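To make the per-disk arithmetic concrete, here is a minimal back-of-the-envelope sketch. The ~$10K/disk software figure and disk counts are taken from the discussion above; they are illustrative assumptions, not official Oracle pricing.

```python
# Rough Exadata Storage Server software licensing math.
# Assumption: software is licensed per spinning disk (~$10K/disk, as
# discussed above); Flash Cache adds no software cost.

PER_DISK_LICENSE = 10_000   # USD per spinning disk (assumed list price)
DISKS_PER_CELL_V2 = 12      # Sun-based storage server (Version 2)
DISKS_PER_CELL_V1 = 8       # HP-based storage server (Version 1)

def cell_software_cost(disks_per_cell: int, cells: int) -> int:
    """Total storage software cost for a given number of storage cells."""
    return PER_DISK_LICENSE * disks_per_cell * cells

# Minimal RAC-capable configuration mentioned above: 3 storage cells.
v2_min = cell_software_cost(DISKS_PER_CELL_V2, cells=3)
v1_min = cell_software_cost(DISKS_PER_CELL_V1, cells=3)

print(f"V2 minimum (3 cells x 12 disks): ${v2_min:,}")  # $360,000
print(f"V1 minimum (3 cells x  8 disks): ${v1_min:,}")  # $240,000
```

This shows why the minimum system software cost rises with the Sun hardware even at the same per-disk price: more disks per cell.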
Hi Kevin,
A customer of mine is seriously already looking at this.
Quick question is the db block size of the OLTP Database Machine 32K?
Ta
Fairlie
Fairlie,
It is just 11gR2 with powerful, intelligent, high-bandwidth storage. All block sizes are supported.
Personally, I find the development of Exadata very interesting.
As Kevin has explained, the Exadata Storage Server has two components. The first is Oracle-developed software that enables Smart Scans and other functionality by using CPUs placed in the storage server, thus reducing the amount of data that has to be passed from storage to the RDBMS; calculations can be made in the storage layer.
The second is the hardware that enables this, which HP has provided (Intel CPUs in HP servers). Oracle’s software works with those CPUs.
I think the choice of HP as a provider for the first Database Machine came out of a long collaboration, could have been another provider of hardware as well (as we now see with Sun).
It will be interesting to see how the Solid State Disks perform in an OLTP environment.
What’s next? 🙂
Martin
What Solid State Disks? We are talking about FLASH PCI cards, not SSD hanging off of a regular disk controller.
Larry is touting the OLTP credentials of Exadata, but that is Oracle’s bread and butter. The irony is that it is exactly the architecture that makes Oracle good at transactional workloads that is its Achilles’ heel when scaling to large data volumes and handling big parallel analytical queries. This whole shared-disk model of Exadata + ASM (where every node can see all the data, and you have to manually partition to avoid disk contention) is the antithesis of shared-nothing MPP vendors like Teradata, Greenplum, and Netezza, which scale linearly to much larger volumes. More details at [ edited KC]
Case in point — Greenplum’s 6.5 Petabyte database across 96 nodes at eBay that supports a wide class of users and queries with little fanfare or tuning required.
What is “manually partition?”
I’m not going to debate architecture. At this stage in the game it really, truly does not matter.
P.S., Ben, you can post here and trackback to here, but please don’t solicit your purely competitive posts with links.
Thanks Kevin — no problem.
I do not work for any of Oracle’s competitors. I am a data warehouse consultant, and I can honestly say that in my experience, Oracle is much more about marketing hype than actually delivering on its promises.
Case in point: Exadata on HP. It was the Grand Solution one year ago. Where is it now? Look at all the changes they’ve had in their data warehouse offering. They buy a company, toss its software into the “suite,” and then dump it a year or two later.
How can anyone actually trust this company?
Looks like HP is out and Sun in for the hardware side of the Exadata machine.
Understandable enough with the pending Sun deal proceeding, I guess, but man, that’s a short marriage.
I wonder what the customers who actually bought one of the HP machines are thinking right now, but maybe they can trade it in for a Sun one?
Does anyone have a comparison of the key differences between Exadata V1 and V2? More specifically, what makes the one better than the other?
There is “substantial” optimization required for Exadata. Because it compresses each column, many queries will require data to be decompressed thereby reducing performance. Optimizing its memory usage is no trivial task.
This is a huge downfall to its technology and one that is not often talked about. Appliances like Netezza and Teradata are far less prone to “gaps” in performance that are evident in Exadata.
“Because it compresses each column, many queries will require data to be decompressed thereby reducing performance. ”
Rob,
Can you elaborate on this please? Which products allow you to compress only portions of a row? As for decompression, yes, it is expensive, but Oracle is able to filter on data in its compressed form so I’m not sure I understand your point in this matter either.
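The point about filtering on data in its compressed form can be illustrated with a generic dictionary-encoding sketch. This is an illustration of the general technique, not Oracle’s actual columnar-compression internals: with a dictionary-encoded column, an equality predicate can be translated into a single code lookup, after which only small integer codes are compared, with no per-row decompression.

```python
# Illustrative sketch of predicate evaluation on dictionary-encoded
# (compressed) column data. Hypothetical helpers, not any vendor's API.

def dictionary_encode(values):
    """Compress a column into (dictionary, codes)."""
    dictionary = sorted(set(values))
    index = {v: i for i, v in enumerate(dictionary)}
    codes = [index[v] for v in values]
    return dictionary, codes

def filter_equals(dictionary, codes, target):
    """Evaluate `col = target` on the compressed form: look up the
    target's code once, then compare integer codes row by row."""
    try:
        code = dictionary.index(target)
    except ValueError:
        return []  # target never appears in the column
    return [i for i, c in enumerate(codes) if c == code]

col = ["CA", "NY", "CA", "TX", "NY", "CA"]
d, codes = dictionary_encode(col)
print(filter_equals(d, codes, "CA"))  # matching row ids: [0, 2, 5]
```

Note that no value is ever decompressed to answer the filter; decompression is only needed when the matching values themselves must be returned to the caller.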