Not a Record, But It Is In My Book
Not a throughput record, that is. This new result at 39,614 QphH comes within 2% of Oracle’s previous top ranking at the 300GB scale. Why would Oracle publish a result that doesn’t even beat their own previous result? Price/performance is the answer. I also find the deployment architecture very interesting.
Oracle still holds the top ranking at 300GB with their December 2006 result of 40,411 QphH at a price/performance of $18.67/QphH. This new result, however, comes in at only $12.57/QphH, a 33% cost savings for essentially the same performance! What did Oracle do to reduce the cost? Did they use fewer RAC nodes to reduce their own licensing cost? Did they use a better, cheaper SAN? Did they use a cheaper, faster RAC interconnect? No, none of that. In fact:
- Both results used an 8-node cluster of blade servers
- Both results used Infiniband for the RAC cluster interconnect
- Both results used “networked storage”
So what gives?
No SAN.
The December 2006 result of 40,411 QphH used a $393,765 storage configuration consisting of some 448 disks in a SAN with 32 MSA 1000 controllers (not including spares). This new result, on the other hand, did not use a SAN at all. Instead, there were 8 blade servers (the RAC cluster) attached to 16 storage blades. The storage blades were running RHEL5 and presenting block storage via SRP over Infiniband to the 8-node RAC cluster. What’s that? Yes, Infiniband: a single, unified connectivity model for both the RAC interconnect and storage. With this type of storage, Oracle was able to drive the same performance (yeah, I know, within 2%) with only 128 hard drives. All told, the storage configuration was priced at $141,158, a healthy 64% reduction in storage cost!
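For anyone who wants to check the math, here is a quick back-of-the-envelope sketch in Python using only the figures quoted above. The variable names are mine, and the percentages are simple ratios of the published numbers, not anything taken from the full disclosure reports.

```python
# Rough sanity check of the headline figures quoted in this post.

dec_2006   = {"qphh": 40_411, "price_perf": 18.67}  # Dec 2006 result, $/QphH
new_result = {"qphh": 39_614, "price_perf": 12.57}  # new result, $/QphH

# Throughput gap: the new result lands within ~2% of the old one.
gap = (dec_2006["qphh"] - new_result["qphh"]) / dec_2006["qphh"]
print(f"throughput gap: {gap:.1%}")                  # ~2.0%

# Price/performance: roughly a third cheaper per QphH.
savings = 1 - new_result["price_perf"] / dec_2006["price_perf"]
print(f"price/perf savings: {savings:.1%}")          # ~32.7%

# Storage: 448 drives behind MSA 1000 controllers vs. 128 drives in
# storage blades serving SRP over Infiniband.
san_cost, blade_cost = 393_765, 141_158
print(f"storage cost reduction: {1 - blade_cost / san_cost:.1%}")  # ~64%
print(f"drive count reduction: {1 - 128 / 448:.1%}")               # ~71%
```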
This is a very, very good day, but not for Manly Man.
I couldn’t be happier!
Wow – that is very cool! Sounds kinda like Google.
I think that the real strength of this is when you combine it with ASM. Oracle automatically makes sure that the data is spread across the LUNs and manages rebalancing and such. I think it really could displace storage vendors in a few places.
Jeremy,
Your points are noted, but for whatever reason ASM was not used in this benchmark. However, ASM will undoubtedly play a key part in such storage approaches in the future.
Kevin,
Thanks for pointing out this benchmark. I am currently digging into RAC/ASM/IB/blades performance, scalability, maintenance, etc. Though improved in recent months, the maximum supported IB cable length is (unfortunately) still severely hampering SRP as a viable solution for customers for whom synchronous replication to a remote site cannot be worked around. One of the Cisco IB specialists I talked to recently mentioned that he thought IB links of a kilometer should come out sometime in 2008. 11g ASM “fast” resync will also help to get the concept through.