Not a Record, But It Is In My Book
Not a throughput record, that is. This new result of 39,614 QphH comes within 2% of Oracle's previous top ranking at the 300GB scale. So why would Oracle publish a result that doesn't even beat their own previous one? Price/performance is the answer. I also find the deployment architecture very interesting.
Oracle still holds the top ranking at 300GB with their December 2006 result of 40,411 QphH at a price/performance of $18.67/QphH. This new result, however, comes in at only $12.57/QphH, a 33% cost savings for essentially the same performance! What did Oracle do to reduce the cost? Did they use fewer RAC nodes to reduce their own licensing cost? Did they use a better, cheaper SAN? Did they use a cheaper, faster RAC interconnect? No, none of that. In fact:
- Both results used an 8 node cluster of blade servers
- Both results used Infiniband for the RAC cluster interconnect
- Both results used “networked storage”
So what gives?
The December 2006 result of 40,411 QphH used a $393,765 storage configuration consisting of some 448 disks in a SAN with 32 MSA 1000 controllers (not including spares). This new result, on the other hand, did not use a SAN at all. Instead, there were 8 blade servers (the RAC cluster) attached to 16 storage blades. The storage blades were running RHEL5 and presenting block storage via SRP over Infiniband to the 8 node RAC cluster. What's that? Yes, Infiniband: a single, unified connectivity model for both the RAC interconnect and storage. With this type of storage, Oracle was able to drive the same performance (yeah, I know, within 2%) with only 128 hard drives. All told, the storage configuration was priced at $141,158, a healthy 64% reduction in storage cost!
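If you want to check my math on those percentages, here's a quick back-of-the-envelope sketch using only the figures quoted above (the dollar amounts and disk counts come straight from the two published results; nothing else is assumed):

```python
# Sanity-check the comparisons between the December 2006 result and the new one.

# Throughput: new result vs. previous top ranking
old_qphh, new_qphh = 40_411, 39_614
throughput_gap = (old_qphh - new_qphh) / old_qphh
print(f"Throughput gap: {throughput_gap:.1%}")          # within 2%

# Price/performance: $/QphH for each result
old_pp, new_pp = 18.67, 12.57
pp_savings = (old_pp - new_pp) / old_pp
print(f"Price/performance savings: {pp_savings:.0%}")   # roughly 33%

# Storage cost: SAN (448 disks) vs. storage blades (128 disks)
san_cost, blade_cost = 393_765, 141_158
storage_savings = (san_cost - blade_cost) / san_cost
print(f"Storage cost reduction: {storage_savings:.0%}") # roughly 64%
print(f"Drives eliminated: {448 - 128}")
```

The numbers hold up: a sub-2% throughput difference buys a one-third cut in price/performance, driven almost entirely by the storage side of the configuration.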
This is a very, very good day, but not for Manly Man.
I couldn’t be happier!