This is just a quick blog entry to correct some minor (and some not-so-minor) errors I’ve stumbled upon in blog and Web news media.
Exadata Storage Server Gross Capacity
In his recent Computerworld article, Eric Lai was trying to put some flesh on the bones as it were. It is a good article, but a few bits need correction/clarification. First, Eric wrote:
The Exadata Storage Server, a standard rack-mountable HP Proliant DL180 G5 server sporting two Intel quad-core CPUs connected to 12 hard drives of 1TB each.
The HP Oracle Database Machine has two hard drive options: SAS and SATA. The SAS option comprises twelve 300GB 15,000 RPM drives. The SATA option is indeed twelve drives, each 1TB in size.
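The gross capacity difference between the two options is easy to work out. Here is a minimal sketch of that arithmetic (decimal vendor units, raw capacity before any ASM redundancy):

```python
# Gross (raw) capacity per Exadata Storage Server cell for each drive
# option described above. Decimal (vendor) units, not binary.

DRIVES_PER_CELL = 12

sas_gb = DRIVES_PER_CELL * 300      # twelve 300GB 15,000 RPM SAS drives
sata_gb = DRIVES_PER_CELL * 1000    # twelve 1TB SATA drives

print(f"SAS option:  {sas_gb} GB ({sas_gb / 1000} TB) gross per cell")
print(f"SATA option: {sata_gb} GB ({sata_gb / 1000} TB) gross per cell")
```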
HP Oracle Database Machine Communications Bandwidth
Later in the article the topic of bandwidth between the RDBMS hosts and Exadata Storage Servers was covered. The author wrote:
[…] users can expect a real-world bandwidth today of 1Gbit/sec, which he claimed is far faster than conventional disk storage arrays.
The HP Oracle Database Machine has one active and one failover InfiniBand path between the RDBMS hosts (for RAC inter-process communication) and from each RDBMS host to each Exadata Storage Server. Each path offers 20Gb of bandwidth, which is more than the aggregate streaming disk I/O an Exadata cell can produce for up-stream delivery, as I explain in question #2 of my Exadata Storage Server FAQ. Since the disks can “only” generate roughly 1GB/s of streaming data, we routinely state that users can expect real-world bandwidth today of 1GB/second. Note the difference in notation (GB vs. Gb), because it accounts for nearly one order of magnitude of difference.
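Since the GB-versus-Gb distinction is exactly where the article went wrong, here is a minimal sketch of the unit conversion (the 20Gb/s and ~1GB/s figures are the ones stated above):

```python
# GB (gigabytes) vs. Gb (gigabits): a gigabit is one eighth of a
# gigabyte, so confusing the two is nearly an order-of-magnitude error.

wire_gbit = 20.0                  # stated InfiniBand path rate, Gb/s
disk_stream_gbyte = 1.0           # rough streaming rate a cell's disks produce, GB/s

disk_stream_gbit = disk_stream_gbyte * 8   # convert GB/s to Gb/s

print(f"Disk streaming: {disk_stream_gbyte} GB/s = {disk_stream_gbit} Gb/s")
print(f"Wire rate:      {wire_gbit} Gb/s")
print(f"Wire headroom:  {wire_gbit / disk_stream_gbit:.1f}x over disk streaming")
```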
A Rack Half-Full or Half-Empty?
When discussing the Beta testing activity, the author quoted Larry Ellison as having said the following at his Keynote address:
A number of Oracle customers have been testing the Machine for a year, putting their actual production workloads onto half-sized Oracle Database Machines, because we’re really cheap
I was there and he did say it. While I may be dumb (actually, I can talk), I am not stupid enough to “correct” Larry Ellison. Nonetheless, when Larry said the Beta participants were delivered a half-sized HP Oracle Database Machine, he was actually being too generous. He said we sent a half configuration because “we’re really cheap,” but in fact we must be even cheaper because, while we sent them half the number of RDBMS hosts, we sent them only 6 Exadata Storage Servers as opposed to the 7 that would be exactly half a Database Machine.
Good for the Goose, Good for the Gander
Finally, on my very own blog (in this post even!) I have routinely stated the wire bandwidth of the InfiniBand network, with which we interconnect Oracle Database instances to each other and to Oracle Exadata Storage Server cells, as being 20Gb/s. Of course, as with all communications protocols, there is a difference between the wire rate and the payload bandwidth. One of my blog readers commented as follows:
Yet another minor nit
As I commented elsewhere, 20Gb/s is the IB baud rate, the useful bit rate is 16Gb/s (8b/10b encoding). I am not sure why the IB folks keep using the baud numbers.
And to that I replied:
Not minor nits at all. Thanks. We have used pretty poor precision when it comes to this topic. Let me offer mitigating circumstances.
While the pipe is indeed limited to 16Gb payload (20% less than our routinely stated 20Gb), that is still nearly twice the amount of data a cell can produce by streaming I/O from disk in the first place. So, shame on us for being 20% off in that regard, but kudos to us for making the pipes nearly 100% too big?
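The 8b/10b arithmetic from the comment thread can be sketched as follows (the 20Gb/s signaling rate and ~1GB/s disk streaming figures are the ones discussed above):

```python
# 8b/10b encoding carries 8 payload bits per 10 line bits, so a 20Gb/s
# InfiniBand signaling rate yields 16Gb/s of usable payload bandwidth.

signal_gbit = 20.0
payload_gbit = signal_gbit * 8 / 10        # 8b/10b encoding overhead
disk_stream_gbit = 1.0 * 8                 # ~1GB/s of streaming disk I/O, in Gb/s

print(f"Payload bandwidth: {payload_gbit} Gb/s")
print(f"Disk streaming:    {disk_stream_gbit} Gb/s")
print(f"Pipe headroom:     {payload_gbit / disk_stream_gbit:.0f}x")
```

Even after the 20% encoding penalty, the pipe is still roughly twice what a cell's disks can stream, which is the point made above.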
Kevin,
The ODM spec sheet says there are four 24-port InfiniBand switches (96 total ports) in each DB Machine. If each of the 8 RDBMS hosts has 2 links to the switches and each of the 14 Exadata servers also has two, then that is just 2×8 + 2×14 = 44 links. So I’m wondering what the exact InfiniBand configuration looks like. Are the other ports reserved for extra-machine connectivity (scale-out)?
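The reader's port counting can be sketched directly; all figures here come from the comment above (the spec sheet as the reader read it), and the "unaccounted" ports are exactly what the question is about:

```python
# Port-counting arithmetic from the reader's comment: each host/cell
# link consumes one switch port, leaving the remainder unexplained
# (inter-switch links, spares, or scale-out connectivity).

switches, ports_per_switch = 4, 24
rdbms_hosts, exadata_cells = 8, 14
links_per_node = 2

total_ports = switches * ports_per_switch
used_links = links_per_node * (rdbms_hosts + exadata_cells)

print(f"Total switch ports: {total_ports}")
print(f"Host/cell links:    {used_links}")
print(f"Unaccounted ports:  {total_ports - used_links}")
```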
Hmmmm. Add ioSANs from Fusion-io. Bake. Yummmm.