BLOG UPDATE – 2012.06.07 – Wayward Googlers resurrected this old post. Using my not-so-canny speed-reading skills I jumped in with comments. A reader emailed me to point out I bit myself with nomenclature (“B” vs “b”). At this point I think the 100Mb per 1GHz ratio is ripe for more scrutiny. I held fast to that ratio in the time frame of this original post. However, my work with Westmere and Sandy Bridge Xeons leads me to believe the ratio is in dire need of updating. I’ll address that topic in an upcoming post and link back to this post.
Calling My Friends Liars–How Fun
I can’t remember the last time I disagreed with Jeff Needham. I realized about 15 years ago (IIRC) that it doesn’t make sense to do so because he is always (5 “nines”) right. In a quick chat today he said:
Polyserve can sell up the notion that the gateway does break the 1Gbps barrier (which mostly people falsely believe is not enough I/O for “them”)
The “gateway” Jeff is referring to is the File Serving Utility for Oracle, but that is not the topic of this post. I want to cowardly disagree with Jeff’s assertion that Oracle IT people are erroneously concerned that the most common NAS bandwidth (1GbE) is not sufficient for their needs. I assert that such a concern may in fact be warranted. The point I want to make, and therefore the point on which I left-handedly disagree with Jeff, is that it doesn’t matter.
Yes, 1 Gigabit Ethernet is the most common NAS connectivity medium today, and with very little tuning you can get a realizable payload of roughly 112MB/s (I do). If 112MB/s is starving your CPUs, all is not lost.
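Where does that ~112MB/s figure come from? Here is a back-of-envelope sketch (my arithmetic, not any vendor’s specification) showing how close it sits to the theoretical TCP payload ceiling of a 1GbE link:

```python
# A 1Gb Ethernet link moves 125MB/s at line rate, but each standard
# 1500-byte MTU frame also carries Ethernet framing, IP, and TCP overhead,
# so the usable payload fraction is lower.

LINE_RATE_MBPS = 1000   # 1GbE, in megabits per second
WIRE_BYTES = 1538       # 1500 MTU + 18 Ethernet header/FCS + 20 preamble/inter-frame gap
TCP_PAYLOAD = 1448      # 1500 MTU - 20 IP - 20 TCP - 12 TCP timestamp option

line_rate_MBps = LINE_RATE_MBPS / 8                # 125 MB/s raw
efficiency = TCP_PAYLOAD / WIRE_BYTES              # roughly 94%
max_payload_MBps = line_rate_MBps * efficiency     # roughly 117-118 MB/s ceiling

print(f"theoretical TCP payload ceiling: {max_payload_MBps:.1f} MB/s")
```

A realizable 112MB/s is only a few percent under that ceiling, which is about what interrupt and protocol processing cost you in practice.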
A Safe Rule of Thumb
There is a rule of thumb that has stood the test of time regarding the balance between processor capability and I/O bandwidth. Now, I know that sometimes man bites dog, but the vast majority of systems will strike a balance between CPU and I/O according to the following formula:
100Mb I/O bandwidth for each 1GHz of CPU
This formula leans towards DSS-style workloads, so it is certainly a safe bet for OLTP. Oh, by the way, Mb (megabits) is not MB (megabytes). I see such notation horribly interchanged all too often, and it makes a big difference when you are trying to stuff 100 pounds (how many kilos is that?) of rocks into a 10 pound bag…
The last “really big” system I had dedicated to my projects (it was my “personal” lab system) was a Sequent NUMA-Q 2000 with 32 700MHz processors, 32GB RAM and 396 4GB hard disk drives. The formula was true then. It doesn’t sound like much by today’s standards, but about 280MB/s would saturate the system if I was doing heavy lifting such as index creation or complex queries with Parallel Query Option. On the other hand, I assure you that the processors nearly burst into flames running OLTP long before I hit ~280MB/s random 4KB transfers. After all, using the formula, that system would be able to deliver roughly 70,000 4KB IOps—and that was a lot in those days. By contrast, I’ll blog soon about how uneventful that I/O rate is with modern commodity servers (and I still hate that term, need to blog on it—note to self).
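The NUMA-Q figures above fall straight out of the rule of thumb; a quick sanity check of the arithmetic:

```python
# Apply the 100Mb-per-1GHz rule of thumb to the NUMA-Q 2000 configuration.
cpus, ghz_per_cpu = 32, 0.7           # 32 x 700MHz processors
total_ghz = cpus * ghz_per_cpu        # 22.4 GHz aggregate
bandwidth_Mb = total_ghz * 100        # rule of thumb: 2240 Mb/s
bandwidth_MB = bandwidth_Mb / 8       # megabits -> megabytes: 280 MB/s
iops_4k = bandwidth_MB * 1000 / 4     # at 4KB per transfer: 70,000 IOps

print(bandwidth_MB, iops_4k)
```

Note the divide-by-8 step: that is exactly where the Mb-versus-MB confusion bites people, and it is an 8x error when it does.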
The moral of the story is that if you are running on a legacy Unix system that is near the end of its lease, it is quite likely that the compute power it offers can be replaced by an 8-core AMD Opteron system running 64-bit Oracle. Put that thought on the back burner if you had a short lease on, say, an IBM RS/6000 Regatta though :-) (hey, I still have my favorites). If you are planning a deployment that can be handled by an 8-core AMD Opteron system (very likely), I can all but guarantee you that triple-bonded NFS paths with client-side O_DIRECT will not starve your processors one bit.
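The rule of thumb backs that guarantee up. A sketch of the arithmetic follows; the 2.6GHz clock is an assumed figure for illustration (the post does not name a specific Opteron model), but the conclusion holds across the clock speeds of the era:

```python
# Can triple-bonded GbE NFS keep an 8-core Opteron fed?
# The 2.6GHz clock below is an assumption for illustration, not a spec
# from the post.
cores, ghz = 8, 2.6
demand_MBps = cores * ghz * 100 / 8   # rule of thumb, converted Mb -> MB
supply_MBps = 3 * 112                 # three bonded GbE paths at ~112MB/s each

print(demand_MBps, supply_MBps)
```

Three bonded paths deliver roughly 336MB/s against a rule-of-thumb demand of about 260MB/s, so the CPUs stay fed with headroom to spare.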
Now, if you think a single-headed NAS device (Filer) can really feed a 3-way triple-bonded NFS data path for reads and writes, you need to do some testing and then read this.
So, in the end, I didn’t really disagree at all with Jeff, and for that, I feel good and safe!