16GFC Fibre Channel is 16-Fold Better Than 4GFC? Well, All Things Considered, Yes. Part I.

16GFC == 16X
If someone walked up to you on the street and said, “Hey, guess what, 16GFC is twice as fast as 8GFC! It’s even 8-fold faster than what we commonly used in 2007,” you’d yawn and walk away. In complex (e.g., database) systems there’s more to it than line rate. Much more.

EMC’s press release about 16GFC support effectively means a 16-fold improvement over 2007 technology. Allow me to explain (with the tiniest amount of hand-waving).

When I joined the Oracle Exadata development organization in 2007 to focus on performance architecture, the state of the art in enterprise storage for Fibre Channel was 4GFC. However, all too many data centers of that era were slogging along with 2GFC connectivity (more on that in Part II). With HBAs plugged into FSB-challenged systems via PCIe, it was uncommon to see a Linux commodity system configured to handle more than about 400 MB/s (e.g., 2 x 2GFC or a single active 4GFC path). I know more was possible, but for database servers that was pretty close to the norm.

We no longer have front-side bus systems holding us back*. Now we have QPI-based systems with large, fast main memory, PCIe 2.0, and lots of slots.

Today I’m happy to see that 16GFC is quickly becoming a reality and I think balance will be easy to achieve with modern ingredients (e.g., switches, HBAs). Even 2U systems can handily process data flow via several paths of 16GFC (1600 MB/s each). In fact, I see no reason to shy away from plumbing 4 paths of 16GFC to two-socket Xeon systems for low-end DW/BI. That’s 6400 MB/s…and that is 16X better than where we were even as recently as 2007.
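To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch (Python, purely illustrative; the MB/s figures are the approximate usable payload rates per Fibre Channel generation, not measured numbers):

FC_MB_PER_SEC = {"2GFC": 200, "4GFC": 400, "8GFC": 800, "16GFC": 1600}

def host_bandwidth(speed, paths):
    """Aggregate host-side bandwidth, in MB/s, for N active paths at a given FC speed."""
    return FC_MB_PER_SEC[speed] * paths

circa_2007 = host_bandwidth("2GFC", 2)     # ~400 MB/s (or one active 4GFC path)
modern_2u = host_bandwidth("16GFC", 4)     # ~6400 MB/s

print(f"2007-era host: ~{circa_2007} MB/s")
print(f"4 x 16GFC host: ~{modern_2u} MB/s ({modern_2u // circa_2007}x better)")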

Be that as it may, I’m still an Ethernet sort of guy. I’m also still an NFS sort of guy, but no offense to Manly Men intended.

A Fresh Perspective
The following are words I’ve waited several years to put into my blog: “Let’s let customers choose.” There, that felt good.

In closing, I’ll remind folks that regardless of how your disks connect to your system, you need to know this:

Hard Drives Are Arcane Technology. So Why Can’t I Realize Their Full Bandwidth Potential?

* I do, of course, know that AMD Opteron-based servers were never bottlenecked by a front-side bus. I’m trying to keep this blog entry short. You can easily Google “kevin +opteron” to see related content.

9 Responses to “16GFC Fibre Channel is 16-Fold Better Than 4GFC? Well, All Things Considered, Yes. Part I.”


  1. Anon May 6, 2011 at 6:45 am

    How many 16GFC are connected into the largest VMAX?

    • kevinclosson May 6, 2011 at 2:31 pm

      Anon,

      I’m not a VMAX expert so I’d have to peek at the datasheet to answer that. I’d guess none yet. But instead of focusing on connectivity, let’s remember data flow. Consider, for a moment, a hypothetical array head that can deliver 16GB/s through the storage processor. It would make no sense to plumb more than 10 16GFC paths to such an array because the SP can’t fill the pipes. 10 x 16GFC == 16GB/s.
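      To put that saturation point in sketch form (purely illustrative Python; the 16GB/s storage-processor figure is hypothetical, as above):

      SP_THROUGHPUT_MB = 16_000   # hypothetical storage-processor limit, MB/s
      PER_PATH_MB = 1_600         # approximate usable payload bandwidth per 16GFC path, MB/s

      paths_to_saturate = SP_THROUGHPUT_MB // PER_PATH_MB   # == 10
      print(f"Beyond {paths_to_saturate} x 16GFC paths you add connectivity, not data flow.")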

  2. Anon May 6, 2011 at 6:48 am

    40Gb/s InfiniBand is now old. 16GFC is new in Fibre Channel. Whoopie!

  3. Ofir Manor May 9, 2011 at 10:04 am

    Hi Kevin,
    that’s quite an optimistic post, I must say… Is the change affecting your mood? 🙂
    Since a two-socket server can read 6.4GB/s, we could probably easily and cheaply cluster a dozen of these babies and get an awesome aggregate scan rate of more than 75GB/s, right? Now that would be beautiful – too bad we’re not there yet…
    The reality is that the scan bottleneck has just been moving from the interconnect layer to the storage layer. I haven’t seen traditional vendors really rising to this challenge – they focus on other storage-related challenges like provisioning/cloning/snapping/remote mirroring etc.
    Also, I don’t think flash is the solution to this in the DW space – it is just too expensive when data volumes start with TBs or 10s of TBs or more.
    Paraphrasing Anon’s remark, I guess just one of these 2-socket / 4-HBA servers could totally saturate some of the storage racks on the market, while the better ones will be totally saturated by two of these 2-socket servers. That’s why modern DB machine vendors have their own storage implementation.

    • kevinclosson May 9, 2011 at 5:18 pm

      Ofir, my friend…I need to make a blog post about what you are alluding to.

      You and I know the 75GB/s you are talking about is the aggregate scan rate of a full rack X2 configuration. OK, quick quiz time: What percentage of that 75GB/s has to be discarded before Exadata bottlenecks just on data flow alone? Then, what happens to the scan rate with just a single table join? OK, one more, ready? What happens to the scan rate with a sort clause?

      Last year in Berlin I taught all you guys to be ever mindful of data flow dynamics and the fact that Exadata is an Asymmetrical MPP. All this hype about SELECT COUNT(*) from tab where no_zero_values = 0 is just silly.

      I cover some of this in my contribution to the upcoming Apress book. http://kerryosborne.oracle-guy.com/2011/02/exatadata-book-2/

      • Ofir Manor May 10, 2011 at 3:53 pm

        Kevin, it’s so nice to have another round of chat…
        I wasn’t really aiming specifically at Exadata; I felt your post had an “EMC announces new hardware, solves world hunger” feel, which was a bit weird for you… Though it’s nice to hear that you are becoming a bit more optimistic 🙂
        I personally think that in the server space, 16Gb/s FC is a nice step forward, but not really critical. As organizations upgrade their storage, they can now choose to upgrade to either 8Gb/s or 16Gb/s infrastructure (HBAs, SAN switches, storage). I’m not sure they would really see a major difference between 8Gb/s and 16Gb/s as their storage will likely be the bottleneck, especially in the common scenario of a storage array shared between dozens of servers – most customers won’t buy a dedicated VMAX rack for every two-socket server. So I think it is safe to guess that 16Gb/s infrastructure will take a few years until it is cost-effective, if ever.
        Regarding Exadata… First, you know it has tons of standard bandwidth capacity, even though it is not 75GB/s between the cells and the database nodes (maybe unless you daisy-chain three racks). Second, Smart Scan can do a really decent job of reducing data transfer in most common scenarios – for example, selecting only 50 columns out of 400, having a where clause that filters 40/70/90% of the rows, etc. But that analysis makes more sense when comparing it to alternatives, and I don’t think 16Gb/s HBAs (connected to conventional storage) make a big difference here.
        (BTW, I guess your “hype” comment relates to storage indexes, which bypass the actual scan, so it’s a different beast).
        Also, I don’t see how sort is directly related to the scan performance issue. Once you have more than one server and have SQLs that can order data based on any column, some data movement between nodes is a must – in any architecture. But that’s off topic.
        Anyway, I’m more interested in hearing whether you think 16Gb/s FC is even relevant in the DW space… Or will the future lie elsewhere? Is there a huge performance difference today between modern NFS over 10Gb/s, FCoE (10Gb/s), and FC over 8Gb/s or 16Gb/s?

        • kevinclosson May 10, 2011 at 11:41 pm

          BTW, I guess your “hype” comment relates to storage indexes,

          Ofir,
          I only applied the word hype to the common (yet unwarranted) excitement I see when folks focus too much on scan speed. For instance, queries that return no rows and don’t join tables but cruise along at nose-bleed scan rates (e.g., 75GB/s) are not interesting to me.

          I’ll try to get to your questions in a post.

  4. Kyle Hailey May 25, 2011 at 9:46 pm

    Do you have any feel or data points on what the uptake of 2G, 4G, 8G FC is these days?

    PS it’s nice to see that it’s OK to use FC, sigh of relief 😉

    • kevinclosson May 25, 2011 at 10:30 pm

      Hi Kyle,

      Thanks for stopping by. I don’t know “the” answer, but I know for certain that it is becoming difficult to even find 2GFC stuff and 16GFC is still so new. I talk to a lot of folks that skipped from 2GFC to 8GFC as part of technology refreshes in the 2009 timeframe.

