vSphere on Cisco UCS: Fibre Channel Design

So, your UCS system is cabled to your Fabric Interconnects, and you’re about to deploy vSphere on your new blades.  Hold on a minute: did you verify that you really followed best practices for UCS Fibre Channel connectivity?  After all, losing access to your storage can take out a significant portion (or ALL) of your virtual infrastructure.

Cisco’s recommended practices for connecting your Fabric Interconnects (FI) to Fibre Channel (FC) switches include maintaining distinct data paths for each FI.  That is, FI A connects to FC Switch 1, and FI B connects to FC Switch 2 (adjust accordingly for your own naming conventions).
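Because the FIs operate in End Host Mode (more on that below), they log in to the upstream switches like hosts, so the switch ports facing them are plain F ports with NPIV enabled.  Here is a minimal NX-OS sketch for FC Switch 1; the interface numbers are illustrative, not from any real deployment:

    feature npiv                  ! required so each FI uplink can carry multiple FLOGIs
    interface fc1/1 - 2           ! ports cabled to FI A only
      switchport mode F
      no shutdown

FC Switch 2 gets the mirror-image configuration for its links to FI B.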

You may be thinking, “…but the FIs are configured in End Host Mode (EHM), and they look like just another host to my Fibre Channel switches.  So, each Fabric Interconnect can have a path to both FC Switches and improve system high availability.”

It’s true that each FI will have multiple data paths.  However, consider the scenario pictured in the following diagram:

(Figure: Don’t cable your FI FC connections this way!)

Even though you may configure your vHBAs to communicate with different FIs, it is possible for an entire server’s FC traffic to flow through a single core FC switch.  That single switch then becomes a point of failure for the server’s storage access, and an outage there, planned or otherwise, would be a significant failure.

Now, consider the following design:

(Figure: UCS recommended practice for connecting to FC switches, a better design.)

Your FIs now have distinct data paths for FC traffic, and you can be deterministic about your vHBAs’ FC traffic flows: a vHBA on Fabric A can only reach the FC switch connected to FI A, and likewise for Fabric B.
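One way to sanity-check this from the ESXi side is to list each adapter’s paths and confirm that every vHBA only sees targets on its own fabric.  A quick sketch using standard esxcli commands (the vmhba numbering and WWPNs will vary per environment):

    # List the FC adapters; e.g., vmhba1 = vHBA0 (Fabric A), vmhba2 = vHBA1 (Fabric B)
    esxcli storage core adapter list

    # Dump initiator/target WWPNs per path; each adapter's targets should all
    # belong to a single fabric if the cabling above is correct
    esxcli storage core path list | grep -E "Adapter Identifier|Target Identifier"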

If you have Cisco MDS Fibre Channel switches, there is an additional benefit for your environment: Fibre Channel Port Channels.

FC Port Channels enable your environment to do the following:

  1. Load Balancing – Traffic flows can be evenly distributed across all the links in the Port Channel.  Load balancing is based on a hash of either the source/destination FCID pair or the source/destination FCID pair plus the Exchange ID.
  2. Quicker Link Failure Recovery – If a physical link goes bad, no “Re-FLOGI” is necessary.  The FC exchanges simply continue on the remaining links.
  3. Multiple VSANs – If you need to segment FC traffic (e.g., multiple security zones within the same UCS system), you can trunk multiple VSANs to the FIs and then assign the appropriate VSAN to each vHBA according to the security zone its blade belongs to (see the sketch after this list).
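If you do go the MDS route, here is a minimal NX-OS sketch of a trunking F-port port channel toward one FI.  The interface numbers, port-channel ID 10, and VSANs 100/200 for the two security zones are all hypothetical, and the UCS side needs a matching FC port channel configured under its SAN uplinks:

    feature fport-channel-trunk        ! lets F-port port channels trunk VSANs to an NPV device like the FI

    interface port-channel 10
      channel mode active
      switchport mode F
      switchport trunk mode on
      switchport trunk allowed vsan 100
      switchport trunk allowed vsan add 200

    interface fc1/1 - 2                ! the physical links to FI A
      channel-group 10 force
      no shutdown

    vsan database
      vsan 100 loadbalancing src-dst-ox-id   ! exchange-based hash (typically the default)

The load-balancing hash is set per VSAN: src-dst-id hashes on the FCID pair only, while src-dst-ox-id adds the Exchange ID for finer-grained distribution.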

Now, are all these FC design considerations really necessary?  After all, vSphere is perfectly capable of handling a hardware outage if you properly configure your DRS/HA clusters.  While that may be true, let’s avoid unnecessary hardware-related errors and maximize overall system stability!

If you have any interesting SAN design practices of your own, please share them in the comments section below.
