
Tuesday, 13 November 2012

Data Center Bridging Using Windows Server 2012



Introduction

Data Center Bridging (DCB) is a suite of Institute of Electrical and Electronics Engineers (IEEE) standards that enable converged fabrics in the data center, where storage, data networking, cluster inter-process communication (IPC), and management traffic all share the same Ethernet network infrastructure. DCB provides hardware-based bandwidth allocation to a specific type of traffic and enhances Ethernet transport reliability with the use of priority-based flow control. Hardware-based bandwidth allocation is essential if traffic bypasses the operating system and is offloaded to a converged network adapter, which might support Internet Small Computer System Interface (iSCSI), Remote Direct Memory Access (RDMA) over Converged Ethernet, or Fibre Channel over Ethernet (FCoE). Priority-based flow control is essential if the upper-layer protocol, such as Fibre Channel, assumes a lossless underlying transport.

Many enterprises have large Fibre Channel (FC) storage area network (SAN) installations for storage service. FC SAN requires special network adapters on servers and FC switches in the network. In general, FC hardware is significantly more expensive to deploy than Ethernet hardware, which results in large capital expenditures. Additionally, having separate adapter and switch hardware to support Ethernet network traffic and FC SAN services requires additional space, power, and cooling capacity in a data center, which results in additional, ongoing operational expenditures. From a cost perspective, it is advantageous for many enterprises to merge their FC technology with their Ethernet-based hardware solution to provide both storage and data networking services.
For enterprises that already have a large FC SAN but want to move away from additional investment in FC technology, DCB enables them to build an Ethernet-based converged fabric for both storage and data networking. A DCB converged fabric can reduce the future total cost of ownership (TCO) and simplify management.
For hosters who have already adopted, or who plan to adopt, iSCSI as their storage solution, DCB can provide hardware-assisted bandwidth reservation for iSCSI traffic to ensure performance isolation. And unlike proprietary solutions, DCB is standards-based and therefore relatively easy to deploy and manage in a heterogeneous network.
Long story short: DCB is a set of Ethernet standards that leverage special functionality in a NIC to let us converge mixed classes of traffic that we would normally keep isolated, such as SAN and LAN, onto that NIC. If your host’s NIC has DCB functionality, then Windows Server 2012 can take advantage of it to converge your fabrics.



A Windows Server® 2012-based implementation of DCB alleviates many of the issues that can occur when converged fabric solutions are provided by multiple original equipment manufacturers (OEMs). Proprietary solutions provided by multiple OEMs might not interoperate with one another, might be difficult to manage, and will typically have different software maintenance schedules. By contrast, Windows Server® 2012 DCB is standards-based and therefore relatively easy to deploy and manage in a heterogeneous network.

The following list summarizes the functionality provided by DCB:
  1. Provides interoperability between DCB-capable network adapters and DCB-capable switches.
  2. Provides a lossless Ethernet transport between a computer running Windows Server® 2012 and its neighbor switch by turning on priority-based flow control on the network adapter.
  3. Provides the ability to allocate bandwidth to a Traffic Class (TC) by percentage, where the TC might consist of one or more classes of traffic that are differentiated by 802.1p priority.
  4. Enables server administrators or network administrators to assign an application to a particular traffic class or priority based on well-known protocols, a well-known TCP or UDP port, or the NetworkDirect port used by that application.
  5. Provides DCB management through Windows Server® 2012 Windows Management Instrumentation (WMI) and PowerShell (a short PowerShell sketch follows this list).
  6. Provides DCB management through Windows Server® 2012 Group Policy.
  7. Supports coexistence with other Windows Server® 2012 Quality of Service (QoS) solutions.
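To make item 5 concrete, here is a minimal PowerShell sketch of the kind of classification that items 3 and 4 describe. The policy names, the ports, and the priority values are illustrative assumptions, not required values:

  # Tag SMB Direct traffic (NetworkDirect over port 445) with 802.1p priority 3
  New-NetQosPolicy -Name "SMB Direct" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

  # Tag iSCSI traffic (TCP destination port 3260) with 802.1p priority 4
  New-NetQosPolicy -Name "iSCSI" -IPDstPortMatchCondition 3260 -PriorityValue8021Action 4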
When you think about iSCSI, Remote Direct Memory Access (RDMA), and Fibre Channel over Ethernet (FCoE), you can see where the benefits are to be found. We just can’t keep adding network infrastructure after network infrastructure for all these applications at a large scale.
  • Integrates with standard Ethernet networks
  • Prevents congestion in the NIC and network by reserving bandwidth for particular traffic types, giving better performance for all
  • Windows Server 2012 provides support and control for DCB and allows you to tag packets by traffic type
  • Provides lossless transport for mission-critical workloads (a sketch of enabling this follows below)
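To give the last two bullets some substance, a minimal sketch of turning on priority-based flow control for one traffic class; priority 3 and the adapter name "Ethernet 1" are assumptions for illustration:

  # Make priority 3 lossless by enabling priority-based flow control for it
  Enable-NetQosFlowControl -Priority 3

  # Leave the remaining priorities lossy
  Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

  # Apply the DCB settings to a DCB-capable NIC ("Ethernet 1" is an example name)
  Enable-NetAdapterQos -Name "Ethernet 1"

Note that only the priorities carrying loss-sensitive traffic (FCoE, RDMA) should be made lossless; enabling flow control everywhere defeats the purpose of having separate classes.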
You can see why this can be handy in a virtualized world evolving into cloud infrastructure. By enabling multiple traffic types to use an Ethernet fabric, you can simplify and reduce the network infrastructure (hardware and cabling). In some environments this is a big deal. Imagine that a cloud provider does storage traffic over Ethernet on the same hardware infrastructure as the rest of the Ethernet traffic. You can get rid of the isolated storage-specific switches and HBAs, reducing complexity and operational costs. Potentially even equipment costs; I say potentially because I’ve seen the cost of some unified fabric switches, and your mileage may vary depending on the scale and nature of your operations.
Requirements for Data Center Bridging
DCB is based on four specifications from the IEEE DCB Task Group:
  1. Enhanced Transmission Selection (IEEE 802.1Qaz)
  2. Priority Flow Control (IEEE 802.1Qbb)
  3. Data Center Bridging Exchange (DCBX) protocol (specified as part of IEEE 802.1Qaz)
  4. Congestion Notification (IEEE 802.1Qau)
3. & 4. are not strictly required but optional (and beneficial) if I understand things correctly. If you want to dive a little deeper have a look here at the DCB Capability Exchange Protocol Specification and have a chat with your network people on what you want to achieve.
You also need support for DCB in both the switches and the network adapters.
Finally, don’t forget to run Windows Server 2012 as the operating system. You can find some more information on TechNet in the Data Center Bridging (DCB) Overview, but it is incomplete. More information is coming!
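If you want to experiment, here is a minimal sketch of getting the DCB bits in place on a Windows Server 2012 host (the adapter name is an illustrative assumption):

  # Install the Data Center Bridging feature
  Install-WindowsFeature -Name "Data-Center-Bridging"

  # Check whether the NIC and its driver expose DCB/QoS capabilities
  Get-NetAdapterQos -Name "Ethernet 1"

  # Set the host to "not willing", so the local Windows DCB settings are used
  # instead of whatever the switch advertises via DCBX
  Set-NetQosDcbxSetting -Willing $false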
Understanding what it is and does
So, using the same traffic metaphor we used with Data Center TCP, we can illustrate the situation and the solution with traffic lanes for emergency services and the like. Instead of having your mission-critical traffic stuck in gridlock like the fire department trucks below:
[Image: fire department trucks stuck in gridlocked traffic]
You could assign a reserved lane, i.e. QoS with a guaranteed minimal bandwidth, for that mission-critical service. While you’re at it, you might do the same for some less critical services that nonetheless provide a big benefit to the entire situation as well.
[Image: a reserved lane keeping emergency traffic moving]
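In DCB terms, that reserved lane is an Enhanced Transmission Selection (ETS) traffic class with a guaranteed bandwidth percentage. A minimal sketch, assuming the SMB Direct traffic was tagged with priority 3 and iSCSI with priority 4 as in the earlier sketches; the percentages are illustrative:

  # A guaranteed 40% lane for the mission-critical storage traffic (priority 3)
  New-NetQosTrafficClass -Name "Storage" -Priority 3 -BandwidthPercentage 40 -Algorithm ETS

  # A smaller 20% lane for the less critical iSCSI traffic (priority 4)
  New-NetQosTrafficClass -Name "iSCSI" -Priority 4 -BandwidthPercentage 20 -Algorithm ETS

Everything that is not explicitly classified falls into the default traffic class, which keeps the remaining bandwidth.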
