
Tuesday, 13 November 2012

Data Center Bridging Using Windows Server 2012


Data Center Bridging 

Introduction

Data Center Bridging (DCB) is a suite of Institute of Electrical and Electronics Engineers (IEEE) standards that enable converged fabrics in the data center, where storage, data networking, cluster IPC and management traffic all share the same Ethernet network infrastructure. DCB provides hardware-based bandwidth allocation to a specific type of traffic and enhances Ethernet transport reliability with the use of priority-based flow control. Hardware-based bandwidth allocation is essential if traffic bypasses the operating system and is offloaded to a converged network adapter, which might support Internet Small Computer System Interface (iSCSI), Remote Direct Memory Access (RDMA) over Converged Ethernet, or Fibre Channel over Ethernet (FCoE). Priority-based flow control is essential if the upper-layer protocol, such as Fibre Channel, assumes a lossless underlying transport.

Many enterprises have large Fibre Channel (FC) storage area network (SAN) installations for storage service. An FC SAN requires special network adapters on servers and FC switches in the network. In general, FC hardware is significantly more expensive to deploy than Ethernet hardware, which results in large capital expenditures. Additionally, having separate adapter and switch hardware to support Ethernet network traffic and FC SAN services requires additional space, power and cooling capacity in a datacenter, which results in additional, ongoing operational expenditures. From a cost perspective, it is advantageous for many enterprises to merge their FC technology with their Ethernet-based hardware solution to provide both storage and data networking services.
For enterprises that already have a large FC SAN but want to migrate away from additional investment in FC technology, DCB enables them to build an Ethernet-based converged fabric for both storage and data networking. A DCB converged fabric can reduce the future total cost of ownership (TCO) and simplify management.
For hosting providers who have already adopted, or who plan to adopt, iSCSI as their storage solution, DCB can provide hardware-assisted bandwidth reservation for iSCSI traffic to ensure performance isolation. And unlike proprietary solutions, DCB is standards-based and therefore relatively easy to deploy and manage in a heterogeneous network.
Long story short: DCB is a set of Ethernet standards that leverage special functionality in a NIC to let us converge mixed classes of traffic, such as SAN and LAN, onto that NIC — traffic we would normally keep isolated. If your host's NIC has DCB functionality, Windows Server 2012 can take advantage of it to converge your fabrics.



A Windows Server® 2012-based implementation of DCB alleviates many of the issues that can occur when converged fabric solutions are provided by multiple original equipment manufacturers (OEMs). Proprietary solutions provided by multiple OEMs might not interoperate with one another, might be difficult to manage, and will typically have different software maintenance schedules. By contrast, Windows Server® 2012 DCB is standards-based and therefore relatively easy to deploy and manage in a heterogeneous network.

The following list summarizes the functionality provided by DCB.
  1. Provides interoperability between DCB-capable network adapters and DCB-capable switches.
  2. Provides a lossless Ethernet transport between a computer running Windows Server® 2012 and its neighbor switch by turning on priority-based flow control on the network adapter.
  3. Provides the ability to allocate bandwidth to a traffic class (TC) by percentage, where the TC might consist of one or more classes of traffic that are differentiated by 802.1p priority.
  4. Enables server administrators or network administrators to assign an application to a particular traffic class or priority based on well-known protocols, well-known TCP/UDP port, or NetworkDirect port used by that application.
  5. Provides DCB management through Windows Server® 2012 Windows Management Instrumentation (WMI) and PowerShell.
  6. Provides DCB management through Windows Server® 2012 Group Policy.
  7. Supports co-existence with other Windows Server® 2012 Quality of Service (QoS) solutions.
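To make the WMI/PowerShell management point concrete, here is a minimal sketch of configuring DCB for SMB Direct traffic with the Windows Server 2012 QoS cmdlets. The adapter name "Ethernet 2", the choice of priority 3 and the 50% reservation are illustrative assumptions, not recommendations.

```powershell
# Sketch only: enable DCB and reserve bandwidth for SMB traffic.

# Install the DCB feature (Windows Server 2012).
Install-WindowsFeature Data-Center-Bridging

# Classify SMB Direct (NetworkDirect) traffic into 802.1p priority 3.
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Turn on priority-based flow control for priority 3 (the lossless class).
Enable-NetQosFlowControl -Priority 3

# Reserve 50% of the bandwidth for that traffic class via ETS.
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply the DCB settings to the converged NIC.
Enable-NetAdapterQos -Name "Ethernet 2"
```

The same classification, flow-control and bandwidth settings would of course need matching configuration on the neighbor switch.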
When you think about iSCSI, Remote Direct Memory Access (RDMA) and Fibre Channel over Ethernet (FCoE), you can see where the benefits are to be found. We simply cannot keep adding separate network infrastructure for each of these applications on a large scale.
  • Integrates with the standard Ethernet networks
  • Prevents congestion in NIC & network by reserving bandwidth for particular traffic types giving better performance for all
  • Windows Server 2012 provides support & control for DCB and allows packets to be tagged by traffic type
  • Provides lossless transport for mission critical workloads
You can see why this can be handy in a virtualized world evolving into a cloud infrastructure. By enabling multiple traffic types to use an Ethernet fabric you can simplify & reduce the network infrastructure (hardware & cabling). In some environments this is a big deal. Imagine that a cloud provider does storage traffic over Ethernet on the same hardware infrastructure as the rest of the Ethernet traffic. You can get rid of the isolated storage-specific switches and HBAs, reducing complexity and operational costs. Potentially even equipment costs; I say potentially because I've seen the cost of some unified fabric switches and think your mileage may vary depending on the scale and nature of your operations.
Requirements for Data Center Bridging
DCB is based on four specifications by the DCB Task Group:
  1. Enhanced Transmission Selection (IEEE 802.1Qaz)
  2. Priority Flow Control (IEEE 802.1Qbb)
  3. Data Center Bridging Exchange protocol (DCBX, defined as part of IEEE 802.1Qaz)
  4. Congestion Notification (IEEE 802.1Qau)
Items 3 and 4 are not strictly required but optional (and beneficial), if I understand things correctly. If you want to dive a little deeper, have a look at the DCB Capability Exchange Protocol Specification and have a chat with your network people about what you want to achieve.
You also need support for DCB in the switches and in the network adapters.
Finally, don't forget to run Windows Server 2012 as the operating system. You can find some more information on TechNet in the Data Center Bridging (DCB) Overview, but it is incomplete. More information is coming!
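To verify that a given adapter actually exposes DCB support, you can query it from PowerShell. A small sketch; the adapter name is an illustrative assumption:

```powershell
# Show whether the adapter supports and has enabled DCB (QoS), and what
# the operational traffic classes look like after DCBX negotiation.
Get-NetAdapterQos -Name "Ethernet 2"

# Show whether the host is willing to accept DCB configuration from the switch.
Get-NetQosDcbxSetting
```

If `Get-NetAdapterQos` reports no capabilities, the NIC or its driver does not support DCB and the converged-fabric scenarios above will not apply.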
Understanding what it is and does
So, using the same traffic metaphor we used with Data Center TCP, we can illustrate the situation & solution with traffic lanes for emergency services and the like. Instead of having your mission-critical traffic stuck in gridlock like the fire department trucks below:
You could assign a reserved lane (QoS, guaranteed minimal bandwidth) for that mission-critical service. While you are at it, you might do the same for some less critical services that nonetheless provide a big benefit to the entire situation as well.

Monday, 12 November 2012

Windows 8 To Go – Carry your Windows with you Everywhere


Windows To Go is a fully manageable corporate Windows 8 desktop on a bootable external USB stick. This allows IT organizations to support the "Bring Your Own PC" trend, and businesses can give contingent staff access to the corporate environment without compromising security.
One of the more interesting features of Windows 8 is Windows To Go, a way of installing Windows on, and running it from, a USB stick. It's something users have been demanding for some time (and was last seen with Windows 3.11's execute in place ROM option).
Using Windows To Go is very simple. Just plug a Windows To Go USB 3.0 flash drive into a PC and boot from the drive. The first time you boot on a new PC Windows To Go will configure its drivers, before booting into Windows 8. It's a similar process to that used by a Windows image that's installing over a network — and it's not surprising that Windows To Go drives are created using a new version of the familiar Imagex tool. Sadly it's not part of either the Windows 8 client or server previews, but we were lucky to pick up a ready-made Windows To Go installation on a 32GB Kingston USB 3.0 flash drive in a packed session at BUILD.
Run Windows 8 from a USB 3.0 flash drive like this, with Windows To Go
Once Windows 8 has booted, you can use the host PC as if it was your PC. The host system's disks are offline, and the only storage users can see or access is on the flash drive, reducing the risks of malware infecting either the host or the Windows To Go flash drive — although it can work with removable media inserted into the host PC's ports. There's also the option of encrypting Windows To Go with BitLocker, locking drives down and keeping them secure from prying eyes. If a drive is pulled from a PC accidentally, the screen will freeze, and if Windows To Go isn't re-inserted within 60 seconds the host PC is automatically shut down.
Microsoft has been optimising Windows 8 to run from flash, and to use flash storage, over a USB bus. It's not perfect — there's a recommendation to use as short a USB chain as possible, with as few hubs as possible. That may not be possible on some older PCs, and Microsoft is working with OEMs to improve USB implementations. Currently Microsoft recommends using USB 3.0 drives, as they use a better class of memory than USB 2.0 (and the company expects to certify drives for use with Windows). You don't need a USB 3.0 port though, as USB 3.0 drives will work over USB 2.0.
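The provisioning flow described above — an image applied with the Imagex/DISM family of tools, then made bootable — can be sketched by hand roughly as follows. The disk number, drive letters and WIM path are illustrative assumptions and will differ on your machine:

```cmd
rem Sketch: manually provisioning a Windows To Go drive.

rem Inside diskpart: prepare the USB drive (here assumed to be disk 1).
diskpart
  select disk 1
  clean
  create partition primary
  format fs=ntfs quick
  active
  assign letter=E
  exit

rem Apply the Windows 8 image to the USB drive...
dism /Apply-Image /ImageFile:D:\sources\install.wim /Index:1 /ApplyDir:E:\

rem ...and write boot files so it boots on both BIOS and UEFI machines.
bcdboot E:\Windows /s E: /f ALL
```

Treat this as an outline of the mechanism rather than a supported recipe; `clean` destroys everything on the selected disk.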
Windows To Go can't access host PC drives, they're blocked automatically
Windows To Go installed easily on our test machine, an HP tm2 convertible tablet PC running Windows 7. Once it had loaded drivers and rebooted we were able to use the bundled Metro-style applications as well as installing traditional Windows tools. Windows To Go found our PC's networking hardware, and was able to take advantage of both touchscreen and touchpad. Even though we were using a USB 2.0 port performance was good, with no noticeable lag.
This really is a very useful way of running Windows 8. It lets users share PCs, or use it where a full VDI infrastructure is impractical or expensive. There's plenty of scope for a tool like this in education, where class PCs could be used by everyone, and by low-income users in developing countries. It even acts as a secure way of working with untrusted machines in internet cafés, or to provide consultants and contractors with access to your business systems.
There are so many things that could be done with Windows To Go, and we're worried that the language Microsoft used to describe it at BUILD means it will just be a subscription benefit for volume licensing customers. Tools like this should be for everyone — not just for enterprises. Why not let every user make a Windows To Go image as part of the standard Windows licence, with additional copies available for a fee?

Windows 7 – VHD Boot – Setup Guideline

Windows 7 has a really useful feature called “VHD Boot”. With that you can boot your entire Windows out of a Virtual Hard Disk file (as those used with Virtual PC or Virtual Server).
This VHD file is mounted as a virtual disk, you can use it as a normal hard disk drive, but all the data is stored in ONE file. The machine is booted physically (unlike with Virtual PC), so you can only run one at a time, but have the full machine’s power.
The advantages are magnificent:
  • Simply copy one file (the .VHD file) and your entire system is included.
  • Create incremental VHD files: One VHD file can be based on another one. So if you have different systems, create a base Win7 VHD and make all others incremental. This will save a lot of disk space!
There are also some small disadvantages :-)
  • The .VHD booted OS needs to be Windows 7, Windows Server 2008 R2 or later.
  • There’s a performance decrease of about 3%.
  • Hibernate and some BitLocker scenarios don’t work
    (BitLocker CAN be used within the guest VHD though, but not on the disk where the VHD resides).
  • Windows Experience index won’t work.
For the last three months, all my machines have been running as VHD booted ones.
By the way, you can exchange a physically booted VHD file with Virtual PC VHD files. All you need to do is run sysprep /generalize /oobe. The OS also needs to be 32-bit because of Virtual PC.
So how do you install a VHD-Boot machine?
  1. Boot the system with a setup DVD or USB stick.
  2. At the setup screen, don't choose "Install Now", but press Shift-F10 to get into command-line mode [thanks for the many pieces of feedback about this shortcut!].
  3. Enter diskpart to start the partitioning utility.
  4. Create a new VHD file by entering

    create vdisk file="D:\pathToVhd.vhd" type=expandable maximum=maxSizeInMegabytes


    For differencing VHDs you need to add an additional parameter: parent="D:\pathtoparent.vhd".
  5. Now select the new VHD and attach it as a physical disk.

    select vdisk file="D:\pathToVhd.vhd"
    attach vdisk

  6. After that switch back to the setup window (e.g. using ALT+TAB) and start the setup.
  7. Proceed with the normal setup, but make sure you install to the correct disk (normally the last one), and ignore the "Windows cannot install to this disk" warning!
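Putting steps 4 and 5 together for a differencing (child) VHD, a full diskpart session looks roughly like this. The paths and the 20 GB maximum are illustrative; note that a differencing vdisk takes only a parent, not a type or maximum:

```cmd
diskpart
  rem Base image: an expandable 20 GB VHD.
  create vdisk file="D:\base.vhd" type=expandable maximum=20000

  rem Child VHD whose changes are stored separately from the base.
  create vdisk file="D:\child.vhd" parent="D:\base.vhd"

  rem Mount the child as a physical disk for the installer to target.
  select vdisk file="D:\child.vhd"
  attach vdisk
  exit
```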








At next startup, you’ll see Windows 7 in the boot menu!
Optional: If you want to add a VHD manually to the boot menu, you just need to copy an existing entry and set some parameters:
bcdedit /copy {originalguid} /d "New Windows 7 Installation"
bcdedit /set {newguid} device vhd=[D:]\Image.vhd
bcdedit /set {newguid} osdevice vhd=[D:]\Image.vhd
bcdedit /set {newguid} detecthal on


Ensuring High Availability of DHCP using Windows Server 2012 DHCP Failover



Introduction

Ensuring high availability of critical network services like DHCP figures high in the list of priorities for any enterprise. In an environment where clients get their IP addresses and network configuration automatically, uninterrupted network connectivity is dependent on the availability of DHCP service at all times. Let us consider for a moment what high availability of DHCP server is intended for:

- Any authorized computer which connects to the network should be able to obtain its IP address and network configuration from the enterprise DHCP service at all times.

- After obtaining an IP address, a computer should be able to renew its lease and continue using the same IP address so that there is no glitch in connectivity.

Windows Server 2012 DHCP provides a new high availability mechanism addressing these critical aspects. Two DHCP servers can be set up to provide a highly available DHCP service by entering into a failover relationship. A failover relationship has a couple of parameters which govern the behavior of the DHCP servers as they orchestrate the failover. One of them is the mode of the failover operation – I will describe this shortly. The other is the set of scopes that are part of the failover relation. These scopes are set up identically between the two servers when failover is configured. Once set up in this fashion, the DHCP servers replicate the IP address leases and associated client information between them and thereby have up-to-date information of all the clients on the network.  So even when one of the servers goes down – either in a planned or in an unplanned manner – the other DHCP server has the required IP address lease data to continue serving the clients.

Modes of Failover Operation

There are two modes of configuring DHCP failover to cater to the various deployment topologies:  Load Balance and Hot Standby. The Load Balance mode is essentially an Active-Active configuration wherein both DHCP servers serve client requests with a configured load distribution percentage. We will look at how the DHCP servers distribute client load in a later post.

The Hot Standby mode results in an Active-Passive configuration. You will be required to designate one of the two DHCP servers as the active server and the other as standby. The standby server is dormant with regard to serving client requests as long as the active server is up. However, the standby server receives all the inbound lease updates from the active DHCP server and keeps its database up to date.

The DHCP servers in a failover relationship can be in different subnets and can even be in different geographical sites.

Deployment Topologies

The support of these two modes enables a wide range of deployment topologies. The most rudimentary one is where two servers in a Load Balance or Hot Standby mode serve a set of subnets which are in the same site.

A slightly more involved deployment, where failover is deployed across two different sites, is illustrated in Figure 1. Here, the Hyderabad and Redmond sites each have a local DHCP server servicing clients in that site. To ensure high availability of the DHCP service at both sites, one can set up two failover relationships in Hot Standby mode. One failover relationship will comprise all subnets/scopes at Hyderabad; it will have the local DHCP server as the active server with the DHCP server at Redmond as the standby. The second failover relationship will comprise all subnets/scopes at Redmond; it will have the local DHCP server as the active server and the DHCP server at Hyderabad as the standby.


http://blogs.technet.com/resized-image.ashx/__size/550x0/__key/communityserver-blogs-components-weblogfiles/00-00-00-43-33/8030.MultiSiteDHCPFailover.png

                   Figure 1: DHCP Failover Deployed Across Two Sites
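A two-site Hot Standby deployment like the one in Figure 1 could be created with the DHCP PowerShell cmdlets along these lines. The server names, scope IDs and shared secret are illustrative assumptions:

```powershell
# Relationship 1: Hyderabad scopes; local server active, Redmond standby.
Add-DhcpServerv4Failover -ComputerName "dhcp-hyd.contoso.com" `
    -PartnerServer "dhcp-red.contoso.com" -Name "HYD-Failover" `
    -ServerRole Active -ScopeId 10.10.1.0 -SharedSecret "s3cret"

# Relationship 2: Redmond scopes; local server active, Hyderabad standby.
Add-DhcpServerv4Failover -ComputerName "dhcp-red.contoso.com" `
    -PartnerServer "dhcp-hyd.contoso.com" -Name "RED-Failover" `
    -ServerRole Active -ScopeId 10.20.1.0 -SharedSecret "s3cret"
```

Omitting -ServerRole and specifying a load distribution instead would create Load Balance relationships rather than Hot Standby ones.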

This deployment construct of two DHCP servers backing up each other for two different set of scopes via two failover relationships is extensible to more than two sites. One can visualize a ring topology involving multiple sites where a server at each site - in addition to being the active server for the local network – is the standby server for another site. The failover relationships can be set up to form a ring topology through the DHCP servers at different sites.

Hub-and-Spoke is another multi-site deployment topology which lends itself quite well to how organizations are looking to deploy failover. Here, a central DHCP server acts as the standby for multiple active DHCP servers each of which serves a different branch office.

 

Better than earlier HA mechanisms

Windows DHCP server has so far met the HA requirement by enabling hosting of the DHCP server on a Windows Failover Cluster or by split scope deployments. These mechanisms have their own disadvantages. The split scope mechanism relies on configuring identical scopes on two DHCP servers and setting up the exclusion ranges in such a fashion that 80% of a subnet’s IP range is used for leasing out IP addresses by one of the servers (primary) and remaining 20% by the other server (secondary). The secondary server is often configured to respond to clients with a slightly delayed response so that clients use IP addresses from the primary server whenever it is available. Split scope deployments suffer from two problems. IPv4 subnets often run at utilization rates above 80%. In such subnets, split scope deployment is not effective given the low free pool of IP addresses available. The other issue with split scope is the lack of IP address continuity for clients in case of an outage of the primary server. Since the IP address given out by the primary DHCP server would be in the exclusion range of the secondary server, the client will not be able to renew the lease on the current IP address and will need to obtain a new IP address lease from the secondary server. In the case of split scope, the two DHCP servers are oblivious to each other’s presence and do not synchronize the IP address lease information.

To host the DHCP server on a Windows Failover Cluster, the DHCP database needs to be hosted on a shared storage accessible to both nodes of a cluster in addition to the deployment of the cluster itself. DHCP servers running on each node of the cluster operate on the same DHCP database hosted on the shared storage. In order to avoid the shared storage being the single point of failure, a storage redundancy solution needs to be deployed. This increases the complexity as well as the TCO of the DHCP high availability deployment.

The Windows Server 2012 DHCP failover mechanism eliminates these shortcomings and provides a vastly simplified deployment experience. Moreover, DHCP failover is supported in all editions (Foundation, Standard, Datacenter) of Windows Server 2012. As one of the server reviewers aptly put it, this is high availability of DHCP on a low budget!

Management interfaces

DHCP failover can be configured using the DHCP MMC as well as the DHCP PowerShell cmdlets. Everything you can do via the MMC for DHCP failover is achievable via the DHCP PowerShell cmdlets as well. The DHCP MMC provides a Failover Setup wizard which greatly eases the setup of failover. There are two launch points in the DHCP MMC from which a user can start the wizard. The right-click menu options on the IPv4 node now have a Configure failover… option. If launched from here, all the scopes on the server which are not yet set up for failover are selected for failover configuration. Alternatively, if you select one or more scopes and right-click, you will see the same Configure Failover… option. If launched in this fashion, only the selected scopes are configured for failover. Please see the step-by-step guide to setting up failover using the DHCP MMC. You can download the "Understanding and Troubleshooting guide" here.

For the command line users, DHCP PowerShell provides the following PowerShell cmdlets for setting up and monitoring failover:

  • Add-DhcpServerv4Failover - Creates a new IPv4 failover relationship on a DHCP server
  • Add-DhcpServerv4FailoverScope - Adds the specified scope(s) to an existing failover relationship
  • Get-DhcpServerv4Failover - Gets the failover relationships configured on the server
  • Remove-DhcpServerv4Failover - Deletes the specified failover relationship(s)
  • Remove-DhcpServerv4FailoverScope - Removes the specified scopes from the failover relationship
  • Set-DhcpServerv4Failover - Modifies the properties of an existing failover relationship
  • Invoke-DhcpServerv4FailoverReplication - Replicates scope configuration between failover partner servers

In addition to these, the Get-DhcpServerv4ScopeStatistics cmdlet, which returns scope statistics, has a -Failover switch. Specifying this switch causes the cmdlet to return failover-specific statistics for scopes which are configured for failover.
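For example, a quick health check of a server's failover relationships and their scopes might look like this; the server name is an illustrative assumption:

```powershell
# List the failover relationships configured on the server.
Get-DhcpServerv4Failover -ComputerName "dhcp-hyd.contoso.com"

# Return failover-specific statistics for the scopes on that server.
Get-DhcpServerv4ScopeStatistics -ComputerName "dhcp-hyd.contoso.com" -Failover
```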

In conclusion, DHCP failover in Windows Server 2012 provides a high availability mechanism for DHCP which is very easy to deploy and manage and caters to the critical requirements of continuous availability of DHCP service and IP address continuity for the clients. Early adopters of the feature have shared our enthusiasm for this critical DHCP functionality and feedback from them has been very positive.

Give it a spin on Windows Server 2012 Release Candidate and we hope that you will find it useful!

Tuesday, 8 May 2012

Exchange 2010 Edge Server High Availability

Introduction


The Exchange Server 2010 Edge role provides a way to place Exchange servers in the perimeter network (aka DMZ) that route messages from the outside to the internal messaging system and vice versa. The Edge server role has existed since Exchange 2007 and reaches its second release with Exchange Server 2010.



Designing Edge Server Implementations

If an administrator decides to implement relay servers, which will typically reside in the DMZ, and chooses to build them on Windows servers, then the Exchange Edge role may be the appropriate solution, particularly when the company uses Exchange as its primary messaging environment. In that case it can be very attractive to use Exchange itself for the relay servers.



An Exchange Edge server is a Windows Server 2003 or 2008/R2-based system that is not a member of a domain; it is a member of a workgroup instead. This is the most important design consideration; otherwise Active Directory domain traffic would need to cross the internal firewalls, which is quite unsafe.



Edge servers are a specific variant of Hub Transport servers that don't rely on Active Directory; their directory service is Active Directory Lightweight Directory Services (AD LDS), which was probably better known as Active Directory Application Mode (ADAM).







Figure 1: Exchange Edge Server Concepts & Design



The setup of an Edge server is quite easy and straightforward; you only need to choose the correct role in the Exchange Server setup utility.





Figure 2: Choose the correct Role in SETUP.EXE



The Edge Server system provides the following functions:



1. Accept incoming email from external senders
2. Accept outgoing email from internal senders
3. Check whether an email is spam and, if so, delete or reject it
4. Check whether an email is virus-infected and, if so, delete the virus or reject the mail
5. Check whether an email is addressed to an existing user and, if not, reject it

When implementing Edge servers, a very important decision is choosing a suitable antivirus & antispam solution that is specifically designed for Edge servers; in general, the most well-known antivirus solutions provide this support.



To setup an Edge Server Role using the command line, the syntax is as follows:



Setup.com /roles:EdgeTransport /InstallWindowsComponents



The parameter /InstallWindowsComponents makes sure that missing Windows components are added automatically during setup. After a successful installation it is recommended to install the latest cumulative update; as of now this is CU6 and it can be downloaded here:







Figure 3: Edge Server Console in action



As you can see above, there is no difference in the Exchange Server console. The properties of each server are:



1. Anti-Spam (for enabling Content Filtering, IP Allow List, IP Allow List Providers, IP Block List, IP Block List Providers, Recipient Filtering, Sender Filtering, Sender ID and Sender Reputation)
2. Receive Connectors (for configuring from which IP addresses emails are accepted)
3. Send Connectors (for configuring where to send internet emails)
4. Transport Rules (for modifying emails before sending them to the internet)
5. Accepted Domains (routable domains for which Exchange is responsible)

The following ports need to be opened on the firewalls (edge or back-end firewall):





Source      | Destination | Port  | Protocol
------------|-------------|-------|-------------------
Edge Server | Internet    | 25    | SMTP (TCP)
Edge Server | Internal    | 25    | SMTP (TCP)
Internet    | Edge Server | 25    | SMTP (TCP)
Intranet    | Edge Server | 25    | SMTP (TCP)
Hub Server  | Edge Server | 50636 | User-defined (TCP)

Table 1



High Availability

In general, companies need to provide highly available messaging solutions. Because Exchange Edge servers operate on Windows sockets (IP address + IP port), the easiest way to provide high availability is through Network Load Balancing. For Exchange Edge this is the only supported concept.



There are two ways to provide “Load Balancing”:



1. Hardware load balancer
2. Software load balancer



The underlying Windows operating system already includes a software load balancer called Network Load Balancing (NLB), so in general this is your first choice unless internal company policies preclude it. Windows Network Load Balancing is supported for servers with a single network interface card or with two network interface cards. The configuration of NLB is nearly the same either way and quite easy using the NLB configuration wizard. If you would like to configure it from the command line, WLBS.EXE or NLB.EXE will come in handy.



A second installation of the Exchange Edge role with the default configuration is quite easy to finish. To configure your second Edge server the same as the first one, you can transfer the configuration quite easily:



1. Export the configuration using the following command line:

.\ExportEdgeConfig.ps1 -CloneConfigData:"C:\CloneConfigData.xml"

2. Modify the XML and replace the name of Edge Server 1 with that of Edge Server 2.

3. Validate the configuration and create a new answer file:

.\ImportEdgeConfig.ps1 -CloneConfigData:"C:\CloneConfigData.xml" -IsImport $false -CloneConfigAnswer:"C:\CloneConfigAnswer.xml"

4. Import the modified configuration using the following command line:

.\ImportEdgeConfig.ps1 -CloneConfigData:"C:\CloneConfigData.xml" -IsImport $true -CloneConfigAnswer:"C:\CloneConfigAnswer.xml"

Enable Edge Server Synchronization

To finally enable Edge server synchronization, you need to run the following PowerShell command on your Edge server(s):



New-EdgeSubscription -FileName "C:\EdgeInfo.xml"



Now we need to copy the Edge Subscription file to the Hub Transport server and, in the Exchange Management Console, click "New Edge Subscription > New Edge Subscription Wizard". If you experience any errors, the application log will help you troubleshoot the issue.
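If you prefer the shell over the wizard, the subscription file can also be imported on the Hub Transport side with New-EdgeSubscription. A sketch; the Active Directory site name here is an illustrative assumption:

```powershell
# Import the Edge Subscription file on a Hub Transport server,
# reading the XML as a byte array as the cmdlet expects.
New-EdgeSubscription -FileData ([byte[]]$(Get-Content -Path "C:\EdgeInfo.xml" `
    -Encoding Byte -ReadCount 0)) -Site "Default-First-Site-Name"
```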



To enable a full synchronization we need the following cmdlets in Exchange Management Shell:



Start-EdgeSynchronization -Server <HubTransportServer> -TargetServer <EdgeServer> -ForceFullSync



Conclusion




As you have seen above, the configuration of Exchange Edge servers for relaying emails to and from the internet is quite easy and straightforward, although you cannot configure it completely from the Management Console.



If you still have any questions, please don't hesitate to contact me.


