Load Balancing in Exchange Server 2016

Just as when TMG went end-of-life and people were asking how they should publish Exchange services, we now have a similar question about load balancing Exchange Server, since Windows Network Load Balancing is no longer supported. The architectural changes in Exchange Server 2016, which resulted in a single server role (the Client Access Server is now integrated into the Mailbox Server), make WNLB impossible to use. The reason is simple – you can't run WNLB and Failover Clustering (required by a DAG) on the same set of machines. To be honest, even if that were not the case, WNLB is not a solution I'd recommend to anyone for Exchange load balancing. It's too old and has too many issues for a critical service like Exchange.
So, obviously, we need an external solution for load balancing client access traffic. In most cases we are actually talking about HTTPS-based traffic, as most Exchange traffic (except SMTP) uses this protocol.
With Exchange 2013, Microsoft announced that expensive Layer 7 load balancers are no longer required for Exchange – you can also use a cheaper and simpler Layer 4 balancer for client requests. What's even better, there is no need for session affinity anymore. This is because the client access services now proxy the connection to the Mailbox server, so it doesn't really matter where the client actually establishes the session. This simplifies the requirements for Exchange load balancers even more.
Let's see what the real difference is, and why you should care about this at all. You might be surprised that the main issue here is actually the node health check. Luckily, Microsoft provided a very intelligent way to handle this even with cheap (or free) load balancers. As I'm currently writing a course on Exchange 2016, I'll use some of my draft materials to clarify this.

Load balancers that work on Layer 4 are not aware of the actual content of the traffic being load balanced. The load balancer forwards the connection based on the IP address and port on which it received the client's request, and it has no knowledge of the target URL or request content. For example, a Layer 4 load balancer does not recognize whether a client is connecting with Outlook on the web or with Exchange ActiveSync, as both connections use the same port (443). Layer 4 load balancers are also unable to detect the actual functionality of a server node in the load balancing pool. For example, a Layer 4 load balancer can detect that one of the servers in the pool is completely down, because it does not respond to ping, but it can't detect whether the IIS service on that server is working or not. From the client access perspective, a server with a non-working IIS is almost the same as a server that is down. However, it will not be marked down by a Layer 4 load balancer in this case. Some Layer 4 load balancers can provide a simple health check by testing the availability of a specific virtual directory, such as /owa, but the functionality of one virtual directory does not guarantee that the others are also working fine.

Load balancers that work on Layer 7 of the OSI model are much more intelligent. A Layer 7 load balancer is aware of the type of traffic passing through it. This type of load balancer can inspect the content of the traffic between the clients and the Exchange server, and it uses the results of that inspection to make its forwarding decisions. For example, it can route traffic based on the virtual directory to which a client is trying to connect, such as /owa, /ecp or /mapi, and it can use different routing logic depending on the URL the client is connecting to. When using a Layer 7 load balancer, you can also leverage the capabilities of the Exchange Server 2016 Managed Availability feature. This built-in feature of Exchange monitors the critical components and services of the Exchange server and takes action based on the results.

Managed Availability uses Probes, Monitors and Responders as components that work together. These components test, detect, and try to resolve possible problems. The Probe component is used first – it gathers information or executes a diagnostic test for a specific Exchange component. A Monitor component then evaluates the results that the Probe provides and decides whether the component is healthy or unhealthy. If a component is unhealthy, a Responder component can take measures to bring the failed component back to a healthy state. This can include a service restart, a database failover or, in some cases, a server reboot.
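If you want to check what Managed Availability currently reports for a server, you can query it directly from the Exchange Management Shell. A minimal sketch (EX16-01 is just a placeholder server name):

# Summary of all health sets on a server (EX16-01 is a placeholder name)
Get-HealthReport -Identity EX16-01

# Detailed view of the monitors inside a single health set, for example OWA
Get-ServerHealth -Identity EX16-01 -HealthSet "OWA"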

If a critical server component is healthy, Managed Availability generates a web page named healthcheck.htm. You can find this web page under each virtual directory, for example /owa/healthcheck.htm or /ecp/healthcheck.htm. If Managed Availability detects that a server component is unhealthy, the health check web page is not available and a 403 error is returned. You can use this by pointing your load balancer to the health check web page for each critical service.

A Layer 7 load balancer can use this to detect the functionality of critical services and, based on that information, decide whether it will forward client connections to that node. If the load balancer health check receives a 200 status response from the health check web page, then the service or protocol is up and running. If the load balancer receives a 403 status code, it means that Managed Availability has marked that protocol instance down on the Mailbox server.
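If you want to see exactly what the load balancer health probe sees, you can query the health check page yourself from PowerShell. This is just a quick sketch – mail.contoso.com is a hypothetical load-balanced namespace, so replace it with your own:

# Probe the OWA health check page the same way a Layer 7 load balancer would
try {
    $response = Invoke-WebRequest -Uri "https://mail.contoso.com/owa/healthcheck.htm" -UseBasicParsing
    "OWA protocol is up (status code $($response.StatusCode))"
}
catch {
    "OWA protocol is marked down: $($_.Exception.Message)"
}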

Although it might look like the load balancer performs only a simple health check against the server nodes in the pool, the health check web page provides information about the workload's health by taking into account multiple internal health check probes performed by Managed Availability.

It is highly recommended that you configure your load balancer to perform the node health check based on the information provided by the Managed Availability feature. If you don't, the load balancer could direct client access requests to a server that Managed Availability has marked unhealthy, which ultimately results in inconsistent management and a negative user experience.

Changing certificate on AD FS and DRS

 

If you have AD FS with the Device Registration Service (DRS) configured on Windows Server 2012 R2, you might have run into trouble when changing the certificate on the AD FS server. Although the AD FS management console will let you change the service certificate for AD FS, it will not let you change the SSL certificate, nor will it let you assign the group managed service account used by DRS the rights to access the private key of the new certificate. As a result, changing the AD FS service certificate only through the AD FS console will break DRS (and leave your devices unable to perform a Workplace Join). So, if you want to change this certificate, for whatever reason, there is a procedure to follow:

1. First, during the certificate enrollment process for the new certificate, make sure that you assign the rights to access the private key. This is actually not a very obvious thing to do. When you start the certificate request procedure on your AD FS server, choose the Web Server template, and then open its properties to configure more settings. On the Subject tab, make sure that you type all the names that you need. First, you need the name of your AD FS cluster (or server), for example adfs.adatum.com. Make sure that this name is not the same as your AD FS server name. You also need this same name as a SAN (Subject Alternative Name), plus the enterpriseregistration and enterpriseenrollment SAN host names (the second one is for Windows 10). See the example below:

[Screenshot: Subject tab of the certificate request, showing the common name and SAN entries]

2. Then go to the Private Key tab, expand Key Permissions and select the Use custom permissions check box. Click Set permissions, then Add, select Service accounts as the object type, and type the group managed service account that you created when you first configured DRS. See the example below:

[Screenshot: Private Key tab with custom permissions granted to the group managed service account]

My group managed service account in this example is FsGmsa1 in the Adatum.com domain. When you have configured this, finish the certificate enrollment.

Note: Make sure that this service account has an SPN set to your AD FS cluster name. You can check that with the following command: setspn -l adatum\FsGmsa1$. The result should look something like this:

host/adfs.adatum.com
http/adfs.adatum.com

3. When you finish the certificate enrollment, while you still have the Certificates console open, double-click the new certificate. Go to the Details tab and scroll down to the Thumbprint attribute. Copy the thumbprint value to Notepad and remove the spaces between the pairs of characters (or use the PowerShell sketch below).
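If you prefer PowerShell over the Certificates console, you can also read the thumbprint directly from the local machine store. A small sketch, assuming the subject of the new certificate contains adfs.adatum.com:

# Find the new certificate by its subject and show its thumbprint (no spaces to remove)
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*adfs.adatum.com*" } |
    Select-Object Subject, NotAfter, Thumbprint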

4. Now you have to issue two PowerShell commands to set up the new certificate to work with AD FS. The first command is:

Set-AdfsCertificate -CertificateType Service-Communications -Thumbprint "your_new_certificate_thumbprint"

This will set your new certificate as the AD FS service certificate. This part you can also do by using the AD FS Management console.

Second command will change your SSL certificate for AD FS (that’s the one you need for AD FS, and the one you can’t change with console):

Set-AdfsSslCertificate -Thumbprint "your_new_certificate_thumbprint"

When you finish this, you will be good to go. Restart your AD FS and DRS services (a quick PowerShell sketch is below), and they should start successfully.
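The restart itself can also be done from PowerShell. The service names below (adfssrv for AD FS, drs for the Device Registration Service) are the defaults on Windows Server 2012 R2, but verify them on your server before you run this:

# Restart the AD FS service and the Device Registration Service
Restart-Service -Name adfssrv
Restart-Service -Name drs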

Refreshing templates and testing Azure RMS

If you are using the Microsoft Azure RMS service together with Office 365 to extend the functionality of the basic RMS in Office 365, you probably do it to create customized RMS templates. If you activate Office 365 Rights Management, it lets you use only two default templates for content protection and one template for email protection (Do Not Forward). Some users might be fine with these capabilities, but if you want features closer to on-premises RMS, you'll probably want Azure RMS. When you enable Azure RMS with your Office 365 subscription, it lets you create custom RMS templates. However, if you are like me and like to play around and make changes frequently, you might want to speed up the sync of custom Azure RMS templates to Office 365. It can also be useful to check and test your Azure RMS configuration from time to time. Luckily, there are a few nice PowerShell cmdlets that can be used for this purpose.

First, you need to connect to your Office 365 tenant. To store your credentials in a PowerShell variable, issue this cmdlet:

$cred = Get-Credential

After you press Enter, you will be prompted to enter your Office 365 admin credentials. When you have entered them, type this:

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $cred -Authentication Basic -AllowRedirection

Press Enter. This will create a session to your Office 365 tenant and store it in the $Session variable.

Next, you want to import your session, which you do by executing the Import-PSSession $Session cmdlet.

To check your RMS configuration, just execute the Get-IRMConfiguration cmdlet after you have imported the session in the previous step. Check whether internal and external licensing are enabled, and also check the RMSOnlineKeySharingLocation (for Europe, it should be https://sp-rms.eu.aadrm.com/TenantManagement/ServicePartner.svc). Besides this, it is useful to test your RMS configuration by executing the Test-IRMConfiguration -RMSOnline cmdlet.

If all tests pass and you want to force a sync of your new, changed or deleted templates from Azure RMS to Exchange Online, here is the cmdlet:

Import-RMSTrustedPublishingDomain -Name "RMS Online - 1" -RefreshTemplates -RMSOnline

Make sure that the name of the RMS trusted publishing domain is accurate (if you used the default values, it will look like the example above).
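Once the import completes, you can verify from the same remote session that your custom templates are now visible to Exchange Online:

# List the RMS templates that Exchange Online can see after the refresh
Get-RMSTemplate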

Exchange publishing after TMG/UAG

After Microsoft announced that it will no longer be developing Forefront Threat Management Gateway (TMG), and that this product, together with UAG, is end-of-life (you can see more about this here), a lot of people I work with were pretty confused. To most of them, the words "end-of-life" sounded like "not supported anymore", which is not the case. However, from a technical perspective, most clients I work with were worried about how to publish Exchange server without TMG.

In this post, I will try to cover the most common scenarios with Exchange and TMG and the ways to handle these scenarios in the near and far future. You'll see that things are not as black as they might look at first sight.

First, let's remind ourselves what TMG actually does for Exchange server. TMG 2010, as a firewall/proxy/caching solution, is, among other things, capable of providing application-level publishing for Exchange Server, instead of doing it only at the transport layer. This means that instead of just forwarding (and protecting) HTTPS and SMTP traffic to your Exchange, TMG is actually "aware" that it has Exchange behind its back. Because of this, it can impersonate the Exchange server in some scenarios (for example, for OWA), terminate client connections on itself, authenticate users and then connect to Exchange to fetch the content for the user. TMG uses so-called listener components to securely publish Exchange (and other) services to the Internet. In the Exchange publishing scenario, the listener component is able, for example, to generate a forms-based authentication page for OWA, securely authenticate the user, and then establish the connection to the Exchange CAS. Besides that, TMG was also able to securely publish your SMTP or SMTPS to the Internet, as well as to protect that kind of traffic from malware. This kind of publishing was very convenient for a lot of companies. I mostly work with medium to large companies, and quite a lot of them were using TMG to publish Exchange servers to the Internet, because the solution is affordable, reliable, easy to deploy and manage, and proved to be very secure. Of course, there are some downsides to using TMG for other purposes, but quite often I saw a TMG dedicated just to Exchange publishing. In much rarer cases, UAG was also used to protect Exchange, but since it is also EOL, the situation is actually the same.

The question we have now is what to do with Exchange publishing in the post-TMG/UAG era. So, let's look at some of the most common scenarios you might run into.

Scenario 1: You already have TMG 2010 and Exchange Server 2003/2007/2010 deployed. What are your options? I want to be very clear on this point: although TMG is an end-of-life product, this does not mean that you have to look for another solution immediately. TMG 2010 will still be supported until the year 2020, per the support lifecycle policy for this product. This gives you quite a lot of time to find another solution while still running a fully supported configuration. So, no reason to panic, there's plenty of time left to think about other options. Make sure that you update your TMG server on a regular basis, and you'll be fine.

Scenario 2: You already have TMG 2010 and Exchange Server 2003/2007/2010 deployed, but you are planning an upgrade to Exchange Server 2013, and you know that TMG does not natively support Exchange Server 2013 publishing. What should you do? Yes, it is true that TMG does not support Exchange Server 2013 publishing out of the box, but there's a pretty simple way to make it work. First, you don't need to worry about SMTP publishing – it works as before. You just have to point your SMTP publishing rule to the Exchange Edge server (available again as of Exchange Server 2013 SP1) or to the Client Access Server (if you don't have Edge, incoming SMTP in Exchange 2013 is handled by the CAS). For publishing Exchange web-based services, such as OWA, Outlook Anywhere or ActiveSync, you'll have to run the publishing wizard, select the option to publish Exchange 2010 (as 2013 will not be available), and after you create the rule, make some small fixes to the publishing rule you just created. There's a great post from Greg Taylor, Principal Program Manager from the Exchange PG, that covers this step by step, and you can find it here. Basically, in most cases you'll just have to adjust the URL for OWA logoff, which changed in Exchange 2013. However, if you decide to go with the latest and greatest technologies and use MAPI over HTTP (supported only with Exchange Server 2013 SP1 and Outlook 2013 SP1), you will also have to add the /mapi/* virtual directory path to the Outlook Anywhere publishing rule, as this folder was not published by the Exchange publishing wizard before. This is not mandatory – Exchange Server 2013 SP1 can also work with MAPI over RPC over HTTPS client connections, like Exchange 2010, but if you have Outlook 2013 SP1 clients you can also enable MAPI over HTTP, as it brings several performance and stability enhancements. You can see more about this in a great article by Tony Redmond here.

Scenario 3: You already have a third-party publishing solution in production for Exchange Server 2003/2007/2010, and you're planning to upgrade your Exchange organization to Exchange Server 2013. In this case, the most important thing is to test your current solution with Exchange Server 2013. It might require some modifications to the current publishing configuration, but in most cases you will be good to go with what you have. If your current solution can't work with Exchange Server 2013, or you want to replace it for any reason, see the next scenario.

Scenario 4: You are planning to deploy Exchange Server (in any version) but you don't have a publishing solution, or your current solution does not work with the version of Exchange you're about to deploy and publish. In this case, things can be a bit more complicated. First, you should be very clear on what you're publishing. For the Exchange server, in most cases, you will want to publish SMTP (so your Exchange can receive email from the Internet) and HTTPS-based services such as Outlook Web App, Outlook Anywhere, Exchange ActiveSync and, in some less frequent scenarios, Exchange Web Services. One of the very nice features of TMG is that it is capable of securely publishing all these services while providing attack protection at the same time. But you can't buy TMG anymore – as said before, it is still supported, but it is no longer on the market. That leads us to a situation where we must find a solution to securely publish SMTP and to publish the HTTPS-based services. If you are looking for a hardware (or virtual) appliance to solve this, most likely you'll have to look for two solutions instead of one, as with TMG: one solution for publishing the web-based Exchange services, and another for securing and publishing SMTP. In this case, budget becomes a very important factor. So, let's look a bit deeper into these scenarios.

Scenario 4.1: You have a very limited budget, but you need a solution that works. I hear this all the time :). Well, there actually are solutions for this scenario, and they can be almost free. Instead of using TMG (which was, by the way, pretty affordable), you can achieve reverse proxy functionality by using the Application Request Routing component for IIS. This is an extension for IIS in Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012, and it's free. To quote Microsoft: IIS Application Request Routing (ARR) 3 enables web server administrators, hosting providers, and Content Delivery Networks (CDNs) to increase web application scalability and reliability through rule-based routing, client and host name affinity, load balancing of HTTP server requests, and distributed disk caching. Well, we can also use it to load balance and publish Exchange web services. It is not complicated to deploy, and if you choose to go this way, you can find a very good guide on how to configure it here. You might have noticed that ARR is not supported on Windows Server 2012 R2. This is because 2012 R2 provides another technology for a similar purpose – Web Application Proxy. Unlike ARR, which is an extension to IIS, WAP is a subcomponent of the Remote Access role in Windows Server 2012 R2. It actually replaces the AD FS Proxy from previous Windows versions, but it also provides some publishing functionality. WAP is free, as it is included in Windows, but it requires some more administrative time to deploy and configure – it requires that you deploy AD FS on a separate machine to make WAP work. With WAP, you can also use AD FS-based authentication. If you have Exchange only on-premises, this might not be a very attractive option, but for hybrid deployments with Office 365, WAP is a great solution (a small publishing sketch is shown below). If you decide to give it a try, you can find good step-by-step instructions to set it up here. However, ARR and WAP can't publish SMTP for your Exchange server, so you will need another solution for that.
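To give you an idea of what publishing Exchange through WAP looks like, here is a minimal pass-through publishing rule for OWA. The URLs and the thumbprint are placeholders for your own values, and a real deployment would publish the other virtual directories the same way:

# Publish OWA through Web Application Proxy with pass-through preauthentication
Add-WebApplicationProxyApplication -Name "Exchange OWA" `
    -ExternalUrl "https://mail.adatum.com/owa/" `
    -BackendServerUrl "https://mail.adatum.com/owa/" `
    -ExternalCertificateThumbprint "your_certificate_thumbprint" `
    -ExternalPreauthentication PassThrough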
For SMTP, you'll need to address multiple key points. From the Exchange perspective, it matters a lot whether you are using the Edge role or not. If you are, then you can deploy an Edge server (or servers) in the DMZ and publish SMTP on the firewall. You can activate the anti-spam agents on your Edge server and stop most of the dirty SMTP traffic before it enters your internal network. However, you still have to deal with antivirus protection. As part of Microsoft's decision to shut down the Forefront family of products, Forefront Protection for Exchange Server is also no longer available. You can go with a third-party antivirus solution, or you can choose Exchange Online Protection from Microsoft. If you don't already have an on-premises AV solution, I recommend that you seriously consider Exchange Online Protection. Yes, I know, your email will go through Microsoft servers before it reaches you – I've heard this argument hundreds of times, with all variations of NSA-related conspiracy added to the story. I'm not going to elaborate on that, but will just say that if your email goes through Microsoft servers before it reaches your Exchange, that will certainly be one well-known point in its path. Do you know anything about the others?

Scenario 4.2: You have some budget to spend on an Exchange publishing solution. Good for you :). In this case, I recommend that you look at solutions from F5 and Kemp. I'm not actually promoting any of their products, but in the Exchange community these have proven to be reliable and secure solutions for Exchange publishing. With some of the products they offer, you can publish and load balance both HTTPS and SMTP. Also, if you are looking for a good (but not cheap) SMTP AV/AS/gateway solution, you should definitely check out the Cisco IronPort Email Security appliances. I've seen these working in production, and I'm pretty amazed by the effectiveness they provide.

Feel free to comment on this article. I tried to summarize some solutions based on my field experience so far. I'm not saying that this is everything, but I hope it will help some people go in the right direction.

My sessions at upcoming conferences

After delivering four very successful CloudOS events in Sarajevo and Belgrade, with a great audience, I'm heavily engaged in preparing sessions for the spring conference season in the EE region. This year, four conferences are happening in a very short time frame (10 days); all of them are really great, and I highly recommend that you visit at least some, if not all, of them.

It all starts with the first Serbian community-driven conference, called Tarabica. It will happen on April 5th in Belgrade, it is fully organized by Serbian community guys and MVPs, and I'm sure it will be a great event. It happens on a Saturday, so you don't have to ask your boss for permission to attend :). At this conference, I will deliver a session about managing smart cards with Forefront Identity Manager 2010 R2. You can see more about this session (and also register for the conference) here.

Right after the Tarabica conference, one of the largest official Microsoft conferences in the region starts – WinDays14. This year it is again in Umag, a beautiful place on the Croatian coast. Last year it was very nice there – a great venue, excellent speakers and a lot of fun, and I'm sure it will be the same or better this year. At this conference, I'll deliver a session about BYOD technologies in Windows Server 2012 R2 (session details are here) and also a session about how to efficiently build and manage an internal PKI (session details are here).

In the same week as WinDays (but starting on Wednesday), the Slovenian Microsoft NTK conference takes place. This is the conference with the longest history in this region (I think this is the 19th one), and it always attracts a lot of people. It happens at Bled, a beautiful place on the famous Slovenian lake. At the NTK conference, I will deliver a session about using Dynamic Access Control in production environments (session details are here), and a session about how to efficiently build and manage an internal PKI (session details are here).

And last, but definitely not least, in the second week of April, the fourth Bosnian Microsoft Network conference will take place. In the same place as last year (the Banja Vrucica hotel resort), MSN4.0 will offer even better content and really great speakers from the whole region. Besides teaching there, I was also a member of the content team, and I'm really satisfied with what we have done content-wise. I'm very interested to see the attendees' feedback this year. At MSN4.0, I will deliver a session about Windows Server 2012 R2 as a CloudOS (session details are here) and a session about upgrading to Exchange Server 2013 (session details are here). Besides these official sessions, I will also co-deliver, together with a team member from the customer side, a case-study session about the Exchange & AD RMS implementation project in BH Telecom that I was leading.

As you can see, a lot of presentation work is ahead of me in the next few weeks. Although preparing all these sessions is not an easy job, I'm really looking forward to seeing my friends, colleagues and fellow MVPs from Serbia, Croatia, Slovenia and Bosnia.

See you there!!

CloudOS MVP Roadshow events – here we go again…


After delivering many Windows Server Roadshows in Bosnia and Serbia last year, and having a great time with the participants, I'm very happy and proud to announce that we are set to go again – this time to cover Windows Server 2012 R2 as a Cloud OS.

Every day, more data is generated on many types of devices. Every day, IT administrators and engineers deal with various requests and ways to manage data and the devices that host that data. More and more apps and services that are used and installed on-premises are extending to the cloud. To help IT people work more efficiently in the era of cloud services and the BYOD concept, Microsoft provides the CloudOS – Windows Server 2012 R2.

During two whole-day events (totally free for community members), one in Sarajevo (Feb 18th – Logosoft CPLS) and one in Belgrade (Feb 10th – Microsoft Serbia), we will explore the concept of CloudOS, see how to enable people-centric IT, and dive into the technical benefits that Windows Server 2012 R2 provides. With lots of examples and demos, you will have a chance to see how to protect data in a BYOD concept by leveraging the technologies in Windows Server 2012 R2 for data access, device management and data protection. We will also discuss the new approach to storage management with the built-in storage technologies in Windows Server, as well as how to efficiently implement VDI in Microsoft-based environments. And last, but definitely not least, you will be able to see the new Hyper-V platform with its new features!

So, if you are interested in these topics, I'm sure we will have a great time exploring them again. Registrations for Serbia are already out; stay tuned for Bosnia – we will send invitations soon.

How Quorum works in Windows Server 2012 R2 Failover Clusters

 

Although we hear the word "quorum" pretty often in the news, usually in relation to politics and parliaments, I'm sure that real IT pros have different feelings about this word :). Of course, I mean cluster quorum.

When you create a cluster, one of the most important things to take care of is how to configure and maintain quorum. Previous Windows Server versions (before 2012 R2) used quorum modes such as Node Majority, Node and Disk Majority, and Node and File Share Witness Majority. In Windows Server 2012 R2, these quorum modes are no longer used by default. Instead, Windows Server 2012 R2 introduces the concept of Dynamic Quorum. This feature gives a cluster the ability to recalculate quorum in the event of node failure and still keep clustered roles running, even when the number of voting nodes remaining in the cluster is less than 50 percent. This results in greater cluster availability and higher uptime.
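You can quickly check whether dynamic quorum is enabled on your cluster with the DynamicQuorum cluster property:

# 1 = dynamic quorum enabled (the default), 0 = disabled
(Get-Cluster).DynamicQuorum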

In Windows Server 2012 R2, this feature is further enhanced by the concept of Dynamic Witness. When you configure a cluster in Windows Server 2012 R2, dynamic quorum is selected by default, and the witness vote is also adjusted dynamically based on the number of voting nodes in the current cluster membership. For example, if a cluster has an odd number of votes, the quorum witness does not have a vote. If the number of nodes is even, the quorum witness does have a vote. If the witness resource fails or goes offline for some reason, the cluster automatically sets the witness vote to 0. With this approach, the risk of a cluster malfunction caused by a failed witness is greatly reduced. If you want to see whether the witness has a vote, you can use Windows PowerShell and a new cluster property in the following cmdlet:

(Get-Cluster).WitnessDynamicWeight

A value of 0 indicates that the witness does not have a vote. A value of 1 indicates that the witness has a vote. The cluster can now decide whether to use the witness vote based on the number of voting nodes that are available in the cluster. An additional benefit is a much simpler quorum configuration when you create a cluster – Windows Server 2012 R2 configures the quorum witness automatically when you create a cluster.

Also, when you add or evict cluster nodes, you no longer have to adjust the quorum configuration manually. The cluster now automatically determines the quorum management options and the quorum witness.

But it's not only the witness that can gain and lose its vote in the quorum automatically. In Windows Server 2012 R2, this also applies to nodes, through the Tie Breaker for 50% Node Split feature. The cluster can now adjust a running node's vote status automatically to keep the total number of votes in the cluster at an odd number. For example, if you have a cluster with an even number of nodes and a file share witness, and the file share witness fails, the cluster uses the dynamic witness functionality to remove the vote from the file share witness automatically. However, because the cluster would then have an even number of votes, the cluster tie breaker picks a node at random and removes its quorum vote to maintain an odd number of votes. If the nodes are distributed evenly across two sites, this helps to keep the cluster functional in one site. In previous Windows Server versions, if both sites had an equal number of nodes and the file share witness failed, the cluster stopped in both sites.
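You can see how the cluster is currently distributing votes across the nodes by comparing the assigned vote (NodeWeight) with the vote the cluster is actually using (DynamicWeight):

# Show the assigned and the currently used vote for every cluster node
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight -AutoSize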

If you want to avoid the node being picked at random, you can use the LowerQuorumPriorityNodeID property to predetermine which node has its vote removed. You can set this property by using the following Windows PowerShell command, where "1" is the example node ID of a node in the site that you consider less critical:

(Get-Cluster).LowerQuorumPriorityNodeID = 1

However, this is not the only new feature that Microsoft provided for failover clustering. Force quorum resiliency provides additional support and flexibility for split-brain cluster scenarios. This bad scenario happens when a cluster breaks into subsets of cluster nodes that are not aware of each other. The subset of cluster nodes that has a majority of votes keeps running, while the others are shut down. This scenario usually happens in multi-site cluster deployments. If you want to start cluster nodes that do not have a majority, you can force quorum manually by starting them with the /fq switch. So far, this is all as before. However, in Windows Server 2012 R2, in such scenarios the cluster will automatically detect the partitions in the cluster as soon as connectivity between the nodes is restored. The partition that was started by forcing quorum is considered authoritative, and the other nodes rejoin the cluster. When this happens, the cluster is brought back to a single view of membership. Previously, as in Windows Server 2012, partitioned nodes without quorum were not started automatically, and the administrator had to start them manually with the /pq switch. In Windows Server 2012 R2, both sides of the split cluster have a view of the cluster membership, and they will reconcile automatically when connectivity is restored.
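For reference, forcing quorum on the minority side can be done either with the classic /fq switch or from PowerShell. A small sketch, with NODE1 as a placeholder for a node in the site you want to bring up:

# Force the cluster service to start without quorum on NODE1
Start-ClusterNode -Name "NODE1" -ForceQuorum

# The same thing with the classic switch, run locally on the node:
# net start clussvc /fq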

Work Folders in Windows Server 2012 R2

The Work Folders functionality in Windows Server 2012 R2 represents a significant enhancement over current technologies for data synchronization and accessibility. It provides the benefits of cloud-based solutions while still giving administrators the ability to control the technology's settings and manage users' data. Work Folders can be very useful for mobile users, especially in a BYOD environment.
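Just to illustrate what the server-side setup looks like, here is a minimal sync share created with the SyncShare module that ships with the File and Storage Services role. The path and the group name are placeholders:

# Create a sync share for Work Folders and require encryption of the synced data
New-SyncShare -Name "WorkFolders" -Path "D:\WorkFolders" -User "Adatum\WorkFoldersUsers" -RequireEncryption $true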

Recently, I wrote a deep dive article about this cool technology, and it is published on Windows IT Pro site. Check it out here!

Exchange Server 2013 SP1 (aka CU4) is coming

Great news on TechNet today. Microsoft announced that it is going to release the first service pack, with long-desired functional enhancements, at the beginning of next year. The most important new features that will be included in SP1 are the following:
  • Windows Server 2012 R2 Support
  • S/MIME support for OWA
  • Edge Transport Server Role
  • Various Fixes and Improvements
Also, we should see CU3 very soon. Start planning your upgrades and migrations, it’s about time. See more here: http://blogs.technet.com/b/exchange/archive/2013/11/20/exchange-server-2013-service-pack-1-coming-in-early-2014.aspx 

Failover Clustering in Windows Server 2012 R2 – Tie Breaker for 50% node split

Besides the ability to use dynamic quorum for failover clusters, clustering in Windows Server 2012 R2 is enhanced with one more very interesting functionality.
The cluster is now able to automatically adjust a running node's vote status in order to keep the total number of votes in the cluster at an odd number. This feature is called Tie Breaker for 50% Node Split, and it works together with the dynamic witness functionality. Dynamic witness is used to adjust the value of the quorum witness vote. For example, if you have a cluster with an even number of nodes and a file share witness, and the file share witness fails, the cluster will use the dynamic witness functionality to automatically remove the vote from the file share witness.
However, since the cluster now has an even number of votes, the cluster tie breaker will randomly pick a node and remove its quorum vote to maintain an odd number of votes. If the nodes are evenly distributed across two sites, this helps to keep the cluster functional in one site. In previous Windows Server versions, if both sites had an equal number of nodes and the file share witness failed, the cluster stopped in both sites.

If you want to avoid the node being picked randomly, you can use the LowerQuorumPriorityNodeID property to predetermine which node will have its vote removed. You can set this property by using the following PowerShell command:

(Get-Cluster).LowerQuorumPriorityNodeID = 1

where "1" is the example node ID of a node in the site that you consider less critical.
This is very nice to use in DR scenarios.