The Client Access server role achieves high availability through a CAS array fronted by either a hardware load balancer or Windows Network Load Balancing (NLB); each approach has its own advantages and disadvantages. Configuring a hardware load balancer for Exchange 2010 was a bit tricky because you also had to configure server affinity, but that is no longer a requirement. In Exchange 2013, we can use a Layer 4 load balancer (which balances on IP addresses and ports) or DNS round robin for this purpose. DNS round robin is only an optional solution, however, because it requires manual DNS changes whenever one or more CAS servers fail.
Exchange Server 2007 High Availability Part 1
In April 2009 Microsoft released a public beta of Exchange 2010, the latest version of its messaging server and part of its unified communications family of products. More recently, in August 2009, a feature-complete Release Candidate was made available for public download. In this article Neil Hobson takes a look at some of the high availability features of Exchange 2010.
In my experience, more organizations have deployed Cluster Continuous Replication (CCR) in preference to Single Copy Clusters (SCC), so it comes as no surprise to learn that SCC has been dropped entirely from Exchange 2010. As you will shortly see, the continuous replication technology lives on in Exchange 2010, but there are many changes in the overall high availability model.
An important component of a DAG is the file share witness, a term that you will be familiar with if you have implemented a CCR environment in Exchange 2007. As its name suggests, the file share witness is a file share on a server outside of the DAG. This third server acts as the witness to ensure that quorum is maintained within the cluster. There are some changes to the file share witness operation, as we shall discuss later in this section. When creating a DAG, the file share witness share and directory can be specified at that time; if they are not, default witness directory and share names are used. One great improvement over Exchange 2007 is that you do not necessarily need to create the directory and the share in advance, as the system will automatically do this for you if necessary. As with Exchange 2007, the recommendation from Microsoft is to use a Hub Transport server to host the file share witness so that this component remains under the control of the Exchange administrators. However, you are free to host the file share witness on an alternative server, as long as that server is in the same Active Directory forest as the DAG, is not itself a member of the DAG, and is running either the Windows Server 2003 or Windows Server 2008 operating system.
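As a sketch of what this looks like in practice, a DAG with an explicitly specified witness can be created from the Exchange Management Shell; the DAG, server and directory names below are hypothetical examples:

```powershell
# Create a new DAG, naming a Hub Transport server as the file share
# witness host (DAG1, HUB1 and the directory path are hypothetical).
New-DatabaseAvailabilityGroup -Name DAG1 `
    -WitnessServer HUB1 `
    -WitnessDirectory C:\DAG1\FSW

# If -WitnessServer and -WitnessDirectory are omitted, Exchange
# chooses a Hub Transport server and default directory/share names,
# creating the share automatically if necessary.
```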
Inside each DAG there will normally be one or more mailbox servers, although it is possible to create an empty DAG, as discussed earlier in this article. Each mailbox server in the DAG will typically host multiple mailbox databases. However, one of the key differences between Exchange 2010 mailbox servers and their Exchange 2007 counterparts is that an Exchange 2010 mailbox server can host active and passive copies of different mailbox databases; remember that in Exchange 2007, an entire server in a CCR environment, for example, was considered to be either active or passive. In Exchange 2010, the unit of failover is now the database and not the server, which is a fantastic improvement in terms of failover granularity. Consider the diagram below in Figure 3.
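To illustrate, mailbox servers are added to an existing DAG one at a time with the Add-DatabaseAvailabilityGroupServer cmdlet; the DAG and server names here are hypothetical:

```powershell
# Add two mailbox servers to an existing DAG
# (DAG1, MBX1 and MBX2 are hypothetical names).
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX2
```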
In Figure 3, you can see that a DAG named DAG1 consists of two mailbox servers called MBX1 and MBX2. There are a total of three active mailbox databases, shown in green, across both servers, and each active mailbox database has a passive copy, shown in orange, stored on the alternate server. For example, the active copy of DB1 is hosted on the server called MBX1 whilst the passive copy of DB1 is hosted on the server called MBX2. The passive copies of mailbox databases are kept up to date via log shipping methods in the same way as in Exchange 2007, such as between the two cluster nodes within a single Exchange 2007 CCR environment. As you might expect, the active copy of the mailbox database is the one which is used by Exchange. Within a DAG, multiple passive copies of a mailbox database can exist, but there can only be a single active copy. Furthermore, any single mailbox server in a DAG can host only one copy of any particular mailbox database. Therefore, the maximum possible number of passive copies of a mailbox database is one less than the number of mailbox servers in the DAG, since there will always be one active copy of the mailbox database. For example, if a DAG consisted of the maximum of 16 mailbox servers, then there could be a maximum of 15 passive copies of any single mailbox database. However, not every server in a DAG has to host a copy of every mailbox database that exists in the DAG. You can mix and match between servers however you wish.
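A layout like the one in Figure 3 could be built by seeding a passive copy of each database onto the alternate server. As a sketch, assuming hypothetical database names DB1 through DB3 with their active copies distributed across MBX1 and MBX2 as in the figure:

```powershell
# Create a passive copy of each database on the alternate server
# (database placement is a hypothetical reading of Figure 3).
Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX2
Add-MailboxDatabaseCopy -Identity DB2 -MailboxServer MBX1
Add-MailboxDatabaseCopy -Identity DB3 -MailboxServer MBX2
```

Each Add-MailboxDatabaseCopy call seeds a passive copy onto the named server, which is then kept up to date via log shipping.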
In Exchange 2007, Outlook clients connect directly to the mailbox servers, whilst other forms of client access, such as OWA, Outlook Anywhere, POP3, IMAP4 and so on, connect via a Client Access Server, which is then responsible for making the connection to the mailbox server role as required. One other fundamental change in Exchange 2010 over previous versions of Exchange is that Outlook clients no longer connect directly to the mailbox servers; their connections, too, are made via the Client Access Server role.
On each Client Access Server, there exists a new service known as the RPC Client Access Service that effectively replaces the RPC endpoint found on mailbox servers and also the DSProxy component found in legacy versions of Exchange. The DSProxy component essentially provides the Outlook clients within the organization with an address book service, either via a proxy (pre-Outlook 2000) or referral (Outlook 2000 and later) mechanism. A likely high availability design scenario will therefore see a load-balanced array of Client Access Servers deployed, using technologies such as Windows Network Load Balancing or third-party load balancers, which will connect to two or more mailbox servers in a DAG, as shown below in Figure 4.
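In Exchange 2010, such a load-balanced set of Client Access Servers can be represented to Outlook clients as a Client Access array. A minimal sketch, assuming a hypothetical FQDN, site name, and database name:

```powershell
# Define a Client Access array for the AD site, then point a mailbox
# database's RPC endpoint at the array's FQDN (outlook.contoso.com,
# the site name and DB1 are all hypothetical).
New-ClientAccessArray -Name "CASArray1" `
    -Fqdn outlook.contoso.com `
    -Site "Default-First-Site-Name"

Set-MailboxDatabase DB1 -RpcClientAccessServer outlook.contoso.com
```

Outlook profiles for mailboxes on that database then connect to the array's load-balanced FQDN rather than to an individual mailbox server.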
Now that Exchange Server 2010 has had its first birthday, it's a good time to remind folks about the built-in features for high availability, site resilience and disaster recovery in Exchange 2010. If you're already running Exchange 2010, then you probably already know about database availability groups, mailbox database copies, and Active Manager. But if you're running Exchange Server 2007 or Exchange Server 2003, there will be new concepts and technology with new benefits for your organization as you upgrade to Exchange 2010, such as incremental deployment, datacenter switchovers, and recovery databases.
Building on the native replication capabilities introduced in Exchange Server 2007, Exchange 2010 integrates high availability into the core architecture of Exchange, enabling customers of all sizes and in all segments to economically deploy a messaging continuity service in their organization. Exchange 2010 reduces the cost and complexity of deploying a highly available and site resilient messaging solution while providing higher levels of end-to-end availability, simplifying administration, and supporting large mailboxes.
In previous versions of Exchange, service availability for the Mailbox server role was achieved by deploying Exchange in a Windows failover cluster. To deploy Exchange in a cluster, you had to first build a failover cluster, and then install the Exchange program files. This process created a special Mailbox server called a clustered Mailbox server (or Exchange Virtual Server prior to Exchange 2007). If you had already installed the Exchange program files on a non-clustered server and you decided you wanted high availability, you had to build a cluster using new hardware, or rebuild the existing server by removing Exchange, installing failover clustering, and reinstalling Exchange.
Exchange 2010 introduces the concept of incremental deployment, which enables you to deploy service and data availability for all Mailbox servers and databases after Exchange is installed. Service and data redundancy is achieved by using new features in Exchange 2010 such as database availability groups and mailbox database copies. In Exchange 2010, the days of building clusters and clustered mailbox servers, and the complexity that goes with those tasks, are gone. Mailbox servers can be added to a database availability group, and mailbox databases hosted on those servers can be replicated across the servers to provide automatic recovery at the mailbox database level instead of at the server level. Fast database-level failover times (typically less than 30 seconds) minimize the impact of a failure on users.
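Because the unit of failover is the database, a switchover can also be driven manually at the database level. A hedged sketch, using hypothetical database and server names:

```powershell
# Move the active copy of DB1 to MBX2; the former active copy on
# MBX1 becomes a passive copy (DB1 and MBX2 are hypothetical names).
Move-ActiveMailboxDatabase DB1 -ActivateOnServer MBX2
```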
Moreover, you can add site resilience to your existing high availability deployments with less complexity, simply by extending a database availability group across multiple physical locations (for example, primary and standby datacenters). By combining the native site resilience capabilities in Exchange 2010 with proper planning, a standby datacenter can be rapidly activated to serve a failed datacenter's clients. In the event of a disaster affecting your primary datacenter, you can use the built-in Exchange PowerShell cmdlets for site resilience to quickly perform a datacenter switchover, moving the Exchange service namespaces and data endpoints from the primary datacenter to the standby datacenter. This transition is seamless for end users: they don't need to use separate accounts, maintain multiple passwords, or learn a new URL. They use the same URLs, namespaces, and account as in the primary datacenter, and they access the same data as in the primary datacenter.
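As a rough sketch of the site resilience cmdlets involved in a datacenter switchover (the DAG and site names are hypothetical, and the full procedure also includes DNS and namespace changes not shown here):

```powershell
# Mark the failed primary datacenter's DAG members as stopped;
# -ConfigurationOnly updates Active Directory without contacting the
# (unreachable) servers (DAG1 and the site names are hypothetical).
Stop-DatabaseAvailabilityGroup -Identity DAG1 `
    -ActiveDirectorySite PrimarySite -ConfigurationOnly

# Shrink the cluster to the surviving members and activate the DAG
# in the standby datacenter.
Restore-DatabaseAvailabilityGroup -Identity DAG1 `
    -ActiveDirectorySite StandbySite
```

Once the primary datacenter is repaired, its servers are added back and the DAG is switched back in a similar fashion.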
There's a lot of information out there that will help you plan, design and manage your high availability or site resilience solution. For example, you might want to start with this four-part video blogcast on high availability in Exchange 2010: High Availability in Exchange Server 2010 - Part 1
High Availability in Exchange Server 2010 - Part 2
High Availability in Exchange Server 2010 - Part 3
High Availability in Exchange Server 2010 - Part 4