Channel: You Had Me At EHLO…

iOS6 devices erroneously take ownership of meetings


One of the great benefits of running one of the world's largest Exchange deployments is that we at Microsoft get to see all the things our customers face on a daily basis. With the recent release of iOS 6, we have noticed a marked increase in support calls due to meetings having the owner of the meeting changed (sometimes called "meeting hijacking"). Most instances reported to us to date involve users with delegates who first open a meeting request in Outlook and then act on that same meeting in iOS.

Meeting issues are a large part of the challenges that we know some organizations see with 3rd party devices (here is our list). Unfortunately the recent iOS update has exacerbated one of these issues. We wanted to let you know about this issue as well as let you know that we have discussed this issue with Apple. We are also looking at ways that we can continue to harden the Exchange infrastructure to protect our servers and service from poorly performing clients.

In the meantime we wanted to offer a few mitigation options:

  • Tell users not to take action on calendars on iOS: We're not seeing this particular issue if users don't take action on their calendar items (for example, accepting, deleting or changing meetings).

  • Switch iOS users to POP3/IMAP4: Another option is to switch users over to POP/IMAP connections. This removes calendar and contacts functionality while still allowing users to use email (though email may shift from push to pull while using these protocols).

  • 3rd party clients/OWA: Moving impacted users over to another email client that is not causing these issues for your organization may help alleviate the pain here. There are a number of other client options (OWA being one of them, of course), and numerous clients are available in mobile application stores. We don't recommend any particular client.

  • Block delegates: Many of the issues we are seeing involve delegates. An admin can take the less drastic step of using the Allow/Block/Quarantine list to block only users who are delegates, or who have a delegate, to minimize the impact here.

  • Block iOS 6 devices: Exchange Server includes Allow/Block/Quarantine functionality that enables admins to block any device or user (an example follows this list).

  • Tell users not to upgrade to iOS 6, or to downgrade their devices: This may work as a temporary fix until Apple provides one, but many users may have already made the decision to update.

  • Wait: We do not have any information on the timeline of a fix from Apple, but if that timeline is short, this may be the easiest course of action. Please contact Apple about any potential fix or timeline for its delivery.
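If you choose to block at the server, the Allow/Block/Quarantine list can also be managed from the Shell. The following is a hypothetical sketch for Exchange 2010 SP1 or later; the DeviceOS query string is a placeholder, so check the values your devices actually report first:

# List the operating system strings that ActiveSync devices in your organization report
Get-ActiveSyncDevice | Select-Object FriendlyName, DeviceType, DeviceOS, DeviceUserAgent

# Block devices whose reported DeviceOS matches an iOS 6 build string (example value shown)
New-ActiveSyncDeviceAccessRule -QueryString "iOS 6.0 10A403" -Characteristic DeviceOS -AccessLevel Block

Quarantine is also an option if you would rather review affected devices before blocking them outright.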

Our support team has also published a KB article on this issue that you can read here. We will update this post when a fix is available or we have additional information.

Adam Glick
Sr. Technical Product Manager


Configure certificate-based authentication for Exchange ActiveSync


In previous posts, we have discussed certificate based authentication (CBA) for Outlook Web App, and Greg Taylor has covered publishing Outlook Web App and Exchange ActiveSync (EAS) with certificate based authentication using Forefront TMG in this whitepaper. Certificate based authentication can also be accomplished using Forefront Unified Access Gateway.

In this post, we will discuss how to configure CBA for EAS for Exchange 2010 in deployments without TMG or UAG.

To recap some of the common questions administrators and IT decision-makers have regarding CBA:

What is certificate based authentication?
CBA uses a user certificate to authenticate the user/client (in this case, to access EAS). The certificate is used in place of the user entering credentials into their device.

What certificate based authentication is not:
By itself, CBA is not two-factor authentication. Two-factor authentication is authentication based on something you have plus something you know. CBA is only based on something you have.

However, when combined with an Exchange ActiveSync policy that requires a device PIN, it could be considered two-factor authentication.

Why would I want certificate based authentication?
By deploying certificate based authentication, administrators gain more control over who can use EAS. If users are required to obtain a certificate for EAS access, and the administrator controls certificate issuance, access control is assured.

Another advantage: Because we're not using the password for authentication, password changes don't impact device access. There will be no interruption in service for EAS users when they change their password.

Things to remember:
There will be added administration overhead. You will either need to stand up your own internal Public Key Infrastructure (PKI) using Active Directory Certificate Services (AD CS, formerly Windows Server Certificate Services) or a 3rd-party PKI solution, or you will have to purchase certificates for your EAS users from a public certification authority (CA). This will not be a one-time added overhead. Certificates expire, and when a user’s certificate expires, they will need a new one, requiring either time invested in getting the user a new certificate, or budget invested in purchasing one.

Prerequisites:

You need access to a CA for client certificates. This can be a public CA solution, individual certificates from a vendor, or an Active Directory Certificate Services solution. Regardless, the following requirements must be met:

  • The user certificate must be issued for client authentication. The default User template from an AD CS server will work in this scenario.
  • The User Principal Name (UPN) for each user account must match the Subject Name field in the user's certificate.
  • All servers must trust the entire CA trust chain. This chain includes the root CA certificate and any intermediate CA certificates. These certificates should be installed on all servers that may require them, including (but not limited to) the ISA/TMG/UAG server(s) and the Client Access Server (CAS).
  • The root CA certificate must be in the Trusted Root Certification Authorities store, and any intermediate CA certificates in the intermediate store on all of these systems. The root CA certificate, and intermediate CA certificates must also be installed on the EAS device.
  • The user's certificate must be associated with the user's account in Active Directory.
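To spot-check the UPN-to-Subject Name requirement before rolling out CBA, something along these lines can help; the account name and certificate path are placeholders:

# Read the UPN recorded in Active Directory (requires the ActiveDirectory module)
Import-Module ActiveDirectory
Get-ADUser -Identity kakers | Select-Object UserPrincipalName

# Dump the issued certificate and compare its Subject with the UPN above
certutil -dump C:\certs\kakers.cer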

Client Access Server Configuration

To configure the Client Access server to enforce CBA:

1. Verify that Client Certificate Mapping Authentication is installed on the server.

  1. Right click on Computer in the start menu and choose Manage.
  2. Expand Roles and click on Web Server (IIS)
  3. Scroll down to the Role Services section. Under the Security section you should see Client Certificate Mapping Authentication installed.


  • If you don't see Client Certificate Mapping Authentication installed, click Add Role Services, scroll to the Security section, select Client Certificate Mapping Authentication, and then click Install.


  • Reboot your server.

      2. Enable Active Directory Client Certificate Authentication for the server root in IIS.

      1. Start IIS Manager
      2. Click on the Server Name.
      3. Double click Authentication in the middle pane.


      4. Right click on Active Directory Client Certificate Authentication and select Enable


5. When this is complete, restart the IIS Admin service from the Services console (or run Restart-Service IISAdmin from the Shell).

        Important: IISreset does not pick up the changes properly. You must restart this service.


3. Require Client Certificates in the Exchange Management Console (a Shell equivalent follows these steps).

      1. In the Exchange Management Console, expand Server Configuration and then select the Client Access server you want to configure
      2. On the Exchange ActiveSync tab, right click on the Microsoft-Server-ActiveSync directory and choose Properties.
      3. On the Authentication tab:
        1. Clear the Basic authentication (password is sent in clear text) checkbox
        2. Select Require client certificates

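If you prefer the Shell for this step, the same settings can be applied with Set-ActiveSyncVirtualDirectory. The server and web site names below are placeholders:

# Disable Basic authentication and require client certificates on the EAS virtual directory
Set-ActiveSyncVirtualDirectory -Identity "CAS01\Microsoft-Server-ActiveSync (Default Web Site)" -BasicAuthEnabled $false -ClientCertAuth Required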

      4. Modify the ActiveSync XML configuration file to allow Client Certificate Mapping Authentication.

1. In IIS Manager, open the Configuration Editor under the Microsoft-Server-ActiveSync directory.


2. Browse to the clientCertificateMappingAuthentication option and set its value to True (a command-line equivalent is sketched after these steps).

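If you'd rather script this change, the same setting can be applied from an elevated prompt with appcmd. This is a sketch that assumes the Exchange ActiveSync virtual directory lives under the Default Web Site:

# Enable client certificate mapping authentication on the EAS virtual directory
& "$env:windir\System32\inetsrv\appcmd.exe" set config "Default Web Site/Microsoft-Server-ActiveSync" /section:system.webServer/security/authentication/clientCertificateMappingAuthentication /enabled:true /commit:apphost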

      Once this is configured, all that's left to do is client configuration.

      Client Configuration

      You'll need to get the user’s certificate to the device. This will be accomplished in different ways for different devices. Some possibilities include:

      • Make the Trusted Root Certification Authority (Root CA) certificate available on a web site or other means of distribution.
      • Make the user’s certificate, including the private key, available on a website or other means of distribution.

        Caution: Use appropriate security measures to ensure that only the user who owns the certificate is able to access it from the device.

      • If the device supports external storage, you can place the certificate on a memory card or other external storage device and install it from the card.
      • Some devices have a certificate manager available for a host operating system. You can plug the device into your computer, run the certificate manager and install the certificate.

      Once the certificate is on the device, the user can configure the Exchange ActiveSync client (usually a mail app) on the device. When configuring EAS for the first time, users will be required to enter their credentials. When the device communicates with the Client Access Server for the first time, users will be prompted to select their certificate. After this is configured, if users check the account properties, they'll see a message similar to the following:

      Microsoft Exchange uses certificates to authenticate users when they log on. (A user name and password is not required.)

      Exchange Server 2003 Mailbox Coexistence

      This is an added step that requires some simple changes, and must be performed whether TMG is used to access Exchange 2010 or not. Use the following steps to enable this for access to Exchange 2003 mailboxes.

      1. Apply the hotfix from the following article (or one that has a later version of EXADMIN.DLL) to the Exchange 2003 servers where the mailboxes are homed.

        937031 Event ID 1036 is logged on an Exchange 2007 server that is running the CAS role when mobile devices connect to the Exchange 2007 server to access mailboxes on an Exchange 2003 back-end server

2. The article instructs you to run a command to configure IIS to support both Kerberos and NTLM. You must run the command at a command prompt using CSCRIPT, as shown below:

        1. Click Start> Run> type cmd and press Enter.
        2. Type cd \Windows\System32\AdminScripts and press Enter.
        3. Type the following command and press enter:

          cscript adsutil.vbs set w3svc/WebSite/root/NTAuthenticationProviders "Negotiate,NTLM"

3. On the Exchange 2003 mailbox server, launch Exchange System Manager and follow these steps:

  1. Expand the Administrative Group that contains the Exchange 2003 server.
  2. Expand Servers > <server name> > Protocols > HTTP > Exchange Virtual Server.
  3. Right click the Microsoft-Server-ActiveSync virtual directory and select Properties.
  4. On the Access tab, click Authentication and ensure that only the Integrated Windows Authentication checkbox is checked.
4. Use the following steps to configure the Exchange 2010 to Exchange 2003 communication for Kerberos Constrained Delegation (KCD).

      1. Open Active Directory Users and Computers on the CAS
      2. Right click the CAS computer account > Properties
      3. In the Delegation tab, select Trust this computer for delegation to specified services only
      4. Select Use any authentication protocol option and then click Add
      5. In the Add Services window, click Users or Computers and enter the name of the Exchange 2003 server that the CAS is delegating to
      6. Click OK
      7. A list of Service Principal Names (SPN) will be displayed for your server.
8. Select the appropriate HTTP SPN for the Exchange 2003 server. The SPN you need is HTTP/serverFQDN; if it isn't listed, you'll need to add it.

Note: You may need to add the SPN as described in the Setspn Overview; an example follows below.
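For example, if EX2003 is the Exchange 2003 server's computer account and ex2003.contoso.com is its FQDN (both placeholders), registering and confirming the SPN could look like this:

# Register the HTTP SPN on the Exchange 2003 computer account
setspn -S HTTP/ex2003.contoso.com EX2003

# List the SPNs registered for that account to confirm
setspn -L EX2003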

Thanks to:
DJ Ball for his previous work in documenting certificate based authentication for Outlook Web App (see How to Configure Certificate Based Authentication for OWA - Part I and How to Configure Certificate Based Authentication for OWA - Part II).
      Mattias Lundgren, for starting the documentation process on certificate based authentication for EAS.
      DJ Ball and Will Duff for reviewing this document.
      Henning Peterson and Craig Robicheaux for reviewing this document.
      Greg Taylor for technical review.

      Jeff Miller

    Decommissioning your Exchange 2010 servers in a Hybrid Deployment


    Many organizations have chosen to configure a hybrid deployment with Exchange Online to take advantage of different features such as rich mailbox moves and cross-premises calendar free/busy sharing. This includes Exchange 2003, Exchange 2007 and Exchange 2010 organizations that require a long-term hybrid configuration with Exchange Online and organizations that are using a hybrid deployment as a stepping stone to migrating fully to Exchange Online. So, at what point should these organizations decide to get rid of their on-premises Exchange servers used for the hybrid deployment? What if they have moved all of the on-premises mailboxes to Exchange Online? Is there a benefit to keeping on-premises Exchange servers? While it may seem like a no-brainer, the decision to get rid of the on-premises Exchange servers is not simple and definitely not trivial.

    Mailbox Management

Organizations that have configured a hybrid deployment for mailbox management and hybrid feature support have also configured Office 365 Active Directory synchronization (DirSync) for user and identity management. For organizations intending to keep DirSync in place and continue managing user accounts from the on-premises organization, we recommend not removing the last Exchange 2010 server from the on-premises organization. If the last Exchange server is removed, you cannot make changes to the mailbox object in Exchange Online because the source of authority is defined as on-premises. The source of authority refers to the location where Active Directory directory service objects, such as users and groups, are mastered (an original source that defines copies of an object) in a hybrid deployment. To edit most mailbox settings, you would have to be sure the Active Directory schema was extended on-premises and then use unsupported tools such as Active Directory Service Interfaces Editor (ADSI Edit) for common administrative tasks. For example, adding a proxy address or putting a mailbox on litigation hold becomes difficult when there isn't an Exchange Management Console (EMC) or Exchange Management Shell (Shell) on-premises, and these simple (and other more complex) tasks cannot be done in a supported way.

    Note: A hybrid deployment is not required in order to manage Exchange objects from an on-premises organization. You can effectively manage Exchange objects with an on-premises Exchange server even if you do not have an organization relationship, Federation Trust, and third-party certificate in place. This Exchange server gives you a supported method for creating and managing your Exchange recipient objects. It is recommended to use Exchange Server 2010 for management tasks since this will give you the option to create objects such as remote mailboxes with the New-RemoteMailbox cmdlet. The server role needed is at least a Client Access Server (CAS) role, for management tools to work properly.
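For example, with an on-premises Exchange 2010 management server in place, you can provision a user whose mailbox is created directly in Exchange Online. The names and password below are placeholders:

# Create an on-premises mail user and its corresponding Exchange Online mailbox in one step
$password = ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force
New-RemoteMailbox -Name "Kim Akers" -FirstName Kim -LastName Akers -UserPrincipalName kakers@contoso.com -Password $password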

    Online Organizations without On-Premises Exchange Servers

    Some Exchange Online organizations may have removed all Exchange servers from their on-premises organization and have felt the user management pain mentioned above first hand. Each situation is unique, but in many cases an Exchange 2010 server can simply be added back to the organization to simplify the management process. These organizations will need to ensure that a mail-enabled user is in place for all Exchange Online mailboxes in order to properly configure the mailboxes. Assuming DirSync is still deployed in the on-premises organization, duplicate object issues shouldn’t be a problem.

    Managing Users from the On-Premises Organization when Source of Authority is Online

    There are some organizations that have created an Office 365 service tenant and started to use Exchange Online only to realize they want to consolidate the user management tasks. There are also some organizations that came from hosted environments or migrated from Business Productivity Online Services (BPOS) where they did not manage their users from an on-premises organization. Now that they are in Office 365 and using Exchange Online, they want to simplify the user management process. In either case, if you have DirSync deployed and you are using Exchange Online, you should have an on-premises Exchange server for user management purposes.

    The process for changing the source of authority after the users are created in Office 365 would be to use the DirSync “soft match” process outlined here. This will allow organizations to manage the user account and Exchange Online mailboxes from the on-premises organization. Organizations need to verify that there was a mail-enabled user in the on-premises directory for the corresponding Exchange Online mailboxes. Organizations that haven’t had an Exchange server deployed previously will need to install an Exchange 2010 server. Office 365 for enterprises customers can obtain an Exchange Server 2010 license at no charge by contacting customer support. This license has limitations and doesn’t support hosting on-premises mailboxes.

    Removing the HybridConfiguration Object created by the Hybrid Configuration Wizard

    When a hybrid deployment is created using the Hybrid Configuration Wizard, the wizard creates the HybridConfiguration Active Directory object in the on-premises organization. The HybridConfiguration object is created when the New-HybridConfiguration cmdlet is called by the Hybrid Configuration Wizard. The object stores the hybrid configuration information so that the Update-HybridConfiguration cmdlet can read the settings stored in the object and use them to provision the hybrid configuration settings.

    Removing the HybridConfiguration object isn’t supported in Exchange Server 2010. There isn’t a cmdlet that will remove the HybridConfiguration object and the object can reside in Active Directory without adverse effects as long as the Hybrid Configuration Wizard isn’t run again.

    However, removing the HybridConfiguration object is supported in Exchange Server 2013. The new Remove-HybridConfiguration cmdlet will remove the HybridConfiguration object from the configuration container, however it will not disable or remove any existing hybrid deployment configuration settings.

    Although many people want to remove the HybridConfiguration object as part of their Exchange decommissioning plan, it isn’t critical and is optional.

    Removing a Hybrid Deployment

    The proper way to remove a hybrid deployment is to disable it manually. The following actions should be performed to remove the objects created and configured by the Hybrid Configuration Wizard:

    1. Re-point your organization’s MX record to the Office 365 service if it is pointing to the on-premises organization. If you are removing Exchange and don’t point the MX record to Office 365, inbound Internet mail flow won’t function.

    2. Using the Shell in the on-premises organization, run the following commands:

    Remove-OrganizationRelationship –Identity “On Premises to Exchange Online Organization Relationship”
    Remove-FederationTrust –Identity “Microsoft Federation Gateway”
    Remove-SendConnector “Outbound to Office 365”

    3. Using EMC, you can also remove the <your organization domain>.mail.onmicrosoft.com domain that was added as part of the email address policy for your organization.


    4. OPTIONAL - Remove the remote domains created by the Hybrid Configuration wizard in the Exchange Online organization. From the EMC, select the Hub Transport in the Exchange Online forest node and remove all remote domains starting with “Hybrid Domain” shown below:


5. Remove the organization relationship from the Exchange Online organization with the following command. You must use remote PowerShell to connect to Exchange Online (a connection sketch follows the command below). For detailed steps, see Connect Windows PowerShell.

    Remove-OrganizationRelationship –Identity “Exchange Online to On Premises Organization Relationship”
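A minimal connection sketch, assuming the standard Exchange Online remote PowerShell endpoint:

# Connect to Exchange Online with remote PowerShell, then run the removal command above
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session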

    6. OPTIONAL - Disable the Inbound and Outbound Forefront Online Protection for Exchange (FOPE) connectors created by the Hybrid Configuration Wizard. These connectors can be disabled using the FOPE Administration Console and the release option shown below:


    Note: Removing or modifying objects with ADSIEDIT isn’t supported.

    Conclusion

For most organizations that have configured a hybrid deployment, removing the last Exchange server from the on-premises environment will have adverse effects. In most cases, we recommend that you leave at least one Exchange 2010 server on-premises for mailbox management unless you are getting rid of the on-premises messaging and identity management dependencies altogether.

    Timothy Heeney

    Released: Update Rollup 5 v2 for Exchange 2010 SP2, Exchange 2010 SP1 RU8 and Exchange 2007 SP3 RU9


    Today the Exchange CXP team released the following update rollups to the Download Center. All three releases cover Security Bulletin MS12-080. Because this is a security release, the updates will also be available on Microsoft Update.

    Update Rollup 5-v2 for Exchange Server 2010 SP2

    This update contains a number of customer reported and internally found issues. For a list of updates included in this rollup, see KB 2785908 Description of Update Rollup 5 version 2 for Exchange Server 2010 Service Pack 2. We would like to specifically call out the following fixes which are included in this release:

    Note: Some of the following KB articles may not be available at the time of publishing this post.

    • 2748766 Retention policy information does not show "expiration suspended" in Outlook Web App when the mailbox is set to retention hold in an Exchange Server 2010 environment
    • 2712595 Microsoft Exchange RPC Client Access service crashes when you run the New-MailboxExportRequest cmdlet in an Exchange Server 2010 environment
    • 2750847 An Exchange Server 2010 user unexpectedly uses a public folder server that is located far away or on a slow network

    For DST Changes: http://www.microsoft.com/time

    Exchange Team

    Windows Management Framework 3.0 on Exchange 2007 and Exchange 2010


    Recently, Windows Update began offering the Windows Management Framework 3.0 as an Optional update. This includes all forms of update distribution, such as Microsoft Update, WSUS, System Center Configuration Manager and other mechanisms. The key bit here is that the Windows Management Framework 3.0 includes PowerShell 3.0.

Windows Management Framework 3.0 is being distributed as KB2506146 and KB2506143 (which one is offered depends on which server version you are running - Windows Server 2008 SP2 or Windows Server 2008 R2 SP1).

    What does that mean to you?

Windows Management Framework 3.0 (specifically PowerShell 3.0) is not yet supported on any version of Exchange except Exchange Server 2013 (which requires it). If you install Windows Management Framework 3.0 on a server running Exchange 2007 or Exchange 2010, you will encounter problems, such as rollups that will not install or an Exchange Management Shell that may not run properly.

    We have seen rollups not installing with the following symptoms:

• If the rollup is installed through Microsoft Update, the installation might fail with error code 80070643
• If the rollup is installed from a download, the error displayed is "Setup ended prematurely because of an error."
• In both cases, the event log might show the error with an error code of "1603"

    Our guidance at this time is that Windows Management Framework 3.0 should not be deployed on servers running Exchange 2007 or Exchange 2010, or on workstations with the Exchange Management Tools for either version installed. If you have already deployed this update, it should be removed. Once the update is removed, functionality should be restored.
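A quick way to check for the update, and to remove it on Windows Server 2008 R2, is sketched below; on Windows Server 2008 SP2 you can remove it from Installed Updates in Control Panel instead:

# Check whether the WMF 3.0 package is present (the KB number offered depends on the OS version)
Get-HotFix -Id KB2506146, KB2506143 -ErrorAction SilentlyContinue

# Uninstall it if found; a reboot is required afterward
wusa.exe /uninstall /kb:2506146 /norestart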

    Ben Winzenz

    Exchange 2010 Calendar Repair Assistant


    Exchange 2010 had many new enhancements and improvements over prior versions of Exchange. One really cool feature was the introduction of the Calendar Repair Assistant (CRA). The CRA is a mailbox assistant that is configurable through the Exchange Management Shell and runs within the MS Exchange Mailbox Assistants service. Its intended purpose is to help maintain consistency of calendar meetings between an organizer and the attendees by comparing the meeting copies of the organizer and the attendees. Why did the Exchange Product Group build this really awesome component into Exchange 2010? I’m glad you asked. I want to take you on a journey back to…The Good Ole Days!

    The Good Ole Days

In the "Good Ole Days" (or even as recently as last week), the Exchange support team logged numerous calls and cases on calendar meeting issues for prior versions of Exchange. Because of the flexibility allowed in terms of what end users can do with meetings in their calendars, meetings can become inconsistent across organizers' and attendees' calendars. These issues would range from inconsistent meeting times between organizers and attendees to meetings being "unknowingly" deleted from a user's calendar. We saw cases where meetings would show up in Outlook but not on a mobile device, or vice versa. Sometimes meetings would be duplicated on a calendar or would lose their organizer. Delegates would reportedly make a change to a meeting, but it wouldn't show up on the mailbox owner's mobile device. The list goes on. The troubleshooting of these issues, though normally pretty straightforward, can be tedious and time consuming. I went ahead and included a troubleshooting reference guide for you below, not only to point out how we would troubleshoot and identify these problems, but also just in case you've stumbled onto this blog and you're experiencing something similar!

    Exchange calendar troubleshooting reference

Getting to the most recent service pack and update of your Outlook client and Exchange Server is very important, as doing that might actually address your reported problems. Depending on the type of issue you are experiencing, there are several different ways to troubleshoot. If you are trying to determine why a meeting keeps changing, is moved to the Deleted Items folder, and so on, MFCMapi is a good tool to use. You can launch MFCMapi and connect to the mailbox profile of the user that is reporting the issue. Find the calendar item that you are looking for and save out the properties of that item to a text file. Search for PR_LAST_MODIFIER_NAME, PR_LAST_MODIFIED_DATE, PR_SENDER_EMAIL, and PR_SENT_REPRESENTING_NAME. These properties will paint a pretty good picture for you. Oftentimes we've seen that the last user to modify the item is either a service account for a MAPI/CDO device, a delegate, or the mailbox owner themselves.

Another great tool you can use to see what is happening to a meeting is Exchange Trace, or "Trace Control", which can be found in the Exchange Troubleshooting Assistant in Exchange 2007 and 2010. You can set up your trace by checking all of the boxes under "Trace Types"; then, under "Components to Trace", select "Store" with the following "Trace Tags": tagCalendarChange, tagCalendarDelete, tagMtgMessageChange, tagMtgMessageDelete. You can then set up a mailbox filter for the mailbox in question and kick off the trace. At this point you will need to reproduce the issue (or wait until it reproduces). Once it does, stop the trace. You'll need to get with Microsoft support at this point to convert and analyze the trace for you.

    One of my colleagues over on the Outlook team, Randy Topken, created a great tool called CalCheck. It performs a wide variety of tests and is very helpful in troubleshooting Calendar issues. I won’t go through all of them as he has published everything you need to know here: CalCheck - The Outlook Calendar Checking Tool.

    There are many other issues that we have encountered, but this is a glimpse of the most common types of issues we have seen (and actually continue to see) and how we go about troubleshooting those problems. Hopefully this gives you a little insight as to why we were so excited to see the Calendar Repair Assistant in Exchange 2010. I’ve included some additional links to assist you in troubleshooting calendar below:

    CRA RTM

Now let's get to the heart of this article, THE CRA! The CRA was built to detect and correct calendar inconsistencies like those I described above between a meeting organizer and attendees. The CRA must be configured through the Exchange Management Shell in order to run, as it is not enabled by default. Once enabled, the CRA will run against all mailboxes on the server you have configured it for. You can disable it for specific mailboxes by using the Set-Mailbox cmdlet and setting CalendarRepairDisabled to True (see the example after this paragraph). In the RTM release of Exchange 2010, the CRA was a time-based assistant that was configured to run on a specified schedule and could not be throttled to adjust for server resource utilization. The CRA made decisions only by comparing an organizer's copy of a meeting with the attendee's copy of the meeting. The CRA uses the organizer's meeting copy as the master and assumes it is the correct copy. It then checks attendee response status, assumes that the attendee's response status is correct, and updates the tracking information for the organizer. The problem was that the CRA had no idea how the inconsistencies occurred, meaning it didn't take into account client intent (for example, an attendee intentionally changing a meeting start time or location on their own calendar). One of the biggest issues we saw with this lack of client intent checking in RTM was that if an attendee deleted a meeting from their calendar but did not send an update to the organizer, the CRA would recreate the meeting in the attendee's calendar and would continue to do so until the attendee sent a response to the organizer (this has been corrected in SP1).

When the CRA detects an inconsistency on a calendar, it issues a special meeting message called a Repair Update Message. The message is processed by the Calendar Attendant and then placed in the user's Deleted Items folder. Any repairs made are recorded in the CRA log (see Troubleshooting CRA below).
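For example, opting a single mailbox out of calendar repair and confirming the setting looks like this (the mailbox name is a placeholder):

# Exclude one mailbox from calendar repair
Set-Mailbox -Identity kakers -CalendarRepairDisabled $true

# Confirm the setting (CalendarVersionStoreDisabled is relevant to the client intent discussion below)
Get-Mailbox -Identity kakers | Format-List Name, CalendarRepairDisabled, CalendarVersionStoreDisabled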

    CRA SP1 and later

    Exchange 2010 SP1 saw some pretty cool changes to the CRA. As mentioned earlier, in the RTM version CRA was time based and ran on a specified schedule. In SP1 and later this changed to a throttle based assistant to help prevent it from impacting server resources or end user experience. You can enable CRA with the Set-MailboxServer cmdlet. You will need to set the CalendarRepairWorkCycle and the CalendarRepairWorkCycleCheckpoint. The work cycle is the time span allocated for CRA to scan and process all mailboxes on the server i.e. if this is set to seven days, that means every mailbox on the server will be processed once every seven days. The throttling assistant calculates the slowest pace at which mailboxes can be processed by dividing the total number of mailboxes by the work cycle. There is also a built in mechanism to monitor certain resources like store and throttle the processing back if the load starts to rise. The checkpoint is the period of time in which the list of mailboxes in the CRA’s queue is refreshed during the existing work cycle, adding new mailboxes, moved mailboxes and mailboxes that have not processed yet into its queue. For example, the following command will set the CRA to check all mailboxes on the server once every seven days and refresh its work queue every day with the list of calendars that still need to process repairs within that seven day period:

    Set-MailboxServer –Identity MBXSRV1 –CalendarRepairWorkCycle 7.00:00:00 –CalendarRepairWorkCycleCheckPoint 1.00:00:00

    You can verify the settings for calendar repair options that are set for all mailbox servers by running the following in Exchange Management Shell:

    Get-MailboxServer | fl name,*calendar*

Another great change in SP1 was the introduction of "client intent". The CRA can now distinguish intentional versus unintentional inconsistencies between a meeting organizer and attendees. When the CRA is running against an attendee calendar, it will validate a meeting item against the organizer's copy. If it is running against the organizer, it will validate the item against all attendees. The CRA will correct inconsistencies, but only for mailboxes on the server for which it is running. It reads from other Exchange 2010 mailbox servers, but each server enabled for calendar repair has its own instance of the CRA running that is responsible for the mailboxes on that server.

When the CRA finds an inconsistency, it goes about determining client intent by using snapshots of calendar items that are stored in something called the Calendar Version Store (CVS), along with calendar metadata. Before a calendar item is changed, the Exchange 2010 copy-on-write functionality takes a snapshot of the calendar item and stores it, along with additional metadata (like the source or the application that initiated the change), in the root of the Recoverable Items folder for a default of 120 days before being purged. The Calendar Version Store maintains a historical record of these calendar item snapshots in the Recoverable Items folder. Client intent metadata is added to a calendar item as a named property: Name id = 0x0015 (dispidClientIntent), Named Prop GUID = 11000E07-B51B-40D6-AF21-CAA85EDAB1D0. Any non-zero hex value represents an intentional change. Before a specific change is made to a calendar item, the snapshot is taken, which preserves the existing metadata as well.

When the CRA is determining client intent, it queries the Calendar Version Store. During the initial query, a special dynamic search folder called CalendarVersionStore is created in the root of the non-IPM subtree, with a search scope of the IPM subtree and the Recoverable Items folder and criteria of all calendar items and meeting messages. Once this search folder is populated with the requested stored versions of a particular item from the Calendar Version Store, the CRA goes about determining client intent. It must first determine the target version of the particular item, and it does this in one of two ways.

The first method the CRA uses to determine the target version for client intent applies when the resulting change, and not the state prior to the change, is important (for example, an item is deleted). The Calendar Version Store is sorted with the most recent item on top and then queried backwards. The oldest version found that is in the target state is used as the target version, and it is the one whose client intent metadata is validated. Let's say that item A is deleted and there are four snapshots of item A in the CVS; we will call them A4-A1. First A4 is queried and we see that it matches the target deleted state. Then A3 is queried and (for hypothetical purposes) we see it is also in the target deleted state. Then A2 is queried and the CRA determines that this snapshot is not in the target deleted state. Now the CRA will go back to A3 and review the metadata on it for client intent. Any non-zero entry on this item indicates an intentional change.

The second method the CRA uses to determine the target version for client intent applies when how an item transitioned into a certain state is important (for example, the time or location on an attendee copy is different from the organizer's copy). The main difference of this method is that it looks at the chain of snapshots from initial state to target state. If any snapshot in the chain does not show client intent, then the whole change is considered unintentional. Let's say that the CRA finds that an attendee for meeting B has a different time listed than the organizer. There are five snapshots of B in the attendee's CVS, B5-B1. The CRA searches backwards (B5-B4-B3-B2-B1) and finds that B2 is the last snapshot with the same time as the organizer. It will then look at the metadata on B3-B5. If any of these have a zero entry, the entire chain is considered unintentional and will be corrected. All of the items in this chain must have a non-zero client intent metadata property value set for the CRA to read this as an intentional change.

In 2010 SP1, when the CRA detects an unintentional inconsistency, it uses the same process (Repair Update Message) as it did in 2010 RTM. When the CRA detects an intentional location or time inconsistency, no repair action is taken on the calendar item. If there is an intentional missing organizer or attendee item, the CRA uses a special inquiry message to repair the item on the opposite calendar. For instance, if a meeting organizer deletes a meeting but doesn't send out a cancellation, the meeting will still exist on the attendee's calendar. The CRA will send a special inquiry message that is processed by the Calendar Attendant and triggers a normal meeting cancellation on the attendee's calendar. For more information on CRA conflict detection and resolution logic, see the "Conflict Detection and Resolution" section here: Understanding Calendar Repair

    Caveats

    • CRA does not run against resource mailboxes.
    • If the meeting organizer does not enable the “Request a response to this invitation” in OWA or “Request Responses” option in Outlook, calendar repair will not take place on that meeting.
• If you set calendar repair to disabled for a specific mailbox, calendar repair will not take place for that mailbox. You can verify this with Get-Mailbox | ft Name,CalendarRepairDisabled. If it's set to True, then calendar repair is disabled.
• If you disable the Calendar Version Store for a specific mailbox, then client intent cannot be determined. You can verify this with Get-Mailbox | ft Name,CalendarVersionStoreDisabled. If it's set to True, then the Calendar Version Store is disabled.
• The CRA only checks calendar items for mailboxes that have the Calendar Attendant enabled (it is by default). This allows the Repair Update Message to be processed. See Set-CalendarProcessing.
• The primary clients that write client intent metadata are OWA 2010 SP1 and Outlook 2010. Client intent for clients utilizing EWS or EAS may be limited. See Client Application Experience.
• The CRA will not change the meeting organizer if it has been improperly changed.

    Troubleshooting CRA

Troubleshooting calendar issues in 2010 can be approached in the same way as outlined in the earlier troubleshooting section. There are some additional things you can look at when trying to see whether the CRA processed a repair to a meeting and why. The first is the CRA log. It is stored on the mailbox server the CRA is running on, is located in <ExchangeInstallPath>\Logging\Calendar Repair Assistant, and is in the format CRA<date>yyyymmdd-<sequence>.<user>.log. Locate the CRA logs for specific users and then review them for any repairs that the CRA made. Get-CalendarDiagnosticLog can be used to search the Calendar Version Store for a particular user and a particular item. You can then use MFCMapi as outlined above to search the properties of the item and determine whether there was positive client intent.
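For example, to pull the stored versions of a specific meeting for one user (the identity and subject are placeholders):

# Retrieve the most recent Calendar Version Store entries for a particular meeting
Get-CalendarDiagnosticLog -Identity kakers -Subject "Budget Review" -Latest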

    Conclusion

Hopefully I've shed a little light on the importance of the Calendar Repair Assistant. It is a critical component and has, in my opinion, reduced the number of calendar cases we see in Exchange 2010 compared to prior versions of Exchange. Troubleshooting, while you can still use the same techniques used in legacy versions of Exchange, has been simplified with CalCheck, the Calendar Repair Assistant log, and Get-CalendarDiagnosticLog, which pulls calendar items from the Calendar Version Store for analysis in client intent cases.

    Additional resources

    I wanted to thank Mike Manjunath, Rob Whaley and Jonathan Runyon for their review of this blog post.

    Charles Lewis

    Released: Exchange Server 2010 SP3


Last year, we announced that Exchange 2010 Service Pack 3 would be coming in the first half of 2013. Later, we updated the timeframe to Q1 2013. Today, we're pleased to announce the availability of Exchange Server 2010 Service Pack 3, which is ready to download.

    Service Pack 3 is a fully slipstreamed version of Exchange 2010. The following new features and capabilities are included within SP3:

• Coexistence with Exchange 2013: Customers who want to introduce Exchange Server 2013 into their existing Exchange 2010 infrastructure will need the coexistence changes shipping in SP3.

      NOTE: Exchange 2010 SP3 allows Exchange 2010 servers to coexist with Exchange 2013 CU1, which is also scheduled to be released in Q1 2013. Customers can test and validate this update in a representative lab environment prior to rolling out in their production environments as an important coexistence preparatory step before introducing Exchange Server 2013 CU1.

    • Support for Windows Server 2012: With SP3, you can install and deploy Exchange Server 2010 on computers that are running Windows Server 2012.
    • Support for Internet Explorer 10: With SP3, you can use IE10 to connect to Exchange 2010.
    • Customer Requested Fixes: All fixes contained within update rollups released before SP3 will also be contained within SP3. Details of our regular Exchange 2010 release rhythm can be found in Exchange 2010 Servicing.

    In addition to the customer reported issues resolved in previous rollups, this service pack also resolves the issues that are described in the following Microsoft Knowledge Base (KB) articles:

    Note: Some of the following KB articles may not be available at the time of publishing this post.

    2552121 You cannot synchronize a mailbox by using an Exchange ActiveSync device in an Exchange Server 2010 environment

    2729444 Mailboxes are quarantined after you install the Exchange Server 2010 SP2 version of the Exchange Server 2010 Management Pack

    2778100 Long delay in receiving email messages by using Outlook in an Exchange Server 2010 environment

    2779351 SCOM alert when the Test-PowerShellConnectivity cmdlet is executed in an Exchange Server 2010 organization

    2784569 Slow performance when you search a GAL by using an EAS device in an Exchange Server 2010 environment

    2796950 Microsoft.Exchange.Monitoring.exe process consumes excessive CPU resources when a SCOM server monitors Exchange Server 2010 Client Access servers

    2800133 W3wp.exe process consumes excessive CPU and memory resources on an Exchange Client Access server after you apply Update Rollup 5 version 2 for Exchange Server 2010 SP2

    2800346 Outlook freezes and high network load occurs when you apply retention policies to a mailbox in a mixed Exchange Server 2010 SP2 environment

    2810617 Can't install Exchange Server 2010 SP3 when you define a Windows PowerShell script execution policy in Group Policy

    2787500 Declined meeting request is added back to your calendar after a delegate opens the request by using Outlook 2010

    2797529 Email message delivery is delayed on a Blackberry mobile device after you install Update Rollup 4 for Exchange Server 2010 SP2

    2800080 ErrorServerBusy response code when you synchronize an EWS-based application to a mailbox in an Exchange Server 2010 environment

    Exchange 2010 SP3 FAQ

    Here are answers to some frequently asked questions.

    • Q. Does Exchange 2010 SP3 include the fixes in Exchange 2010 SP2 RU6?
      A. Yes. Service Packs are cumulative - they include all fixes included in previous RUs and service packs. Although Exchange 2010 SP2 RU6 was released on the same day as Exchange 2010 SP3, fixes in RU6 are included in SP3.

    • Q. Does Exchange 2010 SP3 include the security fix mentioned in Microsoft Security Bulletin MS13-012?
A. Yes, the fix for the vulnerability in MS13-012 is included in Exchange 2010 SP2 RU6, and (as stated above) fixes from SP2 RU6 are included in SP3. We recommend reviewing the related security bulletin before applying an update that contains security fixes.

    • Q. Why release RU6 for SP2 at all if all fixes are included in SP3?
A. Exchange 2010 SP2 is a supported service pack (see Exchange Server Support Lifecycle). Customers typically deploy update rollups quickly but take longer to deploy service packs.

    • Q. Is Exchange 2010 SP3 compatible with WMF 3.0/PowerShell 3.0?
      A. No. Exchange 2010 SP3 does not support WMF 3.0/PowerShell 3.0. Although on Windows Server 2012 you can have WMF3/PS3, Exchange 2010 SP3 will require and use PowerShell 2.0.

    • Q. Does Exchange 2010 SP3 require an Active Directory schema update?
A. Yes. As mentioned in the Exchange 2010 SP3 Release Notes, an Active Directory schema update is required.

    • Q. Can I install Exchange 2013 RTM in my Exchange 2010 organization after upgrading to Exchange 2010 SP3?
      A. As mentioned in the post, coexistence of Exchange 2013 in an Exchange 2010 SP3 org requires Exchange 2013 CU1, also scheduled for release in this quarter (Q1 2013).

    • Q. It's great that Exchange 2010 SP3 is supported on Windows Server 2012! Can I upgrade the OS my Exchange Server's running on from Windows Server 2008/2008 R2 to Windows Server 2012 after installing SP3?
A. No. Upgrading the operating system after Exchange Server is installed is not supported. You would have to uninstall Exchange, upgrade the OS, and then reinstall Exchange, or install Exchange 2010 SP3 on a fresh Windows Server 2012 installation.

    • Q. Is MRM 1.0 supported on Exchange 2010 SP3?
      A. Yes, MRM 1.0 (Managed Folders) is a supported feature in Exchange 2010. We replaced MRM 1.0 management support in EMC with MRM 2.0 (Retention tags) in Exchange 2010 SP1. You can still use the Shell to manage MRM 1.0.

    • Q. Will I be able to restore and mount database backups created before a server is upgraded to SP3?
      A. Yes. When you restore and mount the database, it will be updated.

    • Q. Is Exchange 2010 SP3 supported with <My great third-party Exchange add-on/tool>?
      A. Please contact the partner / third-party software vendor for this info. We recommend testing in a representative non-production environment before you deploy in production.

    Exchange Team

    Released: Update Rollup 6 for Exchange Server 2010 SP2 and Exchange 2007 SP3 RU10


Today the Exchange CXP team released the following update rollups to the Download Center. Both releases cover Security Bulletin MS13-012 (KB 2809279). Because this is a security release, the updates will also be available on Microsoft Update.

    Update Rollup 6 for Exchange Server 2010 Service Pack 2

    Update Rollup 10 for Exchange Server 2007 Service Pack 3

    Update Rollup 6 for Exchange Server 2010 SP2:

    This update contains a number of customer reported and internally found issues. In particular we would like to specifically call out the following fixes which are included in this release:

    Note: Some of the following KB articles may not be available at the time of publishing this post.

    2489941 The "legacyExchangeDN" value is shown in the "From" field instead of the "Simple Display Name" in an email message in an Exchange Server 2010 environment

2779387 Duplicated email messages are displayed in the Sent Items folder in an EWS-based application that accesses an Exchange Server 2010 Mailbox server

    2751581 OAB generation fails with event IDs 9126, 9330, and either 9338 or 9339 in an Exchange Server 2010 environment

    2784081 Store.exe process crashes if you add certain registry keys to an Exchange Server 2010 Mailbox server

    Update Rollup 10 for Exchange Server 2007 SP3:

    This update contains a number of customer reported and internally found issues. In particular we would like to specifically call out the following fixes which are included in this release:

    2783779 A hidden user is still displayed in the Organization information of Address Book in OWA in an Exchange Server 2007 environment

    Note: Exchange 2007 SP3 RU10 allows Exchange 2007 servers to coexist with Exchange 2013 CU1, which is also scheduled to be released in Q1 2013. Customers can test and validate this update in a representative lab environment prior to rolling out in their production environments as an important coexistence preparatory step before introducing Exchange Server 2013 CU1.

    For DST Changes: http://www.microsoft.com/time

    Exchange Team


    Time to go PST hunting with the new PST Capture 2.0


    Ahoy, Exchange Ninjas! It’s time to buckle up and go PST hunting again with the new PST Capture 2.0!

    Earlier this week, we released a brand new version of the free PST Capture tool that allows you to hunt down PST files on client computers across your network. After it finds PST files on users’ computers, the tool consolidates the PST files to a central location, and then easily injects PST data to primary or archive mailboxes on your on-premises Exchange Servers or Exchange Online.

    What’s new in PST Capture 2.0?

    PST Capture 2.0 includes the following improvements:

    • Support for Microsoft Exchange Server 2013
    • Updated profile generation code to use Outlook Anywhere (RPC over HTTP).
    • Fixed Exchange Online import failure issue when PST Capture is not installed on an Exchange server.
    • Removed User Interface limit of 1,000 users when performing an import to Exchange Online.
    • General performance improvements

    Download PST Capture 2.0 from the Download Center (aka.ms/getpstcapture), check out the PST Capture documentation (aka.ms/pstcapture) and then go get those PSTs!

    Exchange Team

    Updated: Exchange Server 2010 Deployment Assistant for Exchange 2010 SP3 and Office 365


    We're happy to announce that the Exchange Server 2010 Deployment Assistant (ExDeploy) now includes updates for supporting hybrid deployments with Exchange 2010 Service Pack 3 (SP3) organizations and the latest release of Office 365.

    Important information to know about the SP3 update for the Exchange 2010 hybrid deployment scenarios:

    • Updated information is available only in English at this time. Support in additional languages will follow shortly.
    • We’ve removed the qualifying question about existing Forefront Online Protection for Exchange for on-premises organizations. Forefront Online Protection for Exchange (FOPE), now known as Exchange Online Protection (EOP), is no longer a limiting factor in hybrid transport routing options. Organizations no longer need to request that their existing EOP service be merged with the EOP service provided with the Microsoft Office 365 tenant. The EOP services are merged automatically and do not require any additional configuration.
    • We’ve also removed the limitations on configuring centralized mail transport in the Manage Hybrid Configuration wizard. Centralized mail transport in hybrid deployments can be configured regardless of existing EOP service configuration.
    • If you have previously configured a hybrid deployment using ExDeploy and Exchange 2010 SP2, you’ll need to take a few basic administrative actions as part of updating your hybrid deployment for the latest release of Office 365. See Understanding Upgrading Office 365 Tenants for Exchange 2010-based Hybrid Deployments for more details.

    We’d also like to remind everyone that we’ve released the Exchange Server 2013 Deployment Assistant for those organizations that want to take advantage of the features of a new Exchange 2013 installation and for Exchange 2010 and Exchange 2007 organizations that are interested in the benefits of an Exchange 2013-based hybrid deployment. See Now Available: Exchange Server 2013 Deployment Assistant.

    Exchange 2010-based hybrid deployments offer Exchange 2010, Exchange 2007 and Exchange 2003 organizations the ability to extend the feature-rich experience and administrative control they have with their existing on-premises Microsoft Exchange organization to the cloud. It provides the seamless look and feel of a single Exchange organization between an on-premises organization and an Exchange Online organization in Office 365. In addition, hybrid deployments can serve as an intermediate step to moving completely to a cloud-based Exchange Online organization. This approach is different than the simple Exchange migration (“cutover migration”) and staged Exchange migration options currently offered by Office 365 outlined in E-Mail Migration Overview.

    About the Exchange Server Deployment Assistant

    The Exchange Server Deployment Assistant (ExDeploy) is a web-based tool that helps you upgrade to Exchange 2010 on-premises, configure a hybrid deployment between an on-premises and Exchange Online organization or migrate to Exchange Online.

    Screenshot: Exchange Deployment Assistant home page
Figure 1: The Exchange Deployment Assistant generates customized instructions to help you upgrade to Exchange 2010 on-premises or in the cloud

    It asks you a small set of simple questions, and then based on your answers, it provides a checklist with instructions to deploy or configure Exchange 2010 that are customized to your environment. These environments include:

    • Stand-alone on-premises Exchange installations and upgrades
    • Hybrid deployment configurations and
    • Cloud-only Exchange deployment scenarios.

    Besides getting the checklist online, you can also print instructions for individual tasks and download a PDF file of your complete configuration checklist.

    Your feedback is very important for the continued improvement of this tool. We would love your feedback on this new scenario and any other area of the Deployment Assistant. Feel free to either post comments on this blog post, provide feedback in the Office 365 community Exchange Online migration and hybrid deployment forum, or send an email to edafdbk@microsoft.com via the Feedback link located in the header of every page of the Deployment Assistant.

    Exchange Deployment Assistant Team

    Announcing Microsoft Connectivity Analyzer (MCA) 1.0 and Microsoft Remote Connectivity Analyzer (RCA) 2.1


    Back in November 2012, we announced our MCA Beta client. We have been very busy working to improve the testing options that are available from the MCA client. Here’s what we’ve built for the 1.0 release:


    Microsoft Connectivity Analyzer Tool 1.0

    We are excited to announce the 1.0 release of the Microsoft Connectivity Analyzer.  This tool is a companion to the Microsoft Remote Connectivity Analyzer web site.  The MCA tool provides administrators and end users with the ability to run connectivity diagnostics for five common connectivity symptoms directly from their local computer.  Users can test their own connectivity, and save results in an HTML format that administrators will recognize from viewing results on the RCA website. 

Install the MCA 1.0 tool here: https://testconnectivity.microsoft.com/?tabid=client

    Watch the Introduction Video:

    The MCA tool offers five test symptoms:

• "I can't log on with Office Outlook" – This test is equivalent to the Exchange RCA test for "Outlook Anywhere (RPC over HTTP)". There is an option to run the SSO test provided on the parameters page.
• "I can't send or receive email on my mobile device" – This test is equivalent to the Exchange RCA test for Exchange ActiveSync.
• ***New MCA Test*** "I can't log on to Lync on my mobile device or the Lync Windows Store App" – This test checks the Domain Name Server (DNS) records for your on-premises domain to ensure they are configured correctly for supporting mobile Lync clients. It also connects to the Autodiscover web service and makes sure that authentication, the certificate, and the web service for Mobility are correctly set up.
• ***New MCA Test*** "I can't send or receive email from Outlook (Office 365 only)" – This test checks inbound/outbound SMTP mail flow and also includes Domain Name Server validation checks for Office 365 customers.
• ***New MCA Test*** "I can't view free/busy information of another user" – This test verifies that an Office 365 mailbox can access the free/busy information of an on-premises mailbox, and vice versa (one direction per test run).

Screenshot: The Microsoft Connectivity Analyzer tool

    Microsoft Lync Connectivity Analyzer Tool:  You will also notice the Lync Connectivity Analyzer Tool on the client page.  We are working on combining MCA with MLCA in the near future but wanted to make both these great tools available to customers now to improve our client diagnostics options. To learn more about MLCA – go HERE

    Feedback: Send all feedback to the MCA Feedback alias.  Please let us know what you think of the tool and whether this will be helpful in troubleshooting connectivity scenarios. Also feel free to provide feedback on additional tests you would like to see added in the future.


    Microsoft Remote Connectivity Analyzer 2.1

    We are excited to announce the 2.1 release of the Microsoft Remote Connectivity Analyzer web site.  The tool provides administrators and end users with the ability to run connectivity diagnostics for our servers to test common issues with Exchange, Lync and Office 365.  We have added new Office 365 Domain Name Server tests, enhanced existing tests, and improved the overall site experience.

Check out the updates to the website here: https://testconnectivity.microsoft.com

    Here are the highlights of the 2.1 RCA release:

    Version 2.1 (March 2013)

• Added localization support for 60 languages
    • Updated version of the downloadable Microsoft Connectivity Analyzer v1.0 Tool for troubleshooting connectivity from the local machine
    • Added Microsoft Lync Connectivity Analyzer downloadable tool for troubleshooting Lync issues from the local machine
    • Added Office 365 General Tests section
    • Added Office 365 Exchange Domain Name Server (DNS) Connectivity Test

Screenshot: The Microsoft Remote Connectivity Analyzer website

    Enjoy!

    Thanks.

    Brian Feck on behalf of the entire MCA/RCA team.
    Follow the team on Twitter - @ExRCA

    Mysterious mail loop on Edge Transport server: Check your size limits!

    $
    0
    0

I'm a support engineer in CSS. I was working with a customer who reported a mail loop error for a specific domain, like contoso.com. The error was only observed with large emails. Yeah, that’s really mysterious until you figure out that the mail loop is due to a size restriction on the Send connector. I thought this was curious enough to share.

    Understanding the configuration and root cause of the issue:

I initially thought that this might have been the outcome of the Edge server being configured to use an external DNS server (a DNS server that resolves external hosts). Usually, when the Edge Transport server is configured to use an external DNS, it resolves the domain name to the public IP addresses (generally pointing to itself, the external firewall, or the service provider) instead of a Hub Transport server in the Active Directory site, causing a mail loop.

On reproducing the issue, I found out that the Edge Transport server was not configured to use an external DNS server. The environment I set up to reproduce the issue looked like the diagram below:

Diagram: The reproduction environment (an Edge Transport server with a Send connector for contoso.com to the Active Directory site and a Send connector for * to the Internet)

     

Here's what happens in this scenario: When the Edge Transport server receives a 20 MB email from an Internet sender, it accepts it. The Edge Transport server has two connectors that match the address space - one for the address space contoso.com to the Active Directory site and one for the address space *. When making the routing decision based on all available connectors, the one from the Edge to Hub is not considered because of its size restriction (it has a 10 MB size limit). The best match is the * connector from Edge to the Internet (please go over the connector selection algorithm documented in Understanding Message Routing), which has a message size limit of 30 MB.

    End result: The message is routed back to the Internet causing the message loop between the Internet and the Edge Server.

Based on whether the Send connector to the Internet is configured to use DNS or a Smart Host to deliver outbound mail, we will get one of the following NDRs:

    If using DNS:

    #554 5.4.4 SMTPSEND.DNS.MxLoopback; DNS records for this domain are configured in a loop ##

    If using a Smart Host:

    5.4.6 smtp;554 5.4.6 Hop count exceeded - possible mail loop> #SMTP#

    The Solution

This behavior is by design and can be easily rectified by modifying the message size limit on the connector. Based on your requirement, you can choose either of the following options (a sample Shell sketch follows this list):

• Set the MaxMessageSize parameter on the Receive connector (which receives inbound mail from the Internet) to 10 MB, so messages from the Internet are restricted to 10 MB.
• Set the MaxMessageSize on the Send connector from Edge to Hub to 30 MB, which will allow you to receive 30 MB messages from external senders.
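
For reference, here is a rough Shell sketch of how you might review the current limits and then apply one of the options above. The connector names are placeholders; substitute your own.

# Review the size limits on the connectors that match the address space
Get-SendConnector | Format-Table Name, AddressSpaces, MaxMessageSize -AutoSize
Get-ReceiveConnector | Format-Table Name, MaxMessageSize -AutoSize

# Option 1: restrict inbound Internet mail to 10 MB on the Internet-facing Receive connector
Set-ReceiveConnector "EDGE01\Default internal receive connector EDGE01" -MaxMessageSize 10MB

# Option 2: raise the Edge-to-Hub Send connector limit to 30 MB instead
Set-SendConnector "EdgeSync - Inbound to Default-First-Site-Name" -MaxMessageSize 30MB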

    Mystery solved! Thanks to Arindam Thokder and Scott Landry, who helped me with getting this ready for the blog!

    Suresh Kumar (XCON)

    Sending common or canned responses from a shared or repository mailbox

    $
    0
    0

    As a Microsoft Premier Field Engineer (PFE), I assist companies with Lotus Notes and GroupWise migrations to Exchange/Outlook environments and have found that different applications act differently. One of the common questions is: Can Outlook/Exchange send canned responses from a common mailbox? The answer is yes, but it is done a little differently than other products.

While everyone here is familiar with the different tools and processes in this article, I usually find that the ‘leap of faith’ to put them together in this combination is not always made. So here we go: easy steps put together to create an end result that may solve a situation in your environment.

The first question is, in the Exchange/Outlook realm, is it possible to have a common response from a single mailbox (sometimes referred to as a ‘repository’) from multiple people? Yes, it is possible in Outlook, but several steps have to be set up for this to occur correctly and consistently.

Let’s say you have an ‘IT Helpdesk’ mailbox that acts like a repository and you have several people accessing this resource throughout the day. The first step is to remind ourselves where ‘signatures’ are stored in Outlook. Most people’s first thought is within the Outlook profile. That’s not correct. The signatures are actually stored as .htm files on a user’s local machine:

    By default on Vista\Windows 7\8:

    C:\Users\%UserName%\AppData\Roaming\Microsoft\Signatures

    By default on Windows XP:

    C:\Documents and settings\%UserName%\Application Data\Microsoft\Signatures

    Why do I mention %UserName%? That is the wild card entry for the Alias or AD User Account name of whoever is logged onto the computer. You’ll see later that we reference this again.

    Another question is, ‘What can you do with an HTML file?’ And of course the better question is, ‘What CAN’T you do with an HTML file?’ The beauty of having these files on the computer is that we can act upon them at any time and Outlook will use the current file. In fact, even with Outlook open, you could add a new file in this location and it will show up in the available signature list, which is one of the more dynamic options available in Outlook. Within this HTML file, you could put all kinds of interesting mechanisms: hyperlinks, formatted text, images, marching ant text, blinking text, all of the cool and possibly annoying functions that a web page allows us to accomplish.

    The next step we have to do is to be able to get common files to end users’ desktops. This is accomplished by a simple GPO (Group Policy Object) via AD (Active Directory). Hence, if a file needs to be copied to a desktop, a simple call to a UNC path that acts upon a logon process can get the file to the proper location. Something like:

    xCopy \\ServerName\Share\*.* C:\Users\%UserName%\AppData\Roaming\Microsoft\Signatures

This copies all files in the share location for anyone the GPO applies to, placing them in the correct folder every time they log on to any machine they have access to. This also ensures consistency when deploying updated files and makes provisioning new machines easier. For those of you who thought the signature files were in the Outlook profile, you can now see that you can have signatures set up for an end user before they ever launch Outlook on a newly provisioned client machine. Pretty cool stuff.
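
If you'd rather use PowerShell than xCopy for the logon script, here is a minimal sketch that does the same thing; \\ServerName\Share is still a placeholder for your actual file share:

# Copy the centrally managed signature files to the current user's Signatures folder
$source      = '\\ServerName\Share'
$destination = Join-Path $env:APPDATA 'Microsoft\Signatures'

# Create the Signatures folder if Outlook has never been launched on this machine
if (-not (Test-Path $destination)) {
    New-Item -Path $destination -ItemType Directory | Out-Null
}

Copy-Item -Path (Join-Path $source '*') -Destination $destination -Force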

You can also set permissions on the network folder share to allow only, say, a manager to edit the files inside the folder. This ensures that only the proper people have access that impacts many other users. For example, around December the files could mention holiday-specific information like: “Thank you for the contact, our staff is cycling through holiday time off and your request may take a little longer than usual to get to.” Or at the beginning of the year: “Welcome to the New Year, we are excited about deploying our new products this year.” Whatever the message is, it can easily be updated and controlled from a central location and be deployed to the proper end users that send out the common responses.


    Figure 1: Setting share permissions on a folder share.

The next problem we run into is that when someone ‘reads’ a message in Outlook, it is marked as ‘read’. Other mail products allow you to read a message without it being marked as read; we don’t do that in Outlook/Exchange. Once a message is read by anyone, you have to go back and mark it as ‘unread’. Training end users is the only answer here; there are no Outlook settings available to change this behavior.


    Figure 2: Right click a message and select ‘Mark as Unread’ or on the Ribbon, select Unread/Read option.

We’re almost done. The last few steps are back on the client machines. After you’ve trained your end users to apply the appropriate signature file and select the correct mailbox account to send from, you have one more training step. You have to inform them which ‘Sent Items’ folder the message will be saved to. Yep, there’s another issue here, but one that can also be easily resolved.


    Figure 3: Selecting specific signature and which mailbox account to send from.

By default, when sending a message from within Outlook, the message is recorded in the ‘Sent Items’ folder of the user sending the item. This is not ideal for a common resource mailbox, where everyone acting as that single voice needs to see the items sent on its behalf in the shared mailbox’s Sent Items folder. Changing this behavior is accomplished with a registry edit on the client side. So once again we go back to our friend AD and edit a GPO for consistency. Remember that all registry settings can be edited via GPOs. Depending on the version of AD you’re on, you may have to use a logon script that calls a .reg file, or, for Windows Server 2008 and later DCs, use GPO Preferences to achieve the same result.

There is no universal fix; each user who can ‘Send As’ another mailbox must perform one of the following on their computer, or you can deploy the setting using a patch management solution, like System Center Configuration Manager (SCCM), and/or GPOs. (A PowerShell sketch that sets the same value follows the version-specific steps below.)

    Outlook 2003 (KB317865 and KB953804)

Install the office2003-KB953803-GLB.exe hotfix (http://support.microsoft.com/kb/953803). No reboot is required, but Outlook has to be closed.

    1. Click Start, click Run, type regedit, and then click OK.

    2. Locate and then click the following registry subkey:

    HKEY_CURRENT_USER\Software\Microsoft\Office\11.0\Outlook\Preferences

    3. On the Edit menu, point to New, and then click DWORD Value.

    4. Type DelegateSentItemsStyle, and then press ENTER.

    5. Right-click DelegateSentItemsStyle, and then click Modify.

    6. In the Value data box, type 1, and then click OK.

    7. Exit Registry Editor.

    Outlook 2007 (KB972148 and KB970944)

    1. Click Start, click Run, type regedit, and then click OK.

    2. Locate and then click the following registry subkey:

    HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Outlook\Preferences

    3. On the Edit menu, point to New, and then click DWORD Value.

    4. Type DelegateSentItemsStyle, and then press ENTER.

    5. Right-click DelegateSentItemsStyle, and then click Modify.

    6. In the Value data box, type 1, and then click OK.

    7. Exit Registry Editor.

    Outlook 2010 (KB2181579)

    1. Click Start, click Run, type regedit, and then click OK.

    2. Locate and then click the following registry subkey:

    HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Outlook\Preferences

    3. On the Edit menu, point to New, and then click DWORD Value.

    4. Type DelegateSentItemsStyle, and then press ENTER.

    5. Right-click DelegateSentItemsStyle, and then click Modify.

    6. In the Value data box, type 1, and then click OK.

    7. Exit Registry Editor.

    Outlook 2013

    1. Click Start, click Run, type regedit, and then click OK.

    2. Locate and then click the following registry subkey:

    HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook\Preferences

    3. On the Edit menu, point to New, and then click DWORD Value.

    4. Type DelegateSentItemsStyle, and then press ENTER.

    5. Right-click DelegateSentItemsStyle, and then click Modify.

    6. In the Value data box, type 1, and then click OK.

    7. Exit Registry Editor.
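
If you would rather script this change than click through Registry Editor, here is a minimal PowerShell sketch for Outlook 2013; swap the version number (11.0, 12.0, 14.0 or 15.0) to match the installed Outlook release:

# Create the key if needed, then set DelegateSentItemsStyle to 1 for Outlook 2013 (15.0)
$key = 'HKCU:\Software\Microsoft\Office\15.0\Outlook\Preferences'
if (-not (Test-Path $key)) {
    New-Item -Path $key -Force | Out-Null
}
New-ItemProperty -Path $key -Name DelegateSentItemsStyle -PropertyType DWord -Value 1 -Force | Out-Null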

    The other alternative is to have the sender manually move the sent message to the Sent Items folder of the user named as the sender in the email.

    Important: After you set the DelegateSentItemsStyle registry value to 1, the functionality is only available when the Microsoft Exchange account is set to Use Cached Exchange Mode. The DelegateSentItemsStyle registry value will not work consistently on an Exchange account that is configured in Online mode.


    Figure 4: Creating a GPO, using ‘preferences’, to set which ‘sent items’ folder is allowed to be selected.

So there you have it: sending a consistent response from a commonly shared mailbox using signature files, an Outlook client registry edit, GPOs, and a UNC share. Now go out and improve your commonly used shared resource mailboxes and present a stronger corporate image at the same time. Thank you and happy improvements.

    Mike O'Neill

    Preserving Activation Blocks After Performing DAG Member Maintenance

    $
    0
    0

In Exchange 2010, when a database availability group (DAG) member needs service, it can be placed into maintenance mode. Exchange 2010 includes the StartDagServerMaintenance.ps1 and StopDagServerMaintenance.ps1 scripts to place a DAG member into, and remove it from, maintenance mode. For a summary of what these scripts do, see this post.

Within a DAG, it is not uncommon to have one or more databases or servers blocked from automatic activation by the system. Some customers configure entire servers to be blocked from activation, some block individual copies, and some do a combination of both, based on their business requirements. Administrators using the maintenance mode scripts will find their configured activation blocks reset to the unblocked state. This behavior is not a problem with the scripts; in fact, the scripts are working as designed.

    Here is an example of a database copy that has had activation suspended:

    [PS] C:\>Get-MailboxDatabaseCopyStatus DAG-DB0\MBX-2 | fl name,activationSuspended

    Name                             : DAG-DB0\MBX-2
    ActivationSuspended              : True

    Here is an example of a server that has activation blocked:

    [PS] C:\>Get-MailboxServer MBX-2 | fl name,databasecopyautoactivationpolicy

    Name                             : MBX-2
    DatabaseCopyAutoActivationPolicy : Blocked

    When the administrator executes the stopDagServerMaintenance.ps1 script, these states are reset back to their defaults. Here is an example of the states post StopDagServerMaintenance.ps1:

    [PS] C:\Program Files\Microsoft\Exchange Server\V14\Scripts>Get-MailboxDatabaseCopyStatus DAG-DB0\MBX-2 | fl name,activationSuspended

    Name                             : DAG-DB0\MBX-2
    ActivationSuspended              : False

    [PS] C:\Program Files\Microsoft\Exchange Server\V14\Scripts>Get-MailboxServer MBX-2 | fl name,databasecopyautoactivationpolicy

    Name                             : MBX-2
    DatabaseCopyAutoActivationPolicy : Unrestricted

Although the maintenance scripts' behavior is by design, it can lead to undesirable scenarios, such as lagged database copies being activated. Of course, to eliminate these issues, an administrator can record database and server settings before and after maintenance and reconfigure any reset settings as needed.

    To help with this, here is a sample script I created that records database and server activation settings into a CSV file which can then be used with the maintenance scripts to adjust the states automatically.

    What the script does

    When you run the script, it creates two CSV files (on the server you run it on) along with a transcript that contains the results of the command executed. The first CSV file contains all database copies assigned to the server and their activation suspension status. Here's an example:

    #TYPE Selected.Microsoft.Exchange.Management.SystemConfigurationTasks.DatabaseCopyStatusEntry
    "Name","ActivationSuspended"
    "DAG-DB0\DAG-3","True"
    "DAG-DB1\DAG-3","True"

    The second CSV file contains the database copy auto activation policy of the server. For example:

    #TYPE Selected.Microsoft.Exchange.Data.Directory.Management.MailboxServer
    "Name","DatabaseCopyAutoActivationPolicy"
    "DAG-3","Blocked"

    Using maintenanceWrapper.ps1 to start and stop maintenance

Because this script is unsigned, you'll need to relax the execution policy on the server to allow unsigned scripts.

    IMPORTANT: Allowing unsigned PowerShell scripts to execute is a security risk. For details, see Running Windows PowerShell Scripts. If this option does not meet your organization's security policy, you can sign the script (requires a code-signing certificate).

    This command sets the execution policy on a server to allow unsigned PowerShell scripts to execute:

Set-ExecutionPolicy Unrestricted

    You can use the maintenanceWrapper.ps1 script to start and stop maintenance procedure on a DAG member.

    1. Use this command to start the maintenance procedure:

  maintenanceWrapper.ps1 -serverName <SERVERNAME> -action START

      The command creates the CSV files containing the original database states and then invokes the StartDagServerMaintenance.ps1 script to place the DAG member in maintenance mode.

    2. After maintenance is completed, you can stop the maintenance procedure using this command:

  maintenanceWrapper.ps1 -serverName <SERVERNAME> -action STOP

      The command calls the StopDagServerMaintenance.ps1 script to remove the DAG member from maintenance mode and then resets the database and server activation states from the states recorded in the CSV file.

    Give the script a try and see if it makes maintenance mode for activation-blocked servers and databases easier for you. I hope you find this useful, and I welcome any and all feedback.

    *Special thanks to Scott Schnoll and Abram Jackson for reviewing this content and David Spooner for validating the script.

    Tim McMichael

    Troubleshooting Rapid Growth in Databases and Transaction Log Files in Exchange Server 2007 and 2010

    $
    0
    0

    A few years back, a very detailed blog post was released on Troubleshooting Exchange 2007 Store Log/Database growth issues.

    We wanted to revisit this topic with Exchange 2010 in mind. While the troubleshooting steps needed are virtually the same, we thought it would be useful to condense the steps a bit, make a few updates and provide links to a few newer KB articles.

    The below list of steps is a walkthrough of an approach that would likely be used when calling Microsoft Support for assistance with this issue. It also provides some insight as to what we are looking for and why. It is not a complete list of every possible troubleshooting step, as some causes are simply not seen quite as much as others.

    Another thing to note is that the steps are commonly used when we are seeing “rapid” growth, or unexpected growth in the database file on disk, or the amount of transaction logs getting generated. An example of this is when an Administrator notes a transaction log file drive is close to running out of space, but had several GB free the day before. When looking through historical records kept, the Administrator notes that approx. 2 to 3 GBs of logs have been backed up daily for several months, but we are currently generating 2 to 3 GBs of logs per hour. This is obviously a red flag for the log creation rate. Same principle applies with the database in scenarios where the rapid log growth is associated to new content creation.

    In other cases, the database size or transaction log file quantity may increase, but signal other indicators of things going on with the server. For example, if backups have been failing for a few days and the log files are not getting purged, the log file disk will start to fill up and appear to have more logs than usual. In this example, the cause wouldn’t necessarily be rapid log growth, but an indicator that the backups which are responsible for purging the logs are failing and must be resolved. Another example is with the database, where retention settings have been modified or online maintenance has not been completing, therefore, the database will begin to grow on disk and eat up free space. These scenarios and a few others are also discussed in the “Proactive monitoring and mitigation efforts” section of the previously published blog.

    It should be noted that in some cases, you may run into a scenario where the database size is expanding rapidly, but you do not experience log growth at a rapid rate. (As with new content creation in rapid log growth, we would expect the database to grow at a rapid rate with the transaction logs.) This is often referred to as database “bloat” or database “space leak”. The steps to troubleshoot this specific issue can be a little more invasive as you can see in some analysis steps listed here (taking databases offline, various kinds of dumps, etc.), and it may be better to utilize support for assistance if a reason for the growth cannot be found.

    Once you have established that the rate of growth for the database and transaction log files is abnormal, we would begin troubleshooting the issue by doing the following steps. Note that in some cases the steps can be done out of order, but the below provides general suggested guidance based on our experiences in support.

    Step 1

    Use Exchange User Monitor (Exmon) server side to determine if a specific user is causing the log growth problems.

1. Sort on CPU (%) and look at the top 5 users that are consuming the most CPU inside the Store process. Check the Log Bytes column to verify the log growth for a potential user.
2. If that does not show a possible user, sort on the Log Bytes column to look for any possible users that could be contributing to the log growth.

If it appears that the user in Exmon is a ?, then this is representative of a HUB/Transport-related problem generating the logs. Query the message tracking logs using the Message Tracking Log tool in the Exchange Management Console's Toolbox to check for any large messages that might be running through the system. See Step 14 for a PowerShell script to accomplish the same task.

    Step 2

    With Exchange 2007 Service Pack 2 Rollup Update 2 and higher, you can use KB972705 to troubleshoot abnormal database or log growth by adding the described registry values. The registry values will monitor RPC activity and log an event if the thresholds are exceeded, with details about the event and the user that caused it. (These registry values are not currently available in Exchange Server 2010)

Check for any excessive ExCDO warning events related to appointments in the application log on the server (examples are 8230 or 8264 events; a quick Shell sketch for finding them follows the event examples below). If recurring meeting events are found, then try to regenerate calendar data server side via a process called POOF. See http://blogs.msdn.com/stephen_griffin/archive/2007/02/21/poof-your-calender-really.aspx for more information on what this is.

    Event Type: Warning
    Event Source: EXCDO
    Event Category: General
    Event ID: 8230
    Description: An inconsistency was detected in username@domain.com: /Calendar/<calendar item> .EML. The calendar is being repaired. If other errors occur with this calendar, please view the calendar using Microsoft Outlook Web Access. If a problem persists, please recreate the calendar or the containing mailbox.

    Event Type: Warning
    Event ID : 8264
    Category : General
    Source : EXCDO
    Type : Warning
    Message : The recurring appointment expansion in mailbox <someone's address> has taken too long. The free/busy information for this calendar may be inaccurate. This may be the result of many very old recurring appointments. To correct this, please remove them or change their start date to a more recent date.

    Important: If 8230 events are consistently seen on an Exchange server, have the user delete/recreate that appointment to remove any corruption
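
A quick way to hunt for these events from the Shell, rather than scrolling the Application log, is a sketch like this, assuming you want to look back over the last seven days:

# Find EXCDO 8230/8264 warnings logged during the last 7 days
Get-EventLog -LogName Application -Source EXCDO -After (Get-Date).AddDays(-7) |
    Where-Object { $_.EventID -eq 8230 -or $_.EventID -eq 8264 } |
    Select-Object TimeGenerated, EventID, Message |
    Format-List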

    Step 3

Collect and parse the IIS log files from the CAS servers used by the affected Mailbox Server. You can use Log Parser Studio to easily parse IIS log files. Here, you can look for repeated user account sync attempts and suspicious activity. For example, a user with an abnormally high number of sync attempts and errors would be a red flag. If a user is found and suspected to be a cause for the growth, you can follow the suggestions given in steps 5 and 6.

    Once Log Parser Studio is launched, you will see convenient tabs to search per protocol:

Screenshot: Log Parser Studio tabs for searching per protocol

    Some example queries for this issue would be:

Screenshot: Example Log Parser Studio queries for this issue
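
If you just need a quick look without Log Parser Studio, a rough PowerShell sketch like the one below can count ActiveSync hits per user in a single IIS log. The log path and the position of the cs-username column are assumptions based on the default W3C field layout, so check the #Fields: header in your own logs and adjust the index accordingly.

# Top 10 users by number of Exchange ActiveSync requests in one IIS log file
$log = 'C:\inetpub\logs\LogFiles\W3SVC1\u_ex130319.log'   # placeholder path

Get-Content $log |
    Where-Object { $_ -notmatch '^#' -and $_ -match 'Microsoft-Server-ActiveSync' } |
    ForEach-Object { ($_ -split ' ')[7] } |    # cs-username in the default field layout
    Group-Object |
    Sort-Object Count -Descending |
    Select-Object -First 10 Count, Name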

    Step 4

If a suspected user is found via Exmon, the event logs, KB972705, or parsing the IIS log files, then do one of the following (a brief Shell sketch of the recommended option follows this list):

• Disable MAPI access to the user's mailbox using the following steps (Recommended):
      • Run

    Set-CASMailbox -Identity <Username> -MAPIEnabled $False

  • Move the mailbox to another Mailbox Store. Note: This is necessary to disconnect the user from the store due to the Store Mailbox and DSAccess caches. Otherwise you could potentially be waiting for over 2 hours and 15 minutes for this setting to take effect. Moving the mailbox effectively kills the user's MAPI session to the server, and after the move, the user's access to the store via a MAPI-enabled client will be disabled.
• Disable the user's AD account temporarily
    • Kill their TCP connection with TCPView
• Call the user to have them close Outlook or turn off their mobile device for immediate relief.
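
As an example, a minimal sketch of the recommended option might look like this (the user name and target database are placeholders):

# Block MAPI access for the suspect mailbox, then move it so the change takes effect immediately
Set-CASMailbox -Identity "suspectuser" -MAPIEnabled $false
New-MoveRequest -Identity "suspectuser" -TargetDatabase "DB02"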

    Step 5

    If closing the client/devices, or killing their sessions seems to stop the log growth issue, then we need to do the following to see if this is OST or Outlook profile related:

Have the user launch Outlook while holding down the Ctrl key, which will prompt to ask whether you would like to run Outlook in safe mode. If launching Outlook in safe mode resolves the log growth issue, then concentrate on what add-ins could be contributing to this problem.

    For a mobile device, consider a full resync or a new sync profile. Also check for any messages in the drafts folder or outbox on the device. A corrupted meeting or calendar entry is commonly found to be causing the issue with the device as well.

    If you can gain access to the users machine, then do one of the following:

    1. Launch Outlook to confirm the log file growth issue on the server.

    2. If log growth is confirmed, do one of the following:

• Check the user's Outbox for any messages.
  • If the user is running in Cached mode, set the Outlook client to Work Offline. Doing this helps stop the message in the Outbox from being sent and sometimes causes the message to NDR.
      • If user is running in Online Mode, then try moving the message to another folder to prevent Outlook or the HUB server from processing the message.
      • After each one of the steps above, check the Exchange server to see if log growth has ceased
    • Call Microsoft Product Support to enable debug logging of the Outlook client to determine possible root cause.

    3. Follow the Running Process Explorer instructions in the below article to dump out dlls that are running within the Outlook Process. Name the file username.txt. This helps check for any 3rd party Outlook Add-ins that may be causing the excessive log growth.
    970920  Using Process Explorer to List dlls Running Under the Outlook.exe Process
    http://support.microsoft.com/kb/970920

    4. Check the Sync Issues folder for any errors that might be occurring

    Let’s attempt to narrow this down further to see if the problem is truly in the OST or something possibly Outlook Profile related:

• Run ScanPST against the user's OST file to check for possible corruption.
• With the Outlook client shut down, rename the user's OST file to something else and then launch Outlook to recreate a new OST file. If the problem does not occur, we know the problem is within the OST itself.

If renaming the OST causes the problem to recur, then recreate the user's profile to see if this might be profile related.

    Step 6

    Ask Questions:

    • Is the user using any type of devices besides a mobile device?
    • Question the end user if at all possible to understand what they might have been doing at the time the problem started occurring. It’s possible that a user imported a lot of data from a PST file which could cause log growth server side or there was some other erratic behavior that they were seeing based on a user action.

    Step 7

    Check to ensure File Level Antivirus exclusions are set correctly for both files and processes per http://technet.microsoft.com/en-us/library/bb332342(v=exchg.141).aspx

    Step 8

    If Exmon and the above methods do not provide the data that is necessary to get root cause, then collect a portion of Store transaction log files (100 would be a good start) during the problem period and parse them following the directions in http://blogs.msdn.com/scottos/archive/2007/11/07/remix-using-powershell-to-parse-ese-transaction-logs.aspx to look for possible patterns such as high pattern counts for IPM.Appointment. This will give you a high level overview if something is looping or a high rate of messages being sent. Note: This tool may or may not provide any benefit depending on the data that is stored in the log files, but sometimes will show data that is MIME encoded that will help with your investigation

    Step 9

    If nothing is found by parsing the transaction log files, we can check for a rogue, corrupted, and large message in transit:

    1. Check current queues against all HUB Transport Servers for stuck or queued messages:

get-exchangeserver | where {$_.IsHubTransportServer -eq "true"} | Get-Queue | where {$_.DeliveryType -eq "MapiDelivery"} | Select-Object Identity, NextHopDomain, Status, MessageCount | export-csv HubQueues.csv

    Review queues for any that are in retry or have a lot of messages queued:

    Export out message sizes in MB in all Hub Transport queues to see if any large messages are being sent through the queues:

get-exchangeserver | where {$_.ishubtransportserver -eq "true"} | get-message -resultsize unlimited | Select-Object Identity,Subject,status,LastError,RetryCount,queue,@{Name="Message Size MB";expression={$_.size.toMB()}} | sort-object -property size -descending | export-csv HubMessages.csv

    Export out message sizes in Bytes in all Hub Transport queues:

get-exchangeserver | where {$_.ishubtransportserver -eq "true"} | get-message -resultsize unlimited | Select-Object Identity,Subject,status,LastError,RetryCount,queue,size | sort-object -property size -descending | export-csv HubMessages.csv

    2. Check Users Outbox for any large, looping, or stranded messages that might be affecting overall Log Growth.

get-mailbox -ResultSize Unlimited | Get-MailboxFolderStatistics -folderscope Outbox | Sort-Object Foldersize -Descending | select-object identity,name,foldertype,itemsinfolder,@{Name="FolderSize MB";expression={$_.folderSize.toMB()}} | export-csv OutboxItems.csv

    Note: This does not get information for users that are running in cached mode.

    Step 10

Utilize the MSExchangeIS Client\Jet Log Record Bytes/sec and MSExchangeIS Client\RPC Operations/sec Perfmon counters to see if there is a particular client protocol that may be generating excessive logs. If a particular protocol is found to be higher than the other protocols for a sustained period of time, consider shutting down the service hosting that protocol. For example, if Outlook Web Access is the protocol generating the log growth, stop the World Wide Web Publishing Service (W3SVC) to confirm that log growth stops. If log growth stops, then collecting IIS logs from the CAS/MBX Exchange servers involved will help provide insight into what action the user was performing that was causing this to occur.
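
To watch these counters from the Shell rather than the Performance Monitor UI, a rough sketch like the following can help. The exact counter set name can vary between builds, so adjust the wildcard to match what Perfmon shows on your server.

# Sample the MSExchangeIS Client counter set every 5 seconds for one minute
# and surface the busiest client types for the two counters called out above
$set = Get-Counter -ListSet "MSExchangeIS Client*" | Select-Object -First 1

Get-Counter -Counter $set.Paths -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object {
        $_.CounterSamples |
            Where-Object { $_.Path -match 'jet log record bytes/sec|rpc operations/sec' } |
            Sort-Object CookedValue -Descending |
            Select-Object -First 5 Path, CookedValue
    }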

    Step 11

    Run the following command from the Management shell to export out current user operation rates:

    To export to CSV File:

    get-logonstatistics |select-object username,Windows2000account,identity,messagingoperationcount,otheroperationcount,
    progressoperationcount,streamoperationcount,tableoperationcount,totaloperationcount | where {$_.totaloperationcount -gt 1000} | sort-object totaloperationcount -descending| export-csv LogonStats.csv

    To view realtime data:

    get-logonstatistics |select-object username,Windows2000account,identity,messagingoperationcount,otheroperationcount,
    progressoperationcount,streamoperationcount,tableoperationcount,totaloperationcount | where {$_.totaloperationcount -gt 1000} | sort-object totaloperationcount -descending| ft

    Key things to look for:

    In the below example, the Administrator account was storming the testuser account with email.
You will notice that there are two active users here: one is the Administrator submitting all of the messages, and the other entry's Windows2000Account references a HUB server with an Identity of testuser. The HUB server entry also has *no* UserName, which is a giveaway right there. This can give you a better understanding of which parties are involved in these high rates of operations.

    UserName : Administrator
    Windows2000Account : DOMAIN\Administrator
    Identity : /o=First Organization/ou=First Administrative Group/cn=Recipients/cn=Administrator
    MessagingOperationCount : 1724
    OtherOperationCount : 384
    ProgressOperationCount : 0
    StreamOperationCount : 0
    TableOperationCount : 576
    TotalOperationCount : 2684

    UserName :
    Windows2000Account : DOMAIN\E12-HUB$
    Identity : /o= First Organization/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=testuser
    MessagingOperationCount : 630
    OtherOperationCount : 361
    ProgressOperationCount : 0
    StreamOperationCount : 0
    TableOperationCount : 0
    TotalOperationCount : 1091

    Step 12

    Enable Perfmon/Perfwiz logging on the server. Collect data through the problem times and then review for any irregular activities. You can reference Perfwiz for Exchange 2007/2010 data collection here http://blogs.technet.com/b/mikelag/archive/2010/07/09/exchange-2007-2010-performance-data-collection-script.aspx

    Step 13

Run ExTRA (Exchange Troubleshooting Assistant) via the Toolbox in the Exchange Management Console to look for any possible functions (via FCL logging) that may be consuming excessive time within the store process. This needs to be launched during the problem period. http://blogs.technet.com/mikelag/archive/2008/08/21/using-extra-to-find-long-running-transactions-inside-store.aspx shows how to use FCL logging only, but it would be best to include Perfmon, Exmon, and FCL logging via this tool to capture the most data. The steps shown are valid for Exchange 2007 and Exchange 2010.

    Step 14

    Export out Message tracking log data from affected MBX server.

    Method 1

Download the ExLogGrowthCollector script and place it on the MBX server that experienced the issue. Run ExLogGrowthCollector.ps1 from the Exchange Management Shell. Enter the MBX server name that you would like to trace and the start and end times, and then click the Collect Logs button.

Screenshot: The ExLogGrowthCollector interface

Note: This script exports all mail traffic to/from the specified mailbox server across all HUB servers between the times specified. This helps provide insight into any large or looping messages that might have been sent and could have caused the log growth issue.

    Method 2

Copy/paste the following data into Notepad, save it as msgtrackexport.ps1, and then run it on the affected Mailbox Server. Open the output in Excel for review. This is similar to the GUI version, but requires manual editing to get it to work.

    #Export Tracking Log data from affected server specifying Start/End Times
    Write-host "Script to export out Mailbox Tracking Log Information"
    Write-Host "#####################################################"
    Write-Host
    $server = Read-Host "Enter Mailbox server Name"
    $start = Read-host "Enter start date and time in the format of MM/DD/YYYY hh:mmAM"
$end = Read-host "Enter end date and time in the format of MM/DD/YYYY hh:mmPM"
    $fqdn = $(get-exchangeserver $server).fqdn
    Write-Host "Writing data out to csv file..... "
    Get-ExchangeServer | where {$_.IsHubTransportServer -eq "True" -or $_.name -eq "$server"} | Get-MessageTrackingLog -ResultSize Unlimited -Start $start -End $end  | where {$_.ServerHostname -eq $server -or $_.clienthostname -eq $server -or $_.clienthostname -eq $fqdn} | sort-object totalbytes -Descending | export-csv MsgTrack.csv -NoType
    Write-Host "Completed!! You can now open the MsgTrack.csv file in Excel for review"

    Method 3

    You can also use the Process Tracking Log Tool at http://blogs.technet.com/b/exchange/archive/2011/10/21/updated-process-tracking-log-ptl-tool-for-use-with-exchange-2007-and-exchange-2010.aspx to provide some very useful reports.

    Step 15

Save off a copy of the application/system logs from the affected server and review them for any events that could contribute to this problem.

    Step 16

    Enable IIS extended logging for CAS and MB server roles to add the sc-bytes and cs-bytes fields to track large messages being sent via IIS protocols and to also track usage patterns (Additional Details).
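
As a sketch of how you might script that field change on IIS 7 and later (using the WebAdministration module and setting the site defaults), something like the following could work. Note that this replaces the existing field list, so include every field you want to keep, and verify the result in IIS Manager.

# Add sc-bytes (BytesSent) and cs-bytes (BytesRecv) to the W3C log field list
Import-Module WebAdministration

Set-WebConfigurationProperty -Filter 'system.applicationHost/sites/siteDefaults/logFile' -Name logExtFileFlags -Value 'Date,Time,ClientIP,UserName,ServerIP,Method,UriStem,UriQuery,HttpStatus,Win32Status,BytesSent,BytesRecv,TimeTaken,ServerPort,UserAgent,HttpSubStatus'

# Confirm the change
Get-WebConfigurationProperty -Filter 'system.applicationHost/sites/siteDefaults/logFile' -Name logExtFileFlags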

    Step 17

Get a process dump of the store process during the time of the log growth. (Use this as a last measure once all prior activities have been exhausted and prior to calling Microsoft for assistance. These issues are sometimes intermittent, and the quicker you can obtain any data from the server, the better, as this will help provide Microsoft with information on what the underlying cause might be.)

    • Download the latest version of Procdump from http://technet.microsoft.com/en-us/sysinternals/dd996900.aspx and extract it to a directory on the Exchange server
• Open the command prompt and change into the directory to which procdump was extracted in the previous step.
    • Type

      procdump -mp -s 120 -n 2 store.exe d:\DebugData

  This will dump the data to D:\DebugData. Change this to whatever directory has enough space to dump the entire store.exe process twice. Check Task Manager for the store.exe process and how much memory it is currently consuming for a rough estimate of the amount of space that is needed to dump the entire store process twice.
  Important: If procdump is being run against a store that is on a clustered server, then you need to make sure that you set the Exchange Information Store resource to not affect the group. If the entire store dump cannot be written out in 300 seconds, the cluster service will kill the store service, ruining any chances of collecting the appropriate data on the server.

    Open a case with Microsoft Product Support Services to get this data looked at.

    Most current related KB articles

    2814847 - Rapid growth in transaction logs, CPU use, and memory consumption in Exchange Server 2010 when a user syncs a mailbox by using an iOS 6.1 or 6.1.1-based device

    2621266 - An Exchange Server 2010 database store grows unexpectedly large

    996191 - Troubleshooting Fast Growing Transaction Logs on Microsoft Exchange 2000 Server and Exchange Server 2003

    Kevin Carker
    (based on a blog post written by Mike Lagase)


    Exchange 2010 Database Availability Groups and Disk Sector Sizes

    $
    0
    0

    These days, some customers are deploying Exchange databases and log files on advanced format (4K) drives.  Although these drives support a physical sector size of 4096, many vendors are emulating 512 byte sectors in order to maintain backwards compatibility with application and operating systems.  This is known as 512 byte emulation (512e).  Windows 2008 and Windows 2008 R2 support native 512 byte and 512 byte emulated advanced format drives.  Windows 2012 supports drives of all sector sizes.  The sector size presented to applications and the operating system, and how applications respond, directly affects data integrity and performance.

    For more information on sector sizes see the following links:

    When deploying an Exchange 2010 Database Availability Group (DAG), the sector sizes of the volumes hosting the databases and log files must be the same across all nodes within the DAG.  This requirement is outlined in Understanding Storage Configuration.

    Support requires that all copies of a database reside on the same physical disk type. For example, it is not a supported configuration to host one copy of a given database on a 512-byte sector disk and another copy of that same database on a 512e disk. Also be aware that 4-kilobyte (KB) sector disks are not supported for any version of Microsoft Exchange and 512e disks are not supported for any version of Exchange prior to Exchange Server 2010 SP1.

    Recently, we have noted that some customers have experienced issues with log file replication and replay as the result of sector size mismatch.  These issues occur when:

    • Storage drivers are upgraded resulting in the recognized sector size changing.
    • Storage firmware is upgraded resulting in the recognized sector size changing.
    • New storage is presented or existing storage is replaced with drives of a different sector size.

    This mismatch can cause one or more database copies in a DAG to fail, as illustrated below. In my example environment, I have a three-member DAG with a single database that resides on a volume labeled Z that is replicated between each member.

    [PS] C:\>Get-MailboxDatabaseCopyStatus *

Name              Status   CopyQueueLength  ReplayQueueLength  LastInspectedLogTime   ContentIndexState
----              ------   ---------------  -----------------  --------------------   -----------------
SectorTest\MBX-1  Mounted  0                0                                         Healthy
SectorTest\MBX-2  Healthy  0                1                  3/19/2013 10:27:50 AM  Healthy
SectorTest\MBX-3  Healthy  0                1                  3/19/2013 10:27:50 AM  Healthy

If I use FSUTIL to query the Z volume on each DAG member, we can see that the volume currently has 512 logical bytes per sector and 512 physical bytes per sector. Thus, the volume is currently seen by the operating system as having a native 512 byte sector size.

    On MBX-1:

    C:\>fsutil fsinfo ntfsinfo z:

    NTFS Volume Serial Number :       0x18d0bc1dd0bbfed6
    Version :                         3.1
    Number Sectors :                  0x000000000fdfe7ff
    Total Clusters :                  0x0000000001fbfcff
    Free Clusters  :                  0x0000000001fb842c
    Total Reserved :                  0x0000000000000000
    Bytes Per Sector  :               512
    Bytes Per Physical Sector :       512

    Bytes Per Cluster :               4096
    Bytes Per FileRecord Segment    : 1024
    Clusters Per FileRecord Segment : 0
    Mft Valid Data Length :           0x0000000000040000
    Mft Start Lcn  :                  0x00000000000c0000
    Mft2 Start Lcn :                  0x0000000000000002
    Mft Zone Start :                  0x00000000000c0040
    Mft Zone End   :                  0x00000000000cc840
    RM Identifier:        EF486117-9094-11E2-BF55-00155D006BA1

    On MBX-3:

    C:\>fsutil fsinfo ntfsinfo z:

    NTFS Volume Serial Number :       0x0ad44aafd44a9d37
    Version :                         3.1
    Number Sectors :                  0x000000000fdfe7ff
    Total Clusters :                  0x0000000001fbfcff
    Free Clusters  :                  0x0000000001fad281
    Total Reserved :                  0x0000000000000000
    Bytes Per Sector  :               512
    Bytes Per Physical Sector :       512

    Bytes Per Cluster :               4096
    Bytes Per FileRecord Segment    : 1024
    Clusters Per FileRecord Segment : 0
    Mft Valid Data Length :           0x0000000000040000
    Mft Start Lcn  :                  0x00000000000c0000
    Mft2 Start Lcn :                  0x0000000000000002
    Mft Zone Start :                  0x00000000000c0000
    Mft Zone End   :                  0x00000000000cc820
    RM Identifier:        B9B00E32-90B2-11E2-94E9-00155D006BA3
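
To spot a mismatch like this quickly, you can compare the reported sector sizes across every DAG member from one Shell session. This is only a sketch: the DAG name and the Z: volume are placeholders, and it assumes PowerShell remoting is enabled on the members.

# Report the logical and physical sector sizes of the Z: volume on each DAG member
$members = (Get-DatabaseAvailabilityGroup "SectorTest-DAG").Servers

foreach ($member in $members) {
    Write-Output "=== $($member.Name) ==="
    Invoke-Command -ComputerName $member.Name -ScriptBlock {
        fsutil fsinfo ntfsinfo z: | Select-String 'Bytes Per (Physical )?Sector'
    }
}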

    Effects of storage changes

But what happens if there is a change in the way storage is seen on MBX-3, so that the volume now reflects a 512e sector size? This can happen when upgrading storage drivers, upgrading firmware, or presenting new storage that implements advanced format storage.

    C:\>fsutil fsinfo ntfsinfo z:

    NTFS Volume Serial Number :       0x0ad44aafd44a9d37
    Version :                         3.1
    Number Sectors :                  0x000000000fdfe7ff
    Total Clusters :                  0x0000000001fbfcff
    Free Clusters  :                  0x0000000001fad2e7
    Total Reserved :                  0x0000000000000000
    Bytes Per Sector  :               512
    Bytes Per Physical Sector :       4096

    Bytes Per Cluster :               4096
    Bytes Per FileRecord Segment    : 1024
    Clusters Per FileRecord Segment : 0
    Mft Valid Data Length :           0x0000000000040000
    Mft Start Lcn  :                  0x00000000000c0000
    Mft2 Start Lcn :                  0x0000000000000002
    Mft Zone Start :                  0x00000000000c0040
    Mft Zone End   :                  0x00000000000cc840
    RM Identifier:        B9B00E32-90B2-11E2-94E9-00155D006BA3

    When reviewing the database copy status, notice that the copy assigned to MBX-3 has failed.

    [PS] C:\>Get-MailboxDatabaseCopyStatus *

Name              Status   CopyQueueLength  ReplayQueueLength  LastInspectedLogTime   ContentIndexState
----              ------   ---------------  -----------------  --------------------   -----------------
SectorTest\MBX-1  Mounted  0                0                                         Healthy
SectorTest\MBX-2  Healthy  0                0                  3/19/2013 11:13:05 AM  Healthy
SectorTest\MBX-3  Failed   0                8                  3/19/2013 11:13:05 AM  Healthy

    The full details of the copy status of MBX-3 can be reviewed to display the detailed error:

    [PS] C:\>Get-MailboxDatabaseCopyStatus SectorTest\MBX-3 | fl

    RunspaceId                       : 5f4bb58b-39fb-4e3e-b001-f8445890f80a
    Identity                         : SectorTest\MBX-3
    Name                             : SectorTest\MBX-3
    DatabaseName                     : SectorTest
    Status                           : Failed
    MailboxServer                    : MBX-3
    ActiveDatabaseCopy               : mbx-1
    ActivationSuspended              : False
    ActionInitiator                  : Service
    ErrorMessage                     : The log copier was unable to continue processing for database 'SectorTest\MBX-3' because an error occurred on the target server: Continuous replication - block mode has been terminated. Error: the log file sector size does not match the current volume's sector size (-546) [HResult: 0x80131500]. The copier will automatically retry after a short delay.
    ErrorEventId                     : 2152
    ExtendedErrorInfo                :
    SuspendComment                   :
    SinglePageRestore                : 0
    ContentIndexState                : Healthy
    ContentIndexErrorMessage         :
    CopyQueueLength                  : 0
    ReplayQueueLength                : 7
    LatestAvailableLogTime           : 3/19/2013 11:13:05 AM
    LastCopyNotificationedLogTime    : 3/19/2013 11:13:05 AM
    LastCopiedLogTime                : 3/19/2013 11:13:05 AM
    LastInspectedLogTime             : 3/19/2013 11:13:05 AM
    LastReplayedLogTime              : 3/19/2013 10:24:24 AM
    LastLogGenerated                 : 53
    LastLogCopyNotified              : 53
    LastLogCopied                    : 53
    LastLogInspected                 : 53
    LastLogReplayed                  : 46
    LogsReplayedSinceInstanceStart   : 0
    LogsCopiedSinceInstanceStart     : 0
    LatestFullBackupTime             :
    LatestIncrementalBackupTime      :
    LatestDifferentialBackupTime     :
    LatestCopyBackupTime             :
    SnapshotBackup                   :
    SnapshotLatestFullBackup         :
    SnapshotLatestIncrementalBackup  :
    SnapshotLatestDifferentialBackup :
    SnapshotLatestCopyBackup         :
    LogReplayQueueIncreasing         : False
    LogCopyQueueIncreasing           : False
    OutstandingDumpsterRequests      : {}
    OutgoingConnections              :
    IncomingLogCopyingNetwork        :
    SeedingNetwork                   :
    ActiveCopy                       : False

    Using the Exchange Server Error Code Look-up tool (ERR.EXE), we can verify the definition of the error code –546.

    D:\Utilities\ERR>err -546

    # for decimal -546 / hex 0xfffffdde
      JET_errLogSectorSizeMismatch                                   esent98.h
    # /* the log file sector size does not match the current
    # volume's sector size */
    # 1 matches found for "-546"

    In addition, the Application event log may contain the following entries:

    Log Name:      Application
    Source:        MSExchangeRepl
    Date:          3/19/2013 11:14:58 AM
    Event ID:      2152
    Task Category: Service
    Level:         Error
    User:          N/A
    Computer:      MBX-3.exchange.msft
    Description:
The log copier was unable to continue processing for database 'SectorTest\MBX-3' because an error occurred on the target server: Continuous replication - block mode has been terminated. Error: the log file sector size does not match the current volume's sector size (-546) [HResult: 0x80131500]. The copier will automatically retry after a short delay.

    The cause

    Why does this issue occur?
Each log file records in its header the sector size of the disk where the log file was created.  For example, this is the header of a log file on MBX-1 with a native 512 byte sector size:

    Z:\SectorTest>eseutil /ml E0100000001.log

    Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
    Version 14.02
    Copyright (C) Microsoft Corporation. All Rights Reserved.
    Initiating FILE DUMP mode... 
          Base name: E01
          Log file: E0100000001.log
          lGeneration: 1 (0x1)
          Checkpoint: (0x38,FFFF,FFFF)
          creation time: 03/19/2013 09:40:14
          prev gen time: 00/00/1900 00:00:00
          Format LGVersion: (7.3704.16.2)
          Engine LGVersion: (7.3704.16.2)
          Signature: Create time:03/19/2013 09:40:14 Rand:11019164 Computer:
          Env SystemPath: z:\SectorTest\
          Env LogFilePath: z:\SectorTest\
         Env Log Sec size: 512 (matches)
          Env (CircLog,Session,Opentbl,VerPage,Cursors,LogBufs,LogFile,Buffers)
              (    off,   1227,  61350,  16384,  61350,   2048,   2048,  44204)
          Using Reserved Log File: false
          Circular Logging Flag (current file): off
          Circular Logging Flag (past files): off
          Checkpoint at log creation time: (0x1,8,0) 
          Last Lgpos: (0x1,A,0)
    Number of database page references:  0
    Integrity check passed for log file: E0100000001.log
    Operation completed successfully in 0.62 seconds.

    The sector size that is chosen is determined through one of two methods:

    • If the log stream is brand new, read the sector size from disk and utilize this sector size.
    • If the log stream already exists, use the sector size of the given log stream.

    In theory, since the sector size of disks should not be changing across nodes and the sector size of all disks must match, this should not cause a problem.  In our example, and in some customer environments, these sector sizes are actually changing.  Since most of these databases already exist, the existing sector size of the log stream is utilized, which in turn causes a mismatch between DAG members.

    When a mismatch occurs, the issue only prevents the successful use of block mode replication.  It does not affect file mode replication.  Block mode replication was introduced in Exchange 2010 Service Pack 1.  For more information on block mode replication, see New High Availability and Site Resilience Functionality in Exchange 2010 SP1.

    Why does this only affect block mode replication?
    When a log file is addressed we reference locations within a log file based off a log file position.  The log file position is a combination of the log generation, the sector, and offset within that sector.  For example, in the previous header dump you can see the “Last LGPOS” is (0x1,A,0) – this just happens to be the last log file position within the log.  Let us say we were creating a block for block mode replication within a log file generation 0x1A, sector 8, offset 1 – this would be reflected as an LGPOS of (0x1a,8,1).  When this block is transmitted to a host with an advanced sector size disk, the log position would actually have to be translated.  On an advanced format disk this same log position would be (0x1a,1,1).  As you can see, it could create significant problems if incorrect positions within a log file were written to or read from.

    The resolution

    How do I go about correcting this condition?
    To fix this condition, first ensure that the same sector sizes exist on all disks across all nodes that host Exchange data, and then reset the log stream.

    The following steps can show you how to do this with minimal downtime.

    1. Ensure that Exchange 2010 Service Pack 2 or later is installed on all DAG members.

  Note: Exchange 2010 Service Pack 1 and earlier do not support 512e volumes.

2. Disable block mode replication on all hosts.  This step requires restarting the replication service on each node.  This will temporarily cause all copies to fail on passive nodes when the service is restarted on the active node.  When the service is restarted on a passive node, only passive copies on that node will enter a failed state.  Databases that are mounted and client connections are not impacted by this activity.  Block mode replication should remain disabled until all steps have been completed on all DAG members.
      1. Launch Registry Editor.
      2. Navigate to HKLM\Software\Microsoft\ExchangeServer\V14\Replay\Parameters
      3. Right-click the Parameters key and select New > DWORD (32-bit) Value
      4. Name the DWORD DisableGranularReplication
      5. Set its value to 1
    3. Restart the Microsoft Exchange Replication service on each member using the Shell: Restart-Service MSExchangeRepl

    4. Validate that all copies of databases across DAG members are healthy at this time:

      [PS] C:\>Get-MailboxDatabaseCopyStatus *

      Name              Status     CopyQueueLength ReplayQueueLength LastInspectedLogTime  ContentIndexState
      ----              ------     --------------- ----------------- --------------------  -----------------
      SectorTest\MBX-1  Mounted    0               0                                       Healthy
      SectorTest\MBX-2  Healthy    0               0                 3/19/2013 12:28:34 PM Healthy
      SectorTest\MBX-3  Healthy    0               0                 3/19/2013 12:28:34 PM Healthy

    5. Apply the appropriate hotfix for Windows Server 2008 or Windows Server 2008 R2 and Advanced Format Disks.  Windows Server 2012 does not require a hotfix.

      • Windows 2008 R2: KB 982018 (An update that improves the compatibility of Windows 7 and Windows Server 2008 R2 with Advanced Format Disks is available)
      • Windows 2008: KB 2553708 (A hotfix rollup that improves Windows Vista and Windows Server 2008 compatibility with Advanced Format disks)
    6. Repeat the procedure that caused the disk sector size to change.  For example, if the issue arose as a result of upgrading drivers and firmware on a host, use your maintenance mode procedures to complete the driver and firmware upgrade on all hosts.

      Note: If your installation does not allow for you to use the same sector sizes across all DAG members, then the implementation is not supported.

    7. Utilize FSUTIL to ensure that the sector sizes match across all hosts for the log and database volumes. 

      On MBX-1:

      C:\>fsutil fsinfo ntfsinfo z:

      NTFS Volume Serial Number :       0x18d0bc1dd0bbfed6
      Version :                         3.1
      Number Sectors :                  0x000000000fdfe7ff
      Total Clusters :                  0x0000000001fbfcff
      Free Clusters  :                  0x0000000001fac6e6
      Total Reserved :                  0x0000000000000000
      Bytes Per Sector  :               512
      Bytes Per Physical Sector :       4096

      Bytes Per Cluster :               4096
      Bytes Per FileRecord Segment    : 1024
      Clusters Per FileRecord Segment : 0
      Mft Valid Data Length :           0x0000000000040000
      Mft Start Lcn  :                  0x00000000000c0000
      Mft2 Start Lcn :                  0x0000000000000002
      Mft Zone Start :                  0x00000000000c0040
      Mft Zone End   :                  0x00000000000cc840
      RM Identifier:        EF486117-9094-11E2-BF55-00155D006BA1

      On MBX-2

      C:\>fsutil fsinfo ntfsinfo z:

      NTFS Volume Serial Number :       0xfa6a794c6a790723
      Version :                         3.1
      Number Sectors :                  0x000000000fdfe7ff
      Total Clusters :                  0x0000000001fbfcff
      Free Clusters  :                  0x0000000001fac86f
      Total Reserved :                  0x0000000000000000
      Bytes Per Sector  :               512
      Bytes Per Physical Sector :       4096

      Bytes Per Cluster :               4096
      Bytes Per FileRecord Segment    : 1024
      Clusters Per FileRecord Segment : 0
      Mft Valid Data Length :           0x0000000000040000
      Mft Start Lcn  :                  0x00000000000c0000
      Mft2 Start Lcn :                  0x0000000000000002
      Mft Zone Start :                  0x00000000000c0040
      Mft Zone End   :                  0x00000000000cc840
      RM Identifier:        5F18A2FC-909E-11E2-8599-00155D006BA2

      On MBX-3

      C:\>fsutil fsinfo ntfsinfo z:

      NTFS Volume Serial Number :       0x0ad44aafd44a9d37
      Version :                         3.1
      Number Sectors :                  0x000000000fdfe7ff
      Total Clusters :                  0x0000000001fbfcff
      Free Clusters  :                  0x0000000001fabfd6
      Total Reserved :                  0x0000000000000000
      Bytes Per Sector  :               512
      Bytes Per Physical Sector :       4096

      Bytes Per Cluster :               4096
      Bytes Per FileRecord Segment    : 1024
      Clusters Per FileRecord Segment : 0
      Mft Valid Data Length :           0x0000000000040000
      Mft Start Lcn  :                  0x00000000000c0000
      Mft2 Start Lcn :                  0x0000000000000002
      Mft Zone Start :                  0x00000000000c0040
      Mft Zone End   :                  0x00000000000cc840
      RM Identifier:        B9B00E32-90B2-11E2-94E9-00155D006BA3
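
    If you prefer to script the repetitive parts of the procedure above, the following PowerShell sketch shows one way to apply steps 2 through 4, plus the sector size check from step 7, from a single machine. It is an illustrative sketch only: it assumes PowerShell remoting to the DAG members, uses the example server names MBX-1 through MBX-3 and the example volume Z:, and should be validated in a lab before use.

      # Illustrative sketch only -- adapt server names, volume, and error handling.
      $dagMembers = "MBX-1","MBX-2","MBX-3"

      foreach ($server in $dagMembers) {
          Invoke-Command -ComputerName $server -ScriptBlock {
              # Step 2: disable block mode replication
              $key = "HKLM:\Software\Microsoft\ExchangeServer\V14\Replay\Parameters"
              New-ItemProperty -Path $key -Name DisableGranularReplication -PropertyType DWord -Value 1 -Force | Out-Null

              # Step 3: restart the Microsoft Exchange Replication service
              Restart-Service MSExchangeRepl

              # Step 7: report logical and physical sector sizes for the Exchange volume
              fsutil fsinfo ntfsinfo z: | Select-String "Bytes Per"
          }
      }

      # Step 4: confirm copy health from an Exchange Management Shell session
      Get-MailboxDatabaseCopyStatus *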

    At this point, the DAG should be stable, and replication should be occurring as expected between databases using file mode. In order to restore block mode replication and fully recognize the new disk sector sizes, the log stream must be reset.

    IMPORTANT: Please note the following about resetting the log stream:

    • The log stream must be fully reset on all database copies.
    • All lagged database copies must be replayed to current log.
    • If backups are utilized as a recovery method, this will introduce a gap in the log file sequence, preventing a full roll-forward recovery from the last backup point.

    You can use the following steps to reset the log stream:

    1. Validate the existence of a replay queue:

      [PS] C:\>Get-MailboxDatabaseCopyStatus *

      Name              Status     CopyQueueLength ReplayQueueLength LastInspectedLogTime  ContentIndexState
      ----              ------     --------------- ----------------- --------------------  -----------------
      SectorTest\MBX-1  Mounted    0               0                                       Healthy
      SectorTest\MBX-2  Healthy    0               0                 3/19/2013 1:34:37 PM  Healthy
      SectorTest\MBX-3  Healthy    0               138               3/19/2013 1:34:37 PM  Healthy

    2. Set the replay and truncation lag time values to 0 on all database copies. This ensures that logs replay to current while allowing the databases to remain online. In this example, MBX-3 is a lagged copy database. When the configuration change is detected, log replay will occur, allowing the lagged copy to eventually catch up. Note that depending on the replay lag time, this could take several hours; wait for it to finish before proceeding to the next steps.

      [PS] C:\>Set-MailboxDatabaseCopy SectorTest\MBX-3 -ReplayLagTime 0.0:0:0 -TruncationLagTime 0.0:0:0

      Validate that the replay queue has caught up and is near zero.

      [PS] C:\>Get-MailboxDatabaseCopyStatus *

      Name              Status     CopyQueueLength ReplayQueueLength LastInspectedLogTime  ContentIndexState
      ----              ------     --------------- ----------------- --------------------  -----------------
      SectorTest\MBX-1  Mounted    0               0                                       Healthy
      SectorTest\MBX-2  Healthy    0               0                 3/19/2013 1:34:37 PM  Healthy
      SectorTest\MBX-3  Healthy    0               0                 3/19/2013 1:34:37 PM  Healthy

    3. Dismount the database.

      CAUTION: Dismounting the database will cause a client interruption, which will continue until the database is mounted.

      [PS] C:\>Dismount-Database SectorTest

      Confirm
      Are you sure you want to perform this action?
      Dismounting database "SectorTest". This may result in reduced availability for mailboxes in the database.
      [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [?] Help (default is "Y"): y
      [PS] C:\>Get-MailboxDatabaseCopyStatus SectorTest\*
      Name              Status     CopyQueueLength ReplayQueueLength LastInspectedLogTime  ContentIndexState
      ----              ------     --------------- ----------------- --------------------  -----------------
      SectorTest\MBX-1  Dismounted 0               0                                       Healthy
      SectorTest\MBX-2  Healthy    0               0                 3/25/2013 5:41:54 AM  Healthy
      SectorTest\MBX-3  Healthy    0               0                 3/25/2013 5:41:54 AM  Healthy

    4. On each DAG member hosting a database copy, open a command prompt and navigate to the log file directory. Execute eseutil /r ENN (where ENN is the log file prefix for the database, E01 in this example) to perform a soft recovery. This step is necessary to ensure that all log files are played into all copies.

      Z:\SectorTest>eseutil /r e01

      Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
      Version 14.02
      Copyright (C) Microsoft Corporation. All Rights Reserved.
      Initiating RECOVERY mode...
          Logfile base name: e01
                  Log files: <current directory>
               System files: <current directory>
      Performing soft recovery...
                            Restore Status (% complete) 
                0    10   20   30   40   50   60   70   80   90  100
                |----|----|----|----|----|----|----|----|----|----|
                ...................................................
      Operation completed successfully in 0.203 seconds.

    5. On each DAG member hosting a database copy, open a command prompt and navigate to the database directory. Execute eseutil /mh <EDB> against the database to dump the header. You must validate that the following information is consistent across all database copies:

      • All copies of the database show in clean shutdown.
      • All copies of the database show the same last detach information.
      • All copies of the database show the same last consistent information.

      Here is example output of a full /mh dump followed by a comparison of the data across our three sample copies.

      Z:\SectorTest>eseutil /mh SectorTest.edb

      Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
      Version 14.02
      Copyright (C) Microsoft Corporation. All Rights Reserved.
      Initiating FILE DUMP mode...
               Database: SectorTest.edb
      DATABASE HEADER:
      Checksum Information:
      Expected Checksum: 0x010f4400
        Actual Checksum: 0x010f4400
      Fields:
              File Type: Database
               Checksum: 0x10f4400
         Format ulMagic: 0x89abcdef
         Engine ulMagic: 0x89abcdef
      Format ulVersion: 0x620,17
      Engine ulVersion: 0x620,17
      Created ulVersion: 0x620,17
           DB Signature: Create time:03/19/2013 09:40:15 Rand:11009066 Computer:
               cbDbPage: 32768
                 dbtime: 601018 (0x92bba)
      State: Clean Shutdown
           Log Required: 0-0 (0x0-0x0)
          Log Committed: 0-0 (0x0-0x0)
         Log Recovering: 0 (0x0)
        GenMax Creation: 00/00/1900 00:00:00
               Shadowed: Yes
             Last Objid: 3350
           Scrub Dbtime: 0 (0x0)
             Scrub Date: 00/00/1900 00:00:00
           Repair Count: 0
            Repair Date: 00/00/1900 00:00:00
      Old Repair Count: 0
      Last Consistent: (0x138,3FB,1A4)  03/19/2013 13:44:11
            Last Attach: (0x111,9,86)  03/19/2013 13:42:29
      Last Detach: (0x138,3FB,1A4)  03/19/2013 13:44:11
                   Dbid: 1
          Log Signature: Create time:03/19/2013 09:40:14 Rand:11019164 Computer:
             OS Version: (6.1.7601 SP 1 NLS ffffffff.ffffffff)

      Previous Full Backup:
              Log Gen: 0-0 (0x0-0x0)
                 Mark: (0x0,0,0)
                 Mark: 00/00/1900 00:00:00

      Previous Incremental Backup:
              Log Gen: 0-0 (0x0-0x0)
                 Mark: (0x0,0,0)
                 Mark: 00/00/1900 00:00:00

      Previous Copy Backup:
              Log Gen: 0-0 (0x0-0x0)
                 Mark: (0x0,0,0)
                 Mark: 00/00/1900 00:00:00

      Previous Differential Backup:
              Log Gen: 0-0 (0x0-0x0)
                 Mark: (0x0,0,0)
                 Mark: 00/00/1900 00:00:00

      Current Full Backup:
              Log Gen: 0-0 (0x0-0x0)
                 Mark: (0x0,0,0)
                 Mark: 00/00/1900 00:00:00

      Current Shadow copy backup:
              Log Gen: 0-0 (0x0-0x0)
                 Mark: (0x0,0,0)
                 Mark: 00/00/1900 00:00:00 

           cpgUpgrade55Format: 0
          cpgUpgradeFreePages: 0
      cpgUpgradeSpaceMapPages: 0 

             ECC Fix Success Count: none
         Old ECC Fix Success Count: none
               ECC Fix Error Count: none
           Old ECC Fix Error Count: none
          Bad Checksum Error Count: none
      Old bad Checksum Error Count: none 

        Last checksum finish Date: 03/19/2013 13:11:36
      Current checksum start Date: 00/00/1900 00:00:00
            Current checksum page: 0

      Operation completed successfully in 0.47 seconds.

      MBX-1:

      State: Clean Shutdown
      Last Consistent: (0x138,3FB,1A4)  03/19/2013 13:44:11
      Last Detach: (0x138,3FB,1A4)  03/19/2013 13:44:11

      MBX-2:

      State: Clean Shutdown
      Last Consistent: (0x138,3FB,1A4)  03/19/2013 13:44:12
      Last Detach: (0x138,3FB,1A4)  03/19/2013 13:44:12

      MBX-3:

      State: Clean Shutdown
      Last Consistent: (0x138,3FB,1A4)  03/19/2013 13:44:13
      Last Detach: (0x138,3FB,1A4)  03/19/2013 13:44:13

      In this case, the values match across all copies so further steps can be performed.

      If the values do not match across copies for any reason, do not continue and please contact Microsoft support.

    6. Reset the log file generation for the database.

      Note: Use Get-MailboxDatabaseCopyStatus to record database locations and status prior to performing this activity.

      Locate the log file directory for each ACTIVE (DISMOUNTED) database. Remove all log files from this directory first. Failure to remove log files from the ACTIVE (DISMOUNTED) database may result in the Replication service recopying log files, a failure of this procedure, and subsequent need to reseed all database copies.

      IMPORTANT: If log files are located in the same location as the database and catalog data folder, take precautions to not remove the database or the catalog data folder.

      In our example MBX-1 hosts the ACTIVE (DISMOUNTED) copy.

      [PS] C:\>Get-MailboxDatabaseCopyStatus SectorTest\*

      Name              Status     CopyQueueLength ReplayQueueLength LastInspectedLogTime  ContentIndexState
      ----              ------     --------------- ----------------- --------------------  -----------------
      SectorTest\MBX-1  Dismounted 0               0                                       Healthy
      SectorTest\MBX-2  Healthy    0               0                 3/25/2013 5:41:54 AM  Healthy
      SectorTest\MBX-3  Healthy    0               0                 3/25/2013 5:41:54 AM  Healthy

      Locate the log file directory for each PASSIVE database copy. Remove all log files from this directory. Failure to remove all log files could result in this procedure failing and the need to reseed this or all database copies. If log files are located in the same location as the database and catalog data folder, take care not to remove the database or the catalog data folder.

      In our example MBX-2 and MBX-3 host the passive database copies.

      [PS] C:\>Get-MailboxDatabaseCopyStatus SectorTest\*

      Name              Status     CopyQueueLength ReplayQueueLength LastInspectedLogTime  ContentIndexState
      ----              ------     --------------- ----------------- --------------------  -----------------
      SectorTest\MBX-1  Dismounted 0               0                                       Healthy
      SectorTest\MBX-2  Healthy    0               0                 3/25/2013 5:41:54 AM  Healthy
      SectorTest\MBX-3  Healthy    0               0                 3/25/2013 5:41:54 AM  Healthy

    7. Mount the database using Mount-Database <DBNAME>, and verify it has mounted.

      [PS] C:\>Mount-Database SectorTest
      [PS] C:\>Get-MailboxDatabaseCopyStatus *

      Name              Status     CopyQueueLength ReplayQueueLength LastInspectedLogTime  ContentIndexState
      ----              ------     --------------- ----------------- --------------------  -----------------
      SectorTest\MBX-1  Mounted    0               0                                       Healthy
      SectorTest\MBX-2  Healthy    0               1                 3/25/2013 5:57:28 AM  Healthy
      SectorTest\MBX-3  Healthy    0               1                 3/25/2013 5:57:28 AM  Healthy

    8. Suspend and resume all passive database copies.

      Note: The error on suspending the active database copy is expected.

      [PS] C:\>Get-MailboxDatabaseCopyStatus SectorTest\* | Suspend-MailboxDatabaseCopy

      The suspend operation can't proceed because database 'SectorTest' on Exchange Mailbox server 'MBX-1' is the active mailbox database copy.
          + CategoryInfo          : InvalidOperation: (SectorTest\MBX-1:DatabaseCopyIdParameter) [Suspend-MailboxDatabaseCopy], InvalidOperationException
          + FullyQualifiedErrorId : 5083D28B,Microsoft.Exchange.Management.SystemConfigurationTasks.SuspendDatabaseCopy
          + PSComputerName        : mbx-1.exchange.msft

      Note: The error on resuming the active database copy is expected.

      [PS] C:\>Get-MailboxDatabaseCopyStatus SectorTest\* | Resume-MailboxDatabaseCopy

      WARNING: The Resume operation won't have an effect on database replication because database 'SectorTest' hosted on server 'MBX-1' is the active mailbox database.

    9. Validate replication health.

      [PS] C:\>Get-MailboxDatabaseCopyStatus *

      Name              Status     CopyQueueLength ReplayQueueLength LastInspectedLogTime  ContentIndexState
      ----              ------     --------------- ----------------- --------------------  -----------------
      SectorTest\MBX-1  Mounted    0               0                                       Healthy
      SectorTest\MBX-2  Healthy    0               0                 3/19/2013 1:56:12 PM  Healthy
      SectorTest\MBX-3  Healthy    0               0                 3/19/2013 1:56:12 PM  Healthy

    10. Using Set-MailboxDatabaseCopy, reconfigure any replay lag or truncation lag time on the database copy. This example implements a 7-day replay lag time.

      Set-MailboxDatabaseCopy -Identity SectorTest\MBX-3 -ReplayLagTime 7.0:0:0

    11. Repeat the previous steps for all databases in the DAG including those databases that have a single copy.

      IMPORTANT: DO NOT proceed to the next step until all databases have been reset.

    12. Enable block mode replication. Using Registry Editor, navigate to HKLM\Software\Microsoft\ExchangeServer\V14\Replay\Parameters, and then remove the DisableGranularReplication DWORD value. (A scripted sketch covering steps 12 through 14 appears after step 15 below.)

    13. Restart the replication service on each DAG member.

      Restart-Service MSExchangeREPL

    14. Validate database health using Get-MailboxDatabaseCopyStatus.

      [PS] C:\>Get-MailboxDatabaseCopyStatus *

      Name              Status     CopyQueueLength ReplayQueueLength LastInspectedLogTime  ContentIndexState
      ----              ------     --------------- ----------------- --------------------  -----------------
      SectorTest\MBX-1  Healthy    0               0                 3/19/2013 2:25:56 PM  Healthy
      SectorTest\MBX-2  Mounted    0               0                                       Healthy
      SectorTest\MBX-3  Healthy    0               230               3/19/2013 2:25:56 PM  Healthy

    15. Dump the header of a log file and verify that the new sector size is reflected in the log file stream. To do this, open a command prompt and navigate to the log file directory for the database on the active node. Run eseutil /ml against any log within the directory, and verify that the Env Log Sec size line reports 4096 (matches).

      Z:\SectorTest>eseutil /ml E0100000001.log

      Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
      Version 14.02
      Copyright (C) Microsoft Corporation. All Rights Reserved.

      Initiating FILE DUMP mode... 
            Base name: E01
            Log file: E0100000001.log
            lGeneration: 1 (0x1)
            Checkpoint: (0x17B,FFFF,FFFF)
            creation time: 03/19/2013 13:56:11
            prev gen time: 00/00/1900 00:00:00
            Format LGVersion: (7.3704.16.2)
            Engine LGVersion: (7.3704.16.2)
            Signature: Create time:03/19/2013 13:56:11 Rand:2996669 Computer:
            Env SystemPath: z:\SectorTest\
            Env LogFilePath: z:\SectorTest\
           Env Log Sec size: 4096 (matches)
            Env (CircLog,Session,Opentbl,VerPage,Cursors,LogBufs,LogFile,Buffers)
                (    off,   1227,  61350,  16384,  61350,   2048,    256,  44204)
            Using Reserved Log File: false
            Circular Logging Flag (current file): off
            Circular Logging Flag (past files): off
            Checkpoint at log creation time: (0x1,1,0) 
            Last Lgpos: (0x1,2,0)
      Number of database page references:  0
      Integrity check passed for log file: E0100000001.log
      Operation completed successfully in 0.250 seconds.
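
    As with the first procedure, steps 12 through 14 can be scripted. The following PowerShell sketch (illustrative only, using the same example server names) removes the registry value, restarts the replication service, and re-checks copy health. Run it only after all databases have been reset per step 11.

      # Illustrative sketch only -- run after ALL databases in the DAG have been reset.
      $dagMembers = "MBX-1","MBX-2","MBX-3"

      foreach ($server in $dagMembers) {
          Invoke-Command -ComputerName $server -ScriptBlock {
              # Step 12: re-enable block mode replication by removing the override value
              $key = "HKLM:\Software\Microsoft\ExchangeServer\V14\Replay\Parameters"
              Remove-ItemProperty -Path $key -Name DisableGranularReplication -ErrorAction SilentlyContinue

              # Step 13: restart the Microsoft Exchange Replication service
              Restart-Service MSExchangeRepl
          }
      }

      # Step 14: validate database copy health
      Get-MailboxDatabaseCopyStatus *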

    If the above steps have been completed successfully, and the log file sequence recognizes a 4096 sector size, then this issue has been resolved.

    This guidance was validated in the following configurations:

    • Windows 2008 R2 Enterprise with Exchange 2010 Service Pack 2
    • Windows 2008 R2 Enterprise with Exchange 2010 Service Pack 3
    • Windows 2008 SP2 Enterprise with Exchange 2010 Service Pack 3
    • Windows 2012 Datacenter with Exchange 2010 Service Pack 3

    Tim McMichael

    Troubleshoot your Exchange 2010 database backup functionality with VSSTester script


    Frequently in support, we encounter backup-related calls for Exchange 2010 databases. Some common issues we hear from our customers are:

    • “My backup software is not able to take a successful snapshot of the databases”
    • “My backups have been failing for quite a while. I have several thousand log files consuming disk space and I will eventually run out of disk space”
    • “My backup software indicates that the backup is successful but at the end of my backup, logs do not truncate”
    • “The Exchange Writer /VSS writer is not in a stable state (state is listed as ‘Retryable‘, ’Waiting for completion‘ or ’Failed’)”
    • “We suspect that the Volume Shadow Copy Service (VSS) is failing on the server and hence there are no successful backups”

    It is critical to understand how backups and log truncation work in Exchange 2010. If you haven't already done so, check out our three-part blog series by Jesse Tedoff on backups and log truncation in Exchange 2010, Everything You Need to Know About Exchange Backups*.

    When troubleshooting backups in Exchange 2010 we are interested in two writers: the Exchange Information Store Writer (utilized for active copy backups) and the Exchange Replica Writer (utilized for passive copy backups). The writers are responsible for providing the metadata information for databases to the VSS Requestor (aka the backup software). The VSS Provider is the component that creates and maintains shadow copies. At the end of a successful backup, when the Volume Shadow Copy Service signals that the backup is complete, the writers initiate post-backup steps, which include updating the database header and performing log truncation. (For more details, see Exchange VSS Writers on MSDN.)

    As explained above, it is the responsibility of the VSS Requestor to get metadata information from the Exchange writers; at the end of a successful backup, the VSS service signals backup complete to the Exchange writers so that the writers can perform post-backup operations.

    The purpose of this blog is to discuss the VSSTester script, its functionality and how it can help diagnose backup problems.

    What does the script do?

    The script has two major functions:

    1. Perform a Diskshadow backup of a selected Exchange database. This exercises the VSS framework on the system so that, at the end of a successful snapshot, the database header is updated and log files are truncated. We will discuss in detail what Diskshadow is and what it does.
    2. Collect diagnostic data. For backup cases there is a lot of data that needs to be collected; to get it you may otherwise have to manually go to different places on the Exchange server and turn on logging, and if that is not done correctly, crucial logs from the time of the issue will be missed. The script makes the data collection process much easier.

     

    Script requirements

    1. The current version of the script works only on Exchange 2010 servers.
    2. The script needs to be run on the Exchange server that is experiencing backup issues. If you are having issues with passive copy backups, go to the appropriate node in the DAG and run the script there. For example: Database A has copies on Server1, Server2 and Server3, and Server1 hosts the active copy. If backups of the active copy have previously failed, run the script on Server1; otherwise, run the script on whichever of the remaining servers previously failed when backing up the passive copy.
    3. Please ensure that you have enough space on the drive where you save the configuration and output files. Exchange traces, VSS traces and diagnostic logs can occupy up to several GB of drive space, depending on how long the backup takes. For example, running the script in a lab environment consumed close to 25 MB of drive space per minute.
    4. The script is unsigned. On the server where you run the script, you will have to set the execution policy to allow unsigned PowerShell scripts (please see this for how to do it; an example appears after this list).
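
    For example, one way to allow the unsigned script to run for just the current shell session (an illustrative approach; your organization's policy may call for a different scope or for signing the script) is:

      # Allow unsigned scripts for this PowerShell session only
      Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass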

    The script can be run on any DAG configuration. You can use this to troubleshoot Mailbox and Public folder database backup issues. Databases and log files can be on regular drives or mount points. Mix and match of the two will also work!

    Let us discuss in detail the two main functionalities of the script.

    Diskshadow functionality and how the script uses it

    What is Diskshadow and why do we utilize it in VSSTester script?

    Diskshadow.exe is a command-line tool built into the Windows Server 2008 operating system family as well as Windows Server 2012. Diskshadow is an in-box VSS requestor, and it is utilized to test the functionality provided by the Volume Shadow Copy Service (VSS). For more details on Diskshadow, please visit:

    http://technet.microsoft.com/en-us/library/ee221016(v=ws.10).aspx

    http://blogs.technet.com/b/josebda/archive/2007/11/30/diskshadow-the-new-in-box-vss-requester-in-windows-server-2008.aspx

    The best part about Diskshadow is that it includes a script mode for automating tasks. This feature of Diskshadow is utilized in the VSSTester. The shadow copy done by Diskshadow is a snapshot of the entire volume at a given point in time. This copy is read-only.
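
    To give a feel for what an automated Diskshadow run looks like, here is a minimal, hand-written example of a Diskshadow script file. This is not the exact file VSSTester generates (the real file also excludes the writers for non-selected databases); the volume, alias and exposed drive letter are placeholders.

      # sample.dsh -- run with: diskshadow /s sample.dsh
      set verbose on
      set context persistent
      begin backup
      # volume that holds the selected database and/or log files
      add volume z: alias ExchVol
      # take the snapshot
      create
      # expose the read-only snapshot as drive Y:
      expose %ExchVol% y:
      # signals backup-complete so the writer can run post-backup steps
      end backup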

    For more details on how a shadow copy is created, please visit the following link: http://technet.microsoft.com/en-us/library/ee923636(v=ws.10).aspx

    During the course of this blog post, I will be mentioning the term “Diskshadow backup”. It is very important to understand that the term “backup” is relative here. Diskshadow uses the VSS service to engage the appropriate writer for the snapshot. The writer provides the metadata information for the database and log files to Diskshadow, after which Diskshadow utilizes the VSS Provider to create a shadow copy.

    After a successful shadow copy (snapshot) of the databases and log files, the VSS Provider signals an end-backup to the Exchange writers. To Exchange, this looks like a full backup has been performed on the database. The key thing to understand here is that NO data is actually transferred to a device, tape, etc. This is only a test! You will see events in the application log that usually show up when you take a regular backup, but NO data is actually backed up. Diskshadow has simply run all the backup APIs through the backup process without transferring any data.

    The VSS Provider will take a snapshot of all the databases and logs (if present) on the volume. We will be doing a mirrored snapshot of the entire volume at the point in time when Diskshadow was run. Anything that is on the volume will be part of the snapshot. During the Diskshadow backup, we will be utilizing either the Information store writer (for active copy backup) or the Replica Writer (Passive copy backup) to provide the metadata information for the database.

    When you use the VSSTester script, it prompts you to select a database to perform the Diskshadow backup against. When we take a snapshot of the volume, all other databases (if present on the same drive) will be part of the snapshot, but post-backup operations will happen only on the selected database. This is because we utilize only the Information Store Writer (active copy backup) or the Replica Writer (passive copy backup) associated with the selected database. Database headers get updated based on the VSS Requestor's interaction with the Exchange writer that was utilized, which in turn leads to log truncation. Hence only the header of the selected database is updated, and logs are purged (only for the selected database) without being backed up.

    When would you be interested in utilizing this Diskshadow functionality of the script?

    You would be interested in utilizing this functionality in almost all of the scenarios discussed at the start of this blog post. In addition to those scenarios, another one that is not related to backups sometimes arises:

    • “I had an unexpected high transactional log growth issue in my Exchange 2010 environment and now I am on the verge of losing all disk space in the logs directory. I do not have the time to perform a backup to truncate logs and my goal is to safely remove all the log files”

    In the scenario mentioned above (and, by the way, if you have that problem, please go here), Exchange administrators would like to avoid causing a service outage by dismounting the database, removing log files and remounting the database. Another downside to manually removing the log files is breaking replication if the database has replicas across Database Availability Group members.

    If you are willing to forgo a backup of the log files you can use the Diskshadow functionality of the script to trigger the backup APIs and tell Exchange to truncate the log files. The truncation commands will replicate to the other database copies and purge log files there as well. If successful, the net result is that the database will not go offline for lack of disk space on the log drive, but you will not have the security of retaining those log files for a future restore.

    A sample run of the VSSTester script (with Diskshadow functionality)

    Let me demonstrate the Diskshadow functionality of the script.

    The Script can be downloaded from TechNet gallery here.

    The script initializes and gives us the following options.

    [Screenshot]

    We select option 1 to test backup using the built-in Diskshadow function.

    [Screenshot]

    If the path does not exist, the script will create the folder for you.

    We gather the server name and verify it is an Exchange 2010 server. The script then checks the VSS writer status on the local machine; if we detect that any of the writers are not in a “Stable” state, the script will exit. You will need to restart the service associated with the writer to return it to a stable state (the Replication service for the Replica Writer, or the Information Store service for the Exchange Writer). You can also check the writer state manually, as shown below.
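
    For reference, the built-in vssadmin tool lists the writers and their current state (run it from an elevated command prompt):

      # Look for 'Microsoft Exchange Writer' (Information Store) and
      # 'Microsoft Exchange Replica Writer' (Replication service) and
      # confirm each reports a Stable state with no last error.
      vssadmin list writers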

    The script then gets a list of databases present on the local server and displays each database name, whether it is mounted, and which server holds the active copy of the database. You then select the number of the database.

    Note: If the user does not provide an input, the script will automatically select the last database in the list.

    In my case, I selected database mdb5. The number to enter would be 8.

    [Screenshot]

    The next important check is ensuring that the database’s replicas (if present) are healthy. If we detect that one of the copies is not healthy, the script will exit mentioning that the database copies need to be in healthy status before running the script.

    [Screenshot]

    The script next detects the location of the database file and log files. The Diskshadow configuration file is created on the fly every time a database is selected, and it is saved to the configuration and output file location you specified earlier (c:\vsstesterlogs in the example screenshots of this blog). In this case the log files are in a mount point and the database file is on a regular volume. The script will add the appropriate volumes to the Diskshadow file.

    [Screenshot]

    The script will then prompt you to provide the drive letters to expose the snapshots. A common question that arises is, do I need to initialize the drive before I specify a drive letter? The answer is no!

    You will be specifying a drive letter that is currently not in use, so Diskshadow will create a virtual drive and expose the snapshot there. Remember, the virtual drive that exposes the shadow copy is a read-only volume, because the shadow copy itself is a read-only copy. If the database and logs are in the same mount point or drive, only one drive letter is required to expose the snapshot; otherwise you will need to provide two different drive letters, one for exposing the database snapshot and another for the log files.

    [Screenshot]

    When you select the option to perform the Diskshadow backup, the script will automatically collect diagnostic logs, ExTRA traces and VSS traces, and verbose logging is turned on for Diskshadow. All activity the script performs is also recorded in a transcript log and saved in the output files directory (c:\vsstesterlogs in this example).

    [Screenshot]

      Note: If you are performing a passive copy backup, ExTRA tracing will also be turned on on the active node. At the end of the script, we turn off ExTRA tracing on the active node and the trace will be automatically moved to the passive node. The active node ETL will be placed in the logs folder you specified at the start of the script.

    Now, the main Diskshadow function will execute.

    In the screenshots below, we have excluded all other writers on the system that are associated with the other databases on the node (whether mounted or replicas), and we are ONLY utilizing the writer associated with the selected database. This node hosts the passive copy of the database MDB5; hence, the writer utilized will be the one associated with the Replication service, aka the Microsoft Exchange Replica Writer.

    [Screenshot]

    [Screenshot]

    From the screenshot below, you can see that the VSS Provider has taken a successful snapshot of the database and signaled end-backup to the Replica Writer.

    [Screenshot]

    Now that we performed a successful snapshot of the database and log files, all the logging that was turned on will be turned off. The log files will be consolidated in the logs folder that you specified earlier at the start of the script. The script checks the VSS writer status after the backup is complete.

    [Screenshot]

    When the snapshot operation is complete, you will be prompted for an option to either remove the snapshot or leave the snapshots exposed in Windows Explorer.

    [Screenshot]

    I selected the option to remove the snapshot; hence we will be invoking Diskshadow again to delete the snapshot created earlier.

    Let us discuss in detail exposing and removing snapshot functionality:

    1. Remove snapshots - If the snapshot operation was successful, the snapshots that were taken earlier (database and log files) are exposed in Windows Explorer as the drive letter(s) you specified earlier. If you do not want to keep a copy of the log files, you may choose this option and the snapshot will be deleted. All the logs that were purged post-backup are present in this read-only volume, and when the volume is removed they are deleted forever.
    2. Expose snapshots - You may choose to leave the snapshots exposed. Later, if you want to delete the snapshot, do the following (see the example after this list):
      • Open a command prompt
      • Type diskshadow
      • Run delete shadows exposed <drive letter>
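
      For example, assuming the snapshot was exposed as drive Y:, the session would look like this:

        C:\>diskshadow
        DISKSHADOW> delete shadows exposed y:
        DISKSHADOW> exit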

    Note: It is highly recommended to take a full backup of the database using your regular backup software after utilizing Diskshadow.

    After this, the script collects the application and system event logs, filtering them to cover only the period from when you started the script to the present. The transcript log is also stopped. The logs are saved as text files in the output folder you specified earlier (c:\vsstesterlogs in this example).

    [Screenshot]

    The most reliable method to verify that log truncation took place is to get the log sequence before and after the backup. Hence, before running the script I ran eseutil /ml ENN (where ENN is the log generation prefix associated with the database).

    [Screenshot]

    Post-backup, I ran the same command and can see:

    [Screenshot]

    We can clearly see a difference in the start of the sequence, meaning log truncation has occurred for the database. One more verification that can be done is to check the database header. We can see that the database header was updated with the most recent time, when Diskshadow was run.

    [Screenshot]

    I ran the script; what have I accomplished?

    If the script finished successfully:

    • We were able to successfully test and exercise the underlying VSS framework on the server. The Volume Shadow Copy Service was able to successfully identify and utilize the Exchange writers on the box
    • The Exchange writers were able to provide the metadata information to the VSS Requestor (Diskshadow)
    • The VSS Provider was able to successfully create a snapshot (shadow copy)
    • VSS successfully signaled the Exchange writers that the backup was complete
    • The Exchange writers were able to perform the post-snapshot operations, which included log truncation.

    Let us now look in to the other major functionality of the script.

    Enable logging to troubleshoot backup issues

    Use this if you do not want to test backup using Diskshadow and you just want to collect diagnostic logs for troubleshooting backup issues.

    You may collect the diagnostic logs and have them handy before calling Microsoft Support, saving a lot of time in the support incident because you can provide the files at the beginning of the case.

    This time we will be selecting option 2 to enable logging.

    [Screenshot]

    Selecting this option does the majority of the things that the script did earlier, EXCEPT Diskshadow of course!

    After checking the writer status, you can select the database to back up. We will be enabling all the logging as before (diagnostic logging, ExTRA and VSS tracing). Remember that even though you still select one database, diagnostic logging, ExTRA tracing and VSS tracing are not database specific and are turned on at the server level. When you are utilizing the script to troubleshoot backup issues, you can select any one database on the server and it will turn on the appropriate logging on the server.

    After the logging is turned on and traces enabled, you will see:

    [Screenshot]

    Now you will need to start your regular backup. After the backup completes (or fails), come back to the PowerShell window where the script is running and press ENTER to terminate the data collection. The script then disables the diagnostic logging and tracing that was turned on earlier. If needed, it will copy the diagnostic logs from the active node for that database copy as well.

    The script will again check the writer status after the backup, then collect the application and system logs. It will stop the transcript log as well.

    At this point, in order to troubleshoot the issue, you can open a case with Microsoft Support and upload the logs.

    I hope this script helps you in better understanding the core concepts in Exchange 2010 backups, thus helping you troubleshoot backup issues! You can utilize Diskshadow to test Volume Shadow Copy Service and also check if the Exchange writers are performing as intended. If Diskshadow completes successfully without any error and you are still experiencing issues with backup software, you may need to contact the backup vendor to further troubleshoot the issue.

    Your feedback and comments are most welcome.

    Special thanks to Michael Barta for his contribution to the script, Theo Browning and Jesse Tedoff for reviewing the content.

    Muralidharan Natarajan

    Use Exchange Web Services and PowerShell to Discover and Remove Direct Booking Settings


    Prior to Exchange 2007, there were two primary methods of implementing automated resource scheduling: Direct Booking and the AutoAccept Agent (a store event sink released as a web download for Exchange 2003). In Exchange 2007, we changed how automated resource scheduling is implemented. The AutoAccept Agent is no longer supported, and the Direct Booking method, technically an Outlook function, has been replaced with a server-side calendar booking function called the Resource Booking Attendant.

    Note There are various terms associated with this new Resource Booking function, such as: Calendar Processing, Automatic Resource Booking, Calendar Attendant Processing, Automated Processing and Resource Booking Assistant. We will be using the “Resource Booking Attendant” nomenclature for this article.

    While the Direct Booking method for resource scheduling can indeed work on Exchange Server 2007/2010/2013, we strongly recommend that you disable Direct Booking for resource mailboxes and use the Resource Booking Attendant instead. Specifically, we are referring to the “AutoAccept” Automated Processing feature of the Resource Booking Attendant, which can be enabled for a mailbox after it has been migrated to Exchange 2007 or later and upgraded to a Resource Mailbox.

    Note The published resource mailbox upgrade guidance on TechNet specifies to disable Direct Booking in the resource mailbox while still on Exchange 2003, move the mailbox, and then enable the AutoAccept functionality via the Resource Booking Attendant. This order of steps can introduce an unnecessary amount of time where the resource mailbox may be without automated scheduling capabilities.

    We are currently working to update that guidance to reflect moving the mailbox first, and only then proceed with disabling the Direct Booking functionality, after which the AutoAccept functionality via the Resource Booking Attendant can be immediately enabled. This will shorten the duration where the mailbox is without automated resource scheduling capabilities.

    This conversion to resource mailboxes utilizing the Resource Booking Attendant is sometimes overlooked, or even deliberately skipped, when migrating away from Exchange 2003 because of Direct Booking’s ability to continue to work with newer versions of Exchange, even Exchange Online. This often results in resource mailboxes (or even user mailboxes!) with Direct Booking functionality remaining in place long after Exchange 2003 is ancient history in the environment.

    Why not just leave Direct Booking enabled?

    There are issues that can arise from leaving Direct Booking enabled, from simple administrative burden scenarios all the way to major calendaring issues. Additionally, Resource Booking Attendant offers advantages over Direct Booking functionality:

    1. Direct Booking capability, technically an Outlook function, has been deprecated from the product as of Outlook 2013. It was already on the deprecation list in Outlook 2010 and required a registry modification to reintroduce the functionality.
    2. Direct Booking and Resource Booking Attendant are conflicting technologies, and if simultaneously enabled, unexpected behavior in calendar processing and item consistency can occur.
    3. Outlook Web App (as well as any non-MAPI clients, like Exchange ActiveSync (EAS) devices) cannot use Direct Booking for automated resource scheduling. This is especially relevant for Outlook Web App-only environments where the users do not have Microsoft Outlook as a mail client.
    4. The Resource Booking Attendant AutoAccept functionality is a server-side solution, eliminating the need for client-side logic in order to automatically process meeting requests.

    How do I check which mailboxes have Direct Booking Enabled?

    How does one validate if Direct Booking settings are enabled on mailboxes in the organization, especially if mailboxes had previously been hosted on Exchange 2003?

    Screenshot: Resource Scheduling properties
    Figure 1: Checking Direct Booking settings in Microsoft Outlook 2010

    Unfortunately, the manual steps involve assigning permissions to all mailboxes, creating MAPI profiles for each mailbox, logging into each mailbox, checking Tools > Options > Calendar > Resource Scheduling, noting which of the three Direct Booking checkboxes are checked, clicking OK/Cancel a few times, and logging out of the mailbox. Whew! That can be a major undertaking even for a small to midsize company that has more than a handful of mailboxes! Having staff perform this type of activity manually can be a costly and tedious endeavor. Once you have discovered which mailboxes have the Direct Booking settings enabled, you would then have to repeat this entire process to disable these settings unless you removed them at the time of discovery.

    Having an automated method to discover, track, and even disable Direct Booking settings would be nice, right?

    Look no further, we have the solution for you!

    Using Exchange Web Services (EWS) and PowerShell, we can automate the discovery of Direct Booking settings that are enabled, track the results, and even disable them! We wrote Remove-DirectBooking.ps1, a sample script, to do exactly that and even more to aid in automating this manual effort.

    After you've downloaded it, rename the file and remove the .txt extension.

    IMPORTANT  The previously uploaded script had the last line truncated to Stop-Tran (instead of Stop-Transcript). We've uploaded an updated version to TechNet Gallery. If you downloaded the previous version of the script, please download the updated version. Alternatively, you can open the previously downloaded version in Notepad or other text editor and correct the last line to Stop-Transcript.

    Let’s break down the major tasks the PowerShell script does:

    1. Uses EWS Application Impersonation to tap into a mailbox (or set of mailboxes) and read the three MAPI properties where the Direct Booking settings are stored. It does this by accessing the localfreebusy item sitting in the NON_IPM_SUBTREE\FreeBusy Data folder, which resides in the root of the Information Store in the mailbox. The three MAPI properties and their equivalent Outlook settings the script looks at are:

      • 0x686d Automatically accept meeting requests and remove canceled meetings
      • 0x686f Automatically decline meeting requests that conflict with an existing appointment or meeting
      • 0x686e Automatically decline recurring meeting requests

      These three properties contain Boolean values mirroring the Resource Scheduling checkboxes found in Outlook (see Figure 1 above). A simplified sketch of reading them via EWS appears after this list.

    2. For mailboxes where Direct Booking settings were detected, it checks for conflicts by determining if the mailbox also has Resource Booking Attendant enabled with AutomateProcessing set to AutoAccept.
    3. Optionally, disables any enabled Direct Booking settings encountered.

      Note It is important to understand that by default the script runs in a read-only mode. Additional command line switches are available to run the script to disable Direct Booking settings.

    4. Writes a detailed runtime processing log to console and log file.
    5. Creates a simple output text file containing a list of mailboxes that can be later leveraged as an input file to feed the script for disabling the Direct Booking functionality.
    6. Creates a CSV file containing statistics of the list of mailboxes processed with detailed information, such as what was discovered, any errors encountered, and optionally what was disabled. This is useful for performing analysis in the discovery phase and can also be used as another source to create an input file to feed into the script for disabling the Direct Booking functionality.
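
    To illustrate the core of what the script does, here is a simplified EWS Managed API sketch (this is not the script's actual code; the mailbox address, DLL path and Autodiscover callback are placeholders, and error handling is omitted) that reads the three properties from the localfreebusy item of a single mailbox:

      # Load the EWS Managed API (adjust the path to your installed version)
      Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\2.0\Microsoft.Exchange.WebServices.dll"

      $mbx = "confroom1@contoso.com"   # example mailbox

      # Connect and impersonate the target mailbox
      $svc = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService([Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP2)
      $svc.UseDefaultCredentials = $true
      $svc.AutodiscoverUrl($mbx, {$true})
      $svc.ImpersonatedUserId = New-Object Microsoft.Exchange.WebServices.Data.ImpersonatedUserId([Microsoft.Exchange.WebServices.Data.ConnectingIdType]::SmtpAddress, $mbx)

      # The three Direct Booking MAPI properties (Boolean)
      $props = 0x686D, 0x686F, 0x686E | ForEach-Object {
          New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition($_, [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::Boolean)
      }

      # Find the "Freebusy Data" folder under the non-IPM root of the mailbox
      $rootId   = New-Object Microsoft.Exchange.WebServices.Data.FolderId([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Root, (New-Object Microsoft.Exchange.WebServices.Data.Mailbox($mbx)))
      $filter   = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo([Microsoft.Exchange.WebServices.Data.FolderSchema]::DisplayName, "Freebusy Data")
      $fbFolder = $svc.FindFolders($rootId, $filter, (New-Object Microsoft.Exchange.WebServices.Data.FolderView(5))).Folders[0]

      # Read the localfreebusy item and report any of the three values that are set
      $view = New-Object Microsoft.Exchange.WebServices.Data.ItemView(10)
      $view.PropertySet = New-Object Microsoft.Exchange.WebServices.Data.PropertySet([Microsoft.Exchange.WebServices.Data.BasePropertySet]::IdOnly)
      $props | ForEach-Object { $view.PropertySet.Add($_) }
      foreach ($item in $svc.FindItems($fbFolder.Id, $view)) {
          foreach ($ep in $item.ExtendedProperties) {
              "{0}: 0x{1:X} = {2}" -f $mbx, $ep.PropertyDefinition.Tag, $ep.Value
          }
      }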

    Example Scenarios

    Here are a couple of example scenarios that illustrate how to use the script to discover and remove enabled Direct Booking settings.

    Scenario 1

    You've recently migrated from Exchange 2003 to Exchange 2010 and would like to disable Direct Booking for your company’s conference room mailboxes as well as any user mailboxes that may have Direct Booking settings enabled. The administrator’s logged-in account has Application Impersonation rights and the View-Only Recipients RBAC role assigned.

    1. On a machine that has the Exchange management tools & the Exchange Web Services API 1.2 or greater installed, open the Exchange Management Shell, navigate to the folder containing the script, and run the script using the following syntax:

      .\Remove-DirectBooking.ps1 –identity * -UseDefaultCredentials

    2. The script will process all mailboxes in the organization with detailed logging sent to the shell on the console. Note that, depending on the number of mailboxes in the org, this may take some time to complete.
    3. When the script completes, open the Remove-DirectBooking_<timestamp>.txt file in Notepad, which will contain the list of mailboxes that have Direct Booking enabled:

      Screenshot: The Remove-DirectBooking log generated by the script
      Figure 2: Output file containing list of mailboxes with Direct Booking enabled

    4. After reviewing the list, rerun the script with the InputFile parameter and the RemoveDirectBooking switch:

      .\Remove-DirectBooking.ps1 -InputFile '.\Remove-DirectBooking_<timestamp>.txt' -UseDefaultCredentials -RemoveDirectBooking

    5. The script will process all the mailboxes listed in the input file with detailed logging sent to the shell on the console. Because you specified the RemoveDirectBooking switch, it does not run in read-only mode and disables all currently enabled Direct Booking settings encountered.
    6. When the script completes, you can check the status of the removal operation by checking the Remove-DirectBooking_<timestamp>.csv file. A column called Direct Booking Removed? will record whether the removal was successful. You can also check the runtime processing log file RemoveDirectBooking_<timestamp>.log as well.

      Log file results in Excel
      Figure 3: Reviewing the runtime log file in Excel

    Note The Direct Booking Removed? column now shows Yes where applicable, but the three Direct Booking settings columns still show their various values as “Yes”; this is because we record those three values pre-removal. If you were to run the script again in read-only mode against the same input file, those columns would reflect a value of N/A since there would no longer be any Direct Booking settings enabled. The Resource Room?, AutoAccept Enabled?, and Conflict Detected all have a value of N/A regardless because they are not relevant when disabling the Direct Booking settings.

    Scenario 2

    You're an administrator who's new to an organization. You know that they migrated from Exchange 2003 to Exchange 2007 in the distant past and are currently in the process of implementing Exchange 2013, having already migrated some users to Exchange 2013. You have no idea which resource mailboxes or even user mailboxes may be using Direct Booking and would like to discover who has which Direct Booking settings enabled. You would then like to selectively choose which mailboxes to pilot for Direct Booking removal before taking action on the majority of found mailboxes.

    Here's how you would accomplish this using the Remove-DirectBooking.ps1 script:

    1. Obtain a service account that has Application Impersonation rights for all mailboxes in the org.
    2. Ensure the service account has at least the Exchange View-Only Administrator role (2007) and at least an RBAC role assignment of View-Only Recipients (2010/2013).
    3. On a machine that has the Exchange management tools & the Exchange Web Services API 1.2 or greater installed, preferably an Exchange 2013 server, open the Exchange Management Shell, navigate to the folder containing the script, and run the script using the following syntax:

      .\Remove-DirectBooking.ps1 –Identity *

    4. The script will prompt you for the domain credentials of the account you wish to use because no credentials were specified. Enter the service account’s credentials.
    5. The script will process all mailboxes in the organization with detailed logging sent to the shell on the console. Note that, depending on the number of mailboxes in the org, this may take some time to complete.
    6. When the script completes, open the Remove-DirectBooking_<timestamp>.csv in Excel, which will look something like:


      Figure 4: Reviewing the Remove-DirectBooking_<timestamp>.csv in Excel

    7. Filter or sort the table by the Direct Booking Enabled? column. This will provide a list that can be scrutinized to determine which mailboxes are to be piloted for Direct Booking removal, such as those that have conflicts because the Resource Booking Attendant’s Automated Processing is already set to AutoAccept (which you can also filter on using the AutoAccept Enabled? column).
    8. Once the list has been reviewed and the targeted mailboxes isolated, simply copy their email addresses into a text file (one address per line), save the text file, and use it as the input source for the running the script to disable the Direct Booking settings:

      .\Remove-DirectBooking.ps1 -InputFile '.\' -RemoveDirectBooking

    9. As before, the script will prompt you for the domain credentials of the account you wish to use. Enter the service account’s credentials.
    10. The script will process all the mailboxes listed in the input file with detailed logging sent to the shell on the console. It will disable all enabled Direct Booking settings encountered.
    11. Use the same validation steps at the end of the previous example to verify the removal was successful.

    Script Options and Caveats

    Please see the script’s help section (via “get-help .\remove-DirectBooking.ps1 -full”) for full information on all the available parameters. Here are some additional options that may be useful in certain scenarios:

    1. EWSURL parameter By default, the script will attempt to retrieve the EWS URL for each mailbox via AutoDiscover. This is preferred, especially in complex multi-datacenter or hybrid Exchange Online/on-premises environments where different EWS URLs may be in play for any given mailbox depending on where it resides in the org. However, there may be times when you would want to supply an EWS URL manually, such as when AutoDiscover is having “issues”, or when the response time for AutoDiscover requests is introducing delays in overall script execution (think a very large number of mailbox identities to churn through) and the EWS URL is the same across the org. In these situations, you can use the EWSURL parameter to feed the script a static EWS URL (see the sample invocation after this list).
    2. UseDefaultCredentials If the current user is the service account or perhaps simply has both the Impersonation and the necessary Exchange Admin rights per the script’s requirements and they don’t wish to be prompted to type in a credential (another great example is scheduling the script to run as a job for instance), you can use the UseDefaultCredentials to run the script under that security context.
    3. RemoveDirectBooking By default, the script runs in read-only mode. In order to make changes and disable Direct Booking settings on the mailbox, you must specify the RemoveDirectBooking switch.
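
    For instance, a discovery run that skips AutoDiscover and runs under the current account’s credentials might look like this (the EWS URL shown is a placeholder for your own environment):

      .\Remove-DirectBooking.ps1 -Identity * -EWSURL "https://mail.contoso.com/EWS/Exchange.asmx" -UseDefaultCredentials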

    The script does have several prerequisites and caveats to ensure proper operation and meaningful results:

    1. Application Impersonation rights and minimum Exchange Admin rights must be used
    2. Exchange Web Services Managed API 1.2 or later must be installed on the machine running the script
    3. Exchange management tools must be installed on the machine running the script
    4. Script must be executed from within the Exchange Management Shell
    5. The Shell session must have the appropriate execution policy to allow the script to be executed (by default, you can't execute unsigned scripts).
    6. AutoDiscover must be configured correctly (unless the EWS URL is entered manually)
    7. Exchange 2003-based mailboxes cannot be targeted due to lack of EWS capabilities
    8. In an Exchange 2010/2013 environment that also has Exchange 2007 mailboxes present, the script should be executed from a machine running Exchange 2010/2013 management tools due to changes in the cmdlets in those versions

    Summary

    The discovery and removal of Direct Booking settings can be a tedious and costly process to perform manually, but you can avoid that manual effort and automate the process using current functions and features via PowerShell and EWS in Microsoft Exchange Server 2007, 2010, & 2013. With careful use, the Remove-DirectBooking.ps1 script can be a valuable tool to aid Exchange administrators in maintaining automated resource scheduling capabilities in their Microsoft Exchange environments.

    Your feedback and comments are welcome.

    Thank you to Brian Day and Nino Bilic for their guidance in content review, and to our customers (you know who you are) for piloting the script.

    Seth Brandes & Dan Smith

    Using Exchange Web Services to Apply a Personal Tag to a Custom Folder


    In Exchange 2010, we introduced Retention Tags, a Messaging Records Management (MRM) feature that allows you to manage email lifecycle. You can use retention policies to retain mailbox data for as long as it’s required to meet business or regulatory requirements, and delete items older than the specified period.

    One of the design goals for MRM 2.0 was to simplify administration compared to Managed Folders, the MRM feature introduced in Exchange 2007, and to allow users more flexibility. By applying a Personal Tag to a folder, users can have different retention settings apply to items in that folder than the default tag applied to the entire mailbox (known as a Default Policy Tag). Similarly, users can apply a different tag to a subfolder than the one applied to the parent folder. Users can also apply a Personal Tag to individual items, allowing them the freedom to organize messages based on their work habits and preferences, rather than forcing them to move messages, based on the retention requirement, to an admin-controlled Managed Folder.

    You can still use Managed Folders in Exchange 2010, but they’re not available in Exchange 2013.

    For a comparison of Retention Tags with Managed Folders and migration details, see Migrate Managed Folders.

    If you like the Managed Folders approach of being able to create a folder in the user’s mailbox and configure a retention setting for that folder, you can use Exchange Web Services (EWS) to accomplish something similar, with some caveats mentioned later in this post. You can write your own code or even a PowerShell script to create a folder in the user’s mailbox and apply a Personal Tag to it. There are scripts available on the interwebs, including some code samples on MSDN, that accomplish this.

    Note: The above scripts are examples for your reference. They’re not written or tested by the Exchange product group.
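
    To illustrate the general approach (and not as a replacement for the samples referenced above), here is a minimal PowerShell sketch using the EWS Managed API. The installation path, mailbox address, folder name, tag GUID, retention period, and RetentionFlags value are all assumptions or placeholders; pull the real tag values from Get-RetentionPolicyTag (RetentionId and AgeLimitForRetention) before using anything like this.

    # Minimal sketch: create a custom folder and stamp it with an existing Personal Tag via EWS
    Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\2.0\Microsoft.Exchange.WebServices.dll"
    $service = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService([Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP2)
    $service.AutodiscoverUrl("user@contoso.com", {$true})
    # MAPI properties that carry the retention tag on a folder
    $PidTagPolicyTag       = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x3019, [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::Binary)
    $PidTagRetentionPeriod = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x301A, [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::Integer)
    $PidTagRetentionFlags  = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x301D, [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::Integer)
    $folder = New-Object Microsoft.Exchange.WebServices.Data.Folder($service)
    $folder.DisplayName = "Project Correspondence"                         # placeholder folder name
    $tagGuid = [Guid]"11111111-2222-3333-4444-555555555555"                # placeholder: RetentionId of your Personal Tag
    $folder.SetExtendedProperty($PidTagPolicyTag, $tagGuid.ToByteArray())
    $folder.SetExtendedProperty($PidTagRetentionPeriod, 365)               # placeholder: AgeLimitForRetention in days
    $folder.SetExtendedProperty($PidTagRetentionFlags, 9)                  # assumption: ExplicitTag (1) + PersonalTag (8)
    $folder.Save([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::MsgFolderRoot)

    When provisioning folders in other users’ mailboxes, you would typically run code like this under an account granted Application Impersonation rights.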

    But is it supported?

    We frequently get questions about whether this is supported by Microsoft. Short answer: Yes. Exchange Web Services (EWS) is a supported and documented API, which allows ISVs and customers to create custom solutions for Exchange.

    When using EWS in your code or PowerShell script to apply a Personal Tag to a folder, it’s important to consider the following:

    For Developers

    • EWS is meant for developers who can write custom code or scripts to extend Exchange’s functionality. As a developer, you must have a good understanding of the functionality available via the API and what you can do with it using your code/script.
    • Support for EWS API is offered through our Exchange Developer Support channels.

    For IT Pros

    • If you’re an IT Pro writing your own code or scripts, you’re a developer too! The above applies to you.
    • If you’re an IT Pro using 3rd-party code or scripts, including the code samples & scripts available on MSDN, TechNet or elsewhere on the interwebs, we recommend that you follow the general best practices for using such code or scripts, including (but not limited to) the following:
      • Do not use code/scripts from untrusted sources in a production environment.
      • Understand what the script or code does. (This is easy for scripts – you can look at the source in a text editor.)
      • Test the script or code thoroughly in a non-production environment, including all command-line options/parameters available in it, before installing or executing it in your production environment.
      • Although it’s easy to change the PowerShell execution policy on your servers to allow unsigned scripts to execute, it’s recommended to allow only signed scripts in production environments. You can easily sign a script if it's unsigned before running it in a production environment (a quick example follows this list).
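
    For example, signing a script from PowerShell with a code-signing certificate already present in the current user’s store looks roughly like this (the script name is just a placeholder):

    # Sign a script with an existing code-signing certificate
    $cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
    Set-AuthenticodeSignature -FilePath .\New-TaggedFolder.ps1 -Certificate $cert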

    So should I do it?

    If using EWS to apply a Personal Tag to custom folders helps you meet your business requirements, absolutely! However, do note and consider the following:

    • You’re replicating some of the functionality available via Managed Folders, but it doesn’t turn the folder into a Managed Folder.
    • Remember - it’s a Personal Tag! Users can remove the tag from the folder using Outlook or Outlook Web App.
    • If you have additional Personal Tags available in your environment, users can change the tag on the custom folder.
    • Users can tag individual items with a different Personal Tag. There is no way to enforce inheritance of the folder’s retention tag if Personal Tags have been provisioned and are available to the user.
    • Users can rename or delete custom folders. Unlike Managed Folders, which are protected from changes or deletion by users, custom folders created by users or by admin are just like any other (non-default) folder in the mailbox.

    Provisioning custom folders with different retention settings (by applying Personal Tags) may help you meet your organization’s retention requirements. As an IT Pro, make sure you understand the above and follow the best practices.

    Bharat Suneja

    Ambiguous URLs and their effect on Exchange 2010 to Exchange 2013 Migrations


    With the recent releases of Exchange Server 2013 RTM CU1, Exchange 2013 sizing guidance, the Exchange 2013 Server Role Requirements Calculator, and the updated Exchange 2013 Deployment Assistant, on-premises customers now have the tools they need to begin designing and performing migrations to Exchange Server 2013. Many of you have introduced Exchange 2013 RTM CU1 into your test environments alongside Exchange 2010 SP3 and/or Exchange 2007 SP3 RU10, and are readying yourselves for production migrations.

    There's one particular Exchange 2010 design choice some customers made that could throw a monkey wrench into your upgrade plans to Exchange 2013, and we want to walk you through how to mitigate it so you can move forward. If you're still in the design or deployment phase of Exchange Server 2010, we recommend you continue reading this article so you can make some intelligent design choices which will benefit you when you migrate to Exchange 2013 or later.

    What is the situation we need to look for?

    In Exchange 2010, all Outlook clients in the most typical configurations will utilize MAPI/RPC or Outlook Anywhere (RPC over HTTPS) connections to a Client Access Server. The MAPI/RPC clients connect to the CAS Array Object FQDN (also known as the RPC endpoint) for Mailbox access and the HTTPS based clients connect to the Outlook Anywhere hostname (also known as the RPC proxy endpoint) for all Mailbox and Public Folder access. In addition to these primary connections, other HTTPS based workloads such as EAS, ECP, OAB, and EWS may be sharing the same FQDN as Outlook Anywhere. In some environments you may also be sharing the same FQDN with POP/IMAP based clients and using it as an SMTP endpoint for internal mail submissions.

    In Exchange 2010, the recommendation was to utilize split DNS and ensure that the CAS Array Object FQDN was only resolvable via DNS by internal clients. External clients should never be able to resolve the CAS Array Object FQDN. This was covered previously in item #4 of Demystifying the CAS Array Object - Part 2. If you put those two design rules together you come to the conclusion your ClientAccessArray FQDN used by the mailbox database RpcClientAccessServer property should have been an internal-only unique FQDN not utilized by any workload besides MAPI/RPC clients.

    Take the following chart as an example of what a suggested split DNS configuration would have looked like:

    FQDN                  Used by               Internal DNS resolves to    External DNS resolves to
    mail.contoso.com      All HTTPS workloads   Internal Load Balancer IP   Perimeter Network Device
    outlook.contoso.com   MAPI/RPC workloads    Internal Load Balancer IP   N/A

    If you do not utilize split DNS, then a suggested configuration may have looked like this:

    FQDN                   Used by                       DNS resolves to
    mail.contoso.com       External HTTPS workloads      Perimeter Network Device
    mail-int.contoso.com   Internal HTTPS workloads      Internal Load Balancer IP
    outlook.contoso.com    Internal MAPI/RPC workloads   Internal Load Balancer IP

    In speaking with our Premier Field Engineers and MCS consultants, we learned that some of our customers did not choose to use a unique ClientAccessArrayFQDN. This design choice may manifest itself in one of two ways. The MAPI/RPC and HTTPS workloads may both utilize the mail.contoso.com FQDN internally and externally, or a unique external FQDN of mail.contoso.com is used while internal MAPI/RPC and HTTPS workloads share mail-int.contoso.com. The shared FQDN in either situation is ambiguous because we can't look at it and immediately understand the workload type that's using it. Perhaps we were not clear enough in our original guidance, or customers felt fewer names would help reduce overall design complexity since everything appeared to work with this configuration.

    Take a look at the figure below and the FQDNs in use for some of the different workloads. Shown are EWS, ECP, OWA, CAS Array Object, and Outlook Anywhere External Hostname. The yellow arrow specifically points out the CAS Array Object, the value used as the RpcClientAccessServer for Exchange 2010 mailbox databases, and seen in the Server field of an Outlook profile for an Exchange 2010 mailbox.

    [Image: An Exchange 2010 deployment with a single ambiguous URL for all workloads.]

    Let us pause for a moment to visualize what we have talked about so far. If we were to compare an Exchange 2010 environment using ambiguous URLs to one not using ambiguous URLs, it would look like the following diagrams. Notice the first diagram below uses the same FQDN for Outlook MAPI/RPC based traffic and HTTPS based traffic.

    [Image: Exchange 2010 environment using ambiguous URLs (same FQDN for MAPI/RPC and HTTPS traffic)]

    If we were to then look at an environment not utilizing ambiguous URLs, we see the clients utilize unique FQDNs for MAPI/RPC based traffic and HTTPS based traffic. In addition, the FQDN utilized for MAPI/RPC based traffic is only resolvable via internal DNS.

    [Image: Exchange 2010 environment without ambiguous URLs (unique FQDNs for MAPI/RPC and HTTPS traffic)]

    If your environment does not look like the one above using ambiguous URLs, then you can go hit the coffee shop for a while or play some XBOX 360. Tell your boss we gave the okay. If your environment does look similar to the first example using ambiguous URLs or you are in the planning stages for Exchange 2010, then please read on as we need you to perform some extra steps when migrating to Exchange 2013.

    So what’s the big deal? It is functional this way isn’t it?

    While this may be working for you today, it certainly will not work tomorrow if you migrate to Exchange 2013. In this scenario, where both the MAPI/RPC and HTTPS workloads are using the same FQDN, you cannot successfully move the FQDN to CAS 2013 without breaking your MAPI/RPC client connectivity entirely. I repeat: your MAPI/RPC clients will start failing to connect once their DNS cache expires after the shared FQDN is moved to CAS 2013. They fail because CAS 2013 does not know how to handle direct MAPI/RPC connections; all Windows-based Outlook clients utilize MAPI over an RPC over HTTPS connection in Exchange 2013. There is a chance your Outlook clients may successfully fall back to HTTPS, but only if Outlook Anywhere is already enabled for Exchange 2010 when the failure to connect via MAPI/RPC takes place. This article will help with the following:

    1. Ensure you are in full control of what will take place
    2. Ensure you are in full control of when #1 takes place
    3. Ensure you are in a supported server + client configuration
    4. Ensure environments with Outlook Anywhere disabled for Exchange 2010 know their path forward
    5. Help remove the possibility of any clients not automatically falling back to HTTPS
    6. Remove the potentially long delay when Outlook fails to connect via MAPI/RPC (even though it can resolve the MAPI/RPC URL) and then falls back to HTTPS

    Shoot… this looks like us. What should we do immediately?

    First off, if you are still in the planning stages of Exchange 2010, take our warning to heart and immediately change your design to use a specific internal-only FQDN for MAPI/RPC clients. If you are in the middle of an Exchange 2010 deployment using an ambiguous URL, we recommend you change your ClientAccessArray FQDN to a unique name and update the mailbox database RpcClientAccessServer values on all Exchange 2010 mailbox databases accordingly. Fixing this item mid-migration to Exchange 2010, or even in your fully migrated environment, will ensure any newly created or manually repaired Outlook profiles are protected, but it will not automatically fix existing Outlook clients with the old value in the Server field.

    While not necessary as long as you go through our mitigation steps below, any existing Outlook profiles could be manually repaired to reflect the new value. If you are curious why a manual repair is necessary you can refer to items #5 and #6 in Demystifying the CAS Array Object - Part 2. Again, forcing this update is not necessary if you follow our mitigation steps later in this article. However, if you were to choose to update some specific Outlook profiles we suggest you perform those steps in your test environment first to make sure you have the process down correctly.

    Additionally, as we previously discussed in item #3 of Demystifying the CAS Array Object – Part 1, the ClientAccessArray FQDN is not needed in your SSL certificate as it is not used for HTTPS based traffic. Because of this, the only things you need to do are create a new internal DNS record, update your ClientAccessArray FQDN, and finally update your Exchange 2010 Mailbox Database RpcClientAccessServer values. It bears repeating that you do not have to get a new SSL certificate only to fix an ambiguous URL situation.
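
    As a rough sketch of the Exchange side of that change (the array name, old FQDN, and new FQDN below are examples only), the commands from the Exchange 2010 Management Shell would look something like this, once the matching internal-only DNS record exists:

    # Point the CAS array object at the new internal-only, MAPI/RPC-specific FQDN
    Set-ClientAccessArray "CASArray-Site1" -Fqdn outlook.contoso.com
    # Update the RpcClientAccessServer value on the Exchange 2010 mailbox databases to match
    Get-MailboxDatabase | Where-Object {$_.RpcClientAccessServer -eq "mail.contoso.com"} | Set-MailboxDatabase -RpcClientAccessServer outlook.contoso.com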

    Ok, fixed that… now what about the clients we don’t want to repair manually?

    Our suggestion is to implement Outlook Anywhere internally for all users prior to introducing Exchange Server 2013 to the environment.

    Many of our customers have already moved to Outlook Anywhere internally for all Windows Outlook clients. In fact, those of you reading this with OA in use internally are good to proceed to the coffee shop or go play XBOX 360 with the other folks if you’d like to.

    Now for the rest of you… sit a little closer. Go ahead and fill in, there are plenty of seats in the front row like usual.

    In Exchange Server 2013, all Windows Outlook clients operate in Outlook Anywhere mode internally. By following these mitigation steps you will be one step ahead of where you will end up after your migration to Exchange Server 2013 anyway.

    If you do not have Outlook Anywhere enabled at all in your environment, please see Enable Outlook Anywhere on TechNet for steps on how to enable it in Exchange 2010. If your company does not wish to provide external access for Outlook Anywhere that is ok. By simply enabling Outlook Anywhere you will not be providing remote access unless you also publish the /rpc virtual directory to the Internet.
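
    For reference, enabling Outlook Anywhere on an Exchange 2010 Client Access Server looks roughly like the command below; the server name and hostname are examples, and you should pick the authentication method that fits your environment (see the Kerberos note that follows).

    # Example only: enable Outlook Anywhere on a CAS with NTLM and no SSL offloading
    Enable-OutlookAnywhere -Server CAS01 -ExternalHostname mail.contoso.com -DefaultAuthenticationMethod NTLM -SSLOffloading $false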

    We suggest that customers, especially very large ones, consider enabling Kerberos authentication to avoid potential performance issues with the default NTLM authentication. Information on how to configure Kerberos authentication for Exchange Server 2010 can be found here on TechNet; the steps for Exchange Server 2013 are similar, and we will have documentation for them in the near future. However, please keep in mind that Kerberos authentication with Outlook Anywhere is only supported with Windows Vista or later.

    By default with Outlook Anywhere enabled in the environment your clients prefer RPC/TCP connections when on Fast Networks as seen below.

    [Image: Outlook connection settings with RPC/TCP preferred on fast networks]

    The trick we use to force Outlook Anywhere to also be used internally is via Autodiscover. Using Autodiscover we can make Windows Outlook clients prefer RPC/HTTPS on both Fast and Slow networks as seen here.

    [Image: Outlook connection settings with HTTPS preferred on both fast and slow networks]

    The method used to make clients always prefer HTTPS is configuring the OutlookProviderFlags option via the Set-OutlookProvider cmdlet. The following commands are executed from the Exchange 2010 Management Shell.

    Set-OutlookProvider EXPR -OutlookProviderFlags:ServerExclusiveConnect

    Set-OutlookProvider EXCH -OutlookProviderFlags:ServerExclusiveConnect

    If for any reason you need to put the configuration back to its default settings, issue the following commands and clients will no longer prefer HTTP on Fast Networks.

    Set-OutlookProvider EXPR -OutlookProviderFlags:None

    Set-OutlookProvider EXCH -OutlookProviderFlags:None

    You can prepare to introduce Exchange Server 2013 to your environment once all of your Windows Outlook clients are preferring HTTP on both fast and slow networks and are connecting through mail.contoso.com for RPC over HTTPS connections.
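
    A quick way to confirm the provider flags are in place is to check them from the Exchange 2010 Management Shell, as sketched below; on the client side, the Outlook Connection Status dialog should show HTTPS connections once the change has taken effect.

    # Expect ServerExclusiveConnect on both the EXCH and EXPR providers
    Get-OutlookProvider | Format-Table Name, OutlookProviderFlags -AutoSize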

    There are a small number of things we would like to call out as you plan this migration to enable Outlook Anywhere for all internal clients.

    First, your front end infrastructure (CAS 2013, load balancer, etc.) must be ready to immediately handle the full production load of Windows Outlook clients when you re-point the mail.contoso.com FQDN in DNS.

    Second, if your Exchange 2010 Client Access Servers were not scaled for 100% Outlook Anywhere connections then performance should be monitored when OA is enabled and all clients are moved from MAPI/RPC based to HTTPS based workloads. You should be ready to scale out your CAS 2010 infrastructure if necessary to mitigate any possible performance issues.

    Lastly, Windows Outlook clients older than Outlook 2007 are not supported going through CAS 2013, even if their mailbox is on an older Exchange version. All Windows Outlook clients going through CAS 2013 have to be at least the minimum versions supported by Exchange 2013. Any unsupported clients, such as Outlook 2003, do not support Autodiscover and would have to be manually configured with a new MAPI/RPC-specific endpoint to ensure they continue communicating with Exchange 2010 until the client can be updated and the mailbox migrated to Exchange 2013.

    Note: The easiest way to confirm what major/minor version of Outlook you have is to look at the version of OUTLOOK.EXE and EMSMDB32.DLL via Windows Explorer, or to run an inventory report through Microsoft System Center Configuration Manager or similar software (a quick local check is also shown after the version list). The minimum version numbers Exchange Server 2013 supports for on-premises deployments are provided below.

    • Outlook 2007: 12.0.6665.5000 (SP3 + the November 2012 Public Update or any later PU)
    • Outlook 2010: 14.0.6126.5000 (SP1 + the November 2012 Public Update or any later PU)
    • Outlook 2013: 15.0.4420.1017 (RTM or later)
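
    For a quick spot check on a single machine, the file versions can also be read from PowerShell; the paths below are typical defaults for Outlook 2010 (Office14) and will differ by Office version and bitness.

    # Adjust the path for your Office version and bitness
    (Get-Item "C:\Program Files\Microsoft Office\Office14\OUTLOOK.EXE").VersionInfo.FileVersion
    (Get-Item "C:\Program Files\Microsoft Office\Office14\EMSMDB32.DLL").VersionInfo.FileVersion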

    If we were to visualize the mitigation steps from start to finish, we would compare the environment between the two phases.

    First, the upper area of the below diagram depicts the start state of the environment with internal Windows Outlook clients utilizing MAPI/RPC and ambiguous URLs for their HTTPS based workloads. The lower area of the diagram depicts the same environment, but we have now forced Outlook Anywhere to be used by internal Windows Outlook clients. This change has forced all mailbox and public folder access traffic over HTTPS through the mail.contoso.com Outlook Anywhere FQDN.

    [Image: Before and after step one — internal Outlook clients move from MAPI/RPC to Outlook Anywhere over mail.contoso.com]

    We now have all Windows Outlook clients utilizing Outlook Anywhere internally by leveraging Autodiscover to force the preference of HTTPS. Now that all Windows Outlook traffic is routed through mail.contoso.com via HTTPS, the ambiguous URL problem has been mitigated. However, you may have other applications integrating with Exchange that are unable to utilize Outlook Anywhere and/or Autodiscover. These applications will also be affected if you update the mail.contoso.com DNS entry to point at Exchange 2013. Before moving on to the second step, it may be most efficient to add a HOSTS file entry on the servers hosting these external applications to force resolution of mail.contoso.com to the Layer-7 load balancer used by Exchange 2010. This allows you to temporarily continue routing traffic from applications that need to talk to Exchange 2010 via MAPI/RPC while you work on updating them to be Outlook Anywhere compatible, which they will need to be before they can ever connect to Exchange 2013.
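
    Such a HOSTS entry could be added like this on the application server (run from an elevated prompt; the IP address is a placeholder for your Exchange 2010 Layer-7 load balancer VIP):

    # Force mail.contoso.com to resolve to the Exchange 2010 Layer-7 load balancer on this server only
    Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.0.2.10`tmail.contoso.com"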

    Having dealt with both the Windows Outlook clients and the third-party applications that cannot utilize Outlook Anywhere, we can now move on to the second step. The second step is executed when you are ready to introduce Exchange 2013 to the environment.

    The below diagram starts by showing where we finished after executing step one. The lower area of the diagram shows that we have updated DNS to point the mail.contoso.com entry to the IP of the new Exchange 2013 load balancer configuration. Because of the HOSTS entry we made, our application server continues talking to the old Layer-7 load balancer for its MAPI over RPC/TCP connections. Exchange 2013 CAS now receives all client traffic and proxies traffic for users still on Exchange 2010 back to the Exchange 2010 CAS infrastructure. The redundant CAS was removed from the diagram to simplify the view and simply show traffic flow.

    [Image: After step two — mail.contoso.com now resolves to the Exchange 2013 load balancer, which proxies Exchange 2010 users back to CAS 2010]

    In summary, we hope those of you in this unique configuration will be able to smoothly migrate from Exchange 2010 to Exchange 2013 now that you have these mitigation steps. Some of you may identify other potential methods and wonder why we are offering only a single mitigation approach. There were many methods investigated, but this approach came back every time as the most straightforward to implement, maintain, and support. Given the potential complexity of this change, we invite you to ask follow-up questions in the following Exchange Server forum, where we can often interact with you better than the comments format allows.

    Exchange Server Forum: Exchange Server 2013 – Setup, Deployment, Updates, and Migration

    Brian Day
    Senior Program Manager
    Exchange Customer Experience
