
Released: Calculator Updates Galore!


Today, we have released an updated version of the Exchange 2013 Server Role Requirements Calculator that addresses several issues found since its initial release.  You can view what changes have been made, or download the update directly.

In addition, we are releasing an updated version of the Exchange 2010 Server Role Requirements Calculator as well. You can view what changes have been made, or download the update directly.

Ross Smith IV
Principal Program Manager
Exchange Customer Experience


Comparing public folder item counts


A question that is often asked of Support in regard to legacy Public Folders is whether they're replicating and how much progress they're making.  The most common scenario arises when the administrator is adding a new Public Folder database to the organization and replicating a large amount of data to it.  What commonly happens is that the administrator calls Support and says:

The database on the old server is 300GB, but the new database is only 150GB!  How can I tell what still needs to be replicated?  Is it still progressing??

You can raise diagnostic logging for public folders, but reading the events to see which folders are replicating is tedious.  Most administrators want a more detailed way of estimating the progress of replication than comparing file sizes.  They also want to avoid checking all the individual replication events.

There are a number of ways to monitor replication progress so that one can make an educated guess as to how long a particular environment will take to complete an operation.  In this post, I'm going to provide a detailed example of one approach to estimating the progress of replication by comparing item counts between different public folder stores.

Getting Public Folder item counts

To get the item counts in an Exchange 2003 Public Folder database you can use PFDAVAdmin.  The process is outlined in this previous EHLO blog post.  For what we're doing below, you'll need the DisplayName, FolderPath and the total number of items in each folder; the rest of the fields aren't necessary.

To get the item counts on an Exchange 2007 server, use the following (remember that an Exchange 2007 server can host only one public folder database):

Get-PublicFolderStatistics -Server <servername> | Export-Csv c:\file1.txt

To get the item counts on an Exchange 2010 server, you use:

Get-PublicFolderStatistics -Server <servername> -ResultSize unlimited | Export-Csv c:\file1.txt

Comparing item counts

There are some very important caveats to this whole procedure.  The things you need to watch out for are:

  • We're only checking item counts.  If you delete 10 items and add 10 items between executions of the statistics gathering, this type of query will not reveal whether they have replicated.  Therefore, having the same number on both sides is not necessarily an assurance that the folders are in sync.
  • If you're comparing folders that contain recurring meetings, the item counts can be different on Exchange 2007 and older because of the way WebDAV interacts with those items.
  • I've seen many administrators try to compare the size of one Public Folder database to the size of another.  Such an approach to checking on replication does not take into account space for deleted items, overhead and unused space.  Checking item counts is more reliable than simply comparing item sizes.
  • The two databases might be at very different stages of processing replication messages.  It is unlikely that both pubs will present the same numbers of items if the folders are continuously active.  Even if the folders are seeing relatively low activity levels, it's not uncommon for the item count to be off by one or two items because the replication cycle (which defaults to every 15 minutes) simply hasn’t gotten to the latest post.
  • If you really want to know if two replicas are in sync, try to remove one.  If Exchange lets you remove the instance, you know Exchange believes the folders are in sync.  If Exchange cannot confirm the folders are in sync, it'll keep the instance until it can complete the backfill from it.  In most cases, the administrators I have spoken with are not in a position where they can use this approach.

For the actual comparison you can use any number of products.  For this blog I have chosen Microsoft Access to demonstrate the process of comparing the CSV files from the different servers.  There are some limitations to my approach:

  • Access databases have a maximum file size of 2GB. If your public folder infrastructure is particularly large (e.g. your CSV files are over 500MB) you may have to switch to using Microsoft SQL Server.
  • I am not going to compare public folders with a Folder path greater than 254 characters because the Jet database engine that ships with Access cannot join memo fields in a query.  Working around the join limitation by splitting the path across multiple text fields is beyond the scope of this blog.
  • I am only going to compare folders that exist in both CSV files.  If a folder's instance has not been created, and therefore its data was not exported into the CSV file, that folder will not be listed.

An outline of the process is:

  1. Export the item counts from the two servers you wish to compare
  2. Import the resulting text files
  3. Clean up the data for the final query
  4. Run a query to list the item counts for all folders that are in both files and the difference in the item counts between the originally imported files

Assumptions for the steps below:

  • You have exported the public folder statistics with the PowerShell commands presented above
  • You have fields named FolderPath, ItemCount and Name in the CSV file

If your file is different than expected, you will have to modify the steps as you go along.

Here are the steps for conducting the comparison:

1. Create a new blank Microsoft Access database in a location that has more than double the size of your CSV files available as free space.

2. By default, the Export-Csv cmdlet includes the .NET type information in the first line of the CSV output. Because this line will interfere with the import, we'll need to remove it.  Open each CSV file in Notepad (this can take a while for larger files) and remove the line highlighted below.  In this example, the line starting with “AdminDisplayName” becomes the topmost line of the file.  Once the top line is deleted, save and close the file.

Figure 1

TIP You can avoid this step by including the -NoTypeInformation switch when using the Export-Csv cmdlet, which omits the .NET type information from the CSV output. For details, see Using the Export-Csv cmdlet on TechNet. (Thanks to #MSExchange MVP @SteveGoodman for the tip!)
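For example, on Exchange 2010 the whole export, with no type header to clean up afterwards, would look like this (the server name and output path are placeholders):

Get-PublicFolderStatistics -Server <servername> -ResultSize Unlimited | Export-Csv c:\file1.txt -NoTypeInformation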

3. Import the CSV file to a new table:

  • Click on the External Data tab as highlighted in Figure 2
  • Browse to the CSV file and select it (or type in its path and name directly)
  • Make sure the “Import the source data into a new table in the current database” option is selected
  • Click OK

Figure 2

4. In the wizard that starts specify the file is delimited as shown and then click Next.

Figure 3

5. Tell the wizard that the text qualifier is the double quote (character 34 in ASCII), the delimiter is the comma and that the “First Row Contains Field Names” as shown in Figure 4.

Note:  It is possible that you will receive a warning when you click “First Row Contains Field Names”.  If any of the field names violate the rules for a field name Access will display a warning.  Don’t panic.  Access will replace the non-conforming names with ones it considers appropriate (typically Field1, Field2, etc.).  You can change the names if you wish on the Advanced screen.

Figure 4

6. Switch to the Advanced view (click the Advanced button highlighted in Figure 4) so that we can change the data type of the FolderPath field.  In Access 2010 and older the data type needs to be changed from Text to Memo.  In Access 2013 it needs to be changed from Short Text to Long Text.  While we are in this window you also have the option to exclude columns that are not needed by placing a checkmark in the Skip box for each column.  In this blog we are only going to use FolderPath, Name and ItemCount.  You can also exclude fields earlier in the process by specifying which fields to export when you run Export-Csv (see the sketch after the note below).  The following screenshots show the Advanced properties window.

Figure 5a: Access 2010 and older

Figure 5b: Access 2013

Note:  If you think you will be doing this frequently you can use the Save As button to save your settings.  The settings will be saved inside the Access database and can then be selected during future imports by clicking on the Specs button.
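If you would rather not import unneeded columns at all, the fields can also be trimmed at export time, as mentioned in step 6 above. A sketch of the Exchange 2010 export (server name and path are placeholders):

Get-PublicFolderStatistics -Server <servername> -ResultSize Unlimited | Select-Object Name,FolderPath,ItemCount | Export-Csv c:\file1.txt -NoTypeInformation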

7. Click OK on the Advanced dialog and then click Finish in the wizard.

8. When prompted to save the Import steps click Close.  If you think you will be repeating this process in the future feel free to explore saving the import steps.

9. Access will import the data into a table.  By default the table will have the same name as the source CSV file.  The files used in creating this blog were called 2007PF_120301 and 2010PF_120301.  If there are any import errors they will be saved in a separate table.  Take a moment to examine what they are.  The most common is that a field got truncated.  If that field is the FolderPath it will affect the comparisons later.  If there are other problems you will have to troubleshoot what is wrong with the highlighted lines (typically there should be no import errors as long as the FolderPath is set as a Memo field).

10. Go back to step 2 and repeat the process for the second file that will be used in the comparison.

11. Now a query must be run to determine if any FolderPath exceeds 254 characters.  Fields longer than that cannot be used for a join in an Access query.  If we have values in this field that exceed the limit, we will need to exclude them from the comparison.  Additional work to split a long path across multiple fields can be done, but that is being left as an exercise for any Access savvy readers.

12. To get started select the options highlighted in yellow in Figure 6:

Figure 6

13. Highlight the table where we want to check the length of the FolderPath field as shown in Figure 7.  Once you have selected the table click Add and then Close:

Figure 7

14. Switch to SQL view as shown in Figure 8:

Figure 8

15. Replace the default select statement with one that looks like this (please make sure you substitute your own table name for 2007PF_120301 in the example):

SELECT Len([FolderPath]) AS Expr1, [2007PF_120301].FolderPath
FROM 2007PF_120301
WHERE (((Len([FolderPath]))>254));

Note:  Be sure the semi-colon is the last character in the statement.

16. Run the query using the red “!” as shown in Figure 9: 

Figure 9

Figure 10

17. If the result is a single empty row (as shown in Figure 10) then skip down to step 19.  If the result is at least one row then go back to SQL view (as shown in Figure 8) and change the statement to look like this one (as before please make sure 2007PF_120301 is replaced with the table name actually being used in your database):

SELECT [2007PF_120301].FolderPath, [2007PF_120301].ItemCount,
[2007PF_120301].Name, [2007PF_120301].Identity INTO 2007PF_120301_trimmed
FROM 2007PF_120301
WHERE (((Len([FolderPath]))<255));

18. You will get a prompt like the one in Figure 11 when you run the query.  Select Yes:

Figure 11

19. After it is done, repeat steps 11-18 for the other CSV file that was imported for the comparison.  Once steps 11-18 have been completed for both files, advance to step 20.

20. Originally the FolderPath was imported as a Memo field (Long Text if using Access 2013).  However, we cannot join memo fields in a query, so we need to convert them to a text field with a length of 255.

If you got one or more rows in step 16, this step and the subsequent steps will all be carried out on the table specified in the INTO clause of the SQL statement (in this blog that table is named 2007PF_120301_trimmed).

If you were able to skip steps 17 and 18 this step and the subsequent steps will be carried out on the table you imported (2007PF_120301 in this example).

Open the table in Design view by right-clicking on it and selecting Design View as shown in Figure 12.  If you select the wrong tables for the subsequent steps you will get a lot of unwanted duplicates in your final comparison output.

Figure 12

21. Change the FolderPath field from Memo to Text as shown in Figure 13.  If you are using Access 2013 change it from Long Text to Short Text.

Figure 13

22. With the FolderPath field highlighted, look to the lower part of the Design window where the properties of the currently selected field are displayed.  Change the field size of FolderPath to 255 characters as shown in Figure 14.

Figure 14

23. Save the table and close its design view.  You will be prompted as shown in Figure 15.  Don’t panic.  All the folderpaths should be shorter than the 255 characters specified in the properties of the table.  The dialog is just a standard warning from Access.  No data should be truncated (the earlier queries should have seen to that).  Say Yes and repeat steps 20-23 for the other table being used in this comparison.  If you make a mistake here remember that you will still have your original CSV files and can always fix the mistake by removing the tables and redoing the import.

Figure 15

24. We have been on a bit of a journey to make sure we prepared the tables.  Now for the comparison.  Create a new query (as shown in Figure 6) and highlight both tables that have had the FolderPath shortened to 255 characters as shown in Figure 16.  Once they are highlighted, click Add and then Close.

Figure 16

25. Drag FolderPath from the table that is the source of your replication onto FolderPath in the other table.  The result will look like Figure 17.

Figure 17

26. In the top half of the Query Design window we have the tables with their fields listed.  In the bottom half we have the query grid.  You can make fields appear in the grid in several ways:

  • Switch to SQL view and add them to the Select statement
  • Double-click the field in the top half of the window
  • Drag the field from the top half of the window to the grid
  • Click in the Field line of the grid and a drop down will appear that you can use to select the fields
  • Type the field name you want into the Field line of the grid

For this step we need to add:

  • One copy of the folderpath field from one table (doesn’t matter which one)
  • The ItemCount field from each table

27. Go to an empty column in the grid.  We need to enter the expression that will tell us the difference between the two item counts.  Type the following text into the column (be sure to use the table names from your own database and not my example):

Expr1:  Abs([2007PF_120301_trimmed].[itemcount]-[2010pf_120301_trimmed].[itemcount])

Note:  After steps 25-27 the final result should look like  Figure 18.  The equivalent SQL looks like this:

SELECT [2007PF_120301_trimmed].FolderPath, [2007PF_120301_trimmed].ItemCount, [2010PF_120301_trimmed].ItemCount, Abs([2007PF_120301_TRIMMED].[ItemCount]-[2010PF_120301_TRIMMED].[ItemCount]) AS Expr1
FROM 2007PF_120301_trimmed INNER JOIN 2010PF_120301_trimmed ON [2007PF_120301_trimmed].FolderPath = [2010PF_120301_trimmed].FolderPath;

Figure 18

28. Run the query using the red “!” shown in Figure 9.  The results will show you all the folders that exist in BOTH public folder databases, the ItemCount in each database and the difference between them.  I like the difference reported as a positive number, but you might prefer to remove the absolute value function.

There is more that can be done with this.  You can use Access to run a Find Unmatched query to find all items from one table that are not in the other table (thus locating folders that have an instance in one database, but not the other).  You can experiment with different Join types in the query and you can deal with Folderpaths longer than a single text field can accommodate.  These and any other additional functionality you desire are left as an exercise for the reader to tackle.  I hope this provides you with a process that can be used to compare the item counts between two Public Folder stores (just remember the caveats at the top of the article).
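Incidentally, if your statistics files are small enough to load into memory, the core of the comparison (the inner join on FolderPath plus the absolute difference) can also be approximated directly in PowerShell, without Access. This is just a rough sketch under the assumptions above (two exports containing FolderPath and ItemCount; the file names are examples):

# Rough sketch: join the two exports on FolderPath and compute the difference
$counts = @{}
Import-Csv C:\file2.txt | ForEach-Object { $counts[$_.FolderPath] = [int]$_.ItemCount }
Import-Csv C:\file1.txt | Where-Object { $counts.ContainsKey($_.FolderPath) } |
    Select-Object FolderPath,
        @{n='SourceItems'; e={[int]$_.ItemCount}},
        @{n='TargetItems'; e={$counts[$_.FolderPath]}},
        @{n='Difference';  e={[Math]::Abs([int]$_.ItemCount - $counts[$_.FolderPath])}} |
    Export-Csv C:\comparison.csv -NoTypeInformation

Like the Access query, this only lists folders present in both files, so the same caveats apply.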

Thanks to Bill Long for reviewing my caveats and Oscar Goco for reviewing my steps with Access.

Chris Pollitt

Released: Update Rollup 1 for Exchange Server 2010 SP3


Update 6/13/13: We added a known issue with transport rules to the blog post below.

Today the Exchange CXP team released Update Rollup 1 for Exchange Server 2010 SP3 to the Download Center.

Note: Some of the following KB articles may not be available at the time of publishing this post.

This update contains fixes for a number of customer-reported and internally found issues. For more details, including a list of fixes included in this update, see KB 2803727. We would like to specifically call out the following fixes which are included in this release:

  • 2561346 Mailbox storage limit error when a delegate uses the manager's mailbox to send an email message in an Exchange Server 2010 environment
  • 2756460 You cannot open a mailbox that is located in a different site by using Outlook Anywhere in an Exchange Server 2010 environment
  • 2802569 Mailbox synchronization fails on an Exchange ActiveSync device in an Exchange Server 2010 environment
  • 2814847 Rapid growth in transaction logs, CPU use, and memory consumption in Exchange Server 2010 when a user syncs a mailbox by using an iOS 6.1 or 6.1.1-based device
  • 2822208 Unable to soft delete some messages after installing Exchange 2010 SP2 RU6 or SP3

For DST changes, see Daylight Saving Time Help and Support Center (microsoft.com/time).

A known issue with Exchange 2010 SP3 RU1 Setup

You cannot install or uninstall Update Rollup 1 for Exchange Server 2010 SP3 on the double-byte character set (DBCS) version of Windows Server 2012 if the language preference for non-Unicode programs is set to the default language. To work around this issue, you must first change this setting. To do this, follow these steps:

  1. In Control Panel, open the Clock, Region and Language item, and then click Region.
  2. Click the Administrative tab.
  3. In the Language for non-Unicode programs area, click Change system locale.
  4. On the Current system locale list, click English (United States), and then click OK.

After you successfully install or uninstall Update Rollup 1, revert this language setting, as appropriate.
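On Windows Server 2012 you can also script the locale change rather than clicking through Control Panel; a minimal sketch, assuming en-US is an acceptable temporary locale (a reboot is still required for the change to take effect):

# Hedged sketch: temporarily switch the system locale for the rollup install
Set-WinSystemLocale -SystemLocale en-US
Restart-Computer
# After installing or uninstalling the rollup, revert, e.g. back to Japanese:
Set-WinSystemLocale -SystemLocale ja-JP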

We have identified the cause of this problem and plan to resolve it in a future rollup, but did not want to further delay the release of RU1 for customers who are not impacted by it.

A known issue with transport rules after E2010 SP3 RU1 is installed

We have an issue where messages get stuck in the poison queue and the transport service continually crashes after this rollup is applied.

We have gathered enough information and have determined the cause.  Specifically, the issue is caused by a transport rule (disclaimer) attempting to append the disclaimer to the end of HTML-formatted messages.  When this occurs, messages will be placed in the poison queue and the transport service will crash with an exception.  We are investing resources to develop a code fix.  In the meantime, you can either disable or reconfigure the disclaimer transport rule.

Exchange Team

Log Parser Studio 2.0 is now available


Since the initial release of Log Parser Studio (LPS) there have been over 30,000 downloads, and thousands of customers use the tool on a daily basis. In Exchange support many of our engineers use the tool to solve real-world issues every day and in turn share it with our customers, empowering them to solve the same issues themselves moving forward. LPS is still an active work in progress; based on both engineer and customer feedback, many improvements have been made and multiple features added during the last year. Below is a short list of new features:

Improved import/export functionality

For those who create their own queries this is a real time-saver. You can now import from multiple XML files simultaneously, choosing only the queries you wish to import from each query library or XML file.

Search Query Results

The existing feature allowing searching of queries in the library is now context-aware, meaning that if you have a completed query in the query window, the search option searches that query; if you are in the library, it searches the library, and so on. This allows drilling down into existing query results without having to run a new query if all you want to do is narrow down existing result sets.

Input/Output Format Support

All Log Parser 2.2 input and output formats now have preliminary support in LPS. Each format has its own property window containing all known LP 2.2 settings, which can be modified to your liking.

Exchange Extensible Logging Support

Custom parser support was added for almost all Exchange logs. These are covered by the EEL and EELX log formats included in LPS, which cover Exchange logs from Exchange 2003 through Exchange 2013.

Query Logging

I can't tell you how many times I or another engineer spent lots of time creating the perfect query for a particular issue we were troubleshooting, only to forget to save the query in the heat of the moment and lose all that work. No longer! We now have the capability to log every query that is executed to a text file (Query.log). What makes this so valuable is that if you ran it, you can retrieve it.

Queries

There are now over 170 queries in the library including new sample queries for Exchange 2013.


PowerShell Export

You can now export any query as a standalone PowerShell script. The only requirement, of course, is that Log Parser 2.2 is installed on the machine you run it on; LPS itself is not required. There are some limitations, but you can essentially use LPS as a query editor/test bed for PowerShell scripts that run Log Parser queries for you!
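While the exact shape of an LPS-exported script may differ, running a Log Parser 2.2 query from PowerShell generally boils down to its COM interface. A minimal sketch, assuming Log Parser 2.2 is installed (the query and log path are examples only):

# Hedged sketch: run a Log Parser 2.2 query via its COM objects
$logQuery = New-Object -ComObject MSUtil.LogQuery
$inputFormat = New-Object -ComObject MSUtil.LogQuery.IISW3CInputFormat
$sql = "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits FROM 'C:\inetpub\logs\LogFiles\W3SVC1\*.log' GROUP BY cs-uri-stem ORDER BY Hits DESC"
$records = $logQuery.Execute($sql, $inputFormat)
while (-not $records.atEnd()) {
    $rec = $records.getRecord()
    "{0}`t{1}" -f $rec.getValue("cs-uri-stem"), $rec.getValue("Hits")
    $records.moveNext()
}
$records.close()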


Query Cancellation

You can now submit a request to cancel a running query, which will succeed in many cases.

Keyboard Shortcuts

There are now 23 keyboard shortcuts. Be sure to check these out as they will save you lots of time. To display the shortcuts use CTRL+K or Help > Keyboard Shortcuts.

There are literally hundreds of improvements and features; far too many to list here, so be sure to check out our blog series with existing and upcoming tutorials, deep dives and more. If you are installing LPS for the first time you'll surely want to review the getting started series:

If you are already familiar with LPS and are installing this latest version, you'll want to check out the upgrade blog post here:

Additional LPS articles can be found here:

http://blogs.technet.com/b/karywa/

LPS doesn't require an install so just extract to the folder of your choice and run LPS.EXE. If you have the previous version of LPS and you have added your own custom queries to the library, be sure to export those queries as a backup before running the newest version. See the "Upgrading to LPS V2" blog post above when upgrading.

Kary Wall

A significant update to Remove-DirectBooking script is now available


A short while ago, we posted an article on how to Use Exchange Web Services and PowerShell to Discover and Remove Direct Booking Settings. We received a lot of constructive feedback with some noting that users can experience an issue when enabling the Resource Booking Attendant on mailboxes that were cleansed of their direct booking settings via the sample script we provided. Specifically, the following error can be encountered when the organizer is scheduling a regular non-recurring meeting against the resource mailbox:

“…declined your meeting because it is recurring. You must book each meeting separately with this resource.”

We have updated the script to prevent and correct this scenario, and we have also updated the article to reflect the changes.

In a nutshell, the issue is encountered when there is a divergence between the settings enabled/disabled on the Schedule+ Free/Busy System (Public) Folder item representing the user’s mailbox and those on the user’s local mailbox free/busy item. Outlook’s Direct Booking process actually queries the Schedule+ item’s Direct Booking settings when attempting to perform Direct Booking functionality. The Schedule+ folder tree normally contains an item holding a synced copy of the Direct Booking settings stored in the user’s localfreebusy mailbox item. The issue is encountered when the settings between the Schedule+ item and the local mailbox item do not match.

Normally, Outlook triggers a sync of the local mailbox item to the Schedule+ item via deliberate native MAPI code. However, in our case we are using EWS in the sample script, and that syncing trigger does not natively exist. We therefore updated the script to find the Schedule+ item and ensure its settings are congruent with the local item’s settings. The logic for this is actually a bit complicated for two main reasons:

  1. No Schedule+ item exists in the organization – There are valid scenarios where the Schedule+ item may not exist, such as when the mailbox was never opened with Outlook and the Direct Booking settings were enabled via another means, such as MFCMAPI and so on.
  2. Co-existent versions of Exchange - EWS is rather particular about how public folder and public folder item bindings can occur. EWS by design will not allow a cross-version public folder (or item) bind operation. Period. This means, for example, that a session on an Exchange 2010 mailbox would not be able to bind to a public folder or its items on Exchange 2007; there would need to be a replica of the folder on Exchange 2010 for the bind operation to be successful. Further, continuing our example, even if there is a replica on Exchange 2010, the bind operation would still fail if the user’s mailbox database’s “default public folder database” is set to a non-2010 public folder database (i.e. an Exchange 2007 database). The EWS session would kick back an error stating: ‘There are no public folder servers available’.

With these guidelines in mind, we approached the script update to maximize the congruency potential between the local mailbox item and the public folder item. We only disable the direct booking settings in the local mailbox item if one of the following criteria is met regarding the Schedule+ item:

  • We can successfully bind to the user’s Schedule+ item
    • There is a replica we can touch with the EWS session, and we found the item representing the user and we can therefore safely keep congruency between the local and the Schedule+ items.
  • There is no replica present that would potentially contain an item representing the user
    • There is no replica in the org (any version of exchange) that would contain an item for the user so there is no potential for getting into an incongruent state between the local and the Schedule+ items.
  • There is a replica of the Schedule+ folder on the same version of Exchange that the EWS session is connected to, AND the default public folder database of the user is likewise on the same version of Exchange.
    • We could not find a Schedule+ item for the user (if we had, we would have satisfied condition 1 above), but not because there was no replica containing the item (that would have satisfied condition 2 above), and not because of the cross-version bind limitations we outlined above. We can therefore state that congruency between the local and the Schedule+ items is not at risk and there is no Schedule+ item representing the user.

It should be noted that we will always take action to disable the Direct Booking settings from the Schedule+ item even if the local mailbox item does not have its Direct Booking settings enabled – this keeps us true to our “congruency” logic.
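To make the mechanics a little more concrete, here is a minimal, heavily hedged sketch (not the sample script itself) of reading the three Direct Booking flags from a mailbox’s localfreebusy item with the EWS Managed API. The property tags are the documented PidTagScheduleInfo* properties; the DLL path and SMTP address are examples only:

# Hedged sketch: read the Direct Booking flags from the localfreebusy item
Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\2.0\Microsoft.Exchange.WebServices.dll"
$ews = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService
$ews.UseDefaultCredentials = $true
$ews.AutodiscoverUrl("room1@contoso.com")

# 0x686D AutoAcceptAppointments, 0x686E DisallowRecurringAppts, 0x686F DisallowOverlappingAppts
$boolType = [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::Boolean
$flags = 0x686D, 0x686E, 0x686F | ForEach-Object {
    New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition($_, $boolType) }

# The localfreebusy item normally lives in the non-IPM root with this item class
$filter = New-Object -TypeName "Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo" -ArgumentList ([Microsoft.Exchange.WebServices.Data.ItemSchema]::ItemClass), "IPM.Microsoft.ScheduleData.FreeBusy"
$view = New-Object Microsoft.Exchange.WebServices.Data.ItemView(1)
$found = $ews.FindItems([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Root, $filter, $view)
$props = New-Object Microsoft.Exchange.WebServices.Data.PropertySet([Microsoft.Exchange.WebServices.Data.BasePropertySet]::IdOnly)
$flags | ForEach-Object { $props.Add($_) }
foreach ($item in $found.Items) {
    $item.Load($props)
    $item.ExtendedProperties | ForEach-Object { "0x{0:X4} = {1}" -f $_.PropertyDefinition.Tag, $_.Value }
}

The sample script goes well beyond this, of course; it also locates and updates the matching Schedule+ public folder item according to the rules above.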

In closing, please remember that the script is a sample and does not cover every possible scenario out there – we made this update because the aforementioned issue reported is central to having the script produce the desired outcome of fully disabling Direct Booking. We are thankful for and welcome your continued feedback!

Dan Smith & Seth Brandes
Exchange PFEs

Life in a Post TMG World – Is It As Scary As You Think?


Let’s start this post about Exchange with a common question: Now that Microsoft has stopped selling TMG, should I rip it out and find something else to publish Exchange with?

I have occasionally tried to answer this question with an analogy. Let’s try it.

My car (let’s call it Threat Management Gateway, or TMG for short) isn’t actively developed or sold any more (like TMG). However, it (TMG) works fine right now, it does what I need (publishes Exchange securely), and I can get parts for it and have it serviced as needed (extended support for TMG ends in 2020), so I’m keeping it. When it eventually either doesn’t meet my requirements (I want to publish something it can’t do) or runs out of life (2020, but it could be later if I am OK accepting the risk of no support), then I’ll replace it.

Now, it might seem odd to offer up a car analogy to explain why Microsoft no longer selling TMG is not a reason for Exchange customers to panic, but I hope you’ll agree it works, and it leads you to conclude that when something stops being sold, like your car, you don’t immediately replace it; instead, you think about the situation and decide what to do next. You might well decide to go ahead and replace TMG simply based on our decision to stop selling or updating it, and that’s fine, but just make sure you are thinking the decision through.

Of course, you might also decide not to buy another car. Your needs have changed. Think about that.

Here are some interesting Exchange-related facts to help further cement the idea I’m eventually going to get to.

  1. We do not require traffic to be authenticated prior to hitting services in front of Exchange Online.
  2. We do not do any form of pre-authentication of services in front of our corporate, on-premises messaging deployments either.
  3. We have spent an awfully large amount of time as a company working on securing our code, writing secure code, testing our code for security, and understanding the threats that exist to our code. This is why we feel confident enough to do #1 and #2.
  4. We have come to learn that adding layers of security often adds little additional security, but certainly lots of complexity.
  5. We have invested in getting our policies right and monitoring our systems.

This basically says we didn’t buy another car when ours didn’t meet our needs any more. We don’t use TMG to protect ourselves any more. Why did we decide that?

To explain that, you have to cast your mind back to the days of Exchange and Windows 2000. The first thing to admit is that our code was less ‘optimal’ (that’s a polite way of putting it), and there were security issues caused by anonymous access. So, how did we (Exchange) tell you to guard against them? By using something called ISA (Internet Security and Acceleration – which is an odd name for what it was, a firewall). ISA, amongst other things, did pre-authentication of connections. It forced users to authenticate to it, so it could then allow only authenticated users access to Exchange. It essentially stopped anonymous users getting to Windows and Exchange. Which was good for Windows and Exchange, because there were all kinds of things that they could do if they got there anonymously.

However, once authenticated users got access, they too could still do those bad things if they chose to. And so, of course, could anyone not coming through ISA, such as internal users. So why would you use ISA? It was so that you would know who these external users were, wasn’t it?

But do you really think that’s true? Do you think most customers a) noticed something bad was going on and b) trawled logs to find out who did it? No, they didn’t. So it was a bit like an insurance policy: you bought it, you knew you had it, you didn’t really check whether it covered what you were doing until you needed it, and by then it was too late; you found out your policy didn’t cover that scenario and you were in the deep doo doo.

Insurance alone is not enough. If you put any security device in front of anything, it doesn’t mean you can or should just walk away and call it secure.

So at around the same time as we were telling customers to use ISA, back in the 2000 days, the whole millennium bug thing was over, and the proliferation of the PC and the Internet continued to expand. This is a very nice write-up on the Microsoft view of the world.

Those industry changes ultimately resulted in something we called Trustworthy Computing. Which was all about changing the way we develop software – “The data our software and services store on behalf of our customers should be protected from harm and used or modified only in appropriate ways. Security models should be easy for developers to understand and build into their applications.” There was also the Secure Windows Initiative. And the Security Development Lifecycle. And many other three letter acronyms I’m sure, because whatever it was you did, it needed a good TLA.

We made a lot of progress over those ten years since then. We delivered on the goal that the security of the application can be better managed inside the OS and the application rather than at the network layer.

But of course most people still seem to think of security as being mainly at the network layer, so think for a moment about what your hardware/software/appliance-based firewall does today. It allows connections from a source, on some configurable protocol/port, to a configured destination protocol/port.

If you have a load balancer, and you configure it to allow inbound connections to an IP on its external interface, to TCP 443 specifically, telling it to ignore everything else, and it takes those packets and forwards them to your Exchange servers, is that not the same thing as a firewall?

Your load balancer is a packet filtering firewall. Don’t tell your load balancing vendor that, they might want to charge you extra for it, but it is. And when you couple that packet level filtering firewall/load balancer with software behind it that has been hardened for 10 years against attacks, you have a pretty darn secure setup.

And that is the point. If you hang one leg of your load balancer on the Internet, and one leg on your LAN, and you operate a secure and well managed Windows/Exchange Server – you have a more secure environment than you think. Adding pre-authentication and layers of networking complexity in front of that buys you very little extra, if anything.

So let’s apply this directly to Exchange, and try and offer you some advice from all of this. What should YOU do?

The first thing to realize is that you now have a CHOICE. And the real goal of this post is to help you make an INFORMED choice. If you understand the risks, and know what you can and cannot do to mitigate them, you can make better decisions.

Do I think everyone should throw out that TMG box they have today and go firewall commando? No, not at all. I think they should evaluate what it does for them, and whether they need it going forward. If they do that and decide they still want pre-auth, then they should find something that can do it when the time to replace TMG comes.

You could consider it a sliding scale of choice. Something like this, perhaps:

[Diagram: a sliding scale of publishing options, from a plain load balancer on the left, through ARR and WAP, to TMG/UAG on the right]

So this illustrates that there are some options and choices:

  1. Just use a load balancer – as discussed previously, a load balancer that only allows in specified traffic is a packet filtering firewall. You can’t just put it there and leave it though; you need to make sure you keep it up to date, keep your servers up to date and possibly employ some form of IDS solution to tell you if there’s a problem. This is what Office 365 does.
  2. TMG/UAG – at the other end of the scale are the old-school ‘application level’ firewall products. Microsoft has stopped selling TMG, but as I said earlier, that doesn’t mean you can’t use it if you already have it, and it doesn’t stop you using it if you buy an appliance with it embedded.

In the middle of these two extremes (though ARR is further to the left of the spectrum as shown in the diagram) are some other options.

Some load balancing vendors offer pre-authentication modules, if you absolutely must have pre-auth (but again, really… you should question the reason), some use LDAP, some require domain joining the appliance and using Kerberos Constrained Delegation, and Microsoft has two options here too.

The first (and favored by pirates the world over) is Application Request Routing, or ARR! for short. ARR! (the ! is my own addition; marketing didn’t add that to the acronym, but if marketing were run by pirates, they would have) “is a proxy based routing module that forwards HTTP requests to application servers based on HTTP headers and server variables, and load balance algorithms” – read about it here, and in the series of blog posts we’ll be posting here in the not too distant future. It is a reverse proxy. It does not do pre-authentication, but it does let you put a non-domain joined machine in front of Exchange to terminate the SSL; if your 1990s-style security policy absolutely requires that, ARR is an option.

The second is WAP. Another TLA. Recently announced at TechEd 2013 in New Orleans is the upcoming Windows Server 2012 R2 feature, Web Application Proxy: a Windows feature focused on browser- and device-based access, with strong ADFS support. WAP is the direction the Windows team is investing in these days. It can currently offer pre-authentication for OWA access, but not for Outlook Anywhere or ActiveSync. See a video of the TechEd session here (the US session) and here (the Europe session).

Of course all this does raise some tough questions. So let’s try and answer a few of those:

Q: I hear what you are saying, but Windows is totally insecure, my security guy told me so.

A: Yes, he’s right. Well he was right, in the yesteryear world in which he formed that opinion. But times have changed, and when was the last time he verified that belief? Is it still true? Do things change in this industry?

Q: My security guy says Microsoft keeps releasing security patches and surely that’s a sign that their software is full of holes?

A: Or is the opposite true? All software has the potential for bugs and exploits, and not telling customers about risks, or releasing patches for issues discovered is negligent. Microsoft takes the view that informed customers are safer customers, and making vulnerabilities and mitigations known is the best way of protecting against them.

Q: My security guy says he can’t keep up with the patches and so he wants to make the server ‘secure’ and then leave it alone. Is that a good idea?

A: No, it’s not. It’s not (I hope) what he does with his routers and hardware-based firewalls, is it? Software is a point-in-time piece of code. Security software guards against exploits and attacks it knows of today. What about tomorrow? None of us are saying Windows, or any other vendor’s solution, is secure forever, which is why a well-managed and secure network keeps machines monitored and patched. If he does not patch other devices in the chain, overall security is compromised. Patches are the reality of life today, and they are the way we keep up with the bad guys.

Q: My security guy says his hardware based firewall appliance is much more secure than any Windows box.

A: Sure. Right up to the point at which that device has a vulnerability exposed. Any security device is only as secure as the code that was written to counter the threats known at that time. After that, then it’s all the same, they can all be exploited.

Q: My security guy says I can’t have traffic going all the way through his 2 layers of DMZ and multitude of devices, because it is policy. It is more secure if it gets terminated and inspected at every level.

A: Policy. I love it when I hear that. Who made the policy? And when? Was it a few years back? Have the business requirements changed since then? Have the risks they saw back then changed any? Sure, they have, but rarely does the policy get updated. It’s very hard to change the entire architecture for Exchange, but I think it’s fair to question the policy. If they must have multiple layers, for whatever perceived benefit that gives (ask them what it really does, and how they know when a layer has been breached), there are ways to do that, but one could argue that more layers doesn’t necessarily make it better, it just makes it harder. Harder to monitor, and to manage.

Q: My security guy says if I don’t allow access from outside except through a VPN, we are more secure.

A: But every client who connects via a VPN adds one more gateway/endpoint to the network, don’t they? And they have access to everything on the network rather than just to a single port/protocol. How is that necessarily more secure? Plus, how many users like VPNs? Does making it harder to connect and get email, so people can do their job, make them more productive? No; it usually means they do less work, as they can’t be bothered to input a little code just so they can check email.

Q: My security guy says if we allow users to authenticate from the Internet to Exchange then we will be exposed to an account lockout Denial of Service (DoS).

A: Yes, he’s right. Well, he’s right only because account lockout policies are being used, something we’ve been advising against for years, as they invite account lockout DoS’s. These days, users typically have their SMTP address set to equal their User Principal Name (UPN) so they can log on with (what they think is) their email address. If you know someone’s email address, you know their account logon name. Is that a problem? Well, only if you use account lockout policies rather than using strong password/phrases and monitoring. That’s what we have been telling people for years. But many security people feel that account lockouts are their first line of defense against dictionary attacks trying to steal passwords. In fact, you could also argue that a bad guy trying out passwords and getting locked out now knows the account he’s trying is valid…

Note the common theme in these questions is obviously “the security guy said…”. It’s not that I have it in for security guys generally speaking, but they are the people who ask these questions, and in my experience some of them think their job is to secure access by preventing access. If you can’t get to it, it must be safe, right? Wrong. Their job is to secure the business requirements. Or put another way, to allow their business to do their work, securely. After all, most businesses are not in the business of security. They make pencils. Or cupcakes. Or do something else. And the job of the security folks working at those companies is to help them make pencils, or cupcakes, securely, not to stop them from doing those things.

So there you go, you have choices. What should you choose? I’m fine with you choosing any of them, but only if you choose the one that meets your needs, based on your comfort with risk, based on your operational level of skill, and based on your budget.

Greg Taylor
Principal Program Manager Lead
Exchange Customer Adoption Team

A reminder on real life performance impact of Windows SNP features


I think we are due for a reminder on best practices related to Windows features collectively known as “Microsoft Scalable Networking Pack (SNP)”, as it seems difficult to counter some of the “tribal knowledge” on the subject. Please also see our previous post on the subject.

Recently we had a customer call in on the subject, and this particular case stressed the importance of the Scalable Networking Pack features. The background of this case was that the customer was running Exchange 2010 SP2 RU6 on Windows 2008 R2 SP1, and they had multiple physical sites with a stretched DAG. This customer had followed our guidance from Windows 2003 times and disabled all of the relevant options on all of their servers, similar to below:

Receive-Side Scaling State : disabled
Chimney Offload State : disabled
NetDMA State : disabled
Direct Cache Access (DCA) : disabled
Receive Window Auto-Tuning Level : disabled
Add-On Congestion Control Provider : ctcp
ECN Capability : disabled
RFC 1323 Timestamps : disabled

The current problem was that the customer was trying to add copies of all their databases to a new physical site, so that they could retire an old disaster recovery site. The majority of these databases were around 1 to 1.5TB in size, with some ranging up to 3TB. The customer stated that the databases took five days to reseed, which was unacceptable in his mind, especially since he had to decommission this site in two weeks. After digging into this case a little bit more and referencing this article, we first started by looking at the network drivers. With all latency or transport issues over a WAN or LAN, we should always make sure that the network drivers are updated. Since the majority of the servers in this customer’s environment were virtual servers running the latest version of the virtualization software, we switched our focus over to the physical machines. When we looked at the physical machines we saw they had a network driver with a publishing date of December 17, 2009.
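As an aside, driver dates are easy to inventory from PowerShell via WMI; a small sketch (DriverDate is returned in WMI datetime format):

# List network drivers with version and date to spot stale ones
Get-WmiObject Win32_PnPSignedDriver |
    Where-Object { $_.DeviceClass -eq 'NET' } |
    Select-Object DeviceName, DriverVersion, DriverDate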

At this point I recommended updating the network driver to a newer version, with a driver date of at least 2012. We then tested again, and still saw transfer speeds roughly similar to those before updating the drivers. At that point I asked the customer to change the scalable networking pack items from above to:

Receive-Side Scaling State : enabled
Chimney Offload State : automatic
NetDMA State : enabled

(Here is how you change these items.)
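In case that link is unavailable: on Windows Server 2008 R2 the first two settings are netsh commands, while NetDMA is controlled by a registry value. A hedged sketch (reboot afterwards, and verify with netsh int tcp show global):

netsh int tcp set global rss=enabled
netsh int tcp set global chimney=automatic
# NetDMA has no netsh 'set' option on 2008 R2; it is governed by this registry value
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -Name EnableTCPA -Value 1 -Type DWord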

The customer changed the SNP features and then rebooted the machines in question. At that time he started to reseed a 2.2TB database across their WAN at around 12pm. The customer sent me an email later that night stating the database would now take around 12 hours to reseed. The next morning he sent me another email: the logs were copied over before he showed up for work at 7am. This time the reseed took 19 hours to complete, compared to 100+ hours with the SNP features disabled. The customer stated that he was very happy, and started planning how to upgrade network drivers on all other physical machines in his environment. Once that was done he was going to change RSS, TCP Chimney, and NetDMA to the recommended values on all of his other Windows 2008 R2 SP1 machines.

The following two articles show the current recommendations for the Scalable Networking Pack features:

  1. Here is the document reference above that shows the correct settings for each version
  2. Even though this article specifies SQL Server, it is still relevant to the operating system that Exchange sits on.

So, what exactly is our point?

Friends don’t let friends run their modern OS servers with old network drivers and SNP features turned off! As mentioned in our previous blog post on the subject, please make sure that you update network-level drivers first, as many vendors have made fixes in their driver stacks to ensure that SNP features function correctly. The above is just one illustration of the issues that incorrect settings in this area can bring to your environment.

David Dockter

Released: Update Rollups for Exchange 2007 & Exchange 2010 and Security Updates for Exchange 2013


Update 8/14/13: Due to an issue with the Exchange 2013 Security Update installation process, the Exchange 2013 updates have been removed from the Download Center. For more information, please see Exchange 2013 Security Update MS13-061 Status Update.

Today, Exchange Servicing released several updates for the Exchange product line to the Download Center:

  • Update Rollup 11 for Exchange Server 2007 SP3
  • Update Rollup 7 for Exchange Server 2010 SP2
  • Update Rollup 2 for Exchange Server 2010 SP3
  • Exchange Server 2013 RTM CU1 MSRC Security bulletin MS13-061
  • Exchange Server 2013 RTM CU2 MSRC Security bulletin MS13-061

Note: Some of the following KB articles may not be available at the time of this article’s publishing.

Exchange 2007 Rollups

The Exchange 2007 SP3 RU11 update contains two fixes in addition to the changes for MS13-061. For more details, including a list of fixes included in this update, see KB 2873746 and the MS13-061 security bulletin. We would like to specifically call out the following fixes which are included in this release:

  • 2688667 W3wp.exe consumes excessive CPU resources on Exchange Client Access servers when users open recurring calendar items in mailboxes by using OWA or EWS
  • 2852663 The last public folder database on Exchange 2007 cannot be removed after migrating to Exchange 2013 

Exchange 2010 Rollups

The Exchange 2010 SP2 RU7 update contains the changes for MS13-061.  For more details, see the MS13-061 security bulletin.

The Exchange 2010 SP3 RU2 update contains fixes for a number of customer-reported and internally found issues, as well as the changes for MS13-061. For more details, including a list of fixes included in this update, see KB 2866475 and the MS13-061 security bulletin. We would like to specifically call out the following fixes which are included in this release:

  • 2861118 W3wp.exe process for the MSExchangeSyncAppPool application pool crashes in an Exchange Server 2010 SP2 or SP3 environment
  • 2851419 Slow performance in some databases after Exchange Server 2010 is running continuously for at least 23 days
  • 2859596 Event ID 4999 when you use a disclaimer transport rule in an environment that has Update Rollup 1 for Exchange Server 2010 SP3 installed
  • 2873477 All messages are stamped by MRM if a deletion tag in a retention policy is configured in an Exchange Server 2010 environment
  • 2860037 iOS devices cannot synchronize mailboxes in an Exchange Server 2010 environment
  • 2854564 Messaging Records Management 2.0 policy can't be applied in an Exchange Server 2010 environment

Exchange Server 2013

MS13-061 is the first security update released for Exchange Server 2013 utilizing the new servicing model.  MS13-061 is available as a security update for Exchange Server 2013 RTM CU1 and Exchange Server 2013 RTM CU2.

Important: If you have previously deployed CU2, you must ensure you are running build 712.24 in order to apply the security update. For more information about build 712.24, please see Now Available: Updated Release of Exchange 2013 RTM CU2.

Ross Smith IV
Principal Program Manager
Exchange Customer Experience


Hybrid Mailbox moves and EMC changes


As customers are upgraded or sign up for the latest version of Office 365 for business, they may notice some changes in their recipient and organization management experience. For instance, we now allow the Organization and Hybrid Migration management experience to be initiated from the Exchange Administration Center (EAC) in Exchange Online. This allows you as the administrator to take advantage of many of the enhancements that were added to the migration and organization management experience. The good news is you can do this without having to upgrade your on-premises Exchange 2010 Hybrid server to Exchange 2013.

Hybrid Mailbox Moves

In hybrid deployments with Exchange Online, you should consider using the EAC to perform your hybrid mailbox moves. Some of the main features that have been added or enhanced are listed in Mailbox Moves in Exchange 2013:

  • Ability to move multiple mailboxes in large batches
  • Email notification during move with reporting
  • Automatic retry and automatic prioritization of moves
  • Option for manual move request finalization, which allows you to review your move before you complete it
  • Periodic incremental syncs to update migration changes

The other benefit to using the EAC in Exchange Online is that it's a web-based tool, so you have access to the latest enhancements as they are made available in Exchange Online. The Migration team is continually working to improve the migration experience and resolve issues that we may face along the way.

The Exchange Management Console (EMC) in Exchange 2010 SP3 is still a supported tool for performing hybrid mailbox moves. However, you'd be missing out on the enhancements that were built into the EAC and you could potentially run into some of the limitations that exist in the EMC when connecting to the new service.

Previous EMC move mailbox experience

Most customers with a hybrid deployment are currently running Exchange 2010 in their on-premises environment. Many customers are accustomed to performing their mailbox moves from the EMC on-premises. If there was an issue initiating the move, the customer would be notified with an error message such as the following:

Figure 1: A generic error when performing a remote mailbox move using the Exchange Management Console in Exchange 2010 SP3

This would let the administrator know that they needed to start investigating the moves. In many cases, when you get a generic error (such as the error above) or if you wanted to get a more verbose error message you would use PowerShell to connect to Exchange Online and initiate the move, as shown in the following steps:

  1. Connect to Exchange Online Using Remote PowerShell
  2. Initiate the move request:

    $OnPremAdmin=Get-Credential

    When prompted, enter the on-premises administrator credentials.

    New-MoveRequest -Remote -RemoteHostName mail.contoso.com -RemoteCredential $OnPremAdmin -TargetDeliveryDomain "contoso.mail.onmicrosoft.com"
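For reference, the Exchange Online connection in step 1 typically looks like the following sketch (the URI shown was the standard endpoint at the time of writing):

    $cred = Get-Credential    # your tenant administrator credentials
    $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $cred -Authentication Basic -AllowRedirection
    Import-PSSession $session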

Current EMC move mailbox experience

With the latest version of Office 365, if you were to use the EMC on-premises to initiate a mailbox move and there was an issue, you would not get ANY errors, meaning you might think the move was submitted successfully even though it was never submitted to the Mailbox Replication Service (MRS). This could mean no move logs, no moves in the queue, and no record that a move was even attempted! Although using the EMC to move mailboxes to Exchange Online is (still) supported, not all of the move features are available in the EMC. You should really use the EAC in the cloud to get the best and most reliable administration experience for your mailbox moves.

Moving mailboxes to Exchange Online using the EAC

Here's a quick walkthrough of how to move a mailbox from Exchange 2013/2010/2007/2003 in a hybrid environment with the new service.

This is assuming you've successfully completed hybrid configuration and your on-premises Exchange organization happily coexists with your cloud-based Exchange organization. You can use the Microsoft Exchange Server Deployment Assistant (EDA), our web-based tool, to get step-by-step instructions for configuring a hybrid environment.

  1. Log into https://portal.MicrosoftOnline.com with your tenant administrator credentials

  2. In the top ribbon, click Admin and then select Exchange

    image

  3. Click Migration, click +, and then select the appropriate move mailbox option. In this example, we're moving a mailbox to the cloud, so we select Migrate to Exchange Online.

    image

  4. On the Select a migration type page, select Remote move migration as the migration type for a hybrid mailbox move.

    image

  5. On the Select the users page, select the mailboxes you want to move to the cloud.

    image

  6. On the Enter on-premises account credentials page, provide your on-premises administrator credentials in the domain\user format.

    image

  7. On the Confirm the migration endpoint page, ensure that the on-premises endpoint shown is the CAS with MRS Proxy enabled.

    Note This will be the same endpoint regardless of whether you're moving a mailbox to the cloud (onboard request) or moving a mailbox from the cloud to your on-premises Exchange organization (offboard request).

    image

  8. Enter a name for the migration batch and initiate the move.
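If you later want to script this same batch-based experience, the EAC steps above map to the migration cmdlets in Exchange Online PowerShell. A hedged sketch, assuming a hybrid migration endpoint already exists and users.csv contains an EmailAddress column (names and paths are examples only):

# Hedged sketch: create and start an onboarding batch from Exchange Online PowerShell
$endpoint = Get-MigrationEndpoint "mail.contoso.com"
New-MigrationBatch -Name "HybridMoves1" -SourceEndpoint $endpoint.Identity -TargetDeliveryDomain "contoso.mail.onmicrosoft.com" -CSVData ([System.IO.File]::ReadAllBytes("C:\users.csv")) -AutoStart
Get-MigrationBatch "HybridMoves1"    # monitor progress; finish later with Complete-MigrationBatch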

EMC Changes

In Exchange 2010, the EMC has three sections: 1) Organization Configuration, 2) Server Configuration, and 3) Recipient Configuration. These sections will be visible to administrators as long as they have permissions, via RBAC, to the objects that are in that container.

Before the service upgrade, if a customer was in a hybrid deployment and connected their Exchange 2010 SP2 EMC to Office 365, they'd see two sections (Organization Configuration and Recipient Configuration) in their cloud organization. The Tenant Administrator would not see the Server Configuration section because tenants have no control over, or view into, server-level settings.

Screenshot: Exchange Management Console (EMC) showing the on-premises and cloud organizations
Figure 2: Exchange 2010 SP2 EMC connected to Exchange Online before the service upgrade

After the service upgrade, customers will see only one container in the cloud organization in the Exchange 2010 SP3 EMC: Recipient Configuration. We removed the Organization Configuration container because we don't support (or test) allowing an Exchange 2010 EMC to control or change organizational settings in a newer version of Exchange, so to prevent issues we've removed that container from the EMC view entirely. Configuration changes to the tenant's Exchange organizational settings in Exchange Online should be made by connecting to the EAC in Exchange Online.

Screenshot: Exchange 2010 SP3 EMC connected to Exchange Online after the service upgrade
Figure 3: Exchange 2010 SP3 EMC connected to Exchange Online after the service upgrade

Timothy Heeney

Released: The new Exchange Server Deployment Assistant


We’ve listened to your feedback for improving the on-premises and hybrid Exchange Server deployment experience, and we’re happy to announce the release of the new, consolidated Deployment Assistant!

The Exchange Server Deployment Assistant now combines all the on-premises and hybrid deployment scenarios from both the Exchange 2013 Deployment Assistant and the Exchange 2010 Deployment Assistant into a single tool. We’ve eliminated the need for the installation of Silverlight and provide guidance for all Exchange Server deployments in a true one-stop shop experience. We’ve also kept the same, convenient question-and-answer format to create a customized, step-by-step checklist with instructions to deploy Exchange 2013 or Exchange 2010.

Starting your Exchange deployment is familiar and convenient, whether you're deploying Exchange 2013 or Exchange 2010. Just select one of the three basic deployment tracks (On-premises, Hybrid, or Cloud Only) to get started, as shown in Figure 1.

Figure 1: The Exchange Server Deployment Assistant home page

After selecting your basic deployment track, you’ll be able to choose from either an Exchange 2013-based or Exchange 2010-based path for either on-premises or hybrid deployment scenarios. If you selected the Cloud Only scenario, the deployment path is the same for both Exchange 2013 and Exchange 2010 organizations and you’re on your way to getting started with an Exchange Online-only deployment.

Figure 2: Choose either the Exchange 2013 or Exchange 2010 deployment path

After you’ve chosen either the Exchange 2013 or Exchange 2010 deployment path (see Figure 2), you’ll answer a few questions about your deployment needs and you’re off to the races with your customized deployment checklist! For example, see Figure 3, which shows a checklist for an on-premises Exchange 2013 deployment.

Figure 3: Exchange 2013 on-premises deployment path

And, here’s some more good news! If you’ve bookmarked links to the Exchange 2013 and Exchange 2010 Deployment Assistants, there’s no action required to go to the new tool when using your bookmarks. You’ll be automatically redirected to the new tool when using the URL for the previous version of the Deployment Assistant.

We hope you enjoy the convenience of having all the Exchange 2013 and Exchange 2010 deployment scenario guidance in a single tool. We’d love your feedback and comments! Please feel free to leave a comment here, or send an email to edafdbk@microsoft.com directly or via the 'Feedback' link located in the header of every page of the Deployment Assistant.

Happy deploying!

The Deployment Assistant Team

Under The Hood: Exchange ActiveSync Mailbox Log Analysis


Note: Part 2 of this series can be found here.

One of the best troubleshooting tools for Exchange ActiveSync (EAS) is mailbox logging. This logging allows us to see the incoming request sent by the device and the outgoing response from the Exchange server. Exchange ActiveSync Mailbox Logging provides the steps for enabling ActiveSync mailbox logging and breaks down the components of the log. Here we're going to use one of these logs to analyze how a mobile device running an EAS client (e.g., a Windows Phone) initializes a profile with Exchange, along with a few standard commands.

Provision

A device must be provisioned before it can synchronize with Exchange. The device sends the Provision command with the device settings contained within the request. The server response includes the security settings based on the ActiveSync mailbox policy associated with the mailbox. It is important to note that there are two status codes for most ActiveSync requests. The HttpStatus code only provides the IIS response to the request, and a 200 response does not mean the request was successful. The second status code is for the ActiveSync command and varies depending on the command sent by the device. A status code of 1 is most commonly a success.

Screenshot: ActiveSync mailbox log entry showing the HttpStatus and command status codes

The following example shows the request and response from a Provision command:

Screenshot: Provision command request and response

The device sends another Provision command to complete the provisioning process. This request includes the policy key from the previous response. The following example shows the second Provision command sending the PolicyKey.

Screenshot: The second Provision command sending the PolicyKey

For a more detailed analysis of the EAS provisioning process and policies, see Provisioning, Policies, Remote Wipe, and ABQ in Exchange ActiveSync on the Exchange Dev Blog.

FolderSync

Once the device is provisioned, it will send a FolderSync command to obtain the folder hierarchy of the mailbox. If you capture this FolderSync request in the ActiveSync mailbox log, you will have the folder name that correlates to the CollectionId values in future ActiveSync requests. Alternatively, see ActiveSync - Mapping a Collection ID to a Mailbox Folder to determine which folder the CollectionId represents. The following example shows the response from a FolderSync request:

Screenshot: FolderSync response showing the folder hierarchy

Sync

After the EAS client has obtained the folder hierarchy of the mailbox from Exchange, it can begin to populate folders on the device. Windows Phone leverages a hanging Sync request to retrieve data from these folders. We should, however, expect the first Sync request from this device to get an immediate response with new items for one or more folders. It is also important to notice that the SyncKey sent by the device in this first request is 0, because the folders have just been created on the device and currently have no synchronization state. The response for each Sync request will include a new SyncKey value that the subsequent Sync request should send. The following example shows the request and response for a Sync command:

Screenshot: Sync command request and response

We will typically see either no status code or a status code of 1 in the response to a Sync request. Any other status code would require further investigation by reviewing the protocol document. A status code of 1 represents a successful Sync request, and no status code simply means there were no changes within the heartbeat interval for the request.

Typically you will see items being added to the device in the Sync response. You can see detailed information including the sender and subject if verbose logging has been enabled on the CAS servers. The following example shows a response sending a new item to the Inbox:

Screenshot: Sync response sending a new item to the Inbox

ItemOperations

There are several uses for the ItemOperations command, and one of the most common requests is for downloading an attachment onto the device. The request will contain the FileReference value for the attachment, which can be seen in the Sync response if available. The following examples show two responses for the ItemOperations command. We can see the first response is a success, with the server sending the attachment. However, the second response throws an exception and has a different status code. Look up this status code in the protocol document for more information on the exception.

Screenshot: Two ItemOperations responses, a success followed by an exception

Were you able to determine the issue? The exception within the ActiveSync mailbox log for this example does provide enough detail to know the attachment was too large. It is very important to know how to use the protocol document to look up status codes for the various ActiveSync commands.

Calendaring

Now that we have covered the most common command found in the ActiveSync mailbox log (the Sync command), it is time to dig a little deeper. One of the most common issues with ActiveSync devices is calendaring. Most ActiveSync users rely on their mobile device to have accurate calendar information so they do not miss an appointment. Calendar items can be added to a mailbox as either an appointment created by the mailbox owner or a meeting request sent by either the mailbox owner or another organizer.

We are going to review the life of an appointment as we find it within the ActiveSync mailbox log. This appointment will start as a meeting request sent by an organizer within the organization. The following example shows a Sync response where this appointment is first added to the device:

Screenshot: Sync response adding the appointment to the device

Here we can see the appointment has a unique ServerId value for this item on the device. We also know that the appointment is currently showing a status of tentative in the BusyStatus. This is the standard placeholder that Exchange creates in the calendar when a new meeting request is received. The following example shows the corresponding meeting request:

Screenshot: The corresponding meeting request

The complex part of this process begins when the user responds to the appointment on the device. This response results in several requests which include MeetingResponse, SendMail, MoveItems, and Sync commands. We are going to cover each of these steps to see how the commands impact the items on the device and within the mailbox.

Screenshot: The sequence of commands sent when the user responds to the meeting

The MeetingResponse command is the first ActiveSync command sent by the device to accept, decline, or tentatively accept the meeting. This request does not send the response to the organizer. The request identifies the meeting request item within the Inbox that the response is for, while the response from the Exchange server also includes the appointment item. The following examples show the request and response for a MeetingResponse command:

Screenshot: MeetingResponse command request and response

The SendMail command sends the response message back to the organizer. The following is an example of the request for a SendMail command:

Screenshot: SendMail command request

The MoveItems command is sent by the device to move the meeting request item from the Inbox to the Deleted Items folder. The following example shows the request for a MoveItems command:

Screenshot: MoveItems command request

The Sync command is sent by the device to update the calendar item in the mailbox. This Sync request sends a Change for the appointment to update its status from Tentative to Busy. The following example shows the request sending a change for the BusyStatus:

Screenshot: Sync request sending a change for the BusyStatus

You may also notice another Sync command sent by the device, and that the response includes an Add for the Sent Items folder. Here we are getting the meeting acceptance message from Sent Items and adding it onto the device.

Screenshot: Sync response adding the acceptance message from Sent Items

Meeting Updates

All that we just covered was the original meeting request being received by the device. That is the origin of the appointment for our example. Next we need to look at how this appointment changes as time moves forward. The next evolution in this appointment's life is when the organizer sends an update for a single instance of the series.

A change to a recurring appointment is called an exception and that is exactly what we will see in the ActiveSync mailbox log. The first part of the response shows us that we have a Change for an item and further down within that response we will see the exceptions. The following example shows our appointment receiving a change and the exception includes a new start time:

Screenshot: Sync response with a Change whose exception includes a new start time

Wait. Our appointment has not stopped experiencing life changes. The organizer has decided to cancel an instance of this recurring appointment. The following example once again shows a change for our appointment but this time the exceptions have grown. This example was done intentionally so we can see how difficult it becomes to read these logs when an appointment has a large number of exceptions.

Screenshot: Sync response with a Change and a growing list of exceptions

The good news is these exceptions are sent in the order in which they were made, so the last exception is the most recent. In our example above, the last exception shows this instance of the meeting has been canceled.

The focus has intentionally been on the Calendar item and its changes. However we cannot forget that with each change to the appointment the user also gets an updated meeting request. This means we will also see a Sync request that includes a response adding the meeting request to the Inbox. The following example shows the response adding the updated meeting request:

Screenshot: Sync response adding the updated meeting request to the Inbox

Just like the original meeting request for the series, the user has the ability to accept and decline the changes from the device. If you do not remember the process, don’t hesitate to jump back and take a second look. That is exactly what this article is intended to show.

SendMail

The last topic we are going to cover is sending a message from a Windows Phone device. There are two commands that we may see from an ActiveSync device when a user is sending a message. The Search command is sent when a user types text into the To field, and it performs a search against the Global Address List. The following examples show a request and response for a Search command:

Screenshot: Search command request and response

Then the device will send a SendMail command when the user hits the Send icon. Unless an error is encountered during this request there should be an empty response from the Exchange server. The following example shows a request for the SendMail command:

Screenshot: SendMail command request

Conclusion

At this point you should have some understanding of how Exchange ActiveSync functions and what to look for in the ActiveSync mailbox log. Here are a few reminders:

  • Whenever the device initiates a new item or change, the request from the device will contain this data. Whenever the change is made on the mailbox, the response from the Exchange server will contain the data.
  • Windows Phone uses a hanging Sync command to wait for changes on the mailbox. This request contains a heartbeat interval which determines how long the server should wait before sending a response. A success will return a status code of 1 indicating there are changes. If there are no changes, then no status code is returned.
  • An updated meeting contains all of the exceptions for that appointment and the last exception is the most recent.
  • Accepting a meeting request on an EAS device is a complex process with multiple steps. It is recommended that you review this process if many users use their devices to accept meetings.
  • Current versions of Exchange require a minimum search length of four characters before Exchange will perform the query.
  • The SendMail command does not return a status code unless an error is encountered.

Jim Martin (EXCHANGE)

Analyzing Exchange Transaction Log Generation Statistics


Update 11/5/2013: added a section on firewall rules to try.

Overview

When designing a site resilient Exchange Server solution, one of the required planning tasks is to determine how many transaction logs are generated on an hourly basis. This helps figure out how much bandwidth will be required when replicating database copies between sites, and what the effects will be of adding additional database copies to the solution. If designing an Exchange solution using the Exchange Server Role Requirements Calculator, the percent of logs generated per hour is an optional input field.

Previously, the most common method of collecting this data involved taking captures of the files in each log directory on a scheduled basis (using dir, Get-ChildItem, or CollectLogs.vbs). Although the log number could be extracted by looking at the names of the log files, there was a lot of manual work involved in figuring out the highest log generation from each capture and getting rid of duplicate entries. Once cleaned up, the data still had to be analyzed manually using a spreadsheet or a calculator. Trying to gather data across multiple servers and databases further complicated matters.

To improve upon this situation, I decided to write an all-in-one script that could collect transaction log statistics and analyze them after collection. The script is called GetTransactionLogStats.ps1. It has two modes: Gather and Analyze. Gather mode is designed to be run on an hourly basis, at the top of the hour. When run, it will take a single set of snapshots of the current log generation number for all configured databases. These snapshots will be sent, along with the time the snapshots were taken, to an output file, LogStats.csv. Each subsequent time the script is run in Gather mode, another set of snapshots will be appended to the file. Analyze mode is used to process the snapshots that were taken in Gather mode, and should be run after a sufficient number of snapshots have been collected (at least two weeks of data is recommended). When run, it compares the log generation number in each snapshot to the previous snapshot to determine how many logs were created during that period.

Script Features

Less Data to Collect

Instead of looking at the files within log directories, the script uses Perfmon to get the current log file generation number for a specific database or storage group. This number, along with the time it was obtained, is the only information kept in the output log file, LogStats.csv. The performance counters that are used are as follows:

Exchange 2013:

MSExchangeIS HA Active Database\Current Log Generation Number

Exchange 2007/2010:

MSExchange Database ==> Instances\Log File Current Generation

Note: The counter used for Exchange 2013 only contains the active databases on that server. The counter used for Exchange 2007/2010 contains all databases on that server, including passive copies. To only get data from active databases on an Exchange 2007/2010 server, make sure to manually specify the databases for that server in the TargetServers.txt file.
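As a quick sanity check that remote collection will work at all, you can query the same counter manually. A minimal sketch, assuming an Exchange 2013 Mailbox server with a hypothetical name:

# Read the current log generation number for the active databases on one server.
# "EX13-MBX01" is a hypothetical server name.
Get-Counter -ComputerName "EX13-MBX01" -Counter "\MSExchangeIS HA Active Database(*)\Current Log Generation Number" |
    Select-Object -ExpandProperty CounterSamples |
    Select-Object InstanceName, CookedValue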

Multi Server/Database Support

The script takes a simple input file, TargetServers.txt, where each line specifies the server, or the server and databases, to process. If you want to get statistics for all databases on a server, only the server name is necessary. If you want to get only a subset of databases on a server (for instance, if you want to omit secondary copies on an Exchange 2007 or 2010 server), then you can specify the server name, followed by each database you want to process.

Built In Analysis Capability

The script has the ability to analyze the output log file, LogStats.csv, which was created when run in Gather mode. It does a number of common calculations for you, but also leaves the original data in case any other calculations need to be done. Output from running in Analyze mode is sent to multiple .CSV files, where one file is created for each database, and one more file is created containing the average statistics for all analyzed databases. The following columns are added to the CSV files:

  • Hour: The hour that log stats are being gathered for. Can be between 0 and 23.
  • TotalLogsCreated: The total number of logs created during that hour for all days present in LogStats.csv.
  • TotalSampleIntervalSeconds: The total number of seconds between each valid pair of samples for that hour. Because the script gathers Perfmon data over the network, the sample interval may not always be exactly one hour.
  • NumberOfSamples: The number of times that the log generation was sampled for the given hour.
  • AverageSample: The average number of logs generated for that hour, regardless of sample interval size. Formula: TotalLogsCreated / NumberOfSamples.
  • PercentDailyUsage: The percent of a full day's worth of logs that the AverageSample value for that hour accounts for. Formula: (AverageSample / AverageNumberOfLogsPer24Hours) * 100.
  • AverageSamplePer60Minutes: Similar to AverageSample, but adjusts the value as if each sample had been taken exactly 60 minutes apart. Formula: (TotalLogsCreated / TotalSampleIntervalSeconds) * 3600.
  • PercentDailyUsagePer60Minutes: Similar to PercentDailyUsage, but adjusts the value as if each sample had been taken exactly 60 minutes apart. Formula: (AverageSamplePer60Minutes / AverageNumberOfLogsPer24Hours) * 100. (See the worked example after this list.)
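To make the derived columns concrete, here is a small worked example; all input values are made up for illustration:

# Hypothetical inputs for one hour's worth of samples across several days
$TotalLogsCreated = 1200
$NumberOfSamples = 10
$TotalSampleIntervalSeconds = 37800     # samples averaged 63 minutes apart
$AverageNumberOfLogsPer24Hours = 28800

$AverageSample = $TotalLogsCreated / $NumberOfSamples                                                  # 120
$PercentDailyUsage = ($AverageSample / $AverageNumberOfLogsPer24Hours) * 100                           # ~0.42
$AverageSamplePer60Minutes = ($TotalLogsCreated / $TotalSampleIntervalSeconds) * 3600                  # ~114.3
$PercentDailyUsagePer60Minutes = ($AverageSamplePer60Minutes / $AverageNumberOfLogsPer24Hours) * 100   # ~0.40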

Parameters

The script has the following parameters:

  • -Gather: Switch specifying we want to capture current log generations. If this switch is omitted, the -Analyze switch must be used.
  • -Analyze: Switch specifying we want to analyze already captured data. If this switch is omitted, the -Gather switch must be used.
  • -ResetStats: Switch indicating that the output file, LogStats.csv, should be cleared and reset. Only works if combined with -Gather.
  • -WorkingDirectory: The directory containing TargetServers.txt and LogStats.csv. If omitted, the working directory will be the current working directory of PowerShell (not necessarily the directory the script is in).
  • -LogDirectoryOut: The directory to send the output log files from running in Analyze mode to. If omitted, logs will be sent to WorkingDirectory.
  • -MaxSampleIntervalVariance: The maximum number of minutes that the duration between two samples can vary from 60. If we are past this amount, the sample will be discarded. Defaults to a value of 10.
  • -MaxMinutesPastTheHour: How many minutes past the top of the hour a sample can be taken. Samples past this amount will be discarded. Defaults to a value of 15.
  • -MonitoringExchange2013: Whether there are Exchange 2013 servers configured in TargetServers.txt. Defaults to $true. If there are no 2013 servers being monitored, set this to $false to increase performance.

Usage

Run the script in Gather mode, taking a single snapshot of the current log generation of all configured databases:

PS C:\> .\GetTransactionLogStats.ps1 -Gather

Run the script in Gather mode, indicating that no Exchange 2013 servers are configured in TargetServers.txt:

PS C:\> .\GetTransactionLogStats.ps1 -Gather -MonitoringExchange2013 $false

Run the script in Gather mode, change the directory where TargetServers.txt is located and where LogStats.csv will be written, and reset any statistics already in LogStats.csv:

PS C:\> .\GetTransactionLogStats.ps1 -Gather -WorkingDirectory "C:\GetTransactionLogStats" -ResetStats

Run the script in Analyze mode:

PS C:\> .\GetTransactionLogStats.ps1 -Analyze

Run the script in Analyze mode, sending the output files for the analysis to a different directory. This specifies that only sample durations between 55 and 65 minutes are valid, and that each sample can be taken a maximum of 10 minutes past the hour before being discarded:

PS C:\> .\GetTransactionLogStats.ps1 -Analyze -LogDirectoryOut "C:\GetTransactionLogStats\LogsOut" -MaxSampleIntervalVariance 5 -MaxMinutesPastTheHour 10

Example TargetServers.txt

The following example shows what the TargetServers.txt input file should look like. For the server1 and server3 lines, no databases are specified, which means that all databases on the server will be sampled. For the server2 and server4 lines, we will only sample the specified databases on those servers. Note that no quotes are necessary for databases with spaces in their names.

Screenshot: Example TargetServers.txt contents
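Since the original screenshot isn't available here, the following is a hedged reconstruction of what the file might look like, assuming a comma-separated layout; the server names come from the description above, and the database names are made up:

server1
server2,Mailbox Database 01,Mailbox Database 02
server3
server4,DB03,DB04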

Output File After Running in Gather Mode

When run in Gather mode, the log generation snapshots that are taken are sent to LogStats.csv. The following shows what this file looks like:

Screenshot: LogStats.csv after several Gather mode runs

Output File After Running in Analyze Mode

The following shows the analysis for a single database after running the script in Analyze mode:

Screenshot: Analyze mode output for a single database

Notes

By default, the Windows Firewall on an Exchange 2013 server running on Windows Server 2012 does not allow remote Perfmon access. I suspect this is also the case with Exchange 2013 running on Windows Server 2008 R2, but I haven't tested it. If either of the below errors is logged, you may need to open the Windows Firewall on these servers to allow access from the computer running the script.

ERROR: Failed to read perfmon counter from server SERVERNAME

ERROR: Failed to get perfmon counters from server SERVERNAME

Update:

After noticing that multiple people were having issues getting this to work through the Windows Firewall, I tried enabling different combinations of built in firewall rules until I could figure out which ones were required. I only tested on an Exchange 2013 server running on Windows Server 2012, but this should apply to other Windows versions as well. The rules I had to enable were:

File and Printer Sharing (NB-Datagram-In)
File and Printer Sharing (NB-Name-In)
File and Printer Sharing (NB-Session-In)

Mike Hendrickson

Do you have a sleepy NIC?


I continue to run into this issue over and over in the field, so I wanted people to be aware of this possible problem. In a Database Availability Group (DAG), if your databases are randomly mounting or flipping from one server to another for no apparent reason (including across datacenters), you may be suffering from your network interface card (NIC) going to sleep. And that's not a good thing.

Power Management on the NIC

In the power management settings for the NIC on Windows Server, make sure you are not allowing the NIC to go into power save mode. Why is this important? It seems like at least once a month I run into customers who have this power management setting turned on, and more than one of them even had it turned on for their replication network. They were seeing some odd behavior - for example, their databases randomly flipping from one DAG node to another for no apparent reason. And yes, they were all on physical machines.

Here are the steps to look at this configuration: use Device Manager to change the power management settings for a network adapter.

To disable all Power Management settings in Device Manager, expand Network Adapters, right-click the adapter > Properties > Power Management, and then clear the Allow the computer to turn off this device to save power check box.

Screenshot: Network adapter properties | Power Management tab
Figure 1: Disable power management for the network adapter from the Power Management tab

Some of your network adapters may not have the Power Management tab available. This is a good thing, as your NIC is not able to go to sleep. This means there is one less item to worry about in your setup!

CAUTION Be careful when you change this setting. If it's enabled and you decide to disable it, you must plan for this modification, as it will likely interrupt network traffic. It may seem odd that a seemingly non-impacting change can make the NIC reset itself, but it definitely can. Trust me; I had a customer ‘test’ this during the day by accident… oops!

PowerShell to the rescue

In addition, now that PowerShell is able to be used for just about everything, there is this page that has a PS script available to make this change. There are additional links and related forum threads to review with supplementary information near the bottom of the script download page.

This script will run against all physical adapters on the machines you deploy it to, and you can also modify it to cover wireless NICs. With PowerShell, don't forget that you can use this script to push these changes down to all of your Exchange servers in a single step.
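If you can't grab the download, here is a minimal sketch of the same idea (this is not the linked script) using the MSPower_DeviceEnable WMI class; test it on a single box first, keeping in mind the reset warning above:

# For each physical network adapter, clear "Allow the computer to turn off
# this device to save power" by disabling the matching power-management instance.
$nics = Get-WmiObject Win32_NetworkAdapter -Filter "PhysicalAdapter = TRUE"
$power = Get-WmiObject -Namespace root\wmi -Class MSPower_DeviceEnable
foreach ($nic in $nics) {
    # MSPower_DeviceEnable instances are keyed by the adapter's PnP device ID
    $power | Where-Object { $_.InstanceName -like "$($nic.PNPDeviceID)*" } |
        ForEach-Object { $_.Enable = $false; $_.Put() | Out-Null }
}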

GPO and regedit

For those of you who are more comfortable with regedit and creating GPOs to help control these settings, that option is also available. This page has information on both 'one-off' fixes, where you download a .reg file and deploy it manually, and GPO Preferences, where you edit the values in a GPO and apply those changes to an Exchange Server OU (Organizational Unit).

The one thing to note with the regedit process is which NIC you are working with and how many NICs your server has; the registry identifies NICs only by their ordinal position (first, second, third, and so on). If you have identical builds across all of your servers, then this option ensures that all current and future servers placed into an OU with the GPO applied will adhere to the proper registry settings.
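If you'd rather set the value directly, here is a hedged sketch; PnPCapabilities = 24 (0x18) is the documented value that clears the power-management checkboxes, and the 0010 subkey is only an example, so enumerate the subkeys and check the DriverDesc value to find the one matching your adapter:

# Hedged sketch: the two-digit subkey (0010 here) varies per adapter.
$class = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}"
Set-ItemProperty -Path "$class\0010" -Name "PnPCapabilities" -Value 24 -Type DWord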

Also don't forget, you can record all of your changes on Windows Server 2008 R2 or a later OS by using the Problem Steps Recorder (PSR) tool.

There you have it: if your DAG databases are randomly becoming active on another server for no apparent reason, you may have a sleepy NIC. Please confirm that you have avoided this setting as you build out not only your DAG environment, but all Exchange-related servers. Thank you.

Mike O'Neill

Released: Update Rollup 3 For Exchange 2010 Service Pack 3


The Exchange team is announcing today the availability of Update Rollup 3 for Exchange Server 2010 Service Pack 3. Update Rollup 3 is the latest rollup of customer fixes available for Exchange Server 2010. The release contains fixes for customer reported issues and previously released security bulletins. Update Rollup 3 is not considered a security release as it contains no new previously unreleased security bulletins. A complete list of issues resolved in Exchange Server 2010 Service Pack 3 Update Rollup 3 may be found in KB2891587.

Note: The KB article may not be fully available at the time of publishing this post.

The release is now available on the Microsoft Download Center.

The Exchange Team

Under The Hood: Exchange ActiveSync Mailbox Log Analysis – Part 2


The previous post for Exchange ActiveSync mailbox log analysis gave an overview of the various commands a device may send. Now we want to dig just a little bit deeper and provide a way to link items within an EAS mailbox log to the items inside the mailbox.

Unless verbose logging is enabled, you do not see the full details of the item (subject, sender, etc.). This leads us to the question: how do you know which item in the mailbox an ActiveSync request/response was for? The next few sections show you how to correlate an appointment, a message, and an attachment between the mailbox log and the mailbox contents.

Calendar items

The first step is locating the item within the mailbox and pulling the Global Object ID (GOID) property value for the item. We cannot do this using Outlook, so we will need to download MFCMAPI. Launch MFCMAPI, go to the Session menu and select Logon to select your Outlook profile. Open the mailbox and expand the Root Container and Top of Information Store. Right-click on the Calendar and select Open contents table.

Screenshot: Opening the Calendar contents table in MFCMAPI

Find your appointment inside the Calendar table. Then right-click on the tag 0x80000102 and select Edit property. In this example, we will use the appointment with the subject “Blog demo”.

Screenshot: Editing the 0x80000102 property on the appointment

Copy this binary value so you will have it available for a search against the mailbox log.

Screenshot: The binary Global Object ID value

The Mailbox Log Parser utility allows you to search and review mailbox logs easily. Here we can use this tool to search for the GOID of the appointment. Launch Mailbox Log Parser, click Import Mailbox Logs to Grid, locate your mailbox log, and click Open. Once the log is open, enter the binary value you copied from MFCMAPI into the Search raw log data for strings text box and click Search. The search results filter the log entries so you only see entries containing the GOID value of your appointment. Here you will notice the UID value within the mailbox log matches the GOID value from MFCMAPI:

Screenshot: Mailbox Log Parser search results matching the UID to the GOID

Review each log entry to determine what action was taken against the appointment. The above image shows a log entry where a Sync request resulted in a change to the appointment. The details for the update can be found within the log entry on the far right. You may also want to consider performing a search using the ServerId value for the appointment found in the log entry. There may be responses that do not contain the UID such as a Delete.

Now let us look at how we can take the calendar item from the mailbox log and find the appointment within the mailbox. For our example we will use the UID value from the mailbox log we used earlier (in image above). We need to open the Calendar contents table using the steps outlined earlier using MFCMAPI. Inside the Calendar table, go to the Table menu and select Set columns.

Screenshot: The Set columns option on the Table menu

Click OK on the Set Columns window. In the Column set window, click the Add button. In the Property Tag Editor window, enter the Property Tag value 0x80000102 and click OK twice. This will add the UID column to our table view.

Screenshot: Adding property tag 0x80000102 as a column

Sort your Calendar table by this Property tag column you just added and then scroll down until you find the matching UID from the mailbox log. Here you can see we found our appointment once again with the subject “Blog demo”.

Screenshot: The appointment located in the Calendar table by its UID

E-mail message

Launch MFCMAPI, go to the Session menu, and select Logon to select your Outlook profile. Open the mailbox and expand the Root Container and Top of Information Store. Right-click on the folder where the message resides and select Open contents table. This time we want to locate a message within the table. Next, right-click on the tag 0x00710102 and select Edit property. For this example, we will use the message with the subject “RE: Blog message #1”.

Screenshot: Editing the 0x00710102 property on the message

Copy the binary value and paste it into a tool like Notepad. This value is not as straightforward as the Global Object ID for an appointment. We need to break down this value into a few parts. The following example is from the third message in a conversation thread:

01CEC617632457F0D646F5744F4990165503AB61C52F00000CF610

The value is broken down as follows:

  1. Remove the first byte: 01
  2. The next five bytes (10 characters) represent the Conversation Index for the message, derived from the current system time: CEC6176324
  3. The next 16 bytes (32 characters) represent the Conversation Id for the message, a globally unique identifier (GUID): 57F0D646F5744F4990165503AB61C52F
  4. The remaining bytes are additions to the Conversation Index (only present for additional messages within the thread)

Note: Additional information on tracking conversations can be found here.
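To make the breakdown concrete, here is a minimal sketch that slices the example value above into its parts:

# Slice the 0x00710102 value from the example above into its components.
$hex = "01CEC617632457F0D646F5744F4990165503AB61C52F00000CF610"
$indexTime      = $hex.Substring(2, 10)   # Conversation Index time portion: CEC6176324
$conversationId = $hex.Substring(12, 32)  # Conversation Id (GUID): 57F0D646F5744F4990165503AB61C52F
$responseLevels = $hex.Substring(44)      # additions for later messages in the thread: 00000CF610
"ConversationId: $conversationId"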

Alright, so what does that mean to us? Once again we will use the Mailbox Log Parser tool to search for our item. This time, enter the ConversationId value extracted in the previous step into the Search raw log data for strings box. In the results below, you can see we found two messages with this ConversationId value. Remember, this search will return all messages related to the conversation, including messages in Sent Items.

Screenshot: Mailbox Log Parser search results for the ConversationId

Analysis of the log entry shows the item being added to the folder on the device.

Screenshot: Log entry showing the item being added to the folder on the device

Keep in mind we have two results for this conversation. You need to use the Conversation Index value to locate the exact message in the log.

What about the reverse? Just make note of the ConversationId value from the mailbox log for your message. Then open MFCMAPI to open the content table for the folder where the message resides. Sort the table using the Conversation ID column and search for the ConversationId value from the mailbox log. You should find your message(s) for this conversation.

Screenshot: The messages located in MFCMAPI by sorting on the Conversation ID column

We can see in this example there are two messages within this conversation using the Conversation ID. We would need to examine the property further for each item to obtain the Conversation Index value to locate the exact message.

Attachments

What about those attachment errors you see in the mailbox log? The mailbox log does give us the information we need to locate the attachment inside the mailbox. The following example shows the FileReference value of the attachment is 5%3a10%3a1. This equates to 5:10:1 or attachment 1 for ServerId 5:10.

Screenshot: Log entry showing the FileReference value 5%3a10%3a1
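Incidentally, the FileReference is just URL-encoded, so you can confirm the decoding with a one-liner:

# %3a is a URL-encoded colon, so 5%3a10%3a1 decodes to 5:10:1
[System.Uri]::UnescapeDataString("5%3a10%3a1")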

First we have to search the mailbox log for this ServerId to determine the message if we do not already know it. Using the example attachment above, we can see the message being added to the folder:

Screenshot: Log entry showing the message being added to the folder

Now we can use the steps from earlier section to locate the message within MFCMAPI using the ConversationId. Once we locate the message, right-click on the message and select Attachments> Display attachment table.

Screenshot: The Display attachment table option in MFCMAPI

We can determine which attachment the ActiveSync mailbox log references by matching the Num column against the value from the log. In our example, the attachment referenced was _Read~1.pdf.

Screenshot: The attachment table showing _Read~1.pdf

Conclusion

Each item that is synchronized to and from Exchange contains a unique identifier that we can use to locate the item in either the mailbox or ActiveSync client. Calendar items have a unique Global Object ID and mail items have a ConversationIndex and ConversationId value. Now you can review an Exchange ActiveSync mailbox log with more confidence, knowing that you can associate items within the log with items inside the mailbox.

Jim Martin


Released: Microsoft Security Bulletin MS13-105 for Exchange


Today the Exchange team released security bulletin MS13-105. Updates are being made available for the following versions of Exchange Server:

  • Exchange Server 2007 SP3
  • Exchange Server 2010 SP2
  • Exchange Server 2010 SP3
  • Exchange Server 2013 CU2
  • Exchange Server 2013 CU3

Customers who are not running one of these versions will need to upgrade to an appropriate version in order to receive the update.

Security bulletin MS13-105 contains details about the issues resolved, including download links.

For Exchange Server 2007/2010 customers, the update is being delivered via an Update Rollup per standard practice. Due to the timing of the release of our most recent Update Rollups, the only difference between the previously released Update Rollup and the Security Update Rollup released today is the inclusion of the security updates identified in MS13-105. We did not include updates for any other customer reported issues in these packages to ease their adoption.

For Exchange Server 2013 customers, security updates are always delivered as discrete updates and contain no other updates. Security updates for Exchange 2013 are cumulative in nature based upon a given Cumulative Update. This means customers who are running CU2 who have not deployed MS13-061 can move straight to the MS13-105 update because it will contain both security updates. Customers who are already running MS13-061 on CU2 may install MS13-105 on top of MS13-061 without removing the previous security update. If MS13-061 was previously deployed, Add/Remove Programs will indicate that both updates are installed. If MS13-061 was not previously deployed, only MS13-105 will appear in Add/Remove Programs.

These updates are being made available via Microsoft Update and on the Microsoft Download Center.

Exchange Team

Exchange 2010 and 2013 Database Growth Reporting Script


Introduction

Oftentimes in Exchange Support we get cases reporting that the size of one or more Exchange databases is growing abnormally. The questions or comments we get range from "The database is growing in size but we aren't reclaiming white space" to "All of the databases on this one server are rapidly growing in size but the transaction log creation rate is normal". This script is aimed at helping collect the data necessary to determine what exactly is happening. For log growth issues, you should also reference Kevin Carker's blog post here.

Please note when working with Microsoft Support, there may still be additional data that needs to be captured. For instance, the script does not capture things like mailbox database performance monitor logging. Depending on the feedback we get, we can always look at building in additional functionality in the future. Test it, use it, but please understand it is NOT officially supported by Microsoft. Most of the script doesn’t modify anything in Exchange, it just extracts and compares data.

Note: The space dump function will stop (and then restart) the Microsoft Exchange Replication service on the target node and replay transaction logs into a passive copy of the selected database, so use this with caution. We put this function in place because the only way to get the true white space of a database is with a space dump. People often think that AvailableNewMailboxSpace is the equivalent of whitespace, but as Ross Smith IV notes in his 2010 database maintenance blog post: "Note that there is a status property available on databases within Exchange 2010, but it should not be used to determine the amount of total whitespace available within the database. AvailableNewMailboxSpace tells you how much space is available in the root tree of the database. It does not factor in the free pages within mailbox tables, index tables, etc. It is not representative of the white space within the database." So again, use caution when executing that function of the script, as you probably don't want to bring a lagged database copy into a clean shutdown state, etc.

Before we get into an example of the script, I wanted to point out two things you should always check when troubleshooting database growth: what is the total deleted item size in the database, and are any users on Litigation Hold?

The following set of commands will export the mailbox statistics for any user that is on Litigation Hold for a specific database and furthermore will give you the sum of items in the recoverable items folder for those users (remember we use the subfolders Versions and Purges when Lit Hold is enabled).

1. Export the users mailbox statistics per database that have litigation hold enabled

Get-Mailbox -Database <database name> -Filter {LitigationHoldEnabled -eq $true} | Get-MailboxStatistics | Export-Csv LitHoldUsers.csv

2. Import the new CSV as a variable:

$stats = Import-Csv .\LitHoldUsers.csv

3. Get the sum of Total Deleted Item Size for the Lit Hold users in the spreadsheet

$stats | foreach { $bytesStart = $_.TotalDeletedItemSize.IndexOf("(") ; $bytes = $_.TotalDeletedItemSize.Substring($bytesStart + 1) ; $bytesEnd = $bytes.IndexOf(" ") ; $bytes = $bytes.Substring(0, $bytesEnd) ; $bytes } | Measure-Object -Sum

This will give you the sum for the specific database of recoverable items for users on litigation hold. I’ve seen cases where this amount represented more than 75% of the total database size. You also want to confirm what version of Exchange you are on. There was a known store leak fix that was ported to Exchange 2010 SP3 RU1. I don’t believe the KB is updated with the fix information, but the fix was put in place, so before you start digging in too deep with the script, make sure to install SP3 RU1 and see if the issue continues.
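As an aside, if you are running these commands from a local Exchange Management Shell session, you can skip the CSV round trip. A hedged sketch; it assumes TotalDeletedItemSize deserializes as a ByteQuantifiedSize, which is not the case over remote PowerShell:

# Sum Recoverable Items bytes for litigation-hold users on one database directly.
Get-Mailbox -Database <database name> -Filter {LitigationHoldEnabled -eq $true} |
    Get-MailboxStatistics |
    ForEach-Object { $_.TotalDeletedItemSize.Value.ToBytes() } |
    Measure-Object -Sum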

OK, moving on to the script. What can the script do, you ask? The script can do the following:

  • Collects mailbox statistics for the specified database, adding mutable note properties for future use in differencing.
  • Collects database statistics for the specified database, adding mutable note properties for later differencing.
  • Collects mailbox folder statistics for all mailboxes on the specified database, adding mutable properties for later differencing.
  • Compares size and item count attributes of the input database from the differencing database, returning a database type object with the modified attributes.
  • Compares size and item count attributes of the input mailbox from the differencing mailbox, returning a mailbox type object with the modified attributes.
  • Compares size and item count attributes of the input folder from the difference folder, returning a folder type object with the modified attributes.
  • Compares size and item count attributes of the input report from the difference report, returning a report type object with the modified attributes.
  • Exports a copy of a report (database, mailbox, and folder statistics) to the specified path or current directory in *.XML format.
  • Imports an *.XML report and exports it to *.CSV format.
  • Imports the report details from the specified file path (database, mailbox, and folder statistics).
  • Outputs database details, plus the top 25 mailboxes by size and top 25 folders by size.
  • Collects a space dump (ESEUTIL /MS) from a passive copy of the specified database and writes it to *.TXT.
  • Searches for events concerning Online Maintenance Overlap and Possible Corruption, outputting them to the screen.
  • Collects and exports current Store Usage Statistics to *.CSV.

You can download the script from here.

Sample script run

Issue reported: “Mailbox Database 0102658021” is rapidly growing in size.

  • List options to use with -mode switch:

Screenshot: The script's -mode options

  • Choose Collect and Export the data for the database that is growing in size (enter mode 1)
  • Specify a path or use the current working directory. Quotes around the path are optional, but the path must already exist.
  • Specify the database name. It will run against the active copy. Quotes are optional.

Screenshot: Running mode 1 with the output path and database name

  • Depending on the size of the database, folder counts, etc., this could take some time to run. Once the report is generated, you will be prompted to select the top # of items to display from each report; 25 is the default if you just press Enter.

Screenshot: Prompt for the top number of items to display

  • The onscreen reports will now generate. Note the DB size on disk here is 1.38 GB.

Screenshot: Onscreen report showing a database size on disk of 1.38 GB

The onscreen reports that you can scroll through include the Database size details and individual reports for the Top 25 of the following: mailboxes by item size, mailboxes by DeletedItemSize, mailboxes by item count, mailboxes by deleted item count, mailboxes by associated items, Folders by size, Folders by item count, and Folders by deleted item count.

The Full XML report will be stored in the location you specified.

Screenshot: The full XML report in the specified location

If you close the PowerShell window and wish to review the reports again, just run the script in mode 2 (quotes are optional).

Screenshot: Running the script in mode 2

Now we have a valid report at a single point in time of what's going on in the database. Since we are troubleshooting a “Database Growth” issue, we will need to wait some time for the database to grow. If you have ample space on the database drive, then I would run the report every 24 hours.

Once you're ready, compile a second report of the database (the same way you did the first above).

Press Enter for the top 25 items and the onscreen report will start scrolling through. As you can see below, our database size on disk increased from 1.38 GB to 1.63 GB.

Screenshot: Second report showing the database size on disk increased to 1.63 GB

So what grew? Now we will use mode 3 of the script to compare the two XML reports. Note the second XML report in the directory:

Screenshot: The second XML report in the directory

Run the script with -mode 3. You will be prompted to enter the full file path for the original report, and then for the second report taken after the DB growth was recognized.

Screenshot: Running mode 3 with the paths to both reports

Once the differential is completed, you will see a report similar to the first two. Keep in mind this is a DIFFERENTIAL report, so it reports how many items in a particular folder grew, how much the DB grew, and so on.

Screenshot: The differential report

As you can see above, the size on disk shows 256 MB. This is how much the database grew, as we know it went from 1.38 GB to 1.63 GB. If I scroll through the reports, I can see that the Administrator mailbox is where most of the growth took place (which is where I added the content).

Screenshot: Differential report showing most of the growth in the Administrator mailbox

This data can be used to tell which user(s) might be causing the additional growth. As noted earlier, we have had some "phantom" growth cases as well, caused by known store leaks, which is why it is imperative to make sure you have installed Exchange 2010 SP3 RU1. It's possible that you could run into that type of scenario here, but the data should support it: you would see the DB on disk grow with no real growth in the mailboxes, at which point you would need to engage Microsoft Support.

A quick note on the Actual Overhead value. This is calculated by taking the physical size of the database and subtracting the AvailableNewMailboxSpace, TotalItemSize and TotalDeletedItemSize. Remember that AvailableNewMailboxSpace is not the true amount of whitespace, so the actual number may be a little higher than what is reported here.

Other script parameters

The remaining modes of the script should be pretty self-explanatory.

Mode 4 – Export Store Usage Statistics just uses the built-in Get-StoreUsageStatistics cmdlet, allowing you to run it at a server or database level.

Mode 5 – Will search the application log for events concerning Online Maintenance Overlap and Possible Corruption, outputting them to the screen. We probably didn’t get every event listed here, so we can add events as we see them.

Mode 6 – Will search the server that it is run on for passive copies of databases. It will alert you to any that are configured as lagged copies. If you choose to run this against a passive copy to get the true white space, then it will stop the Microsoft Exchange Replication service, do a soft replay of logs needed to bring the passive copy into a clean shutdown, and then run an ESEUtil /MS against the passive copy. Once completed it will restart the Replication service.

Mode 7 – will just read in one of the XML reports created from Mode 1 and break it out into its individual component reports in CSV format.

Jesse and I decided to build this because we continue to see cases on database growth, so a special thanks to him for running with the idea and compiling the core components of the script. We both had been running our own versions of this while troubleshooting cases, but alas, his core script was better (I still got to add some of the fun ancillary components). We’d like to thank Bill Long for planting the idea in our heads as he worked so many of these cases from a debugging standpoint as well as David Dockter and Rob Whaley for their technical review.

Hopefully this helps you troubleshoot any database growth issues you run across. We look forward to your comments and are definitely open to suggestions on how we can make this better for you.

Happy Troubleshooting!

Jesse Newgard and Charles Lewis
Sr. Support Escalation Engineers

Released: Update Rollup 5 for Exchange 2010 Service Pack 3 and Update Rollup 13 for Exchange 2007 Service Pack 3


The Exchange team is announcing the availability of the following updates:

Exchange Server 2010 Service Pack 3 Update Rollup 5 resolves customer reported issues and includes previously released security bulletins for Exchange Server 2010 Service Pack 3. A complete list of the issues resolved in this rollup is available in KB2917508.

Exchange Server 2007 Service Pack 3 Update Rollup 13 provides recent DST changes and adds the ability to publish a 2007 Edge Server from Exchange Server 2013. Update Rollup 13 also contains all previously released security bulletins and fixes and updates for Exchange Server 2007 Service Pack 3. More information on this rollup is available in KB2917522.

Neither release is classified as a security release but customers are encouraged to deploy these updates to their environment once proper validation has been completed.

Note: KB articles may not be fully available at the time of publishing of this post.

The Exchange Team

Now Available: GetLogFileUsage.ps1 script


Whether you're using the Exchange Server Role Requirements Calculator by Ross Smith IV or the Exchange Client Network Bandwidth Calculator by Neil Johnson, you'll need to provide statistics about your log file usage to determine bandwidth requirements.

Whenever I've done that previously, I'd pipe the directory content to a text file and then start working on it in Excel. That is quite tedious and laborious work, and to be honest, very few people would probably do it for more than one or two log sets, if at all.

Then why not automate it? If it’s a question of finding files and a bit of string handling, PowerShell should be able to do it for you… And sure enough, writing the first lines of code in a Seattle hotel, the project started taking form.

The script, GetLogFileUsage.ps1, is controlled by command-line inputs; if no arguments are given, the help screen is displayed:

Screenshot: The GetLogFileUsage.ps1 help screen

By default, the script will grab log files from the last 24 hours, but it can be set to use a specific date via the "-Date" parameter. You need to make sure that transaction log files have not been truncated by a backup within the set time window. Needless to say, this will not work with circular logging.

You can specify a single database using the “-Database <db name>” parameter, a specific server by using the “-Server <server name>” parameter, or simply just all servers in the Organization by using “-Server All”.

Important: To support the database and server parameters, the script must run in the Exchange Management Shell.

If you’re unable to run the script in EMS or you’re collecting statistics from legacy Exchange or even a non-Exchange server, you can use a path file to input servers and paths to the script. The format of the file is:

Screenshot: The path file format

Using the path file as input provides support for any transaction-log-based database, not just Exchange. So feel free to mess around with it; I'd be happy to hear how many third-party products are supported.

To use the default file “.\paths.txt”, just add the parameter “-File” or use it in combination with “-PathFile <file name>” to specify a file of your choice.

If you want to use a CSV file delimiter other than the semicolon, specify it on the command line using the "-Delimiter" parameter, or change it in the script if you want it to be permanent.

Depending on how many databases or servers you’ve selected, the script will run for a while until it shows the output displayed in the following screen:

Screenshot: Script output for checking that the numbers look right

This serves only as a means for you to check whether the numbers look right. The "Percent" column is automatically placed in your clipboard for you to paste into Notepad (the first line is blank, so you need to remove it before pasting into Excel).

The pasted numbers will look like this in the Exchange Server Role Requirements Calculator (you have to select site resilient deployment to enter numbers into these cells):

Screenshot: The numbers pasted into the Exchange Server Role Requirements Calculator

And this is how it looks when used in the Exchange Client Network Bandwidth Calculator (found on the bottom far right of the “Client Mix” worksheet):

Screenshot: The numbers used in the Exchange Client Network Bandwidth Calculator

As an added bonus, a CSV file with all your log drive statistics is created for you to see if a better distribution of log files is needed to even out load on the drives.

Screenshot: The CSV file with log drive statistics

I hope you will find this script useful and time saving when working with the calculators. Feedback and comments for improvement are more than welcome.

Karsten Palmvig
Principal Consultant, MCS Denmark

This mailbox database contains one or more mailboxes…


Let’s say you are in process of removing the last copy of a mailbox database or uninstalling an Exchange server and run into the following error message:

“This mailbox database contains one or more mailboxes…”

You check again and again but can't find a mailbox on the server being uninstalled or in the database you are deleting, yet the error keeps haunting you!

Sounds familiar? Ran into this before? Read on!

Let's take the example of DB2, a database on an Exchange 2013 server. I am trying to remove it and am getting this error:

Screenshot: The "This mailbox database contains one or more mailboxes" error

As suggested by the error message, I checked for the presence of all the possible mailbox types one can create, namely normal user mailboxes, arbitration mailboxes, public folder mailboxes, and archive mailboxes.

Checking for regular mailboxes reveals nothing:

Screenshot: No regular mailboxes found on the database

Checking for Archive mailboxes reveals nothing:

Screenshot: No archive mailboxes found on the database

Checking for public folder mailboxes reveals, you guessed it, nothing:

Screenshot: No public folder mailboxes found on the database

Finally, checking for Arbitration mailboxes gives similar results:

Screenshot: No arbitration mailboxes found on the database

Still, removing the mailbox database keeps failing with the same error. What to do? Do I need a flamethrower?

If you are using Exchange 2013, you can use Remove-MailboxDatabase with the -Verbose parameter; the verbose output indicates the DN of the user that still has a mailbox on the database. Like so:

Screenshot: Remove-MailboxDatabase -Verbose output showing the DN of the remaining mailbox

If you do not have Exchange Server 2013, don’t despair…

Another possibility is that the database you’re trying to remove is an Archive Database for a mailbox residing on a different mailbox database.

The following command lists mailboxes that use a specific database as their archive database:

Get-Mailbox | where {$_.ArchiveDatabase -eq "<databaseName>"}

Here’s the output from my example:

Screenshot: A mailbox whose archive resides on the database

Aha! Mystery solved! So, I just moved the archive mailbox to another database:

Screenshot: Moving the archive mailbox to another database
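For reference, here is a hedged sketch of that archive-only move; the mailbox identity and target database name are hypothetical examples:

# Move only the archive, leaving the primary mailbox where it is.
New-MoveRequest -Identity "jdoe@contoso.com" -ArchiveOnly -ArchiveTargetDatabase "DB1"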

…And the database could now be removed after the move was complete.

We realize that the experience here is not ideal and the relevant team is aware. In the meantime, hope this helps some of you.

Bhalchandra Atre
